http://mathhelpforum.com/pre-calculus/73568-coordinate-geometry.html

Substitute into the formula for the distance between two points: $\sqrt{(-2 - a)^2 + (3 - [-1])^2} = 6$.
Simplify and then solve for $a$.
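Assuming the two points are $(a, -1)$ and $(-2, 3)$, consistent with the formula above, squaring both sides gives $(a+2)^2 + 16 = 36$, so $a = -2 \pm 2\sqrt{5}$. A quick numeric check of that algebra in Java (class and method names are my own, for illustration only):

```java
public class DistanceSolve {
    // Distance between (x1, y1) and (x2, y2).
    static double distance(double x1, double y1, double x2, double y2) {
        return Math.hypot(x1 - x2, y1 - y2);
    }

    public static void main(String[] args) {
        // From sqrt((-2 - a)^2 + 16) = 6: (a + 2)^2 = 20, so a = -2 ± 2*sqrt(5).
        double[] roots = { -2 + 2 * Math.sqrt(5), -2 - 2 * Math.sqrt(5) };
        for (double a : roots) {
            // Each root gives a distance of exactly 6 from (-2, 3).
            System.out.printf("a = %.4f, distance = %.4f%n",
                    a, distance(a, -1, -2, 3));
        }
    }
}
```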
http://blog.jpolak.org/?tag=education

# Book Review: Ian Stewart's "From Here to Infinity"
Posted by guest on 05. February 2013 · 1 comment · Categories: books
A Guest Post by Emily Shier
From Here to Infinity: A Guide to Today’s Mathematics
By Ian Stewart
1996 edition
"From Here to Infinity" is an enchanting read that inspires budding mathematicians and curious outsiders alike. For mathematicians are mysterious beings to the general population: enshrouded in a cloak of cryptic symbols, they slip into another world, one the uninitiated associate with an aura of chalky smoke and mundane arithmetic.
Stewart bridges the gap between the uninformed individual and the world of mathematics with a friendly, open approach. Several comprehensive chapters discuss intriguing topics, including chaos theory, knots, computer technology, algorithms, fractals, Fermat's last theorem, and how to increase one's odds of winning the lottery.
Never speaking down to the reader, Stewart provides many examples to illustrate a concept, punctuated with the occasional joke. For the reader with little exposure, the examples are fascinating and show another side of thinking altogether. However, as the examples develop, the level of math increases steeply. But the initial feeling of frustration with a challenging idea gives way to a feeling of satisfied accomplishment with the completion of each chapter.
More »
# Essay Questions in Mathematics? Sure!
Posted by Jason Polak on 29. January 2013 · 2 comments · Categories: math
Early one morning in the halls of a typical mathematics department, Katie, a graduate student in the field of higher category theory, walks into her final exam for grad algebra 1. She had enough sleep the previous night, and feels confident about her abilities. The first question is a routine application of Nakayama's lemma, and the next an exercise in computing a $\mathrm{Tor}$ group. After half an hour of deftly dealing out solutions, she comes to the last item:
Explain the importance of module theory in ring theory using a few examples.
What kind of exam is this? Katie thinks. The question is not true, false, a computation, a proof, or undecidable in ZFC + V=L! Madness!
### The Role that Essays Could Have in Math
I made this story up entirely. However, I believe incorporating a small number of such questions would be useful in emphasising intuition and the aesthetic side of mathematics, and this is something that could be used in upper undergraduate and all graduate courses.
More »
http://cms.math.ca/cmb/msc/11T23
Search results
Search: MSC category 11T23 ( Exponential sums )
Results 1 - 2 of 2
1. CMB 2015 (vol 58 pp. 774)
Hanson, Brandon
Character Sums over Bohr Sets

We prove character sum estimates for additive Bohr subsets modulo a prime. These estimates are analogous to the classical character sum bounds of Pólya-Vinogradov and Burgess. These estimates are applied to obtain results on recurrence mod $p$ by special elements.

Keywords: character sums, Bohr sets, finite fields
Categories: 11L40, 11T24, 11T23
2. CMB 2001 (vol 44 pp. 87)
Lieman, Daniel; Shparlinski, Igor
On a New Exponential Sum

Let $p$ be prime and let $\vartheta \in \mathbb{Z}_p^*$ be of multiplicative order $t$ modulo $p$. We consider exponential sums of the form $$S(a) = \sum_{x=1}^{t} \exp(2\pi i a \vartheta^{x^2}/p)$$ and prove that for any $\varepsilon > 0$ $$\max_{\gcd(a,p) = 1} |S(a)| = O( t^{5/6 + \varepsilon}p^{1/8}) .$$

Categories: 11L07, 11T23, 11B50, 11K31, 11K38
https://www.transtutors.com/questions/1-value-1-00-points-after-evaluating-zero-company-s-manufacturing-process-management-1092208.htm
1. (1.00 points) After evaluating Zero Company's manufacturing process, management decides to establish standards of 1.5 hours of direct labor per unit of product and $11 per hour for the labor rate. During October, the company uses 3,780 hours of direct labor at a $45,360 total cost to produce 2,700 units of product. In November, the company uses 4,480 hours of direct labor at a $47,040 total cost to produce 2,800 units of product.

(1) Compute the rate variance, the efficiency variance, and the total direct labor cost variance for each of these two months. (Input all amounts as positive values and indicate the effect of each variance by selecting "F" for favorable, "U" for unfavorable, and "None" for no effect, i.e. zero variance. Round intermediate calculations to 2 decimal places and final answers to the nearest dollar.)

2. (1.00 points) BTS Company made 5,200 bookshelves using 23,200 board feet of wood costing $292,320. The company's direct materials standards for one bookshelf are 6 board feet of wood at $15 per board foot.

(1) Compute the direct materials variances (price variance, quantity variance, and total materials variance) incurred in manufacturing these bookshelves.

3. (1.00 points) Earth Company expects to operate at 70% of its productive capacity of 48,000 units per month. At this planned level, the company expects to use 24,000 standard hours of direct labor. Overhead is allocated to products using a predetermined standard rate based on direct labor hours. At the 70% capacity level, the total budgeted cost includes $57,600 fixed overhead cost and $271,200 variable overhead cost. In the current month, the company incurred $320,000 actual overhead and 26,696 actual labor hours while producing 37,600 units.

(1) Compute its overhead application rate for total overhead (fixed, variable, and total cost per hour, rounded to 2 decimal places).
(2) Compute its total overhead variance.
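As an illustration of the first problem, here is a sketch of the standard direct-labor variance formulas in Java. The class and method names are my own; the formulas (rate variance = (actual rate − standard rate) × actual hours; efficiency variance = (actual hours − standard hours allowed) × standard rate) are the usual standard-costing definitions, with a positive result unfavorable (U) and a negative result favorable (F):

```java
public class VarianceDemo {
    // Rate variance: (actual rate - standard rate) * actual hours.
    // Positive = unfavorable (U), negative = favorable (F).
    static double rateVariance(double actualCost, double actualHours, double stdRate) {
        double actualRate = actualCost / actualHours;
        return (actualRate - stdRate) * actualHours;
    }

    // Efficiency variance: (actual hours - standard hours allowed) * standard rate.
    static double efficiencyVariance(double actualHours, double stdHoursAllowed, double stdRate) {
        return (actualHours - stdHoursAllowed) * stdRate;
    }

    public static void main(String[] args) {
        // October: 3,780 hrs at $45,360 total; standard is 1.5 hrs * 2,700 units at $11/hr.
        double octRate = rateVariance(45360, 3780, 11);            // 3,780 U
        double octEff  = efficiencyVariance(3780, 1.5 * 2700, 11); // -2,970, i.e. 2,970 F
        System.out.printf("October: rate %.0f, efficiency %.0f, total %.0f%n",
                octRate, octEff, octRate + octEff);                // total 810 U

        // November: 4,480 hrs at $47,040 total; standard is 1.5 hrs * 2,800 units.
        double novRate = rateVariance(47040, 4480, 11);            // -2,240, i.e. 2,240 F
        double novEff  = efficiencyVariance(4480, 1.5 * 2800, 11); // 3,080 U
        System.out.printf("November: rate %.0f, efficiency %.0f, total %.0f%n",
                novRate, novEff, novRate + novEff);                // total 840 U
    }
}
```

The same two formulas handle the materials problem by substituting board feet for hours and price per board foot for the labor rate.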
https://www.zhangjc.site/archives-52/
# 005. Longest Palindromic Substring
## Analysis: Dynamic Programming
To improve over the brute force solution, we first observe how we can avoid unnecessary re-computation while validating palindromes.
Consider the case "ababa". If we already knew that "bab" is a palindrome, it is obvious that "ababa" must be a palindrome since the two left and right end letters are the same.
Define $P(i,j)$ as following:
$$P(i, j)=\begin{cases} \text{true,} & \text{if the substring } S_i \dots S_j \text{ is a palindrome} \\ \text{false,} & \text{otherwise.} \end{cases}$$
Therefore,
$$P(i,j) = P(i+1, j-1) \land (S_i = S_j)$$
The base cases are:
$$P(i, i) = \text{true}$$
$$P(i, i+1) = (S_i = S_{i+1})$$
Complexity Analysis
• Time complexity: $O(n^2)$. The table has $O(n^2)$ entries and each is filled in constant time.
• Space complexity: $O(n^2)$ to store the table.
[scode type="yellow"]
• The DP recurrence for this problem shows that each cell of the table is determined by the neighboring cell dp[i+1][j-1] and by the characters of the string itself.
• Therefore, in the bottom-up design, that neighbor must be filled before dp[i][j], which the loop order guarantees:
• i--
• j++
Here i denotes the left boundary and j the right boundary of the substring.
[/scode]
## My solution: 38ms
```java
class Solution {
    public String longestPalindrome(String s) {
        int len = s.length();
        if (len == 0) return "";
        char[] str = s.toCharArray();
        boolean[][] dp = new boolean[len][len];
        // default: ∀i∀j(dp[i][j] == false)
        int maxLen = 0;
        int maxLeft = 0;
        int maxRight = 0;
        /* base cases: single characters and adjacent equal pairs */
        for (int i = 0; i < len; i++) {
            dp[i][i] = true;
            if (i + 1 < len && str[i + 1] == str[i]) {
                dp[i][i + 1] = true;
                maxLen = 2;
                maxLeft = i;
                maxRight = i + 1;
            }
        }
        /* dynamic programming */
        // enumerate the right boundary of the substring
        for (int j = 0; j < len; j++) {
            // enumerate the left boundary of the substring
            for (int i = j - 2; i >= 0; i--) {
                dp[i][j] = str[i] == str[j] && dp[i + 1][j - 1];
                if (dp[i][j] && j - i + 1 > maxLen) {
                    maxLen = j - i + 1;
                    maxLeft = i;
                    maxRight = j;
                }
            }
        }
        return s.substring(maxLeft, maxRight + 1);
    }
}
```
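As a sanity check on a DP solution like the one above, a small brute-force reference (my own helper, not part of the original post) can compute the same answer in $O(n^3)$ for short inputs and be compared against the DP output:

```java
public class BruteCheck {
    // O(n^3) reference: try every substring window, keep the longest palindrome.
    static String longestPalindromeBrute(String s) {
        String best = "";
        for (int i = 0; i < s.length(); i++) {
            for (int j = i; j < s.length(); j++) {
                String sub = s.substring(i, j + 1);
                if (isPalindrome(sub) && sub.length() > best.length()) {
                    best = sub;
                }
            }
        }
        return best;
    }

    // Two-pointer palindrome test.
    static boolean isPalindrome(String t) {
        for (int l = 0, r = t.length() - 1; l < r; l++, r--) {
            if (t.charAt(l) != t.charAt(r)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(longestPalindromeBrute("babad")); // prints "bab"
        System.out.println(longestPalindromeBrute("cbbd"));  // prints "bb"
    }
}
```

On "babad" both the DP solution and this reference return "bab" ("aba" would be an equally valid answer that this scan order never prefers).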
https://rhizzone.net/forum/topic/14062/

#1
(Note: the rhizzone reader probably won't learn anything from this post, but I felt like writing.)
The post-war order, under US hegemony, saw the "intermeshing" and relative integration of rival imperialist monopoly capitals. An economic bloc was formed in which commodities and capital could flow, and to a lesser extent labor-power. The chapter of MIM's 1997 book that deals with this begins by noting the reversal in FDI flows, which reflect this change:
In 1914, the pattern of foreign direct investment was for the industrial countries to put their capital into the colonies. Hence, 62.8 percent of investment went to the Third World and only 37.2 percent went to other colonial countries. The situation reversed after World War II, and by 1985, 75 percent of investment occurred from one imperialist country into another and only 25 percent went to the Third World.
It makes sense that during the period of direct colonization, capital would directly flow into the colonies because the colonies were where super-profits could be made, because capital investment follows the profit-rate. (This is what allows profit rates to equalize, yadda yadda yadda.) This FDI figure has not changed since. So why did this reverse?
Of course, one says, we shouldn't take FDI figures at face value. Official FDI data does not differentiate between mergers & acquisitions and so-called "greenfield" investments - mergers & acquisitions represent merely the centralization of capital, not profitable (in terms of it employing labor-power to produce new value) investment. (One can still break down FDI further, note some other issues, and today's FDI figures alone disprove the Eurocentric "capital shuns the Third World" thesis. But this does not capture the whole picture.)
Let's set that aside. We are able to see the relevance of this near flip in FDI data by looking at the years in question: 1914 and *1985*. 1985 would have been when the project of globalization and financialization first clearly took shape. Here is some relevant data:
• GDP share of US finance industry (i.e. value captured by finance)
• US employment by sector as a percentage of employed workers
• Global mergers & acquisitions
• Third World manufactured goods in world trade
• Third World manufactured goods imported by imperialist countries
Thus the shift of post-war neocolonialism from a situation where the Third World exported raw materials to be manufactured in the imperialist core, to a world of globalized production where the economy of the imperialist countries was such that a majority of workers were employed in unproductive occupations, had just begun.
But where does, and where did, the capital for production in the Third World come from, if not from FDI? To answer this, we need to understand what we mean by globalization.
John Smith writes the following in an abstract of his book:
Neoliberal globalization must therefore be recognized as a new, imperialist stage of capitalist development….
The globalization of production has transformed not just the production of commodities but of social relations in general, and especially of the social relation that defines capitalism: the capital-labor relation, which is increasingly a relation between northern capital and southern labor.
Its fundamental driving force is what some economists call “global labor arbitrage”: the efforts by firms in Europe, North America, and Japan to cut costs and boost profits by replacing higher-waged domestic labor with cheaper foreign labor, achieved either through emigration of production (“outsourcing,” as used here) or through immigration of workers.
The result is a highly peculiar structure of world trade, in which northern firms compete with other northern firms, their success hinging on their ability to cut costs by outsourcing production; and firms in low-wage countries fiercely compete with each other, all seeking to exercise the same “comparative advantage,” namely their surfeit of unemployed workers desperate for work. But northern firms do not generally compete with southern firms.
The last line hints at his thesis. One of Smith's main contentions is that imperialist investments are largely made indirectly through "arms-length" contract manufacturing today. This is such that "the only part of Apple’s profits that appear to originate in China are those resulting from the sale of its products in that country" (Smith 22). The latter part of the following quote makes this explicitly clear:
The UNCTAD (United Nations Conference on Trade and Development) "estimates that 'about 80 per cent of global trade… is linked to the international production networks of TNCs'" and that "about 60 percent of global trade… consists of trade in intermediate goods and services that are incorporated at various stages in the production process of goods and services for final consumption" (50).
"In contrast to FDI", Smith writes, "where the production process and associated revenues are offshored but kept in-house, an outsourcing firm may choose to contract some or all of production to an independent supplier while retaining effective control over both the final product and the process of its production" (79). This is favored because (81-82):
• Local capitalists tend to be more exploitive than foreign TNCs because of the fierce South-South competition
• TNCs are able to maintain "clean hands" - "responsibility for pollution, poverty wages, and suppression of trade unions" is outsourced; other 'risks' are outsourced as well, like cyclical fluctuations in the world-market
• No repatriated profits appear in the data
• The lack of direct N-S capital flows "enables Northern firms to divert investment funds into 'financial intermediation and speculation'"
(The last point makes clear the connection between globalization and financialization.)
MIM is more explicit in regard to the first two points:
The 500 million invisible workers work under military regimes or death-squad governments or in the best of circumstances, they work in newly minted political regimes with good intentions that nonetheless come with a history of low wages. We do not mean that there is a particular set of 500 million workers who work for free. We only mean that 20 percent of the work of 2.5 billion Third World workers and peasants is done for free for the imperialist countries, because they are forced to by military regimes or regimes that compete with military regimes.
Even in those regimes where the rulers do not use military force, the threat of political/military force remains and unless they mobilize their peoples for people's war against imperialism, even the best-intentioned rulers must set up their countries to compete with countries that do keep down wages by using death-squads often furnished with Amerikan weapons and training. "Sony's Kirihara observed: 'We should not think only of Japan but Korea and Taiwan, If we compare Korea with China, nobody can compete. Even Malaysia or Singapore is weaker than China or India because of wages."(194) So if you are a poor country emerging from semi-feudalism and you have many people seeking employment, whether you like it or not, you are competing with countries where they do use repression against union organizers to keep down wages.
Today, "[t]he large majority of the roughly five billion inhabitants of the Global South now live in countries where manufacturing exports—mainly to the imperialist economies—form more than a half of their total exports" (66). This and the Third World manufactured goods graphs cited above should make clear that the majority of the working class lives in the Third World.
Let's deal briefly with the change in the US class structure. It's no coincidence that the services industry massively increased as agriculture and manufacturing declined. Sakai was writing when this was occurring:
We can see this in the dramatic increase of the non-productive layers in economic life. While this phenomenon is centered in the rule of finance capital, its manifestation appears in all imperialist institutions. Advertising, marketing, package design, finance, "corporate planning," etc. mushroom with each corporation. Management on all levels grows as numbers of production workers shrink. When one includes the large army of white-collar clerical workers needed to maintain management and carry out its work, the proportions become visibly lop-sided.
MIM continues as to the implications of imperialist capital integration.
Since trade and finance do not produce physical wealth themselves, it only proves that the activities of the unproductive and parasitic sectors have been spread around, so that no one imperialist can enjoy parasitic advantages over another, as in the old days of colonialism.
Now no single imperialist entity can completely exclude other imperialists. As a result, the possible outlets for capital exported from the imperialist countries have become more similar. As late as 1970, cross-border movements of capital were relatively infrequent in the imperialist world. We could say that there was mobility of capital within the United $tates, but we could not say capital was mobile across imperialist country borders. With the collapse of colonialism, all that changed.… If capital is allowed free flow between places, as it is in the 1990s, we can expect the rate of profits and superprofits to become similar across those places. If the profit rises somewhere, the capital will flow to that place from all over the world, if there are no political obstacles. In fact, if there are political obstacles, if the profit differential is great enough, the capitalists wishing to invest where there are political obstacles will see to the removal of those obstacles through bribery or war.

As to what it means in regard to other imperialist class structures, they write that "[t]he long-standing freedom of movement for capital that has existed within the old Western imperialist bloc led by the United $tates against the old Soviet social-imperialists is the major reason that MIM believes it proved the nature of the Western European class structure in MIM Theory #1". Why (my emphasis)?
If Amerikan capital is heavily invested in Latin America, then French capital can share in the swag from Latin America simply by investing in the United $tates. Japanese banks can and do buy interest-bearing securities in the United$tates and collect a share of the loot wherever the U.$. monopoly capitalists got it. In the private sector, other imperialists can also integrate themselves with U.$. imperialism.
As to the future:
Meanwhile bourgeois internationalists… share a vision of international imperialist cooperation to exploit first pioneered by arch-revisionist Karl Kautsky. They favor equal opportunity exploitation for people of all countries and they have the momentum in their creation of rudimentary forms of world government.
The intervention of the UN in places previously thought not to be appropriate places for intervention by the UN--Somalia, Bosnia, Iraq etc.--reflects the increased interweaving of imperialist capital and the dissolution of both the socialist bloc and its successor social-imperialist bloc. Whereas UN intervention in Korea was the exception of the time, in the future, the UN under bourgeois internationalist leadership will try to make Korea the rule and not the exception. In this agenda of war only on the oppressed nations, the bourgeois internationalists can be assured of a good degree of labor aristocracy support, compared with other aspects of world government opening the labor aristocracy up to competition.
Compared with Lenin's day, 1997 shows one aspect of Kautsky's theory of super-imperialism has become less far-fetched. That aspect is the unification of imperialism and the amelioration of national conflict amongst the imperialist capitals. The end of strict colonialism in which imperialists were iced out of competitor colonies completely has ended. Not to mention trade, massive cross-national investment amongst imperialists has become a reality.
But Kautsky is incorrect from a Leninist point of view because:
The trend of interweaving of imperialist capital goes along with an acceleration of the gap between the Third World and the imperialist countries, and for this reason, Kautsky's theories are invalid on a grander scale than in Lenin's day. The fact that most investment occurs in the imperialist countries speaks to the decadence of this stage of capitalism where the focus is increasingly on realizing surplus-value and not on production itself. Both the interweaving of imperialist capital and expansion of the labor aristocracy spell the doom of imperialism's viability all the faster, because neither activity generates surplus-value.
And:
There are those who would claim to follow Marx and not Lenin that say that Lenin was wrong about finance capital being the dominant sector of imperialism. These anti-Leninists are wrong, because no industrial capitalist can avoid the competition engendered by banking capital's activities, and because as Poulantzas points out, it is wrong to separate industrial and banking capital.
The obvious limits to bourgeois internationalism are:
Russian and Chinese imperialism are the most concrete manifestations of contradictions for the integration of finance capital. Most bourgeois economists will admit that labor is not free to cross borders, and hence is not a "mobile factor" in a global free market. Thus we say there is no globally integrated market for labor-power. Moreover, it is true that imperialist capital can now penetrate pretty much every nook and cranny, but we would prefer to limit talk of finance capital integration to discussion of the imperialist countries. The meaning of counting the Third World as integrated into finance capitalism is clouded by the stunted development of a bourgeoisie in the Third World.
MIM concludes as follows (my emphasis): "[w]e limit our conclusion to saying that the vast preponderance of imperialist capital is integrated and this is a new and important development fully conforming to Lenin's theses".
Okay, now here's the thing, and here's the real reason I made this thread. If the regular functioning of capitalism is now dependent on these globalized processes, and it's the imperialist countries that are most integrated, with an agreement of sorts between the imperialists to share the spoils of imperialism under US hegemony, how does t H E r H i z z o n E think shit is going to go down with regard to the reaction against "globalism"? I'd have to look at data (if such data exists), but if, say, Japan's economy is dependent on investing in the US economy (even if the surplus-value is really from some Third World country), what's gonna happen when they get cut out because of a low profit rate and so on?
#2
theres going to be a big war
#3
tears posted:
theres going to be a big war
my thought atm is that there isn't going to be a single big war, there's just going to be a bunch of wars over access to markets. i think that's what john smith writes at the end of the book, but i read it like a year ago and between that and the dems russia stuff i stopped thinking that for a while
#4
i keep trying to come up with something but every time i write it out it sounds like that one grover post about how Iraq will be crushed in a week or w/e so i'm gonna keep thinking.
my one point that might be worth considering is i think even China is too involved in our empire and our most direct opposite is India, but that's from a clinical and morally bankrupt IR view and not a marxist one so who knows.
heck, talking about globalism ending is requiring too much vague conjecture on my part so either it's universally unclear or i haven't read enough in that vein.
#5
imo (and someone cleverer than me will probs contradict) its pretty short sighted to talk about the complete integration of imperialist capital or w/e while different booj cabals maintain their own distinct collections of armed forces for pursuing their own economic interests against each other.
The imperialist means for maintaining "their own" geographically positioned stable base areas through imperialist bribes is drying up as things stand atm, unrest across imperialist countries will spread and those countries where they think they're not getting a "good deal" or just feel that they're better off going their own way will split with the post ww2 order and pursue economic nationalism (including amerikkka itself mebbe), which will force all the others to do it too, which can only lead to war between them, as things stand now with the imperialist labour aristocracy cheering them on
i mean as an example, for all the talk of peace and integration of imperialism, we are currently seeing some pretty intense EU-USA tit for tat economic warfare going on between multinats of the two blocks exercising state power to fine and fuck about with multinats from the other block. The USA really doesn't like the fact that the German empire is an economic threat any more than it likes china.
the "reaction against globalism" is not some sort of oddity but a direct manifestation of the disintegration of the post ww2 compact for the "peaceful" exploitation of the global south and a trend which will only increase despite various countertendencies
i think there will be war(s) but whatever this is all futerology to me and could be completly wrong
e: and the rising contradiction that may supersede all of this is between capitalism and the environment, which should never be overlooked when thinking about the future. Things are changing almost too fast to make sense of them
e2: hoky fuk am i bad at translating my thoughts in2 words
Edited by tears ()
#6
tears posted:
we are currently seeing some pretty intense EU-USA tit for tat economic warfare
examples? i haven't been able to keep up wiht western european shit while paying attention to eastern europe.
#7
the VW emissions thing, and the google tax bill
#8
nice OP! in a similar vein Monthly Review has an article on this topic thats relatively comprehensive imo
#9
marlax78 posted:
if, say, Japan's economy is dependent on investing in the US economy (even if the surplus-value is really from some Third World country), what's gonna happen when they get cut out because of a low profit rate and so on?
in a similar vein:
The system of unequal exchange is one of the indispensable mechanisms for the operation of the capitalist world-economy. With unequal exchange, the surplus value produced in the system is concentrated in the core zone, generating large profits for the core-zone capitalists who in turn engage in accumulation in the crucial “leading sectors” that act as the driving engines for the entire capitalist world-economy. Surplus value also provides the financial resources required for the construction of social compromises in the core zone, indispensable for the core-zone’s political stability.
Between the core and the periphery, there is a third layer of states: the semi-periphery. This is another indispensable mechanism for the operation of the capitalist world-economy. In term of their political strength and their positions in the system-wide division of labor, the semi-peripheral states have characteristics and play roles that are located between the core states and the peripheral states.
The semi-periphery acts as the “middle stratum” in the capitalist world-economy and plays a crucial role for the political stability of the world-system as a whole. Without the semi-periphery, the core zone risks the combined resistance from the exploited periphery that comprises the overwhelming majority of the world population. However, to secure the political support or at least the neutrality of the semi-periphery, it is necessary for the core zone to share at least part of the surplus value exploited from the periphery with the semi-periphery.
Until the mid-twentieth century, there was not much problem with this arrangement as the semi-periphery was composed of states with a minority of the world population. The "buying-off" of the semi-periphery was thus relatively inexpensive. Since then, fundamental transformations have taken place in the capitalist world-economy. The rapid growth of the Chinese and the Indian economies has been among the most important developments. What could be the world-historical implications of the rise of China and the rise of India?
If the per capita incomes and wage rates in China and India were to approach the semi-peripheral states’ levels, what would remain as the periphery? Would the remaining periphery – much reduced in size – be able to generate a sufficiently large surplus value that would be able to support not only the core zone but also a greatly expanded semi-periphery? [lol no] Could the competition between the core zone and the greatly expanded semi-periphery lead to a dramatic narrowing of the system-wide profit margin and therefore undermine the systemic accumulation as a whole? Related to this, does the world still have the ecological space to accommodate the rise of China and India? In short, can the capitalist world-economy survive the rise of China and India?
- Minqi Li, The Rise of China and the Demise of the Capitalist World Economy
#10
[account deactivated]
#11
here's a thing from a more recent Li paper that fans of Wallerstein, Cope, and/or Smith will like
(USA/China not purely vis-a-vis one another, but in their overall relationship to the rest of the world)
edit:
The practical implications are that if there had been no unequal exchange, for the US to maintain its existing material consumption levels, about 50 million US workers would have to be transferred from the non-essential services back to the goods production sectors (assuming that the American workers will have the same labor productivity to produce the currently imported goods as the foreign workers). Statistically, this would lead to a reduction of the US economic output by about one-third.
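Li's back-of-envelope can be sketched in a few lines of Python. The 50 million figure is from the quote; the total US employment of roughly 150 million is my assumed round number for illustration, not Li's actual datum:

```python
# Back-of-envelope sketch of the unequal-exchange claim quoted above.
# The 50m figure comes from the quote; 150m total employment is an
# illustrative assumption, not Li's own number.
workers_repatriated_m = 50   # workers needed to produce currently imported goods
us_employment_m = 150        # assumed total US employment, millions

# Moving these workers out of non-essential services removes the services
# output they produced; with roughly equal output per worker, measured
# output falls by their employment share.
output_reduction = workers_repatriated_m / us_employment_m
print(f"implied output reduction: {output_reduction:.0%}")  # ~33%, i.e. "about one-third"
```

On those assumed numbers the share comes out to one-third, matching Li's stated result.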
#12
lol that graph alone disproves the stupid china = imperialist thesis, i think. wanna link to the original paper?
e: i was looking for it on my uni's database for it and i found this which may be of interest as well, which i'll read later: https://a.uguu.se/uh1y7t3y8EDe_out.pdf
Edited by marlax78 ()
#13
marlax78 posted:
wanna link to the original paper?
here. it's a draft of a chapter for a collection on emerging economies due out next month
#14
20th KKE Congress CC Theses
The first chapter engages with the developments in the international imperialist system and includes assessments on the economic-social developments in the world at the end of the second decade of the 21st century. It identifies the arenas where the inter-imperialist antagonisms are sharpening, where there is an increase of local and regional conflicts and the dangers for a more generalized imperialist war in the conditions where the victims of war have increased the flows of refugees and immigrants. The adjustments-modernizations of the repressive apparatuses of the bourgeois states and their inter-state unions are being pushed to the fore on this terrain.
#15
incidentally is there anything i can read about how the KKE re-embraced stalin post 1991
#16
minqi li fucking owns
#17
[account deactivated]
#18
marlax78 posted:
The obvious limits to bourgeois internationalism are
catchphrase!
haha...but really, good thread. How do you think increasingly significant internal, commercial consumption and the more n more prominent haute bourgeoisie class in countries like India and China will affect this? (if at all.) thanks
Edited by herbsaint ()
#19
random bump. i'm having my library get a hold (they dont have a sub to the journal it was in) of Cope's "Global Wage Scaling and Left Ideology: A Critique of Charles Post on the ‘Labour Aristocracy’" since it's behind a paywall and the link to the pdf online is dead. when i pick it up, i'll transcribe it 4 this thread.
@herbsaint if you're asking if shifts in the reproduction processes (greater or lesser consumption by the bourgeoisie) can cause a breakdown a la luxemburg, my intuition tells me no for the old reasons. i'm an idiot tho.
#20
marlax78 posted:
random bump. i'm having my library get a hold (they dont have a sub to the journal it was in) of Cope's "Global Wage Scaling and Left Ideology: A Critique of Charles Post on the ‘Labour Aristocracy’" since it's behind a paywall and the link to the pdf online is dead. when i pick it up, i'll transcribe it 4 this thread.
@herbsaint if you're asking if shifts in the reproduction processes (greater or lesser consumption by the bourgeoisie) can cause a breakdown a la luxemburg, my intuition tells me no for the old reasons. i'm an idiot tho.
Basically, except what I meant (and didn't actually make clear) was rather than breakdown, how do u think increasing internal consumption in China, India etc among these demographics effects the final point you bring up in this post re: anti globalism backlash? Feels like it might significantly mitigate potential export losses
#21
Global Wage Scaling and Left Ideology: a Critique of Charles Post on the 'Labor Aristocracy':
by Zak Cope
Abstract: This essay demonstrates that US economist Charles Post's attempted rebuttal of the 'labor aristocracy' thesis is both theoretically and empirically flawed. Defending the proposition that colonialism, capital export imperialism, and the formation of oligopolies with global reach have, over the past century and more, worked to sustain the living standards of a privileged upper stratum of the international working class, it rejects Post's assertion that the existence of such cannot be proven. The essay concludes with a working definition of this 'labor aristocracy': setting the concept within the field of global political economy and reclaiming its relevance to the Marxist tradition.
Nowadays, given the enormous gap between the living standards of the people in the First World and the people in the Third World, the statement that the problems facing most workers in the former are significantly less daunting than those facing the majority of the world's workers residing in the latter may appear self-evident (1). That the lack of any revolutionary movement aiming at the abolition of capitalism in the rich countries may have something to do with the affluence of the workers there might, at first blush, seem equally uncontroversial. After all, as English radical William Cobbett famously challenged in the early nineteenth century, 'I defy you to agitate a fellow with a full stomach'. On the left, however, the idea that the global divide between rich and poor nations has its reflection in the divide between rich and poor workers is very often anathema.
US economist Charles Post is today the leading left theorist concerned with refuting the Marxist concept of the 'labor aristocracy' (2). This term has traditionally come to delineate the most well-off section of the workers of the world constituted through what I shall refer to herein as the global stratification of labor, that is, 'the scaling of radically different wages paid for the same labor in countries of the North and the South' (Amin, 2011). More precisely, the labor aristocracy is that section of the global workforce that is afforded its prosperity in large part by the redistribution of surplus value extracted from non-aristocratic labor. The condition for this redistribution is the labor aristocracy's political rapprochement with capital engaged in the super-exploitation of subject labor in the (neo)colonial countries.
Post has challenged the idea that 'super-profits, derived from either imperialist investment in the global South or corporate monopoly, and shared with a segment of the working class, is the source of enduring working-class racism and conservatism in the United States and other industrialized capitalist societies' (Post, 2010, p. 5). The proposition central to Post's rejection of the labor aristocracy thesis is that the 'existence of a privileged layer of workers who share monopoly super-profits with the capitalist class cannot be empirically verified' (Post, 2010, p. 3). For Post, as opposed to those writers whom he criticizes - Marx and Engels (1955, p. 132), Zinoviev (1984), Lenin (1964, 1970, 1974), and Elbaum and Seltzer (1982, 2004) - 'wage-differentials among workers in the advanced capitalist countries' cannot be explained 'either by Britain's dominance of key-branches of global production in the late-nineteenth century, by profits from investments in the global South, or by the degree of industrial concentration' (Post, 2010, p. 4). As we will see, however, not only is Post wide of the mark in his specific criticisms of the aforementioned authors, his narrow concern with wage differentials inside the imperialist countries misses the most significant economic and political repercussions of global labor stratification.
The following critique of Post's view on the labor aristocracy will proceed according to the order in which he himself has traced the intellectual evolution of the labor aristocracy thesis (3). Beginning with a rebuttal of Post's critique of Marx and Engels, we will go on to take issue with Post's dismissal of the classical Marxist understanding of the concept and his repudiation of the role of oligopoly in determining wage differentials.
(1) The term First World refers to the developed countries of the United States and Canada, Europe (excluding Russia and parts of Eastern Europe), Japan, Israel, Australia and New Zealand. 'Third World' refers to the underdeveloped countries of Asia (excluding Japan and Israel), Africa, 'Latin' America, the Caribbean and Oceania (excluding Australia and New Zealand)
(2) The term was originally coined in 1872 by Russian anarchist Mikhail Bakunin, who criticized the idea, which he attributed to Marxists, that organized workers are the most revolutionary social group (Post, 2006a, 2006b, 2010)
(3) It is important to note that Post misses many of the important contributions made to the theory of global labor stratification by dependency, unequal exchange and world systems theorists both inside and outside of academia. See, for example, Emmanuel (1976); Amin (1976); Sau (1978); Stavrianos (1981); Edwards (1978); Communist Working Group (1986); Sakai (1989); Cope (2012).
#22
brb 30 minutes
#23
marlax78's status is set to away
#24
In Defense of Engels on the Labor Aristocracy
Engels famously argued that there is a material basis for metropolitan workers' social chauvinism, that is their patriotic attachment to a (neo-)colonialist government. In 1882, when asked in a letter by German Socialist Karl Kautsky what the English working class thought of colonialism, Engels replied:
Exactly the same as they think about politics in general, the same as what the bourgeois think. There is no working class party here, there are only Conservatives and Liberal-Radicals, and the workers merrily devour with them the fruits of the British colonial monopoly and of the British monopoly of the world market. (Engels quoted in Lenin, 1969, p. 65)
For Engels, 'opportunism' in the British Labor movement was a result of and is conditioned by the preponderance of two major economic factors, namely, in Lenin's words, 'vast colonial possessions and a monopolist position in world markets' (Lenin 1969, p. 65). As he wrote to Marx in 1858:
The British working class is actually becoming more and more bourgeois, so that this most bourgeois of all nations is apparently aiming ultimately at the possession of a bourgeois aristocracy and a bourgeois proletariat as well as a bourgeoisie. Of course, this is to a certain extent justifiable for a nation which is exploiting the whole world. (Marx & Engels, 1955, p. 132)
Denying the existence of a Victorian-era labor aristocracy, Post (2010, p. 7) defines Marx and Engels' position thus:
Marx and Engels argued that British capitalists accrued higher-than-average profits from their 'industrial monopoly' in the world-market of the mid-nineteenth century. These super-profits allowed British capitalists to recognize the skilled workers' craft-unions and accept their respective apprenticeship-practices, which, in turn, enabled the labor-aristocracy to secure a role in supervising less-skilled workers, higher-than-average wages, and more-secure employment.
Post rejects this picture of embourgeoisement - detached as it is from Marx and Engels' emphasis on the division of labor established by colonialism - by asserting, firstly, that the supervision of unskilled workers by skilled workers was not universal (there being only weak evidence for skilled workers in textiles and mining acting as task masters). Secondly, he claims that 'craft-unions were able to secure stable, year-round employment for all of their members'. In the face of technological advancement and the parallel deskilling of labor, Post asserts, by the end of the nineteenth century it became increasingly difficult for the craft-based unions to maintain traditional restrictions over the training and supply of labor. Thirdly, Post underlines that the alleged ascendancy of the British labor aristocracy in the decades after 1870 actually coincides with the decline of Britain's domination of the world market and the rise of German and US competition. During this period, he argues, wages fell for the entire British working class. Finally, Post writes, 'the profits earned through the export of British machinery divided by the number of skilled metal-workers "would not have amounted to the average weekly wage of an engineer in Manchester in 1871"'. Overall, Post argues that the flexibility provided to the capitalist class by its receipt of super-profits cannot provide an explanation for the growth of the labor aristocracy from the mid-to-late nineteenth century. Rather, he suggests that it was the high productivity and skill levels of workers in certain Victorian industries that account for their high wages (Post, 2010, p. 18).
We may deal with each of these criticisms in turn. Before doing so, however, an important point to note about Post's critique of Marx and Engels is that 'every contemporary political commentator on the phenomenon of the classic, late nineteenth century labor aristocracy not only recognized its existence, but usually predicated part of their political activity on either fostering it (The Liberal Party, Disraeli (with his "one nation" conservatism - ZC), organizing it (the New Model Trade Unions), or fighting its bankrupt political standpoint (the revolutionaries)' (Clough, 1993). Clough considers in this regard the example of the Reform League, a British lobby set up in 1865 under the primary auspices of the First Working Men's International to agitate for universal male suffrage and a secret ballot. Its central committee was made up of six middle class Liberals and six workers. However, despite the efforts of Marx and others, the workers in the organization quickly gave in to the Liberals' pressure to qualify the demand for universal male suffrage to those men of a certain 'registered and residential' position. This property qualification quite explicitly excluded the mass of workers engaged in unskilled or casual labor from electoral representation. In fact, the new voting system agreed to by the Reform League was introduced in 1868 by Tory Prime Minister Benjamin Disraeli in the clear understanding that the one in five workers it enfranchised would use their votes 'moderately' (ibid). In the general election the same year, the Liberal Party attempted to garner the support of the enfranchised upper stratum of English workers by paying them £10 a head to canvass for the Liberals. In response to the blatant bribery nurturing reformism within England's labor elite, Marx wrote:
The Trade unions are an aristocratic minority - the poor workers cannot belong to them: the great mass of workers whom economic development is driving from the countryside into the towns every day has long been outside the trades unions - and the most wretched mass has never belonged; the same goes for the workers born in the East End in London; one in 10 belongs to the Trade Unions - peasants, day laborers never belong to these societies… The Trade Unions can do nothing by themselves - they will remain a minority - they have no power over the mass of proletarians. (Marx & Engels, 1996, p. 614)
Moreover, Marx found to his chagrin that the leaders of the English working class were unwilling to lend the necessary political support to the Irish independence struggle being conducted by the Fenian movement of the time or even to the more distant Communards of Paris in 1871. It was the distinctly bourgeois politics of the burgeoning British labor aristocracy that finally convinced Marx (Marx & Engels, 1996a) that the overthrow of British capitalism depended, first and foremost, on the liberation of its colonies, in particular, its Irish one.
For a long time, I believed that it would be possible to overthrow the Irish regime through English working class ascendancy… . Deeper study has convinced me of the opposite. The English working class will never accomplish anything before it has got rid of Ireland … . The lever must be applied in Ireland.
Not only does Post show complete disregard for the evident realities of British politics in the nineteenth century, but his attempt to define the Victorian labor aristocracy out of existence is similarly quixotic. Post is certainly correct that the position of the labor aristocracy was, and is, precarious and in flux. Indeed, as reflected in hidebound theory, it has been a recurrent weakness of the Marxian position on the labor aristocracy to assume that what Marx, Engels and Lenin sometimes suggested in their fragmentary and century-old analyses were its major characteristics, in particular, its being a thin upper stratum of highly skilled and organized male labor in any given nation, must remain unchanged. In fact, application of the Marxist method demonstrates how the evolution of the labor aristocracy is intrinsically bound up with the historical development of the class struggle as waged internationally, in particular, with the increasing incorporation of super-exploitation into the circuit of capital.
After the depression of 1873, the restructuring of capitalist production signaled the rise of trusts, cartels, syndicates and industrial oligopolies, first in Germany and the United States and then in 'free trade' England and other capitalist nations (Nabudere, 1979, p. 21). By 1880, Britain's unique position as the 'workshop of the world' was being effectively challenged. Thus, while world industrial production increased seven times between 1860 and 1913, British production increased only three times and French production four times as against Germany's seven times, and the United States' twelve times (Stavrianos, 1981, p. 259). Bolstered by the second industrial revolution, Fordist production techniques and state capitalist intervention in the economy, the core capitalist nations sought to use their unprecedented power for imperial expansion. Amin demonstrates that it was during this period that unequal exchange resulting from a global disparity between the rewards of labor (at equal productivity) began to assume increased importance to the capitalist cycle. Between 1880 and 1930, imperialist capital obtained a higher output in the colonized countries by establishing modern facilities and intensifying the exploitation of low-wage labor power there (Amin, 1976, p. 131).
In its own heartlands, as Post highlights, the expanded mechanization of capitalist production displaced the traditional autonomy and organizational hegemony of craft union-based early-to-mid-Victorian labor aristocracy. At this time, labor organization became much broader and more anti-capitalist than it had been previously. However, Post obscures the extent to which capitalism has historically allowed for divisions within the working class to be reformed and recreated in new ways by those groups within it with the necessary sway to influence its development. As such, far from straightforwardly leading to the 'radical decline' of the traditional organizations of the labor aristocracy, the 'technological transformation of the labor-process' (Post, 2010, p. 16) in the mid-to-late nineteenth century established the basis for new forms of skilled labor and narrow craft organization. Thus, Gray (1981, p. 32) writes:
Attempts to rationalize production were limited by the strength of skilled labor, market conditions and the absence of managerial experience; the prospectuses of inventors and entrepreneurs might promise to eliminate independent and willful skilled men, what actually happened as machinery was introduced is another matter. To accept areas of craft control over production could also appear a more viable strategy than grandiose schemes of rationalization, especially with the limited character of managerial technique… . Although skill is partly a question of bargaining power and cultural attitudes, there were few if any groups of skilled workers whose position did not involve control of some specialized technique indispensable to their employers - that control was indeed the basis of their bargaining power.
Similarly, Davis (1986, pp. 42-43) shows how, in the United States, a corporate assault on the power of skilled labor beginning at the end of the nineteenth century 'broke the power of craftsmen and diluted their skills' but 'carefully avoided "levelling" them into the ranks of the semiskilled' through according them significant economic benefits and cultivating new social norms.
As the number of organized craft workers acting as piece masters and subcontractors dwindled relative to the increasing size of the workforce, the coalition upon which what Hobsbawm (1951, p. 326) has called 'the Liberal-Radical phase of parliamentarism' also declined. Moreover, the extension of the franchise brought the looming prospect of the popular majority voting against the propertied interest. Thus, there began a concerted effort by the British rulers to kill the working class party with kindness, that is in the words of British constitutionalist Sir Walter Bagehot, to 'willingly concede every claim which they can safely concede in order that they may not have to concede unwillingly some claim which would impair the safety of the country' (Bagehot, 2001, p. 202). With this imperative to the fore, between 1907 and 1911, the British government introduced a series of welfare reforms (most notably the Liberal government's 1909 Finance Bill, the so-called People's Budget, and the 1911 National Insurance Act) that delivered real benefits to the British working class, benefits decidedly denied to the indigenous subjects of Britain's overseas Empire.
The periodic unemployment and short-ranging mobility of workers in the late nineteenth century, contrary to Post, do not make it impossible to identify a body of relatively privileged workers. For example, whilst painters were a low-paid and casualized trade, 'joiners, bricklayers and masons, despite vulnerability to seasonal unemployment, often appear in the better-paid and more secure section of the working class' (Gray, 1981, p. 23). Clough (1992, p. 19) notes that, on average, unemployment was three times higher for the unskilled than for the skilled worker. Although there were both continuities and discontinuities within the labor aristocracy - based on geography, ideology, gender and ethnicity - there is no doubt that British trade and industry in the mid-to-late nineteenth century was characterized by specific groups of workers having divergent levels of pay, economic security and measures of control in the immediate work situation (Gray, 1981). It was these better-off workers who furnished the support base and leadership of the British trade union movement of the time. In 1885, Engels (1977) wrote:
The great Trade Unions are the organizations of those trades in which the labor of grown-up men predominates, or is alone applicable. Here the competition neither of women or children nor of machinery has so far weakened their organized strength. The engineers, the carpenters and joiners, the bricklayers are each of them a power to the extent that as in the case of the bricklayers and bricklayers' laborers, they can even successfully resist the introduction of machinery… They form an aristocracy among the working class; they have succeeded in enforcing for themselves a relatively comfortable position, and they accept it as final. They are the model workingmen of Messrs Leone Levi and Giffen, and they are very nice people nowadays to deal with, for any sensible capitalist in particular and for the whole capitalist class in general.
How was the economic welfare and conservative political conformity of this most 'aristocratic' section of the working class afforded? Quite straightforwardly, the economic and political benefits accruing to the skilled working class of Victorian England were directly attributable to their exceptional position in the international division of labor at the time, that is to British colonial imperialism.
If we look at the sectors where skilled workers and their organization were strongest, we find them to be closely connected to Empire: textiles, iron and steel, engineering and coal. Textiles because of the cheap cotton from Egypt, and a captive market in India; iron and steel because of ship-building and railway exports, engineering because of the imperialist arms industry, and coal because of the demands of Britain's monopoly of world shipping. In a myriad of different ways, the conditions of the labor aristocracy were bound up with the maintenance of British imperialism. And this fact was bound to be reflected in their political standpoint. (Clough, 1993)
Post's apolitical and narrowly national explanation of the aristocratic traits of the leading craft-unions thus ignores their basis in Britain's global ascendancy. For it was not simply its skills, its productivity or the forms of its industrial organizations which afforded the upper stratum of British labor its middle class privileges, but its centripetal position in the labor markets and political apparatus established through imperialism.
Post's claim of an increasing immiseration for British workers in the last quarter of the nineteenth century is also open to challenge. In fact, during this period, as a corollary to vastly improved transportation, increased primary goods exports and super-exploitative conditions in colonial markets, the wages of Britain's domestic working class improved. Thus, wages measured against prices rose by 26% in the 1870s, 21% in the 1880s, while slowing down to 11% in the 1890s (Clough, 1992, p. 19; Halevy, 1939, p. 133). Certainly, much of these improved circumstances disproportionately benefited the skilled upper stratum of workers, the labor aristocracy of the time. This subset of the British workforce earned perhaps double that of its unskilled counterpart, a large proportion of which was barely able to feed its families. Indeed, a study by Liberal economic theorist Sir Leo Chiozza-Money in 1905, Riches and Poverty, found that out of a British population of 43 million, 33 million lived in poverty and 13 million in destitution (cited in Clough, 1992, p. 20). Yet even within the latter group, there were important gradations of income unconducive to working class unity. Halevy (1939, p. 133) highlights how the benefits of colonialism came not to be restricted only to a small section of British workers:
The fall in… current prices resulting from British monopoly capital's colonial trade had enabled a very large body to come into existence among the British proletariat, able to keep up a standard of living almost identical with that of the middle class. The self-respecting workman in the North of England wanted to own his own cottage and garden, in Lancashire his piano. His life was insured. If he shared the common English failing and was a gambler, prone to bet too highly on horses… the rapid growth of savings banks proved that he was nevertheless learning the prudence of the middle class.
The phenomenon of falling prices bringing middle class living standards, and hence, middle class aspirations to metropolitan workers was noted as early as 1903 by US sociologist and economist Thorstein Veblen:
The workers do not seek to displace their managers; they seem to emulate them. They themselves acquiesce in the general judgment that the work they do is somehow less 'dignified' than the work of their masters, and their goal is not to rid themselves of a superior class but to climb up to it. (cf. Heilbroner, 1980, pp. 230-231)
At the dawn of the imperialist era, super-profits generated by imperialism trickled down to the broad urban masses of the advanced countries, stimulating new needs therein, including
soap, margarine, chocolate, cocoa and rubber tires for bicycles. All of these commodities required large-scale imports from tropical regions, which in turn necessitated local infrastructures of harbors, railways, steamers, trucks, warehouses, machinery and telegraph and postal systems. Such infrastructures required order and security to ensure adequate dividends to shareholders. Hence the clamor for annexation if local conflicts disrupted the flow of trade, or if a neighboring colonial power threatened to expand. (Stavrianos, 1981, p. 262)
Clearly, as Stavrianos suggests, and given the very public promotion of social-imperialist doctrines and practices, if the economy provided jobs, rising living standards and a strong sense of national identity to the citizens of the colonial powers, those citizens were not likely to accept passively rival countries' disrupting the flow of super-profits; hence the aforementioned clamor for annexation. The clamor was, of course, amplified to a deafening din by imperialist politicians and the ideological state apparatuses, then as today (Cope, 2012, p. 105; Diamond, 2006; Mackenzie, 1987; Schneider, 1982).
Post's claim of falling wages for the entire British working class in the last quarter of the nineteenth century is fallacious. Although wages were a diminishing portion of national income, they improved in real terms for the British working class, especially for its skilled, unionized members (Stavrianos, 1981, pp. 266-267).
Whether the real wages of the British working class rose or fell during the early years of the Industrial Revolution in the late 18th and early 19th centuries remains a disputed issue. A definitive answer is difficult because the large-scale urbanization accompanying industrialization altered the structure of worker consumption, as, for example, by the introduction of rent for lodging. But there is no question about the steady rise of real wages in the second half of the 19th century. The following figures show that between 1850 and 1913 real wages in Britain and France almost doubled. (4)
It may be argued that the rising purchasing power of wages depicted here merely indicates that British workers were receiving some of the benefits from the increased productivity of domestic labor employed in those industries producing workers' consumption goods (Table 1). Rising British wages are in this regard perfectly consistent with an increased domestic rate of surplus value or exploitation (this being the ratio of the surplus labor time workers expend to the necessary labor time required to produce their consumption goods). Yet it must be understood that greater productivity in industries producing workers' consumption goods may come from two distinct sources. First, it may be the result of their more intensive exploitation, that is, of their being paid less absolutely to activate the same materialized composition of capital. Second, it may result from their being paid proportionately less to activate a greater materialized composition of capital. Scientific and technical improvements lead to cheapened production costs for workers' consumption goods and, hence, a decrease in necessary as opposed to surplus labor. Capitalists will introduce new technological advances to the production process if the amount of labor expended on producing labor-saving machinery is less than the amount of labor displaced by its introduction.
Mechanization, however, involves substituting dead labor for living (value-creating) labor and hence constitutes a growing restriction on the rate of surplus value and the rate of profit. As such, capitalists must strive to increase productivity without proportionate wage increases. Nonetheless, if British workers were wholly responsible for producing their own consumption goods, it could properly be said that rising British wages in the Victorian era represented returns to British labor according to increased domestic exploitation, possibly as forced upon capitalists by working class militancy. This explanation for rising British wages, however, ignores the extent to which they were, in fact, afforded by an increase in the proportion of workers' consumption goods produced by colonial labor.
Between 1870 and 1913, merchandise imports to Britain increased from £279 million to £719 million, and with them the country's trade deficit from £33 million to £82 million (Clough, 1992, p. 18; Mitchell & Deane, 1962, pp. 828-829, 872-873). As Patnaik notes, the rising consumption of sugar, beverages, rice, cotton and wheat by West Europeans at this time depended heavily on the unpaid import surpluses from colonial countries (Patnaik, 1999). Thus, although the outsourcing of the production of workers' consumption goods to oppressed nations occurred on a much smaller scale during the last three decades of the nineteenth century than it has during recent times, the rising real wage of British workers at that time is in no small measure attributable to their receipt of the colonial loot. A primary reason why nineteenth century British wages fell relative to gross domestic product (GDP) but rose in terms of purchasing power is that value was being transferred from colonial societies wherein the (then largely rural) workforce was on the losing side of the international class struggle.
Whilst most left theorists have for a long time fallen into the habit of gauging exploitation on a national(ist) basis, commonly examining wages in relation to profits in the rich countries (and thereby 'proving' that the most exploited workers in the world are those of the developed nations), in the context of global imperialism, value creation and distribution must be examined as an international process.
As Smith correctly argues, 'GDP, which claims to be a measure of the wealth produced in a nation, is in reality a measure of the wealth captured by a nation' (Smith, 2010). As such, GDP is expanded by surplus value extracted from workers in low-wage countries and is not a valid measure of 'gross domestic product', since it may rise or decline independently of (domestic) labor's share of it. Commodities produced by low-wage workers in labor-intensive export industries obtain correspondingly low prices internationally. However, as soon as these goods enter imperialist-country markets, their prices are multiplied several fold, sometimes by as much as 1,000%. As Chossudovsky notes, 'value added' is thus 'artificially created within the services economy of the rich countries without any material production taking place' (Chossudovsky, 2003, p. 80). Jedlicki (2007), meanwhile, observes that 'value added' already incorporates those wage and capital differentials that some Western socialists aim to justify in the name of superior First World 'productivity'. In doing so, 'a demonstration is carried out by using as proof what constitutes, precisely, the object of demonstration'.
Post (2010, p. 24) observes that 'in the United States today, real wages for both union and non-union workers have fallen, and are about 11% below their 1973 level, despite strong growth beginning in the mid 1980s'. By measuring wages against GDP figures and reported profits, Post intends to convince his readership that the living standards of the US working class have been declining and that a renewed offensive against capital would entitle them to a greater share of the wealth they ostensibly create. However, there are at least two problems with the idea that US wages have fallen.
Firstly, whilst wages in the United States have indeed fallen since 1973 as a proportionate share of GDP, in real terms the poor in that country were better off in 1999 than they were in 1975. For example, Cox and Alm (1999) show that whereas in 1971 31.8% of all US households had air-conditioners, in 1994 49.6% of households below the poverty line had air-conditioners. These authors also demonstrate that the US poor in 1999 had more refrigerators, dishwashers, clothes dryers, microwaves, televisions, college educations and personal computers than they did in 1971. Wages decidedly did not shrink, then, relative to the prices of these items.
Secondly, US economists Meyer and Sullivan (2011) have constructed a measure of consumption which challenges mainstream assessments of declining US living standards. First, they note that income-based analyses of economic well-being in the United States do not reflect the full range of available household consumption resources such as, for example, food stamps or lessened marginal tax rates. Second, they demonstrate that official statistics account for inflation using a price index which embodies a cumulative upward bias arising from substitution bias, outlet bias, quality bias and new-product bias. Third, official government income measures fail to reflect important components of economic well-being such as consumed wealth, the ownership of durables such as houses and cars, or the insurance value of government programs. Thus, a retired couple who own their own home and live off savings, for example, are income-poor but may still be materially well-off. Taking into account the flawed methodologies of official reports on declining US household income, the authors construct a very different picture of US living standards:
Our results show evidence of considerable improvement in material well-being for both the middle class and the poor in the US over the past three decades. Median income and consumption both rose by more than 50 percent in real terms between 1980 and 2009. In addition, the middle 20 percent of the income distribution experienced noticeable improvements in housing characteristics: living units became bigger and much more likely to have air conditioning and other features. The quality of the cars these families own also improved considerably. Similarly, we find strong evidence of improvements in the material well-being of poor families. After incorporating taxes and noncash benefits and adjusting for bias in standard price indices, we show that the tenth percentile of the income distribution grew by 44 percent between 1980 and 2009. Even this measure, however, understates improvements at the bottom. The tenth percentile of the consumption distribution grew by 54 percent during this period. In addition, for those in the bottom income quintile, living units became bigger, and the fraction with any air conditioning doubled. The share of households with amenities such as dishwasher or clothes dryer also rose noticeably.
Nor, indeed, did US incomes decline relative to the costs of those items necessary to the reproduction of the worker as such (the 'value of labor-power' in Marxist terms). Thus, between 1970 and 1997, the real price of a food basket containing one pound of ground beef, one dozen eggs, three pounds of tomatoes, one dozen oranges, one pound of coffee, one pound of beans, half a gallon of milk, five pounds of sugar, one pound of bacon, one pound of lettuce, one pound of onions and one pound of bread fell so that it took 26% less of the worker's time to buy it (ibid., pp. 40-41).
It may be argued that several of these items are almost exclusively produced within the United States and that, therefore, it is the increased productivity of US agriculture that accounts for the relative cheapness of these goods over time. Certainly, tomatoes, oranges, carrots, onions, milk, bread and other foodstuffs are produced in great quantities within US borders. However, it must be understood that US agricultural production is heavily subsidized by the government. Indeed, half of the value of all Organization for Economic Co-operation and Development (OECD) agriculture, according to OECD estimates, consists of government subsidies (Patnaik, 2007, p. 44). As she explains:
Since these are rich industrial countries where the farm sector employs less than 5 percent of full time workers and correspondingly contributes 4 percent or less to GDP, they can easily afford to give budgetary support to the extent of 2-3 percent of GDP, which amounts to half or more of the total value of agricultural output. In India, where agriculture employs two thirds of the workers and contributes over a quarter of GDP, a similar order of support would not be possible even if every single rupee of central government revenues went to agriculture alone. (ibid., p. 43)
Second, Patnaik (2007, p. 25) notes that as much as 60-70% of Northern food items have tropical or sub-tropical import content. Finally, the developed world's investment in agriculture, including in the fossil fuel, chemical and machine production which facilitates its great productivity, is in part made possible by the economic buoyancy guaranteed by the import of large quantities of surplus value from the underdeveloped world (Cope, 2012). More generally, it is the globalization of production which plays the major role in cheapening the costs of the reproduction of labor power in the developed countries and, hence, in producing the apparent surfeit of surplus labor performed by production workers therein.
According to the International Monetary Fund, although the OECD's labor share of GDP decreased, the globalization of labor in the last three decades, 'as manifested in cheaper imports in advanced economies', has increased the 'size of the pie' to be shared amongst citizens there, yielding a net gain in total workers' real compensation (IMF, 2007, p. 179). Smith (2008, pp. 10-11) notes that
WEO 2007 estimates that between 1980 and 2003, real, terms-of-trade adjusted wages of unskilled workers (defined as those with less than university-level education) in the US increased by 14%, and that around half of this improvement resulted from falling prices of imported consumer goods… Broda and Romalis (2008) calculate that 4/5 of the total inflation-lowering effect of cheap imports is accounted for by cheap Chinese imports, these having risen during the decade of 1994 to 2004 from 6% to 17% of all US imports, and that the "rise of Chinese trade… alone can offset around a third of the rise in official inequality we have seen over this period". (5)
In the United Kingdom, declines in the cost of living during the past decade are similarly attributable to trade with China (6). The important point to note here is that a fall in wages relative to GDP does not by itself account for the purchasing power of said wage, nor, crucially, need it compensate for the transferred surplus value (super-profits) inhering in the average OECD wage.
To return to Post's critique of Marx and Engels, the author goes awry in claiming that the American and German challenge to Britain's monopolistic position on the world market could only have led to lower standards of living for British workers. It is true that British capital's preeminence was profoundly challenged by the rise of monopoly capitalism in Germany and the United States between the 1870s and World War I (WWI). Furthermore, as Hobsbawm notes, the effective end of Britain's industrial monopoly eroded those 'economic devices which created a satisfied "aristocracy of labor"… automatically (that is, without the deliberate adoption of reformist policies)' (Hobsbawm, 1951, p. 328). However, British capitalism's inherent need to expand remained undiminished. On the contrary, to better compete with its imperialist rivals, Britain escalated its extraction of surplus labor embodied in colonial foods and raw materials but, crucially, never paid for in colonial wages. In doing so, Britain was able to supplement the consumption of its own workforce, still at that time exploited in the main, at the expense of that in the colonized nations. By what means did British colonialism drain surplus from the colonial world?
State-guaranteed colonial investments made through qualified solicitors and bankers (largely self-financed in India, where exports exceeded imports by some £4 million per year in the 1850s) had steadily increased throughout the 'classical' era of capitalism, so that by 1870 36% of British overseas capital was in the Empire, alongside half the annual flow (Barratt Brown, 1974, pp. 133-138). Later, Britain increased its level of foreign investment by an average of £660 million every decade between 1870 and the outbreak of WWI (Nabudere, 1979, p. 64). Its net annual foreign investment between 1870 and 1914 was a then unprecedented one-third of its capital accumulation and 15% of the total wealth of its Empire (cf. Edelstein, 1981, pp. 70-72; Hehn, 2002, p. 135). According to Elsenhans, the percentage of total capital exported to the world economy's periphery up to 1914 was as follows: Britain, 37.9%; France, 34.5%; Germany, 31.1%; and the United States, 54% (Elsenhans, 1983; cf. Feis, 1930, pp. 23, 46, 70; Woodruff, 1975, p. 340). Later, in the highly protectionist interwar period, when nearly half of Britain's trade was with its dominions and colonies and one-third of France's exports went to its colonies (Hehn, 2002, p. 145), the imperial powers (not including a Germany stripped of her colonies) could use super-profits to purchase social peace.
Overseas investment also greatly facilitated Britain's merchandise exports. The £600 million invested in overseas railway building between 1907 and 1914, for example, created a captive market for iron, steel and rolling stock. It also worked to cheapen the (transportation) costs of food and raw materials (Clough, 1993, p. 17), thus reducing the costs of British constant and variable capital and buoying profit rates (7). Moreover, enforced bilateral 'trade' with the colonies financed much of this capital export. The core nations of Europe and North America increased their purchase of raw materials and foodstuffs from the oppressed nations in the decades before WWI, maintaining a constant excess of merchandise imports over exports (Frank, 1979, p. 190). By 1928, Europe had a net export deficit of US$2.9 billion, partly offset by the colonial world's merchandise export surplus of US$1.5 billion.
In 1913, the British government exported merchandise valued at £635 million and had imports totaling £769 million. In addition it imported gold worth £24 million and thus had an import surplus of £158 million in the movement of merchandise and gold. To offset this deficit, the British had invisible items totaling £129 million (earnings of the merchant marine £94 million, traders' commissions £25 million, other earnings £10 million). The British thus would have a deficit of £29 million, except for interest and dividends from their investments abroad, which amounted to £210 million. Addition of this item to the other 'invisible' exports reversed the balance of payments in favor of the United Kingdom, giving it a net surplus of £181 million. Theoretically, the British could take this balance in increased imports of merchandise and still have the balance of payments in equilibrium. Actually, they left the whole net balance abroad as new investment. In fact, in 1913, London advanced to colonial and foreign concerns long-term loans for £198 million - almost exactly the amount of the current profits from investments abroad. (Woytinsky & Woytinsky, 1955, p. 199)
Effectively, then, British imperialism's trade deficits with the colonies financed much of its overseas capital investment. British re-investment in foreign and colonial ventures of nearly £200 million in 1913 may thus be compared to its export deficit and import surplus of £158 million in the same year, representing pure profit of which India alone contributed two-fifths (Frank, 1979, pp. 192-193).
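The 1913 figures quoted from Woytinsky and Woytinsky above are internally consistent, as a quick sketch confirms (all amounts in £ millions, taken directly from the quotation):

```python
# UK balance of payments, 1913, in £ millions (figures as quoted above)
exports = 635
imports = 769
gold_imports = 24
import_surplus = imports + gold_imports - exports            # 158

invisibles = 94 + 25 + 10                                    # shipping, commissions, other = 129
deficit_before_dividends = import_surplus - invisibles       # 29

overseas_dividends = 210                                     # interest and dividends from abroad
net_surplus = overseas_dividends - deficit_before_dividends  # 181

new_overseas_loans = 198  # almost exactly the current profits from foreign investment
print(import_surplus, deficit_before_dividends, net_surplus)
```

Every figure in the quotation checks out, including the near-identity between the £210 million of investment income and the £198 million of new long-term lending.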
These sums may also be compared with the profit required to subvent the labor aristocracy. Let us assume that Britain's 1.5 million unionized workers in 1892, representing 11% of all British workers in trade and industry, constituted the core of the labor aristocracy of the time (with the very partial exception of miners, unskilled unions were then negligible) (Clough, 1992, p. 20). Skilled workers in 1900 could expect an average weekly wage of 40s (£104 annually). Since they earned almost double the wage of unskilled workers, we will take the 'excess' annual wage of the labor aristocracy to amount to £52, giving a total 'excess' wage bill for the group of £78 million per annum. Britain's trade deficit with India alone, at £59.2 million in 1913 (Frank, 1979, pp. 192-193), could thus account for at least three quarters of this total. Post errs, then, in examining profits from foreign investments and machinery exports as the sole measure of British parasitism. More crucially, his narrow focus on profit levels is indicative of his glaring indifference to the extraction of surplus value, that is, to exploitation per se.
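The estimate above is easy to reproduce; the sketch below simply restates the text's own assumptions (1.5 million unionized workers, a 40s weekly skilled wage, an 'excess' of £52 per year) in code.

```python
# Estimated annual 'excess' wage bill of Britain's labor aristocracy, c. 1900
unionized_workers = 1_500_000
skilled_weekly_wage_shillings = 40             # 40s per week
shillings_per_pound = 20
skilled_annual_wage = skilled_weekly_wage_shillings * 52 / shillings_per_pound  # £104

excess_annual_wage = 52                        # the premium over the unskilled wage
excess_wage_bill = unionized_workers * excess_annual_wage   # £78 million

india_trade_deficit_1913 = 59_200_000          # £59.2 million (Frank, 1979)
share = india_trade_deficit_1913 / excess_wage_bill         # roughly three quarters
print(skilled_annual_wage, excess_wage_bill, round(share, 2))
```

The Indian trade deficit alone thus covers about 76% of the estimated 'excess' wage bill, matching the 'at least three quarters' claim.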
According to Marx, during the time they are employed, production workers spend part of their day reproducing the value of the goods necessary to their own reproduction, that is, the cost of their own labor power (or variable capital). Marx calls this necessary labor. For the rest of the working day, these workers produce value exceeding that of their labor power, what Marx called surplus value (the combined value of gross domestic investment, the non-productive or service sector and profits). The rate of surplus value (or of exploitation) is the ratio of surplus labor to the necessary labor or of surplus value to the value of variable capital. Fundamentally, however, capitalists are not interested in creating surplus value, but in generating profit. Profit, as the unpaid labor time of the worker appropriated by the capitalist as measured against total capital invested, must be properly distinguished from surplus value. In bourgeois accounting terms, profit is simply the excess of sales revenue over the cost of producing the goods sold.
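The distinction drawn above can be illustrated with arbitrary figures: writing c for constant capital, v for variable capital and s for surplus value, the rate of surplus value is s/v, while the rate of profit measures the same s against the total capital advanced, s/(c + v).

```python
# Rate of surplus value vs. rate of profit (illustrative figures only)
c = 300   # constant capital: machinery, raw materials
v = 100   # variable capital: the wage bill
s = 100   # surplus value: unpaid labor time appropriated by the capitalist

rate_of_surplus_value = s / v      # 1.0, i.e. 100%: half the working day is surplus labor
rate_of_profit = s / (c + v)       # 0.25: the same s spread over the whole capital advanced
print(rate_of_surplus_value, rate_of_profit)
```

Note that the two ratios can move independently: raising c leaves the rate of surplus value untouched while depressing the rate of profit, which is precisely why the two must not be conflated.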
Thus, the price of production of a commodity does not directly correspond to its value within a single industry or group of industries (Marx, 1977b, pp. 758-759). Rather, as capital is withdrawn from industries with low rates of profit and invested in those with higher rates, output and supply in the former decline and prices rise above the actual sums of value and surplus value the industry produces, and conversely. As a result, competing capitals using different magnitudes of value-creating labor ultimately sell commodities at average prices, and surplus value is distributed more or less uniformly across the branches of production. An average rate of profit is formed by competing capitals' continuous search for higher rates of profit and the flight of capital to and from those industrial sectors producing commodities in high or low demand. Overall, where one commodity sells for less than its value, there is a corresponding sale of another commodity for more than its value.
This equalization of profit rates under capitalism ensures that surplus value does not necessarily adhere to the particular industry (or territory, given international restrictions on the mobility of capital and/or labor) in which it was created. Instead, surplus value is transferred from those industries (or territories) providing less socially necessary labor to those providing more. Thus, even branches of production which enjoy the same rate of exploitation, that is, the same underpayment of the workforce for the value produced by its labor, will have different rates of profit depending upon the organic composition of capital involved in the production process (8). Capitals equal in size yield profits equal in size, no matter where investment is made or how the capital is divided between constant and variable capital (or, indeed, between capitalists and workers).
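The transfer mechanism described above can be made concrete with a small sketch. The two branches and their figures are hypothetical: both share a 100% rate of exploitation, but selling at the common price of production shifts surplus value from the labor-intensive branch to the capital-intensive one.

```python
# Two branches with equal total capitals (100 each) and the same rate of
# exploitation (s/v = 100%), but different organic compositions of capital (c/v).
branches = {
    "A (capital-intensive)": {"c": 80, "v": 20},
    "B (labor-intensive)":   {"c": 60, "v": 40},
}

# At a 100% rate of exploitation, surplus value s equals variable capital v.
total_capital = sum(b["c"] + b["v"] for b in branches.values())   # 200
total_surplus = sum(b["v"] for b in branches.values())            # 60
average_profit_rate = total_surplus / total_capital               # 0.30

results = {}
for name, b in branches.items():
    value = b["c"] + b["v"] + b["v"]                              # c + v + s
    price_of_production = (b["c"] + b["v"]) * (1 + average_profit_rate)
    results[name] = price_of_production - value                   # surplus value gained or lost
    print(f"{name}: value={value}, price of production={price_of_production:.0f}, "
          f"transfer={results[name]:+.0f}")
```

Both branches sell at 130, so the sum of prices (260) equals the sum of values (260): equalization redistributes surplus value (here, 10 units from B to A) without creating any.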
As Marx (1977a, p. 238) recognized, though framing the matter purely in terms of divergent international 'productivity' levels, super-profits derived from foreign trade enter into the rate of profit as such:
Capitals invested in foreign trade can yield a higher rate of profit, because, in the first place, there is competition with commodities produced in other countries with inferior production facilities, so that the more advanced country sells its goods above their value even though cheaper than the competing countries. In so far as the labor of the more advanced country is here realized as labor of a higher specific weight, the rate of profit rises, because labor which has not been paid as being of a higher quality is sold as such. The same may obtain in relation to the country, to which commodities are exported and to that from which commodities are imported; namely, the latter may offer more materialized labor in kind than it receives, and yet thereby receive commodities cheaper than it could produce them. Just as a manufacturer who employs a new invention before it becomes generally used, undersells his competitors and yet sells his commodity above its individual value, that is, realizes the specifically higher productiveness of the labor he employs as surplus-labor. He thus secures a surplus-profit. As concerns capitals invested in colonies, etc., on the other hand, they may yield higher rates of profit for the simple reason that the rate of profit is higher there due to backward development, and likewise the exploitation of labor, because of the use of slaves, coolies, etc. Why should not these higher rates of profit, realized by capitals invested in certain lines and sent home by them, enter into the equalization of the general rate of profit and thus tend, pro tanto, to raise it, unless it is the monopolies that stand in the way. There is so much less reason for it, since these spheres of investment of capital are subject to the laws of free competition. (Cope's emphasis)
My own definition of super-profits, accounting for global divergences in the rate of exploitation at equivalent levels of productivity, is the extra or above-average surplus value the metropolitan capitalist countries extort from workers in colonial or neocolonial countries by means of capital export imperialism, debt servitude and unequal exchange (Cope, 2012).
(4) Ibid. See 'Measuring Worth: Exchange Rates between the United States Dollar and Forty-one Currencies'. In 1900, the average real wage of Russian agricultural day workers, building, factory and railroad workers - the latter category paid almost twice as much as the previous two - was 251 rubles (Allen, 2003, pp. 38-42), or 49.5% of the average French real wage. Russian wages were virtually constant throughout the period of Russia's industrial capitalist boom (c. 1861-1913) and Russian workers, unlike their British, French and German counterparts, 'did not receive rising incomes in step with the economic growth of the country' (ibid., p. 37). Alongside miserable wages, another factor helping to explain the relatively militant ethos of Russian labor in the pre-war period is its higher socialization. In comparison to German workers, 70% of whom in 1895 were employed in enterprises of 50 workers or fewer (Bernstein, 1961), nearly 50% of Russian workers worked in enterprises with over 1,000 employees. Fully 83% of the Russian population was engaged in agriculture, as compared to 23.8% of Germans, in the immediate pre-war period (Kemp, 1985, p. 191).
(5) The IMF calculation was made by deflating nominal wages by the rate of inflation as reported by the official consumer price index (CPI).
(6) '"Made-in-China" helps make rich countries richer' in People's Daily, China, August 20, 2005.
(7) By 1925, the Caribbean, South Africa, Asia and Oceania (furnishing about 73% of colonial produce), produced some 54-60% of all oil seeds, 50% of all textiles, 34-35% of all cereals and other foodstuffs, 100% of rubber, 24-28% of all fertilizers and chemicals, and 17% of all cereals alone (an average increase of 137% of 1913 levels of raw material production) (Krooth, 1980, pp. 84-85).
(8) The term 'organic composition of capital' refers to the ratio between constant and variable capital, that is, of the technology and raw materials utilized in the labor process to the value-creating labor power which sets them in motion. A greater ratio between capital outlay and wages may result from an increased materialized composition of capital (i.e. fixed capital costs) or from a diminishing share of wages in total capital outlay. Whereas in the first case the rate of profit is threatened by a diminution of the living labor involved in production, in the second case it may be buoyed by the diminution of necessary labor costs (rising surplus value). The latter is typically the result of the greater productivity of labor in those industries producing workers' subsistence goods. In the capitalist world system, reduced labor costs have always been associated with the extortion of subordinate peasantries and the (related) super-exploitation of dependent wage labor.
In Defense of Lenin on the Labor Aristocracy
For Lenin, Zinoviev and the Bolsheviks, super-exploitation (the lower than average return to nationally oppressed wage labor, often at levels insufficient for workers' households to reproduce their labor power) generates super-profits which may be used to supplement the 'wages' of core-nation workers. According to Lenin (1970), it is not only capitalists who benefit from imperialism:
The export of capital, one of the most essential economic bases of imperialism, still more completely isolates the rentiers from production and sets the seal of parasitism on the whole country that lives by exploiting the labor of several overseas countries and colonies. (Cope's emphasis)
Super-profits derived from imperialism allow the globally predominant bourgeoisie to pay inflated wages to sections of the proletariat, sections who thus derive a material stake in the preservation of the capitalist system.
In all the civilized, advanced countries the bourgeoisie rob - either by colonial oppression or by financially extracting "gain" from formally independent weak countries - they rob a population many times larger than that of "their own" country. This is the economic factor that enables the imperialist bourgeoisie to obtain super-profits, part of which is used to bribe the top section of the proletariat and convert it into a reformist, opportunist petty bourgeoisie that fears revolution. (Lenin, 1963, p. 443)
Although not articulated as such by any of the writers Post criticizes, there are several pressing reasons why the haute-bourgeoisie in command of the heights of the global capitalist economy engages in such 'bribery' (sic), even where it is not forced to by militant trade union struggle within the metropoles. Economically, the embourgeoisement of First World workers has provided oligopolies - that is, those few giant firms dominating key industries - with the secure and thriving consumer markets necessary to capital's expanded reproduction. Politically, the stability of pro-imperialist polities with a working class majority is of paramount concern to cautious investors and their representatives in government. Militarily, a pliant and/or quiescent workforce furnishes both the national chauvinist personnel required to enforce global hegemony and a secure base from which to launch the subjugation of Third World territories. Finally, ideologically, the lifestyles and cultural mores enjoyed by most First World workers signify to the Third World not what benefits imperialism brings, but what capitalist industrial development and parliamentary democracy alone can achieve (Cope, 2012, p. 30).
In receiving a share of super-profits, a sometimes fraught alliance is forged between workers and capitalists in the world's core nations. As long ago as 1919, when global wage scaling was nowhere near so marked as today, the first congress of the Communist International (COMINTERN) adopted a resolution, agreed on by all of the major leaders of the world Communist movement of the time, which read:
At the expense of the plundered colonial peoples capital corrupted its wage slaves, created a community of interest between the exploited and the exploiters as against the oppressed colonies - the yellow, black, and red colonial people - and chained the European and American working class to the imperialist 'fatherland'. (Degras, 1956, p. 18)
Post (2010, pp. 18-21) challenges this compelling interpretation of the roots of opportunism, reformism and national chauvinism amongst core-nation workers, suggesting that profits earned in the global South by US transnational corporations today are negligible compared to the total wage bill of the US working class.
Imperialist investment, particularly in the global South, represents a tiny portion of global capitalist investment even today, in the era of globalization. Foreign direct investment made up only 5% of total world-investment prior to 2000 - 95% of total capitalist investment took place within the boundaries of each industrialized country. Nearly three-quarters of total foreign direct investment flowed from one industrialized country - one part of the global North - to another. Less than 2% of total world-investment flowed from the global North to the global South. It is not surprising that the global South accounted for only 20% of global manufacturing output, mostly in labor-intensive industries such as clothing, shoes, automobile-parts, and simple electronics. The rapid growth of transnational corporate investment in China in the last decade has changed this picture, but only slightly. Foreign direct investment as a percentage of global gross fixed-capital formation jumped from 2.5% in 1982, to 4.1% in 1990 to 9.7% in 2005. The percentage of foreign direct investment flowing to the global North fell from 82.5% in 1990 to 59.4% in 2005. However, the global South still only accounts for less than 4% of global fixed-capital formation.51 While China has led the growth of transnational capital-accumulation, the bulk of the capital invested in China remains in labor-intensive manufacturing - the low and medium end of transnational-corporate organized global-production chains.
Even accepting that as much as 50% of repatriated foreign profits of US companies emanate from the global South, profits earned from investment in the global South make up a tiny fraction of the total wages of workers in the global North… Total profits earned by US companies abroad exceeded 4% of total US wages only once before 1995 - in 1979. Foreign profits as a percentage of total US wages rose above 5% only in 1997, 2000 and 2002, and rose slightly over 6% in 2003. If we hold to our estimate that half of total foreign profits are earned from investment in the global South, only 1-2% of total US wages for most of the nearly 50 years prior to 1995 - and only 2-3% of total US wages in the 1990s - came from profits earned in Africa, Asia and Latin America. Such proportions are hardly sufficient to explain the 37% wage differentials between secretaries in advertising agencies and machinists working on oil pipelines, or the 64% wage differential between janitors in restaurants and bars and automobile workers.
Post is here reiterating the familiar view amongst Western economists, socialists and otherwise, that the super-exploitation of Third World labor is today entirely marginal to capital accumulation on a world scale. Thus, economist Raphael Schaub writes: 'The data reveals that most of the FDI stock is owned by and is invested in developed countries… FDI stock and flows have increasingly been concentrating in the industrialized countries since the 1960s' (Schaub, 2004, pp. 26-27). British socialists Ashman and Callinicos concur that 'the transnational corporations that dominate global capitalism tend to concentrate their investment (and trade) in the advanced economies… Capital continues largely to shun the global South' (Ashman & Callinicos, 2006, p. 125). However, Smith (2007) provides the following reasons as to why this interpretation, based as it is 'on an uncritical regurgitation of deeply misleading headline statistics', is wrong and how 'far from "shunning" the global South, northern capital is embracing it and is becoming ever more dependent on the super-exploitation of southern low-wage labor'.
First, nearly 50% of manufacturing foreign direct investment (FDI) is received by the developing economies (US$82.1 billion between 2003 and 2005 compared with US$83.7 billion to developed countries). Meanwhile, FDI within the developed world is hugely inflated by non-productive 'finance and business' activities (US$185 billion, or more than twice the inward flow of manufacturing in the period cited) (United Nations Conference on Trade and Development, 2007, p. 227). Moreover, intra-OECD manufacturing (particularly in those Transnational Corporations (TNCs) which have offshored or outsourced much of their production processes to low-wage nations) is heavily dependent upon capital infusions from the Third World. Smith cites the example of the restructuring of Royal Dutch Shell having increased the United Kingdom's inward FDI by US$100 billion even though nearly all of Shell's oil (and, he adds, profits) production takes place in Latin America, Central Asia and the Middle East. Post's citation of the low level of global fixed capital formation that takes place in the global South, moreover, suggests a misunderstanding of the purpose of imperialism, namely, to siphon and extort surplus value from foreign territories (Grossman, 1992). That imperialism is moribund, that is, that it holds back the full potential development of the productive forces, has long been noted by its critics. Thus, where oligopolies dominate Third World markets, there is not the same urgent imperative to replace cheap labor with expensive machinery.
Secondly, whilst the United States, Europe and Japan (the 'Triad' powers) invest in each other at roughly equivalent rates, there is no investment flow from the Third World to the developed world to match investment from the latter to the former. Whereas 'repatriated profits flow in both directions between the United States, Europe and Japan, between these "Triad" nations and the global South the flow is one-way' (Smith, 2007, p. 15). So much is this the case that profit repatriation from South to North now regularly exceeds new North-South FDI flows. Jalée (1968, p. 76) has earlier described this process of 'decapitalizing' the Third World:
There are many well-meaning people, both in the imperialist countries and the Third World, who still have illusions as to the usefulness of private investment in the underdeveloped countries. It is simple to make the following calculation. A foreign private enterprise sets up in a Third World country where it makes a regular, yearly profit of 10% on its investment. If the whole of these profits are transferred abroad, at the end of the tenth year an amount equal to the original investment will have been exported. From the eleventh year onwards, the receiving country will be exporting currency which it has not received; in twenty years it will have exported twice as much, etc. If the rate of profit is 20% instead of 10% the outflow will begin twice as early. If only half the profits are exported the process will be only half as rapid. This example is a somewhat oversimplified hypothesis, but reflects reality. There is no end to the loss of Third World capital through such outflows, except through nationalization or socialization of the enterprises.
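Jalée's break-even arithmetic reduces to a one-line formula. A minimal sketch (illustrative only, assuming as he does a constant annual profit rate and a fixed repatriation share):

```python
# Jalée's decapitalization arithmetic: with a constant annual profit rate
# and a fixed share of profits repatriated, the year in which cumulative
# repatriated profits first equal the original investment is simply
# 1 / (profit_rate * share_repatriated).

def years_to_export_investment(profit_rate: float, share_repatriated: float = 1.0) -> float:
    """Years until cumulative repatriated profits match the initial investment."""
    return 1.0 / (profit_rate * share_repatriated)

print(years_to_export_investment(0.10))       # 10.0 -- Jalée's 10% case
print(years_to_export_investment(0.20))       # 5.0  -- "the outflow will begin twice as early"
print(years_to_export_investment(0.10, 0.5))  # 20.0 -- "only half as rapid"
```

After that break-even year every repatriated dollar is, in Jalée's terms, currency the host country exports without ever having received.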
Smith also makes the point that much supposed 'South-South' FDI is, in fact, 'North-South' FDI (Smith, 2007). Not only is it the case that United States and United Kingdom TNCs using profits earned in one Third World country to finance investments in another show the FDI as originating in the former (Lipsey, 2006, p. 3), but 10% of Southern FDI originates from the British Virgin Islands, the Cayman Islands and other offshore tax havens and, hence, likely originates from imperialist sources.
Thirdly, FDI flows are purely quantitative and say nothing about the type of economic activity they are connected to. As such, mergers and acquisitions, merely representing a change in ownership, should be distinguished from 'greenfield' FDI in new plant and machinery. Whilst intra-OECD FDI is dominated by mergers and acquisitions activity, between 2000 and 2006, 51% of all greenfield FDI was North-South (UNCTAD, 2007, p. 206).
Fourthly, and perhaps most significantly for the present purposes, undue fixation on FDI flows as a means of calculating the value of imperialist super-exploitation to the capitalist system and the wealth of the developed nations ensures that obscured from view are the tens of thousands of Third World-owned factories whose hundreds of millions of workers supply inexpensive intermediate inputs and cheap consumer goods to the imperialist countries via the vertical integration of production (Smith, 2007, p. 18). Rather than FDI being the major means of securing this supply, outsourcing and subcontracting by TNCs has become a prevailing mode of monopolistic capital accumulation in recent decades.
Finally, data on FDI stocks and flows are given in dollars converted from national currencies at current exchange rates. However, a dollar invested in a Third World country typically buys much more resources than a dollar invested in the First World. Measuring the value of Southern FDI in Purchasing Power Parity (PPP) dollars, we find that UNCTAD totals must be multiplied by a factor of 2.6 (the weighted average PPP coefficient between the OECD and non-OECD countries). Furthermore, as Harvie and de Angelis highlight, whereas in the United States $20 commands one hour of labor time, in India the same US$20 is sufficient to put ten people to work each for ten hours (Harvie & de Angelis, 2004). Thus, between 1997 and 2002, some US$3.5 trillion of intra-imperialist FDI flows commanded 190 billion labor hours at just under US$18 per hour. Meanwhile, some US$800 billion of FDI flowing into the Third World commanded 330 billion hours at US$2.4 per hour (an average labor cost ratio of 7.5:1). As such, the 19% of the global total of FDI that went from the North to the South in this period comprised 63% of the total 'labor commanded' (ibid).
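A rough sketch of the 'labor commanded' arithmetic; note the figures only cohere if the intra-imperialist flow is read as US$3.5 trillion (190 billion hours at just under US$18/hour):

```python
# Harvie & de Angelis-style 'labor commanded' comparison: the labor hours
# an FDI flow puts in motion equal the dollar amount divided by the
# hourly labor cost. Figures are those cited in the passage.

def hours_commanded(fdi_dollars: float, hourly_labor_cost: float) -> float:
    """Labor hours a given dollar flow can set in motion at a wage rate."""
    return fdi_dollars / hourly_labor_cost

north = hours_commanded(3.5e12, 18.0)  # ~194bn hours (text: ~190bn)
south = hours_commanded(800e9, 2.4)    # ~333bn hours (text: ~330bn)

print(round(18.0 / 2.4, 1))               # 7.5 -- the average labor cost ratio
print(round(south / (north + south), 2))  # 0.63 -- the South's share of labor commanded
```

So roughly 19% of FDI by dollar value commands about 63% of the labor hours, which is the point of the paragraph.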
Post's acceptance of capitalist accounting figures at face value, that is, without critiquing their real world significance in terms of average socially necessary labor and surplus labor (Cope, 2012), can only lead him to the absurd positions that (a) the world's largest capitals have practically no interest in the Third World and (b) that the most exploited workers in the world (i.e. those whose higher productivity supposedly generates the biggest profits) are also the world's richest. Thus, in an article for the Trotskyist Fourth International, Post writes that 'global wage differentials are the result of the greater capital intensity (organic composition of capital) and higher productivity of labor (rate of surplus value) in the advanced capitalist social formations, not some sharing of "super profits" between capital and labor in the industrialized countries. Put simply, the better paid workers of the "north" are more exploited than the poorly paid workers of the "south"' (9). Post shows complete disregard for the massive infusions of capital which result from global surplus value transfer and the all-too-obvious fact that Northern workers' consumption goods are the product of super-exploited Third World labor. For Post, the North's purportedly greater 'capital intensity' and its workers' higher 'productivity' may as well have dropped from the sky.
(9) Charles Post, 'Ernest Mandel and the Marxian Theory of Bureaucracy'
#26
this is all super interesting, thanks for posting!!!
#27
i just started reading "Marx & Engels: On Colonies, Industrial Monopoly, and the Working Class Movement," which is a bunch of excerpts from marx & engels displaying their evolving thought on colonialism, labor aristocracy, etc.
I mention it here instead of in the Reading thread because its intro is an essay by Zak Cope & Torkil Lauesen that's practically worth the price of admission by itself. however, it also alerted me to the fact that there's apparently a second (2015) edition of Divided World, Divided Class that's got more data
maybe i've been spoiled by software practices but it'd be cool as hell if publishers started including changelogs between editions of scholarly works
anyway, i may transcribe said essay some time this week, probably in this thread after Marlax is done with his own transcriptions (let me know!), so things don't get muddled. (incidentally, cope apparently also wrote a follow-up to his initial critique of Post)
Edited by Constantignoble ()
#28
Constantignoble posted:
however, it also alerted me to the fact that there's apparently a second (2015) edition of Divided World, Divided Class that's got more data
maybe i've been spoiled by software practices but it'd be cool as hell if publishers started including changelogs between editions of scholarly works
otherwise known as the preface to the 2nd edition
anyway, ima leave this here:
#29
Night Vision is amazing. Written in 1993 but still feels cutting edge.
#30
tears posted:
otherwise known as the preface to the 2nd edition
that's too many words, i need terse bullet points. twitter dot com, i live for this
(though thank you, this has alerted me to the fact that you can actually read the entire new preface via Amazon's "look inside" thing)
#31
marimite posted:
Night Vision is amazing. Written in 1993 but still feels cutting edge.
been meaning to read it for ages but kept getting distracted...until now and yea, it feels v good, wish more people wrote books in this type of writing
#32
sry for the delay in transcribing this. i'm high right now so i figured now'd be a good time to finish this. also i will transcribe something of semi-relevance on the validity of value theory in analyzing finance by norfield. no one here would doubt its relevance, but it cleared some things up for me and - if i recall correctly - this was something norfield didn't touch upon in The City.
a note with regard to Norfield while i'm at it: Amin's second and third major monopoly sources of super-profits mentioned in Cope's essay below (in Norfield's words, the 'financial privileges' of the imperialist powers) are excellently analyzed in The City.
Constantignoble posted:
(incidentally, cope apparently also wrote a follow-up to his initial critique of Post)
i realized this a little while ago, but i appreciate the heads up. i'll have my library request it. annoyingly, this journal (research in political economy) is subscribed to by a university in boston, whereas i go to a different university in cambridge. so i have to wait for what i should be able to get immediately, lol.
tears posted:
marimite posted:
Night Vision is amazing. Written in 1993 but still feels cutting edge.
been meaning to read it for ages but kept getting distracted...until now and yea, it feels v good, wish more people wrote books in this type of writing
i really enjoyed night-vision too. there's a new edition out from Kersplebedeb, but i was informed in an email that towards the end of the year they'll be releasing a newer edition with additional material (the authors, however, are (fortunately) in no rush to complete it). i'm gonna buy that when it comes out.
the table you posted above from the book is indeed useful in quickly breaking down the qualitative transformation of imperialism with neocolonialism/globalization. however, for the purpose of clarification (re: im bored), there are several things one could point out about it*:
* the authors may be aware of these problems. i can't remember if and how they were dealt with.
a) "growing integration of world into one class structure" & loss of the settler aristocracy. i would place emphasis on the 'ing' in growing, as this is certainly what the neoliberal project seeks to accomplish (and hence the authors were very prescient in predicting the intensified settler versus imperialist antagonism), but we now know this project is breaking down, and thus a misreading could lead to something like kautskyite ultra-imperialism -- and with that to white guy identity politics ("national oppression? nah, it's all about the global working class").
i think one is justified in questioning whether or not this was even theoretically possible to begin with. one might think that if capitalism is able to overcome its current crisis, this neoliberal project will continue, the trumps and assads will disappear, and we'll eventually witness this projected one world class structure. but if a decline in the profitability of metropolitan industry is what caused neoliberal globalization to begin with, and the current capitalist crisis can only be overcome with a restoration to a higher rate of profit (some believe this is possible, others do not), does this mean the process of neocolonialism can be reversed, and the collapsing settler-aristocracy can be appeased? to ask it another way: what is the exact connection between neocolonialism and globalization?
b) 'spread of industry around the world' should be qualified. the authors write that this spreading of industrial production develops in a 'pathological' way, which i presume is a way of just saying uneven and combined development (industry in the third world is a 'distorted extension of the metropolis'), but this is a given. what the authors fail to mention, after themselves saying that the "Third world was restricted to producing raw materials" under colonialism, is that there is uneven development within the third world itself. iirc, europe/north america were both home to the periphery & semi-periphery of the capitalist world-system under colonialism, but under neocolonialism we see a geographic shift of the semi-periphery outside of these continents. (of course, a major reason for this is that china isn't capitalist.) this is of importance in thinking of the long term trajectory of imperialism, i think. moreover, "spread of industry around the world" kind of obfuscates that manufacturing is not the most capital-intensive sector, and thus first world de-industrialization does not mean the end of the labor aristocracy, as high-paying white collar jobs replaced the high-paying blue collar jobs (i.e. does not signal a singular growing world class structure).
another thing with regard to the above 2 points. globalization is the globalization of production, the globalization of the capital-labor relation, but this does not signal anything other than a tendency towards a global class structure in the final instance, i think. vivek chibber's response to the claims of the postcolonial theorists with regard to capital's universalizing tendencies would fit here. for instance, the well-known lack of mobility in labor-markets is a major bulwark for first world labor - and this can co-exist with globalization, can it not? (i guess part of this depends on how you understand labor-power as a commodity, but there are other examples we could think of.) empirically, we know that the 90s saw massively increasing globalization (see OP graphs), but super-profits continued flowing throughout the system and thus there was no massive proletarianization in the first world (one can quickly check the poverty rate, for instance, and see it did not correspondingly increase).
c) the authors locate the primary tendency of neocolonialism with regard to the black nation as being "re-placed economically by new Third World population transfer". in other words, there is a tendency towards the lumpenization of the black nation. perhaps one could study this change empirically between 1975-2016? this is possible and shouldn't be hard.
a random offtopicish thought: while the causes are different for Third World pauperization versus black lumpenization, it would be interesting to attempt to develop a sophisticated analysis of pauperization vis-a-vis the capital-accumulation process. with this, uncovering what the 'sustainable' level of the lumpenproletariat produced endogenously by capital accumulation is, and whether it's getting 'too big' to the point where it jeopardizes the process itself (because the lumpenproletariat cannot produce value - as only wage-labor can), and how this expansion to the point where it's 'too big' is endogenous to the process itself, and whether this has happened before in history. like, how do we bring amin's analysis of the high productivity of agribusiness + creation of a world-market of food causing falling food prices pauperizing millions into these basic economic categories?
something like:
capital accumulation -> expansion of the proletariat + concentration of capital -> centralization of capital + rising OCC -> creation of a lumpenproletariat -> fall in profitability -> further expansion of a lumpenproletariat = slumdweller commie revolution
...but cooler
d) the authors regard neocolonialism as the 'managed' form of capitalism heading towards a chronic collapse, whereas colonialism was characterized by cyclical crises resolved by "periodic bloodletting". this is a little weird to me, as trustification & cartelization, regarded as having prevented crises of overproduction, and social democracy, regarded as having prevented crises of underconsumption, were supposed to be the end of cyclical crises, according to the revisionists. the authors are repeating this line, but for globalization instead, with the caveat of having a theory of collapse. i think the correct position is that capitalism has always suffered from cyclical crises, and theories of collapse are bad because they encourage laziness.
other than that, the main issue with the book, as MIM pointed out in their review of it, is that the authors could use a good reading of mao's on contradiction to avoid a kind of agnosticism whereby they see race/nation/gender/class as 'equal'.
but it's still a great book.
finally, since this is somewhat relevant, to whoever is unaware of it, i'd check out sam williams' ongoing serialized review of shaikh's magnum opus on capitalism, where he contrasts "real competition" with the 'updating' of marx's analysis by baran and sweezy on this the 50th(-51st) anniversary of monopoly capital, and will end by contrasting john smith's analysis with shaikh's anti-leninism. presumably (if i understood smith correctly, and it would make sense given what i just said) he'll analyze smith's attempt to marxicize lenin's theory of imperialism.
Oligopoly and Global Wage Differentials
Post's third and final version of the labor aristocracy thesis is that presented by Elbaum and Seltzer. They argue that the severely limited competition faced by oligopolies and large-scale industrial concerns means that these can secure higher-than-average profits (the authors' singular definition of super-profits) which allow them to afford their unionized workers higher wages and benefits and more secure employment than their counterparts in the 'marginal' industries, in the retail and services sector, in agriculture and amongst the under- and un-employed.
In refuting this thesis, Post cites studies which demonstrate the absence of a strong correlation between industrial concentration and higher-than-average profits and wages. Instead, for Post, the lower wages of female and black and minority ethnic workers can be explained by capitalists' recruiting them into the more labor-intensive industries. The stratification of labor, then, is based on how 'competition and accumulation - not monopoly - continually differentiate in terms of technique, profitability, and wages and working conditions' (Post, 2010, pp. 27-28). As such, profit and wage differentials are rooted in differences in labor productivity. It is not, then, that workers in unionized capital-intensive industries share in their oligopolistic employers' super-profits, but that their higher-than-average wages may be accounted for by the lower unit costs of these industries and effective, militant union organization.
Tellingly, Post is entirely oblivious to the lower unit costs of non-OECD manufacturing. Smith (2010, p. 215) tabulates data from the World Bank (2006) showing value added versus labor costs between 1995 and 1999 for 64 countries. This table demonstrates that unit labor costs (i.e. the average cost of labor per unit of output) are an average 1.6 times lower for non-OECD manufacturing workers than OECD manufacturing workers. Thus, if an OECD worker is paid $1 for an hour's work and creates $20 worth of output in that hour, a non-OECD worker paid at the same rate would create $32 worth of output in that hour. Obviously, OECD wages are greatly in excess of non-OECD wages, by around 1000%, so one hour of OECD labor appears to generate much more value added than one hour of non-OECD labor. Nonetheless, in purely price-based terms, terms abstracted from the ratios between what Marxists call necessary and surplus labor, non-OECD manufacturing workers are 60% more exploited (more 'productive') than OECD workers.
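A quick sketch of the unit-labor-cost arithmetic above (illustrative only, using the passage's figures):

```python
# Unit labor cost = wage cost per unit of output. If non-OECD unit labor
# costs average 1.6x lower, each wage dollar buys 1.6x the output.

def output_per_wage_dollar(oecd_output_per_dollar: float, ulc_advantage: float) -> float:
    """Output generated per wage dollar, scaled by the unit-labor-cost ratio."""
    return oecd_output_per_dollar * ulc_advantage

oecd = output_per_wage_dollar(20.0, 1.0)      # $20 of output per wage dollar
non_oecd = output_per_wage_dollar(20.0, 1.6)  # $32 of output per wage dollar

print(round(non_oecd, 2))                 # 32.0
print(round(non_oecd / oecd - 1, 2))      # 0.6 -- i.e. 60% more output per wage dollar
```

This is the sense in which, at equal pay, the non-OECD worker is 60% more 'productive' in price terms.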
Post is certainly correct, however, in highlighting as he does how the technical division of labor as organized according to the uneven development of the productive forces is crucial to the issue of labor stratification. The national, 'racial' and gender hierarchies upon which social chauvinism is predicated are de- and reconstructed in the age of globalization (i.e. globalized imperialism). As Bhattacharyya, Gabriel, and Small (2002, p. 8) write:
Overall, several global developments have helped to reconfigure old patterns of ethnic relations and create new forms of racial privilege and politics. These include: economic restructuring in the West, including the demise of heavy industries, the rise of the new technologies, and the expansion of old and new service industries; the growth in significance of transnational and multinational operations; the emergence of new global divisions of labor and, finally, the rise of international agencies and global economic blocs, all of which have surfaced to transform 'national' production forms and processes and their corresponding social relations. These relations have been racialized in a number of ways; the role assigned to migrant labor in the new service economy; the shift of production sites from inner city areas, where migrant communities have traditionally resided, to greenfield (high-technology) sites, where they traditionally have not, and finally internal patterns of migration within the Third World and the use of female labor in the production of microchips and the manufacture of designer sportswear.
The facts of racist workers' and labor organizations' responsibility for the exclusion of black and minority ethnic workers from particular industries, occupations and countries are largely beyond the scope of the present essay. Suffice it to note that global wage differentials are politically grounded in such a way that the conservative political behavior of the metropolitan working class must be taken into account.
Just as serious an issue is Post's dismissal of the role of oligopoly, alongside its political, military and cultural supports, in sustaining wages differentials on a global scale. By focusing on critiquing the possibility of super-profits derived from the uneven development of (1) branches of industry, Post misses the greater significance of super-profits generated via the uneven development of (2) countries and (3) regions of the world economy (Strauss, 2004). Amin (2000, pp. 4-5) cites five major sources of monopoly super-profits through which the imperialist countries constrain competitive production in the developing world and ensure that value is transferred sui gratis from the global South to North.
• Technological monopolies sustained mainly by state control, military spending in particular. Metropolitan 'defense' systems, as afforded by taxing the affluent Western public, function as a massive fund for research and development in 'private' industry;
• Financial control of worldwide markets ensuring that national savings are subject to international banking interests based largely in the developed countries. The US trade deficit currently swallows fully 80% of all global savings in the form of foreign purchases of US municipal, state and government bonds;
• Monopolistic access to the planet's natural resources. 'Petrodollar warfare', for example, enables the transfer of surplus value from the global South to the global North, as the militarily secured denomination of oil sales in US dollars forces countries to maintain large dollar reserves, creating a consistent demand for dollars and upwards pressure on the dollar's value, regardless of economic conditions in the United States;
• Media and communication monopolies provide developed countries with a crucial means by which to manipulate political events. The corporate and government media monopolies, largely based in the metropolitan countries, present a picture of the world perfectly suited to their own anti-social agenda; and,
• Monopolies of weapons of mass destruction, particularly by the United States, ensure that Third World states are literally forced to comply with imperialist diktat, upon pain of terrible war (Amin, 2000, pp. 4-5).
The dominance of OECD-based monopolies in non-OECD markets entails for the latter: (1) a constant drain on available capital, (2) deteriorating terms of trade and (3) massive surplus value transfer resulting from unequal exchange. To pay for the product of OECD-based oligopolies, non-OECD countries must send abroad a greater amount of socially necessary labor time than they would were their own industries free to develop according to the demand of their own peoples. Developing countries are compelled under capitalism to compete with one another for access to the capital, electronic and military goods monopolized by the OECD. This ensures that each must drive down wages to gain comparative advantage over the other, hence contributing more surplus value than would result simply from unequal exchange based on divergent materialized compositions of capital.
Post on Labor Aristocratic Militancy
Post (2010) misunderstands the significance of the labor aristocracy thesis when he ascribes to it the notion that bourgeois workers are politically quiescent, in his words, unable to 'play a leading role in radical and revolutionary working-class organizations and struggles'. He thus sets up a straw man version of the labor aristocracy thesis which he attempts to then refute by citing examples of the economic struggles of relatively well-paid, skilled and securely employed workers, both in the developed world and elsewhere, against their employers. Indeed, besides the examples cited by Post, it may be noted that the English trade union movement has always been the strongest in those trades wherein workers were most independent, most in demand and best paid. The wool combers, for example, were the first group of English workers to organize against the common exploitation of their employers (Mantoux, 1970, p. 78). More generally, there is some sociological truth in the idea that it has been mostly skilled workers and intellectuals who have been members of the Communist parties in Europe. That does not, however, change the reality that these have been small in numbers or that the main policy they have pursued has been narrowly economistic and at least tacitly social-imperialist.
Proponents of the labor aristocracy thesis do not assert that the interests of the haute-bourgeoisie and the labor aristocracy are identical or entirely congruous. There is a conflict of interest between rich workers and capitalists and this may at critical moments manifest itself in widespread strikes and social turmoil. In South Africa, for example, where the white working class per se constituted a labor aristocracy (Davies, 1973), there were frequent conflicts between it and the state over the impact of the job color-bar system on production costs, outputs, and profits (Phakathi, 2012, p. 283). The labor aristocracy thesis affirms, however, that workers in the major imperialist countries cannot and will not overthrow the capitalist system so long as a system of super-exploitation exists to maintain lagging profit rates and guarantee them high living standards.
Post is distinctly disingenuous, therefore, in disregarding the pro-capitalist-imperialist tendencies of the metropolitan working class in the twentieth century and beyond. As Sassoon (1997) has amply demonstrated, the effective parties of the left in the imperialist countries have functioned as vehicles to enforce the partial regulation and socialization of capitalism, as opposed to having posed any serious threat to its replacement. Indeed, those parties and organizations that the metropolitan working class has supported throughout the twentieth century and beyond have certainly been no less imperialist or militarist than their 'conservative' counterparts. It is demonstrably absurd to meekly attribute the reformism of the working bourgeoisie to 'false class consciousness', job insecurity ('precarity') or Stalinist or social democratic 'betrayal' as is typical amongst Western Marxists.
Yet whilst independent parties of the working class, distinct from the two or three main imperialist parties, have had practically zero electoral significance for the past century, that situation is changing today. That Western 'workers' are today fascism's major constituency has been shown by Oesch in his survey of literature showing an 'increasing proletarianization (sic) of right-wing populist parties' electorate' since the 1990s (Oesch, 2008, p. 350). In particular, studies show that workers have become the core electoral base of the Austrian Freedom Party, the Belgian Flemish Block, the French National Front, the Danish People's Party, and the Norwegian Progress Party. At the same time, 'working class' votes for the Swiss People's Party and the Italian Lega Nord are only surpassed by those of small-business owners, shopkeepers, artisans and independents. It seems reasonable to suggest, then, that during the 1990s, right-wing populist parties constituted a new type of working-class party. Oesch queries why persons 'strongly exposed to labor market risks and possessing few socioeconomic resources', 'located at the bottom of the occupational hierarchy', might vote for right-wing populist parties and finds the answer in popular cultural protectionism and deep-seated discontent with the functioning of the 'democratic' system, as opposed to 'economic grievances' per se (Oesch, 2008). In fact, it is a mistake to postulate a rigid dichotomy between the racist authoritarian nationalism of metropolitan labor and its socioeconomic position. The degree of core-nation workers' exposure to labor market risks and their possession of socioeconomic resources are directly related to their location, not at the bottom of the occupational hierarchy but, at the level of the global economy, right at its top.
As such, the political intent to oppress, disenfranchise and exclude 'non-white', non-Christian people from state boundaries is not only based on actual or potential competition for jobs. Rather, it is an expression of 'working class' support for an imperialist system that more and more openly subjugates entire nations in order to monopolize their national resources and capital. That global imperialism has found it necessary to admit persons from neocolonial states across its borders for economic, diplomatic, political and other reasons has consistently met with the disapproval of the metropolitan workforce. This has only intensified as Keynesian social democracy has been replaced with neoliberal economic restructuring and the accompanying growth of the racialized police state. The super-wages of metropolitan labor do not only depend upon militarized borders and job market discrimination but also on the degree to which workers can influence state policy in their own favor. In the absence of trade union vehicles (appropriate to an earlier, social democratic phase of labor organization), First World democracy, based as it is upon the oppression of more than three quarters of the world's population, finds its sine qua non in racist national chauvinism. As such, it is not uncommon for brazenly national-chauvinist parties to gain support from groups of persons considering themselves politically left-wing. With 20% of its members considering themselves 'left', Jean-Marie Le Pen's fascist Front National, for example, did well in the 1995 French elections with the slogan 'neither right nor left, but French', garnering 30% of the working class vote and 25% of the unemployed vote (Weissmann, 1996). More recently, a 2011 poll found that while 48% of Britons would vote for a far-right anti-immigration party committed to opposing so-called Islamist extremism with 'non-violent' means, 52% agreed with the proposition that 'Muslims create problems in the UK' (Townsend, 2011).
None of the above testifies to the labor aristocracy constituting what Post refers to as 'the revolutionary and internationalist wing of the labor movement' (Post's emphasis).
What is the Labor Aristocracy?
Post castigates sections of the left for writing about a 'labor aristocracy' for which 'there is no single, coherent theory'. To clarify my position, I will attempt to outline the fundamentals of such a theory.
The labor aristocracy is that section of the international working class whose privileged position in the lucrative job markets opened up by imperialism secures for it wages approaching or exceeding the per capita value created by the working class as a whole. As such, the class interests of the labor aristocracy are bound up with those of the haute-bourgeoisie so that if the latter is unable to accumulate super-profits, then the super-wages (wages supplemented by super-profits) (Edwards, 1978, p. 20; Emmanuel, 1972, pp. 110-120) of the labor aristocracy must be reduced.
The labor aristocracy provides the major vehicle for bourgeois ideological and political influence within the working class. As highlighted above, for Lenin, these conditions allow for ever-greater sections of the metropolitan working class to be granted super-wages.
As it has developed over the course of the last century, the labor aristocracy was first transformed from being a minority of skilled workers in key imperial industries to a majority of core-nation workers dependent on imperialist state patronage. From WW1 to the 1970s, social-democratic politicians and trade union bureaucrats were the reputable middlemen in the social partnership forged between globally ascendant oligopoly capital and metropolitan labor. Even as the Keynesian social contract was systematically dismantled under the ensuing neoliberalism, however, the massive proletarianization and super-exploitation of Third World labor in the final decades of the last century ensured that unprecedented standards of living and the widespread introduction of supervisory and circulatory occupations further insulated metropolitan labor from the intrinsic conflict between capital and labor (10). Nineteenth-century restrictions imposed by labor aristocratic unions on membership for the mass of workers have today been entirely replaced by restrictions on immigration from the Third World which are national in scope and allow the maintenance of profound global wage differentials.
Divergent global rates of exploitation have profound consequences in terms of the amount of wealth workers in different countries consume. Fig. 1 compares total contribution to global production to the share of total working class and middle class household consumption for the world's population, ranked in order of income. In the Lorenz curve used to depict global income inequality, where the x-axis is cumulative population and the y-axis is cumulative income, perfect income equality is expressed in a diagonal straight line. The reality of income distribution, however, shows a curve that is more or less flat for the first two-thirds of its trajectory, but rises ever more steeply towards the end. The 'Gini Inequality Index' is defined as the ratio between the area bounded by the curve and the straight diagonal, and the total area under the straight line. Plotted according to international income distribution, we refer to this as the 'world consumption curve'. Smith has suggested that generating a 'world production curve' by plotting each country's production of social wealth and superimposing this on said consumption curve can illustrate much in regard to global exploitation (11). In a world without exploitation, the two curves would be identical, that is, each person/household would produce what they consume. In fact, however, the global production curve diverges greatly from the consumption curve. In Fig. 1, the area bounded by the two curves to the left of their intersection ought to be the same as the bounded area to their right were the world's workers to consume what they themselves produce. The ratio between this area and the area under either of the two curves (by definition identical, since total production = total consumption) might be called the 'global exploitation index'. The countries closest to the point of intersection are those whose total contribution to global wealth is closest to their total consumption of it.
All countries to the right are net exploiters, that is imperialists, and all countries to the left are net exploited.
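The Gini construction just described (the ratio of the area between the Lorenz curve and the equality diagonal to the total area under the diagonal) is straightforward to sketch numerically. The following is a minimal illustration using made-up income figures, not the actual data behind Fig. 1 or the authors' own calculation:

```python
import numpy as np

def gini_index(incomes):
    """Gini index computed as described in the text: the area between
    the equality diagonal and the Lorenz curve, divided by the total
    area under the diagonal (which is 0.5)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Lorenz curve: cumulative income share against cumulative population
    lorenz = np.concatenate(([0.0], np.cumsum(x) / x.sum()))
    # Trapezoidal area under the Lorenz curve
    area_under_lorenz = np.sum((lorenz[:-1] + lorenz[1:]) / 2.0) / n
    return (0.5 - area_under_lorenz) / 0.5

print(gini_index([1, 1, 1, 1]))    # perfect equality -> 0.0
print(gini_index([0, 0, 0, 100]))  # all income to one of four -> 0.75
```

A country-level 'world consumption curve' would replace individual incomes with population-weighted national consumption shares, but the area construction is the same.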
According to mainstream economic doctrine, since markets equalize the income of workers, capitalists and nations with the value of their product, the production curve must be identical to the consumption curve; any deviation of one from the other being the result of the interruption of market forces. As neo-classical marginalist economist John Bates Clark put it:
Where natural laws have their way, the share of income that attaches to any productive function is gauged by the actual product of it. In other words, free competition tends to give labor what labor creates, to capital what capital creates, and to entrepreneurs what the coordinating function creates. (Clark quoted in Baran, 2012, p. 29)
As Smith notes, 'Marx pointed out the fundamental error in this view: workers are paid not for what they produce, but for what they consume' (12). As such, the two curves described and depicted above directly juxtapose neoclassical and Marxist value theory. Moreover, by graphically illustrating the great disjuncture between contribution to global production and share of global consumption, Fig. 1 refutes the views of Post and others on the left who persist in denying the effects of global labor segmentation and stratification on the transformation of the global class structure.
In Post's non-Marxist, marginalist view of income distribution, global wage differentials are the result of productivity differentials conditioned by differences in the level of the productive forces at different societies' disposal. However, as Marx argued, it is only living labor, and not machinery or constant capital, which adds value. According to Marx (1977a, p. 53), an hour of average socially necessary labor always yields an equal amount of value independently of variations in physical productivity, hence the tendency for labor-saving technological change to depress the rate of profit. Although increased productivity results in the creation of more use values per unit of time, only the intensified consumption of labor power can generate added (exchange) value. Since wages are not the price for the result of labor, but the price for labor power, higher wages are not the consequence of (short-term) productivity gains accruing to capital. Rather, in a capitalist society, the product of machinery belongs to the capitalist, not the worker, just as in a feudal or tributary society part of the product of the soil belongs to the landlord, not the peasant. As Engels (1995, pp. 181-182) wrote:
Marx demonstrates that machinery merely helps to lower the price of the products, and that it is competition which accentuates that effect; in other words, the gain consists in manufacturing a greater number of products in the same length of time, so that the amount of work involved in each is correspondingly less and the value of each proportionately lower. Mr. Beaulieu forgets to tell us in what respect the wage earner benefits from seeing his productivity increase when the product of that increased productivity does not belong to him, and when his wage is not determined by the productivity of the instrument (i.e. the machine - ZC)
In Fig. 1, the economically active population (EAP) is defined as all persons who furnish the supply of labor for the production of goods and services. As such, the EAP includes hundreds of millions of persons engaged in private, so-called subsistence farming in the Third World. We have favored Eurocentric assumptions that subsistence farmers contribute nothing to global production (even though most contribute money to capitalist landlords and supply goods for sale on the market), and assumed that only wage labor capable of generating surplus-value is considered productive. Total global production is defined as the working hours of full-time equivalent production sector wage employment in all countries (13). The total production workforce was obtained by multiplying the EAP in each country by the rate of full employment for its corresponding global income quintile (Kohler, 2005, p. 9) and then by multiplying this total by the percentage of each country's workforce in industry and agriculture. The figure thus obtained was then multiplied by 133%, since I define 'underemployment' as being employed for only one-third of the hours of a full-time worker. To calculate capitalists' share of household income expenditure, Piketty and Saez's (2004) measure of the income share of the top echelons of the US income distribution has been used as a global benchmark. Capitalists typically earn more than they can possibly consume, and much of their household consumption is reinvested. Since accumulated wealth is almost entirely in the hands of capitalists, the share of wealth of the top 10% of the population has been subtracted from total household consumption expenditure figures for each country. Doing so allows a more focused comparison of relations between the world's working and middle classes, the major bone of contention between exponents and opponents of the labor aristocracy thesis.
Rather than adjusting each country's figure by the ratio of its Gini index to that of the United States (so that for countries with more unequal income distributions, like Brazil or Pakistan, a larger portion of national income would be subtracted), I have assumed a flat rate of 42% for capitalist household income expenditure in all countries.
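The workforce arithmetic in the methodology above can be summarized in a few lines. The numbers below are placeholders for illustration only; they are not the Kohler (2005) employment rates or any country's actual figures:

```python
def fte_production_workforce(eap, full_employment_rate, production_share,
                             underemployment_adjustment=1.33):
    """Full-time-equivalent production workforce, following the steps in
    the text: EAP multiplied by the quintile rate of full employment,
    then by the share of the workforce in industry and agriculture, and
    finally scaled by 133% to fold in underemployed workers (defined as
    working one-third of full-time hours)."""
    return (eap * full_employment_rate * production_share
            * underemployment_adjustment)

# Hypothetical country: 50m economically active, 60% in full employment,
# 40% of the workforce in industry and agriculture
print(int(fte_production_workforce(50_000_000, 0.60, 0.40)))  # 15960000
```

Summing this figure across all countries would yield the total production workforce used to construct the 'world production curve'.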
To suppose that these stark inequalities are purely the result of superior economic efficiency and skill levels on the part of the core capitalist nations, or that they are the reward of a section of the global working class for its exceptional militancy, is to stretch reality to a breaking point (Cope, 2012, pp. 221-251).
Conclusion
The failure on the part of the left to rigorously examine the structuration of the international class structure by imperialism, as evidenced by the global contradiction between production and consumption highlighted above, has in no small measure added to the serious difficulties facing the socialist movement, both historically and today. Socialist movements in the metropolitan countries have tacitly accepted the global division between imperialist and exploited nations by obfuscating and diverging from the issue of international surplus value transfer. Working class internationalism and the struggle against racism and colonialism within the imperialist countries are both sacrificed at the altar of narrow appeals to material self-interest on the part of the wealthiest sections of the ineluctably global workforce. Historically, such economism has its corollary in a deeply conservative reformism and chauvinist acceptance of the status quo ante, such that imperialist governments have been and are permitted to carry out virtually any act of aggression and penal repression against foreign countries and minority communities without fear of widespread national opposition.
Metropolitan labor's dependence upon imperialism for its existence as such - that is, as labor whose affluence is predicated upon the maintenance of the core-periphery divide - clearly precludes the possibility that its conservatism is based purely on intellectual myopia. However, to paraphrase Noam Chomsky, intellectuals have the responsibility to expose untruth wherever they see it. This is all the more imperative when disclosing the reality of vested interests can only assist conscious workers and their representatives, those really committed to socialism, to combat working class acquiescence in the creeping fascisation of the body politic associated with the ascendancy of the neoliberal police state.
Understanding how the 'labor aristocracy' is formed means understanding imperialism, and vice versa. Those socialist organizations which do not understand the embourgeoisement of labor typically play down the significance of imperialism, so that even those ostensibly opposed to imperialism very often miss their target. Thus, some socialist organizations prioritize peace work and opposition to militarism, equating imperialism with the exercise of brute force against one or more sovereign nations. Their foil may be a particular administration, its foreign policy, or even the military-industrial complex tout court. Alternatively, imperialism might be opposed as benefiting only a handful of ultra-rich bankers and foreign investors (or even, at a stretch, a handful of very well-paid union bureaucrats and highly skilled professionals). In this case, only the richest 1-5% of society is seen as upholding the rule of monopoly capital.
The approach recommended here to readers, by contrast, is to treat imperialism as essentially involving the transfer of surplus value from one country to another and an imperialist country as a net importer of surplus value. This approach allows us to gauge the size and boundaries of the labor aristocracy and, hence, to work out the logistics of mounting really effective opposition to capitalism and its military, legal, financial and political bulwarks.
(10) Class struggle pivots around the exploited section of the working class' retention or otherwise of the surplus value it creates at the point of production. Since the fundamental class antagonism in capitalism is thus between the producers of surplus value and the capitalists who receive it in the first instance, unproductive laborers receive what Resnick and Wolff (2006, pp. 206-220) call 'subsumed class income' from the distribution of already appropriated surplus value. As imperialism comes to form the central core of the capitalist system, the physical toil needed to produce surplus value is increasingly the sole preserve of super-exploited Third World labor.
(11) The idea for Fig 1., a graphical comparison between global consumption and global production, was suggested to me by Dr. John Smith in private correspondence. I am indebted to John Smith for the use of the idea herein.
(12) Private correspondence from John Smith.
(13) For a Marxist view of productive and unproductive labor, see Amin (1976, p. 244); Marx (1968, p. 157); Marx (1977a, pp. 518-519); Resnick and Wolff (2006, pp. 206-220); Shaikh and Tonak (1994, p. 24)
Edited by marlax78 ()
#33
great post dude very nice e: frontpage??
#34
marlax78 posted:
i realized this a little while ago, but i appreciate the heads up. i'll have my library request it. annoyingly, this journal (research in political economy) is subscribed to by a university in boston, whereas i go to a different university in cambridge. so i have to wait for what i should be able to get immediately, lol.
i might could save you a little time, for look what i have just found via an imperialism study group:
Cope - Global Wage Scaling and Left Ideology: A Critique of Charles Post on the ‘Labour Aristocracy’ (2013)
Post - The Roots of Working Class Reformism and Conservatism: A Response to Zak Cope's Defense of the "Labor Aristocracy" Thesis (2014)
Cope - Final Comments on Charles Post’s Critique of the Theory of the Labour Aristocracy (2014)
Bagchi - A Comment on the Post–Cope Debate on Labour Aristocracy and Colonialism (2014)
maybe the real Secret PDF Subforum... was the friends we made along the way
Edited by Constantignoble ()
#35
tyvm... is there a pdf of norfield "value theory and finance" or should i transcribe it
#36
(From “Marx & Engels: On Colonies, Industrial Monopoly, and the Working Class Movement,” 2016 Kersplebedeb edition, originally compiled & edited by the Communist Working Circle, 1972)
Introduction
by Zak Cope and Torkil Lauesen
This collection of texts by Marx and Engels on colonialism, industrial monopoly, and the labor movement is a reprint of a booklet published in 1972. The texts were originally collected by a Danish anti-imperialist group called the Communist Working Circle (CWC). In the late 1960s, the CWC developed the so-called “parasite state” (“snylterstaten,” literally “leech state”) theory linking the imperialist exploitation and oppression of the proletariat in the global “South” with the establishment of states in the global “North” in which the working class lives in relative prosperity.
In connection with the CWC’s studies of the development of this division of the world and of the global working class, they selected and published these texts by Marx and Engels under the title On Colonies, Industrial Monopoly and the Labor Movement. As the title indicates, the texts focus on the connection between colonialism, the establishment of an English industrial monopoly around the middle of the 19th century, and the consequent spread of bourgeois ideology within the English working class.
There is a tradition in ostensibly Marxist thought which prides itself on making the labor of hundreds and thousands of millions of slaves, peasants, and superexploited workers in the export dependencies and colonies disappear from the ledger sheets and pay packets of the advanced capitalist countries. By contrast, we argue that a section of the working class had and continues to have a vested interest in maintaining the profitability of capitalist enterprise, thus necessitating imperialism (first colonialism, and now neocolonialism). The dimensions of this labor aristocracy, undergirded by the superexploitation of the international proletariat, have expanded to encompass the overwhelming majority of metropolitan employees. The stratification of labor globally implies a relatively rigid caste-like system for which white nationalism is typically a basic organising principle (Cope 2014).
Primitive Accumulation
What are the main observations of Marx and Engels concerning the connection between colonialism, English industrial monopoly, and the spread of bourgeois ideology throughout the working class? First, Marx underlines the connection between the colonial plunder of Latin America, Africa, and Asia and the breakthrough of capitalism in Northwestern Europe. In the 17th and 18th centuries, the number of laborers and slaves in plantations, haciendas, factories, and mines in the colonies was at least as large as the proletariat of Europe itself (Blaut 1987, p. 181). The exploitation of the colonies created the wealth that made up the original capital that produced the breakthrough of industrial capitalism in England in the early 19th century. Marx describes this in Capital, in the chapter “The genesis of industrial capital” (page 93 in this book).
Blaut suggests two ways to assess the real significance of colonial production to the beginnings of capitalism in the 16th century. The first is to “trace the direct and indirect effects of colonialism on European society, looking for movements of goods and capital, tracing labor flows into industries and regions stimulated or created by colonial enterprise or closely connected to it, and the like” (Blaut 1993, p. 193). The second “is to arrive at a global calculation of the amount of labor (free and unfree) that was employed in European enterprises in America, Africa, and Asia, along with the amount of labor in Europe itself which was employed in activities derived from extra-European enterprise, and then to look at these quantities in relation to the total labor market in Europe for economic activities that can be thought of as connected to the rise of capitalism” (ibid).
On the basis of population data, and noting the divergent rates of exploitation of labor, Blaut (1993, p. 194) argues that “the European populations were [no] more intimately involved in the rise of capitalism than the American populations—that is, the 13 million people who we assume were in European-dominated regions.” Moreover, “It is likely that the proportion of the American population that was engaged in labor for Europeans, as wage work, as forced labor including slave labor, and as the labor of farmers delivering goods as tribute or rent in kind, was no lower than the proportion of Iberian people engaged in labor for commercialized sectors of the Spanish and Portuguese economy” (ibid).
Acemoglu et al (2002) have argued that the rise of Western Europe after 1500 “is due largely to growth in countries with access to the Atlantic Ocean and with substantial trade with the New World, Africa, and Asia via the Atlantic. This trade and the associated colonialism affected Europe not only directly, but also indirectly by inducing institutional change. Where “initial” political institutions (those established before 1500) placed significant checks on the monarchy, the growth of Atlantic trade strengthened merchant groups by constraining the power of the monarchy, and helped merchants obtain changes in institutions to protect property rights. These changes were central to subsequent economic growth.” The authors further demonstrate that nearly all the differential growth of Western Europe between the 16th and early 19th centuries is accounted for by the growth of Atlantic trading nations directly involved in trade and colonialism with the New World and Asia, namely, Britain, France, the Netherlands, Portugal, and Spain, a pattern in large measure reflecting the direct effects of Atlantic trade between Europe and America, Africa and Asia.
The external precondition of Britain’s growth as a capitalist country was commercial hegemony founded upon a burgeoning colonial empire. Britain became the center of world trade, and an industrial division of labor developed in relation to overseas countries. These supplied the raw materials for British industry, which in return supplied the finished products. Britain became the workshop of the world, and her industry expanded in an international setting created by the British navy. This hegemonic position guaranteed Britain a monopoly of industrial manufacture, a monopoly that it held through the first half of the 19th century. During this initial period, British capitalism developed at the expense of handicraft production nationally and internationally, ensuring that comparatively cheaper British industrial goods dominated the world market.
Marx and Engels on the Development of Capitalism
This situation was not to last, and the most serious economic depression capitalism had yet experienced materialized around 1873. As capitalism expanded, competing firms in the metropolitan countries endeavored to increase productivity using new industrial techniques. The discovery of electricity alongside scientific innovations in chemical and steel production (the so-called “second industrial revolution”), alongside the expansion of colonial and American agriculture, led to overproduction and a consequent fall in the prices of commodities. At the same time, the ratio between constant (c, raw materials and machinery) and variable capital (v, labor power, or wages), what Marx called the “organic composition of capital,” has an ongoing tendency to expand. The combination of a rising organic composition of capital, rising wages, and intensified international competition accompanying the spread of industrialization in Germany and the United States, resulted in a glut in commodities markets, a regression in the rate of exploitation, and a decline in the rate of profit (that is the sum of surplus value(s) divided by total capital (c + v) outlay) (Cottrell [1980] 2006, pp. 262–4).
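The relationship stated in the passage above, a rising organic composition of capital depressing the rate of profit when the rate of exploitation stays flat, can be shown with stylized numbers (the figures are arbitrary, chosen only to display the tendency, not drawn from any actual data):

```python
def rate_of_profit(s, c, v):
    """Marx's rate of profit: surplus value (s) over total capital outlay (c + v)."""
    return s / (c + v)

# Hold the rate of exploitation s/v constant at 100% while the organic
# composition c/v rises; the rate of profit falls accordingly.
v, s = 100, 100
for c in (100, 200, 400):
    print(f"c/v = {c / v:.1f}  ->  s/(c+v) = {rate_of_profit(s, c, v):.3f}")
```

The countervailing forces listed later in the text enter this same ratio: cheaper constant capital lowers c, while superexploitation raises s.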
In the first half of the 19th century, it was difficult for capital to meet the demands of the proletariat if the rate of profit was to be maintained. During this period the productive forces were revolutionized. The advance from spinning wheel to spinning-machine, from handloom to power-loom, the invention of the steam engine, the introduction of the railways and so on, increased productivity exponentially. However, this increase in productivity did not in any way mean better conditions for the working class—on the contrary. During the whole period, wages were near the physiological subsistence level. Industry no longer competed only with handicrafts, but competition among the capitalists themselves became its most important form. Consequently, major demands for improvements were rejected. The bourgeoisie could not in this period afford the luxury of higher wages and better working conditions, not to mention universal male suffrage, the right to form trade unions, and other demands made by the English working class.
Reform at that time threatened the very existence of the capitalist system. The 1840s and 1850s were thus a period of chaotic conflict between labor and capital. The early labor movement developed new forms of action such as the strike and industrial sabotage. Citizens revolted in the streets of the big cities. This was met with harsh repression by the ruling elite, which feared revolution and the “dangerous classes.” In 1848, a wave of revolutionary uprisings swept through Europe’s cities. It was in this context that Marx and Engels wrote in the first line of the Communist Manifesto: “A spectre is haunting Europe—the spectre of communism.”
The Colonialist Solution
In the mid-19th century, the system had only one way in which crises might be avoided, and that was to find new markets for goods and capital. Imperialism is in part the attempt to resolve the contradiction between production and consumption by creating a buoyant consumer market in the First World while at the same time relocating low-wage production to the Third World. Capitalism cannot be confined to one country; according to its very nature it must continuously expand. Marx and Engels describe this trend in The Communist Manifesto (see page 64 in this book). Marx regarded capitalist development as a centrifugal process driven by the contradictions of capitalism itself. These were manifested by the decreasing possibilities of profitable investment in the most highly developed capitalist countries. At the same time, more profitable investments could be made in the colonies and in the less developed countries.
Marx believed that the export of capital would result in capitalism spreading all over the world. However, he did not imagine that it would institute a rigid division of the world between a highly developed imperialist center and an exploited and underdeveloped periphery. Marx thought that capital would diffuse outwardly, making the rest of the world a reflected image of Britain, and thus develop the same contradictions globally as it had domestically, ones which threatened capitalism’s existence and would thereby pave the way for a worldwide revolutionary socialist process. As Marx ([1867] 1954, p. 19) writes in Capital: “The country that is more developed industrially only shows, to the less developed, the image of its own future.”
Around 1830, Britain had completed the initial stage of the industrial revolution. At that time, continental Europe and the United States had hardly begun theirs. These countries did not become a periphery to Britain. On the contrary, British capital contributed largely to making them highly developed capitalist countries. The United States caught up with Britain a few decades later. Marx believed that the development of capitalism in the colonized countries of Asia and Africa would be similar. When Britain had destroyed the original societies and introduced capitalism, these colonies would experience a rapid development. Marx describes this in August 1853 in his article “The Future Results of British Rule in India” in the New-York Daily Tribune (page 78).
The opening of new markets in Africa and Asia, and the export of capital to North and South America would put off the collapse of capitalism for a while. However, it would only be a short respite; the final result would merely be an even more intense accumulation which would lead to a new and more intensified crisis of overproduction (Engels [1848] 1976, pp. 527–9). In fact, however, capitalism did spread out across the globe, but as a polarizing process, not only between the bourgeoisie and the working class, but also as a division between an imperialist center and an exploited periphery. This fundamental contradiction gave capitalism completely new conditions of growth and a longer life.
Marx and Engels’s expectations regarding the development of the colonies and the early collapse of capitalism did not come to pass. This is not to say that their analyses of capitalism at that time were wrong. In the middle of the 19th century, the capitalist system was indeed on the verge of having exhausted its potential. Crises arose at ever shorter intervals and assumed an increasingly serious character. The strength and fighting spirit of the proletariat grew accordingly: the “specter of communism” moved through Europe, materializing in the uprising of the Paris Commune of 1871. The bourgeoisie was terrified of revolution. What Marx and Engels could not foresee was that just when aggravating crises seemed to forebode a terminal crisis, a new development offered capitalism renewed strength and vigor, namely, the transfer of values from abroad.
Crucially, in Marxist terms, as a force to counteract the tendency for the rate of profit to fall, imperialism is of the highest import. Marx listed five major ways that this tendency is forestalled: (1) cheapening of the elements of constant capital (raw materials and machinery); (2) raising of the intensity of exploitation (longer working days and more efficient labor organisation); (3) depression of wages below their value (superexploitation, or the payment of less than the domestic average value of labor power); (4) relative overpopulation (or a larger reserve army of labor); and (5) foreign trade (Sweezy 1949, pp. 97–100). Every one of these countervailing forces is realized through imperialist exploitation of dependent nations.
Jawaharlal Nehru ([1934] 1982, p. 548), India’s first Prime Minister, highlighted the significance of imperialism in a world history originally written for his daughter:
“It is said that capitalism managed to prolong its life to our day because of a factor which perhaps Marx did not fully consider. This was the exploitation of colonial empires by the industrial countries of the West. This gave fresh life and prosperity to it, at the expense, of course, of the poor countries so exploited.”
From Dangerous Class to National Citizenship
In the second half of the 19th century, the conditions of the European proletariat slowly began to change. For the first time in the history of capitalism, the capitalists had to pay wages above the mere subsistence level. This first small improvement was not primarily a result of the proletariat’s own struggle; the labor movement was politically weaker than before, and Chartism had been impaired by internal division and corruption. Rather, these first improvements in wages and working conditions for the British proletariat were due to contradictions between rival factions of the ruling class.
As noted, Britain had a virtual monopoly of industrial goods at the beginning of the 19th century, resulting in extra profits. However, these profits did not only go to the industrial capitalists, and during the first part of the century it definitely did not result in higher real wages for the working class either. Paradoxically, a large portion of the extra profits from the industrial monopoly was passed on to the landowning class, its historically strong position in Parliament having allowed it to introduce an embargo on the import of corn and other agricultural products into Britain from 1804. The landowners could thereby maintain a high level of prices for their products ensuring that capitalists had to pay their workers comparatively high nominal wages just to enable them to live above the breadline.
By this artificially high price of corn the landowners could apportion to themselves a considerable part of the extra profits earned by Britain’s industrial monopoly. Therefore, in the 1840s the industrial capitalists struggled to have the Corn Laws repealed. Allied with the working class they succeeded in 1846. The reopening of the importation of corn from Prussia and later from the United States caused a fall in the prices of bread and other food.
Following the fall in corn prices, the industrial capitalists tried to decrease wages, but the working class was able to limit this decrease and thus obtain an improvement. This victory was compounded shortly after the repeal of the Corn Laws by the introduction of the ten-hour working day, a goal for which the workers had been fighting for thirty years. Here organized labor was unexpectedly supported by the landowners in Parliament, who thirsted for revenge on the industrial capitalists.
The extra profits of the British industrial monopoly and the internal fight between landowners and industrial capitalists meant that the wages of the British working class were increased above the subsistence level at which they had been so far kept. Between 1850 and 1872, imports of wheat more than doubled and imports of meat increased eightfold. Slowly the bourgeoisie changed its political strategy from repression of the “dangerous classes” to a gradual inclusion of the working class as national citizens. In the 1860s and 1870s both Napoleon III of France and the Conservative government in England allowed the working class to organize. Socialist parties were formed in all Western European countries while the trade union movement grew in strength. The right to vote was extended to include men from the working class, wage levels rose, and the first social and health insurance systems were introduced.
Parallel to this development was a de-radicalization of the Western European working class. It had left the 1848 revolutions and the Paris barricades behind in favor of parliamentarism and negotiation with employers. Class struggle became a controlled process within the parameters set by the system. Working-class political parties and trade unions successfully fought for higher wages and better working conditions, for unemployment and health insurance, pensions, and so forth. The result was a compromise between capital and the working class which moderated the form that class struggle would take in the future.
This historic compromise had a dark side. The developing welfare services of the state, and the widening and deepening of the franchise, united the former “dangerous classes” behind the nation-state in imperialist wars. So that citizens at the center of the Empire could enjoy growing welfare, ideologies of “national interest” and racism arose to justify policies which, by contrast, meant death and misery for the people in the colonies. It is this that Australian academic M.G.E. Kelly calls “biopolitical imperialism.”
“Imperialism, therefore, is primarily thanatopolitical, a politics of death, contrasting with the biopolitics of the population found within the metropole. There is, I will contend, a direct relation between the two things, in which death is figuratively exported and life imported back, in a systematic degradation of the possibilities for biopolitics in the periphery, arising out of the operation of biopolitics in the center. …
“I will argue that biopolitics constitutes a missing link in explaining how imperialism involves ordinary people of the First World. For one thing, biopolitics provides a mechanism by which the profits of imperialism may be spread to a whole population. By uniting us in a single population, moreover, biopolitics generates solidarity between ordinary people and elites.” (Kelly 2015, pp. 18–19)
Mike Davis (2000, p. 59) illustrates this reality through case studies of India, China, and Brazil that show how imperialism in the form of direct governmental intervention or “neutral” economic processes destroys the health and welfare of these countries’ populations:
“Between 1875–1900—a period that included the worst famines in Indian history—annual grain exports increased from 3 to 10 million tons, equivalent to the annual nutrition of 25m people. Indeed, by the turn of the century, India was supplying nearly a fifth of Britain’s wheat consumption at the cost of its own food security.”
In addition India also had to pay a part of the British Empire’s military effort in cash and lives:
“Already saddled with a huge public debt that included reimbursing the stockholders of the East India Company and paying the costs of the 1857 revolt, India also had to finance British military supremacy in Asia. In addition to incessant proxy warfare with Russia on the Afghan frontier, the subcontinent’s masses also subsidized such far-flung adventures of the Indian Army as the occupation of Egypt, the invasion of Ethiopia, and the conquest of the Sudan. As a result, military expenditures never comprised less than 25 percent (34 percent including police) of India’s annual budget” (ibid, pp. 60–1).
As an example of the restructuring of the local economy to suit imperial needs regardless of the consequences for the population in the colonies, Davis (ibid, p. 66) notes: “During the famine of 1899–1900, when 143,000 Beraris died directly from starvation, the province exported not only thousands of bales of cotton but an incredible 747,000 bushels of grain.”
The Rationale Behind Capitalist Colonialism
During the 1850s, committed proponents of free trade considered that the costs of administering and enforcing British colonial diktat would outweigh any potential or actual economic benefits derived from it. For authors then and since, including those ostensibly opposed to formal colonialism, the colonising nations of Europe and North America did not substantially benefit from colonialism; rather, it was only a thin stratum of private investors, officials, and migrant workers who benefited.
During the early 19th century, there were precious few consistent free trade anti-imperialists, the most famous, manufacturer and Radical free trade supporter Richard Cobden, excepted. As Marx recognized in 1853, “when India had been in the process of annexation, everyone had kept quiet; once the ‘natural limits’ had been reached, they had ‘become loudest with their hypocritical peace cant.’ But, then, ‘firstly, they had to get it [India] in order to subject it to their sharp philanthropy.’ … In 1859 Marx was writing that ‘the “glorious” reconquest of India after the Mutiny’ had been essentially carried out for securing the monopoly of the Indian market to the Manchester free traders” (Habib 2002, pp. 8–9).
Adam Smith is well-known for having insisted that colonies were a never-ending source of war and expense for the colonising country. Less well known is the fact that his opposition to colonialism was fundamentally based on opposition to colonial monopolies in trade and investment as opposed to colonialism tout court. For Smith, colonialism was permissible if the colony contributed net revenue to the metropolis within a system of free trade for all members of an Imperial Federation (Kittrell 1965, p. 49).
More recently, Thomas and McCloskey (1981) argue that the Empire was an overall burden on the British economy. For not only did Imperial preferential duties ensure that British consumers paid over the world market price for West Indian commodities like cotton, ginger, indigo, molasses, rum, pimento, and sugar (they neglect to discuss the purchasing power of British wages), but the costs of occupying and administering the colonies, not to mention defending them from rival colonial powers, were a severe drain on the British government budget.
Are these authors correct in their estimate of the negligible role of Empire in Britain’s economy? According to economic historian Ralph Davis (1979, p. 10):
“Overseas trade did much to strengthen Britain’s economic life during the eighteenth century, and in doing so it helped to create the base without which the industrial take-off might not have proceeded so fast or gone so far. Moreover, once home demand ceased to be sufficient to maintain the momentum of growth of the most advanced industries, around 1800, overseas trade did begin to play an absolutely vital direct part in their further expansion.”
Indeed, there can be no doubt that colonialism was crucial to British and European capital accumulation. Imperialist trade and investment in the Third World is the foundation of the capitalist world economy, and not only historically. As the great historian and first Prime Minister of Trinidad and Tobago Eric Williams wrote: “The colonial system was the spinal cord of the commercial capitalism of the mercantile epoch” (Williams 1944, p. 142). In particular, the massive profits accruing from the slave trade and slave-based production were used to finance early British capital accumulation in shipping, insurance, agriculture, and technology, notably including James Watt’s epoch-making invention and production of the steam engine.
Australian economic historian G.S.L. Tucker has argued that the investment of English savings in countries where wheat and other primary goods could be produced more cheaply than at home tended to raise and maintain the rate of profit and thereby enlarge the sphere of investment (Tucker 1960, p. 135). A declining rate of profit, by contrast, could neither be averted by investing in one form of manufacture instead of another, nor by transferring capital to agriculture rather than industry. Instead, for proponents of colonialism, the rate of profit could only be maintained and extended by exporting capital and labor to the colonies, “where they would produce the food and raw materials that England required, and at the same time create new and growing markets for her export industries.” In so doing, Britain would no longer be so dependent on foreign markets and the exigencies of foreign tariff policies. Rather, by setting up a “colonial Zollverein” (or customs union) it would be able to control its own economic destiny (ibid, p. 141).
Despite being a staunch opponent of North American slavery, Liberal economist and political theorist John Stuart Mill was firmly convinced of the benefits of colonialism to human progress, so much so that he went as far as to countenance the enslavement of colonized peoples. Mill’s advocacy of a liberal pluralist voting system weighted by the educational standards of citizens was explicitly formulated so as to exclude the representation of the broad working class, whose numerical preponderance he feared would lead to political domination. For Mill, freedom applied “only to human beings in the maturity of their faculties” and could not be demanded by minors or “those backward states of society in which the race itself may be considered as in its nonage” (Mill 1972, p. 72). In Mill’s view, “a ruler full of the spirit of improvement is warranted in the use of any expedients that will attain an end, perhaps otherwise unattainable” (ibid, p. 73). He demanded the barbarians’ “obedience” for the purposes of their education for “continuous labor,” the supposed foundation of civilization. In this context, writes Italian historian Domenico Losurdo (2011, pp. 225–6), Mill did not hesitate to theorize a transitional phase of “slavery” for “uncivilized races” (Mill 1972, p. 198), since there were “savage tribes so averse from regular industry, that industrial life is scarcely able to introduce itself among them until they are … conquered and made slaves of” (Mill 1963–91, p. 247).
Mill was sanguine about the benefits of colonialism to the British economy:
“It is to the emigration of English capital, that we have chiefly to look for keeping up a supply of cheap food and cheap materials of clothing, proportional to the increase of our population; thus enabling an increasing capital to find employment in the country, without reduction of profit, in producing manufactured articles with which to pay for this supply of raw produce. Thus, the exportation of capital is an agent of great efficacy in extending the field of employment for that which remains: and it may be said truly that, up to a certain point, the more capital we send away, the more we shall possess and be able to retain at home” (Mill 1909, p. 739, quoted in Tucker, 1960, p. 136).
For Porter (1984, p. 142), the centrality of the developing world to British capital accumulation was threefold:
“Firstly: in so far as it was developing, and not merely stagnant, it followed that it required more capital than it could provide itself: and this Britain could supply. In the 1890s, ninety-two per cent of the new capital Britain invested abroad went outside Europe, and half of it to the developing countries of Africa, Asia and Australasia. Secondly: from the commercial point of view it was a market which overall bought more from Britain than it sold—just; and such markets were becoming very rare. Thirdly: it was a market which, in so far as it had not been cornered by European rivals and surrounded by their tariffs or saturated with their capital, was still ‘open.’ ‘Open’ markets were getting hard to find in the protectionist ’nineties; but if Britain’s products were to be sold abroad at all, those that were still open had to be kept open.”
Capital Gains from Empire
What was the extent to which capitalism relied on colonialism for its advancement? We will examine several measures here, concentrating in particular on the British case. We encourage readers to research the impact of colonialism on other European economies. Our view is that an important reason why the shift in Europe in the exercise of state power from repression to inclusion could take place within a dynamic capitalist environment is due to the fruits of the colonial Empire. These came partly in the form of (1) imported colonial mass consumption goods, (2) raw materials imports for expanding British industry, (3) profit from colonial trade, taxes, and investments, and (4) an area for settlements for the “industrial reserve army”—the unemployed surplus population in Europe.
Hidden Colonial Surplus Value
Marxist author, teacher, activist, and a founding member of the Non-European Unity Movement in South Africa, his country of birth, Hosea Jaffe (1921–2014) coined the term “hidden colonial surplus value” to describe the large amount of surplus value transferred to the imperialist countries by the oppressed countries of Africa, Asia, and South America. This “hidden surplus value” is the difference between the selling price of Third World exports and the selling price of these same exports in the imperialist markets (Jaffe 1980, p. 113). The source of this cheapness is not purely “economic,” but intrinsically a matter of political economy, that is, the ensemble of power relations within which goods and services are produced, distributed, and consumed.
For Jaffe, as for Cope (2015, p. 219), imperialist value transfers may be resolved into two components: repatriated profits and hidden surplus value. Repatriated profits represent only the visible portion of the value transfers generated by foreign investment and loan capital, whilst superprofits (the extra or above average surplus-value extracted from the labor of nationally oppressed workers) represent the invisible portions retrieved through capital export imperialism, unequal exchange, and debt usury.
As Jaffe has argued, and Cope (2015) has demonstrated applies in today’s world economy, the intra-imperialist rate of profit may be negative if hidden surplus-value from invisible net transfers amounts to more than net profits. In such a case, value-added (s + v) is less than wages (v), and profits derive only from the exploited nations whilst wages are subsidized by superprofits. In short, were the Third World workers involved in the production of commodities for First World markets suddenly to be remunerated at the same rate as “workers” in the latter, the entirety of profits of the world’s leading capitalist powers would be completely annihilated.
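Jaffe’s accounting claim — that profits recorded inside the imperialist bloc can vanish, or turn negative, once hidden inward transfers are netted out — can be illustrated with deliberately simple figures. The numbers and the helper function below are hypothetical, chosen only to show the arithmetic; they are not Jaffe’s or Cope’s empirical estimates.

```python
# Illustrative arithmetic for Jaffe's "hidden colonial surplus value" claim.
# All figures are hypothetical, chosen only to show the accounting logic;
# they are NOT Jaffe's or Cope's empirical estimates.

def intra_imperialist_profit(value_added, wages, hidden_transfers):
    """Reported surplus is value_added - wages; subtracting hidden inward
    transfers from the periphery reveals the purely 'domestic' component."""
    reported_surplus = value_added - wages
    domestic_surplus = reported_surplus - hidden_transfers
    return reported_surplus, domestic_surplus

# Suppose the bloc reports 100 of value added, pays 80 in wages,
# and receives 30 in hidden transfers from the periphery.
reported, domestic = intra_imperialist_profit(100, 80, 30)
print(reported)   # 20: the profit that appears in the accounts
print(domestic)   # -10: profit net of transfers is negative
```

On these figures, value added net of transfers (70) falls below the wage bill (80): exactly the situation in which, as the text puts it, profits derive only from the exploited nations while wages are subsidized by superprofits.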
Jaffe estimates that no less than 500 million people were killed by Europeans during the four centuries of its primary accumulation of capital in the Americas, Asia, and Africa, an average of 100 million people per century at a time when the total world population increased from 300 million to 1 billion. As he writes: “This 400-year long process left a permanent mark on the value of human labor power of the colonial workers and on the immediate ‘value’ equivalent, in gold and its money representation, of the labor time of these workers” (ibid, p. 102).
Between the 16th and 19th centuries, the major international motors for European capital accumulation were the trade in African slaves carried in British and French ships; silver and gold exports from South America to Spain and Portugal; profits from the use of slave labor in the British West Indies; profits from the Dutch spice trade; profits from the opium trade; and colonial land revenue. In each case, colonialism as the expansion and acquisition of control of overseas territories by burgeoning capitalist European powers, many featuring unmitigated slavery, provided the impetus for nascent capitalist accumulation (Blaut 1980, p. 105).
Jaffe argues that during the first half of the 19th century, the wages of British, French, Dutch, and German workers differed little from the maintenance cost of slaves in the United States, Brazil, Cape, and the Dutch and French colonies. The rate of exploitation for these two distinct groups of workers (those from oppressed nations and those from oppressor nations) was more or less equally miserable. However, with the transition to imperialism in the second half of the 19th century, the ratio s/v rose for colonial and fell for metropolitan workers (ibid, p. 111).
The Drain Theory of British Colonialism
Among the earliest writers to systematically analyse and oppose the parasitic relationship obtaining between a colonial and a colonising country was the Parsi intellectual, teacher, cotton trader, and early Indian nationalist Dadabhai Naoroji (1825–1917). Naoroji, India’s “grand old man,” was the first Asian to be a member of the British Parliament (the House of Commons), serving from 1892 to 1895. Naoroji formed the Indian National Congress together with A.O. Hume and Dinshaw Edulji Wacha. His book Poverty and Un-British Rule in India drew attention to England’s exploitation of the country. One of the few contemporary descriptions of England’s colonial exploitation comes from Naoroji. In an appeal from 1882, On Justice for India, addressed to the British parliament, and based on extensive statistical calculations of the transfer of wealth from India to Britain, Naoroji described how taxes, trading profits, the destruction of India’s handicraft sector, and monopoly prices on imports from England to India drained the country. In 1896, the Indian National Congress officially adopted Naoroji’s “drain theory” as its political criticism of colonialism. Naoroji considered that by dint of its oppressed position, India was subject to British capitalist exploitation without being thereby enabled to reap any of the fruits of capitalist development.
For Naoroji, there were several underlying bases for this unrecompensed transfer of India’s wealth to Britain. First, he argued, India is a vast country ruled by a handful of Europeans whose income is a “moral drain,” that is, a cost to British India. Second, India develops as a market for British manufactures and a supplier to Britain of its raw materials strictly because India’s economic policies are dictated by Britain and in the interests of the British economy and the British capitalist class. Third, the Indian government under British rule is forced to pay an ever increasing list of official overseas expenses which Naoroji calls Home Charges (see Table I). Fourth, rather than creating domestic employment and income, India’s public expenditure out of the proceeds of taxation is instead used to pay for the infrastructure required by Britain to more effectively plunder the country. Finally, India’s transformation into a “mere agrarian appendage and a subordinate trading partner” of Britain ensures that it has become a typical colony dominated from afar (Karmakar 2001, p. 69).
For Naoroji, the introduction of commercial relations in agriculture, capital investment in crop production, the imposition of a rural tax in kind, and the consequent monetisation of the Indian economy were not conducted on the basis of a thorough extirpation of the system of landlordism and a redistribution of landholdings amongst the peasantry, as in autochthonous capitalism, but on the incorporation of the landed class into a system of cash crop export dependency dominated by foreign capital. As such, Naoroji’s “drain theory” was a precursor to Marxist theories of the “development of underdevelopment” (Andre Gunder Frank) and semi-feudalism.
The transfer of capital from India to Britain effected by colonial subordination precluded India from implementing development opportunities in the form of infrastructural investment, education, and so on. This view was later echoed by United States Marxist economist Paul Baran (1957, p. 163) who, having estimated that around 10 per cent of India’s national product was transferred to Britain each year in the early decades of the 20th century, wrote that “[far] from serving as an engine of economic expansion, of technological progress, and of social change, the capitalist order in these [underdeveloped] countries has represented a framework for economic stagnation, for archaic technology, and for social backwardness.”
Naoroji estimated that Britain exacted an annual “tribute” from India of huge proportions. Following the Mutiny of 1857, India’s First War of Independence, he estimated that the annual transfer from India to Great Britain amounted to a total of £30 million (Karmakar 2001, p. 67). Accepting Bank of England data (see Table II), we can say that between one third and one half of Britain’s gross fixed capital formation (that is, the value of acquisitions of new or existing fixed assets by the business sector, governments, and households—excluding their unincorporated enterprises—less disposals of fixed assets and typically including land improvements; plant, machinery, and equipment purchases; and the construction of roads, railways, and the like, including schools, offices, hospitals, private residential dwellings, and commercial and industrial buildings), with the attendant productivity gain of British labor, was financed through the drain of India’s wealth in the form of colonial tribute.
British Income in the Absence of Empire
Michael Edelstein, an American economist specialising, inter alia, in the economics of the British Empire in the 19th century, has attempted to measure what Britain gained from the underdeveloped parts of its Empire. He has done so by positing a counterfactual condition, namely, that the countries in question had remained independent.
Edelstein argues that if the Empire territories had remained independent of British rule they would not have participated in the international economy to the same extent that they, in fact, did. Thus, he writes, the British Raj brought a more peaceful, unified, and commercially oriented political economy to India than would have been the case if the country had remained independent. While we might argue that India was by no stretch of the imagination a peaceful place under British rule, or that it may have been more commercially engaged outside it than Edelstein supposes, his working assumption that Britain’s trade with India and the other non-Dominion regions would have been a quarter of its existing level in 1870 and 1913 in the absence of British rule (Edelstein 1994, p. 203) is plausible.
What, then, is Edelstein’s assessment of the gains made by Britain from trade with its oppressed colonies?
“Summing the 75 per cent reduction to British exports to the non-Dominion colonies and the 30 per cent reduction to British exports in the Dominion regions (weighted by their respective shares in British colonial exports), British colonial exports in 1870 and 1913 would have been 45 per cent of their actual levels under this ‘strong non-imperialist’ standard of the gains from Empire. (The shares of white-settler and non-white-settler colonies in British exports to the colonies were approximately 45 per cent and 55 per cent, respectively. With their ‘strong’ non-Empire levels hypothetically reduced to 0.7 and 0.25, respectively, of their actual levels, British exports to both types of colonies would have been = 45 per cent (0.7) + 55 per cent (0.25) = 45.25 per cent of actual levels.)
“The ‘strong’ gain is the difference between the actual British Empire exports and this hypothetical 45 per cent level in the absence of Empire. British exports of goods and services to the Empire were approximately 7.9 per cent and 11.9 per cent of GNP in 1870 and 1913; therefore the ‘strong’ gain from Empire was 4.3 per cent (i.e. 55 per cent of 7.9 per cent) of GNP in 1870 and 6.5 per cent of GNP in 1913” (Edelstein 1994, p. 204).
According to the Bank of England figures listed in Table II, gross fixed capital formation (GFCF) was 7.55 per cent of Britain’s GDP in 1870 and 7.13 per cent of its GDP in 1910. Using Edelstein’s “strong non-imperialist” standard, we may therefore suppose that around 57 per cent of Britain’s fixed capital investment in 1870 (4.3 / 7.55 × 100) and 91 per cent of its fixed capital investment in 1910 (6.5 / 7.13 × 100) were funded by its trade with the colonies.
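The counterfactual arithmetic in Edelstein’s quoted passage, and the comparison with gross fixed capital formation, can be reproduced step by step. The figures are those cited in the text (Edelstein’s export shares and the Table II GFCF percentages); the script is only a restatement of that arithmetic, not an independent estimate.

```python
# Reproducing the arithmetic of Edelstein's "strong non-imperialist" counterfactual.
# All shares and percentages are those quoted in the text.

# Weighted hypothetical export level in the absence of Empire:
dominion_share, non_dominion_share = 0.45, 0.55      # shares of British colonial exports
dominion_retained, non_dominion_retained = 0.70, 0.25  # fraction retained without Empire
hypothetical = (dominion_share * dominion_retained
                + non_dominion_share * non_dominion_retained)
print(round(hypothetical * 100, 2))  # 45.25 per cent of actual export levels

# "Strong" gain from Empire as a share of GNP:
exports_gnp_1870, exports_gnp_1913 = 7.9, 11.9       # exports to Empire, per cent of GNP
gain_1870 = exports_gnp_1870 * (1 - hypothetical)    # ~4.3 per cent of GNP
gain_1913 = exports_gnp_1913 * (1 - hypothetical)    # ~6.5 per cent of GNP

# Compare with gross fixed capital formation (Table II figures):
gfcf_1870, gfcf_1910 = 7.55, 7.13                    # per cent of GDP
print(round(gain_1870 / gfcf_1870 * 100))  # ~57 per cent of GFCF in 1870
print(round(gain_1913 / gfcf_1910 * 100))  # ~91 per cent of GFCF in 1910
```

The small discrepancies with the quoted figures (45.25 versus 45 per cent, 54.75 versus 55 per cent) are rounding conventions in the original passage.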
British–Indian Merchandise Trade and Capital Accumulation
Specifically colonial trade differs from domestic and other foreign trade. Crucially, the colonial market is kept compulsorily open while the metropolitan market is strictly protected; in the case of Britain against Asian textiles, for instance, draconian tariff duties were applied for 150 years. Moreover, as Indian Marxist economist Utsa Patnaik notes, “colonial goods for export were purchased out of local tax revenues raised from the colonized population as in India, or by the export-goods equivalent of slave rent as in the West Indies” (Patnaik 2006, p. 36). In effect, either the money paid to the colonial goods exporter by the colonial power came out of high taxes that the latter had itself paid to the colonial state, as in the case of India, or the export goods were the commodity form of economic surplus directly taken in the form of rent (slave rent as in the West Indies, and land rent as in Ireland). Finally, India’s foreign exchange earnings were appropriated by Britain so as to settle its trade deficits with continental Europe and the USA (see below).
As shown in Table I, the nominal balance of trade includes more than direct merchandise trade, making it appear that Britain ran a trade surplus, not deficit, with its colonies. For no matter how great the trade surplus became (in 1913 India had the second largest trade surplus earnings in the world at £71 million), much larger fictitious invisible political charges were imposed to nullify the increased export earnings and, in fact, produce a small deficit on current account. Thus, as Patnaik highlights, countries with large and growing merchandise export surpluses such as India and Malaysia had more than their exports earnings siphoned off by Britain through politically imposed invisible burdens and had to borrow, while the country with a large and growing trade deficit, Britain, was able to siphon off the exchange earnings of its colonies and more than offset its current account deficit with sovereign regions, so that it actually exported capital to these regions on an increasing scale (Patnaik 2006, p. 41).
Nonetheless, as Patnaik shows, the unpaid trade surpluses extracted from the oppressed nations of the British Empire allowed British capital accumulation to advance rapidly. By calculating the direct merchandise import surplus from India and the West Indies into Britain and using this as the measure of surplus transfer from these colonized regions, Patnaik (2006) estimates the level of Britain’s rates of capital formation that were thereby made possible. She finds that the combined colonial transfer expressed as a percentage of Britain’s savings, is at least 62.2 in 1770, 86.4 in 1801, 85.9 in 1811, and 65.9 in 1821 (Patnaik 2006, pp. 49–50, quoted in Cope, 2014, pp. 276–278).
Britain’s capital accumulation was intimately connected to its plunder of the colonies. Value transferred from the Third World, over and above the prevailing domestic level, raises the profitability of First World business not only by cheapening the costs of constant and variable capital, allowing for much higher rates of consumption of both, but also, in the colonial era at least, by allowing for increased rates of capital formation through unpaid trade surpluses.
Colonialism, Popular Consumption, and Labor Reformism
It is clear from the above that European capitalists derived enormous wealth from colonialism. The British economy was in part the product of commercial hegemony achieved through imperialism, allowing Britain to become industrialized with a large proletarian population. However, the question remains: to what extent did the European proletariat itself benefit from colonialism? We argue here that, despite creating much of the surplus value produced in their respective nations in the earlier part of the industrial capitalist era, the workers of Europe between 1875 and 1950 (roughly the era of high imperialist colonialism) were nourished by colonized peoples' labor, their incomes were dependent on the proceeds of colonialism, and their employment was a function of the maintenance of colonialism. The divide between the workers of the colonial nations and those in the colonized nations widened as imperialism advanced, so that both the living conditions and the political horizons of each group of workers became increasingly polarized. We will examine here how colonialism raised the living standards of all European workers, particularly those organized workers poised to exploit the scarcity of their skills, as well as their "racial" and religious affiliations, vis-à-vis the colonized.
Capital and revenues from the colonies made wage increases for the metropolitan working class possible. Wages in England increased relative to prices by 26 per cent in the 1870s, 21 per cent in the 1880s, and 11 per cent in the 1890s. It was skilled workers who particularly benefited: a skilled worker earned approximately twice as much as an unskilled worker, who still lived just above subsistence level.
The working class had, following the political reforms of the second half of the 19th Century, organized into powerful trade unions. This allowed the upper layers of skilled workers to obtain better wages and working conditions as well as expansion of trade union rights. This wage increase—which occurred first in England and later in France, Germany, and other Western European countries—contributed to the expansion of consumption power and to the reduction of the recurring overproduction crises that capitalism had hitherto suffered.
The only way wage levels could rise without the profit rate falling below what was necessary for capital accumulation was by the exploitation of an increasing number of people employed in the colonial areas as workers in plantations, mines, and factories. Here, wages were set at subsistence level or less. The superexploitation of labor was the basis of the higher profits for capital invested in the colonies. The fall in the rate of profit that would have occurred as a result of rising wages in Europe was thereby compensated for by the increasing amounts of surplus labor performed in the colonies. On the one hand, capital benefited from rising wages at home by raising effective demand for commodities, while on the other hand, the low wages in the colonial areas maintained high profits. In this way colonialism solved the contradiction of capitalism in the North by dissolving the stagnating effect of higher wages within the enhanced exploitation of the proletariat in the South.
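The offsetting effect described here can be illustrated with the Marxian rate of profit, s / (c + v). All figures below are hypothetical, chosen only to show how a metropolitan wage rise that depresses the domestic profit rate can be compensated by a colonial sector paying near-subsistence wages.

```python
# Hypothetical illustration of the compensation mechanism described above:
# a metropolitan wage rise shifts value from surplus to wages, lowering the
# domestic profit rate; adding a superexploited colonial sector restores
# the aggregate rate. All numbers are invented for the example.

def profit_rate(surplus, constant_capital, variable_capital):
    """Marxian rate of profit: s / (c + v)."""
    return surplus / (constant_capital + variable_capital)

# Metropole before and after a wage rise (c = 100 throughout):
r_before = profit_rate(surplus=50, constant_capital=100, variable_capital=50)  # ~0.333
r_after = profit_rate(surplus=40, constant_capital=100, variable_capital=60)   # 0.250

# Aggregate of the metropole (after the wage rise) plus a colonial sector
# yielding high surplus against a very small wage bill:
r_combined = profit_rate(surplus=40 + 25,
                         constant_capital=100 + 40,
                         variable_capital=60 + 10)                             # ~0.310

print(round(r_before, 3), round(r_after, 3), round(r_combined, 3))
```

The domestic rate falls with the wage rise, but the blended rate climbs back toward its old level: higher home wages sustain effective demand while colonial surplus labor sustains profitability.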
Economist Joan Robinson (1970, pp. 64–6) described the link between colonialism, the development of capitalism in Europe, and working-class consumption patterns:
“It was not only superior productivity that caused capitalist wealth to grow. The whole world was ransacked for resources. The dominions overseas that European nations had been acquiring and fighting over since the sixteenth century and others also, were now greatly developed to supply raw materials to industry. … The industrial workers at home gained from imperialism in three ways. First of all, raw materials and foodstuffs were cheap relatively to manufactures which maintained the purchasing power of their wages. Tea, for instance, from being a middle-class luxury became an indispensable necessity for the English poor. Secondly the great fortunes made in industry, commerce and finance spilled over to the rest of the community in taxes and benefactions while continuing investment kept the demand for labor rising with the population. … Finally, lording it around the world as members of the master nations, they could feed their self-esteem upon notions of racial superiority. … Thus the industrial working class, while apparently struggling against the system, was in fact absorbed in it.”
Imperial Consumption
The most important commercial crop at the beginning of the 19th century was sugar. Produced by slave labor, its sale generated enormous profits for sugar merchants, plantation owners, and investors. Sugar consumption in Britain doubled between 1690 and 1740. By the 1830s and the advent of industrialised textile production, however, its market value had been exceeded by cotton. Britain was unable to produce cotton and imported all of it from America, where it was produced by slaves, and from Egypt and India, where it was produced by subsistence peasants. Raw cotton, sugar, rum, and tobacco imports were shipped by the tonne into prosperous British ports like Bristol, London, and Liverpool (see, for instance, Lane, 1987); all originated in the expanding slave plantations of America and the Caribbean.
Many of Britain’s primary products were producible exclusively in colonized tropical countries, though some were “temperate” food grains from colonies such as Ireland and India, as well as from the settler-colonial United States. In 1800 and at the height of the industrial revolution, an estimated 18 per cent of beef and pork consumption, 11 per cent of butter and margarine consumption, and 12 per cent of wheat and wheaten flour consumption in Britain was met by Irish imports (Jones 1981, p. 67). British importing of Irish grain, cattle, butter and so on contributed to the hellish starvation in Ireland in the 1840s and 1850s, from which that country’s population has still not recovered almost two centuries later. These temperate foodstuffs came to constitute 31 per cent of all imports of food and drink in 1844–6 and fully 43 per cent in 1854–6 (Davis 1979, p. 37). The most important items of direct mass consumption for which there was substantial or complete import dependence were wheat (of which India was probably the third most significant source) and wheaten flour, rice, cane sugar (beet sugar production in Continental Europe being fairly insignificant), tea, coffee, and tobacco. Of these, only the first was produced in Britain but production was not growing as fast as population between 1700 and 1850.
From the middle of the 19th century, a substantial general rise in incomes, particularly, as Davis notes, those of a large minority of the population (farmers, many kinds of skilled workers, the professional classes, and rentiers), led to a sudden leap in demand for semi-luxury food and drinks and a sharp increase in the amount consumed per head. In this period Britain shifted to “the kind of import dependence in which starvation, rather than inconvenience or even poverty, became the alternative to importing” (Davis 1979, p. 52).
Whereas standard long-run real wage series simply divide the nominal wage by the price of an unchanging consumption basket, Hersh and Voth (2009) show that after Europe’s “discovery” of America, its consumption habits were profoundly transformed and dramatically improved. They calculate that income gains from colonial goods imports such as tea, coffee, and sugar added at least the equivalent of 16 per cent, and possibly as much as 20 per cent, of household income to British people’s welfare by the middle of the 19th century. For McCants (2007, p. 436), the intercontinental luxury trades of the early modern period transformed the European economy. Moreover, it was not purely the consumption habits of Europe’s elites that drove this transformation, but those of its working and middle classes:
“Who was drinking all of this tea and coffee? Surely not just wealthy elites, as the volumes are too high to even entertain the possibility of limited social access to hot caffeinated beverages. Some of the import volume was ‘lost’ to re-exports, but the ultimate consumers of these re-exports were, of course, just other Europeans (or their colonial counterparts). Eighteenth century commentators of all national stripes did not hesitate to ascribe consumption of these caffeinated luxuries, usually as a complaint, to the teeming masses of their social inferiors. Probate inventory evidence on the social diffusion of the artefacts associated with this consumption has been accumulating over the past several decades, and it suggests that it was indeed widespread across the social landscape” (McCants 2007, p. 446).
Investigating the consumption habits of European nations over the course of two centuries, popular consumption historian Carol Shammas has defined an item of mass consumption as one consumed by over a quarter of the population, showing that tobacco passed the mass consumption threshold by the middle of the 17th century and sugar at the end of the 17th century (McCants 2007, p. 449).
The mass consumption of these and other consumer imports proceeded apace with the liberalisation of trade, the incorporation of new producer countries in the international market, and the decline of prices all predicated on the expansion of Empire. McCants (2007) summarizes the main trends:
“The consumption of tea, coffee, sugar, tobacco, porcelain, and silk and cotton textiles, increased dramatically in western Europe beginning as early as the closing decades of the seventeenth century, only to accelerate through much of the eighteenth century. The consumer setbacks associated with the period of the French Revolution and a continent at war, especially as triggered by the Napoleonic blockades, should properly be seen as a severe interruption to the trend which would otherwise have extended rather more seamlessly from the early modern trade system to the ‘transport revolution’ of the nineteenth century. Use of the new commodities brought by this trade spread rapidly, both in geographical and social space. … [The] presence of many of these so-called luxury goods is well documented down into the ranks of the working poor by the middle of the eighteenth century. There can be little doubt then, that European demand was fuelled not only by the rich with their growing ‘surplus incomes’ but by the much more numerous lower and middling classes of Europe’s multitude of urban centres, followed by their rural counterparts.”
Over the course of the 1700s perhaps 11 million slaves were exported by European merchants from Africa to the slave colonies on the opposite side of the Atlantic to produce many of these luxury items or their raw materials. As many as one in five slaves died during the journey, after enduring cramped, filthy, and dangerous conditions. Many more would die later on the plantations as a result of disease, overwork, and maltreatment. The expansion of the transatlantic slave trade can be located in the growth of popular consumer demand, behind which lay the sale into bondage of many millions of Africans.
The Political Consequences: From Revolution to Reform
There has been considerable research into the 19th century English labor aristocracy. Both contemporary political opinion and historical research agree that it was the upper layer of organized skilled workers who constituted the labor aristocracy, and that its size and importance changed with economic conditions in the second half of the 19th century. The crucial point is that colonialism and imperialism opened up the possibility of increasing welfare for the metropolitan working class within the framework of capitalism. Reformism became a successful political line and, in tandem, the revolutionary line subsided.
The new economic trends changed the conditions of class struggle. The economic and political improvements that the capitalist class could not provide in the first half of the century—because this was impossible within that regime of accumulation—began to be provided towards the end of the century. Ruling class largesse (such as it was) was definitely not offered voluntarily. But whereas in the first half of the century wage rises and the political enfranchisement of the proletariat were a life-and-death question for capital, it now became possible for capital to accede to these demands. Higher wages, improved working conditions, and extended political rights strengthened the faith of the working class in reformism and made it ever safer for capitalists to give the working class more power. Revolution was no longer on the agenda in Western Europe.
Hobsbawm (1964, p. 341) observed the relationship between colonialism and the development of a strong reformist current within the working class, stating: “The further we progress into the imperialist era, the more difficult does it become to put one’s finger on groups of workers which did not in one way or another draw advantage from Britain’s position […].”
From Internationalism to Nationalism
Marx and Engels coined the battle cry: “Proletarians of all countries, unite!” in The Communist Manifesto in 1848, expressing their hope for working-class solidarity across national boundaries—and even between imperial powers and their colonies—in a common struggle for a socialist revolution. However, they later became disillusioned with these prospects. They pointed out several times the relationship between colonialism and Britain's position as an imperial power, on the one hand, and the embourgeoisement of its working class on the other: the proliferation of middle-class living standards and ideologies amongst the workforce. This can be seen, for example, in Engels's letter to Marx dated October 7th, 1858 (page 90), Engels's letter to Kautsky dated September 12th, 1882 (page 123), or Engels's letter to Bebel dated August 30th, 1883 (page 125 of this book).
The labor movement in the imperialist countries had difficulty not only in demonstrating solidarity with the people in the colonies, but also in coming to terms with oppressed ethnic groups or nations struggling for equal rights at “home.” This issue played out most clearly in the United States over the question of slavery. In England, the Irish immigrants' struggle for equal rights is a parallel. The Irish immigrants were seen as competitors to the English workers and were met with hostility. National chauvinism—the belief in national superiority—played a prominent role in the politics of the English working class. In a letter to Meyer and Vogt of April 9th, 1870 (page 108), Marx compares the British working-class attitude to colonial Ireland and the Irish working class with white Americans' attitude to slaves in the American South.
Marx and Engels’s political practice in this period was centered on the First International Working Men’s Association, which in reality consisted of trade unions and political organizations from the Northwestern part of Europe. The divergent wage levels between the imperial powers and the colonies, and between different ethnic groups within the imperial center, were already an important issue of the time as, for instance, between English and Irish workers in England and between German and Czech workers in Germany. In his speech to the Lausanne Congress of the First International in 1867, Marx ([1867] 1975, p. 422) declared:
“A study of the struggle waged by the English working class reveals that, in order to oppose their workers, the employers either bring in workers from abroad or else transfer manufacture to countries where there is a cheap labor force. Given this state of affairs, if the working class wishes to continue its struggle with some chance of success, the national organizations must become international.”
Marx already had an eye for the significance of differences in national wage levels for the prospects of developing an international class struggle—and that at a time in history when the wage gap was much less stark than today. Marx's strategy in relation to this situation was clear: international solidarity and struggle. Instead, defense of imperialism would become cemented in the British working class in the years ahead.
Imperialist Reformism and the Labor Aristocracy
In Victorian England, we see precisely the kind of social imperialism avant la lettre of which the Western left would find itself approving as superprofits increased:
“The domestic Radical programme, like the Fabian program of a few years later, rested on the assumption that home and foreign affairs had in practice very little connection. At home, the task of the radicals was to promote a more even distribution of wealth; but the wealth that was to be redistributed was taken for granted, without any examination of its sources. It was regarded, in effect, as natural and assured that Great Britain, as the leader of world industrialism, should go on getting richer and richer, and should devote her surplus capital resources to the exploitation of the less developed regions of the world, drawing therefrom an increasing tribute which Radical legislation would proceed to redistribute by means of taxation more equitably between the rich and the poor in Great Britain” (Cole and Postgate 1949, pp. 411–412).
According to Kirk (1985, p. 9), the ranks of the labor aristocracy were broadened in the second half of the 19th century with the rapid expansion of the capital goods sector and its high demand for skilled males, new labor aristocrats in the metal trades joining older ones in building and printing in the capitals of England and Scotland. The political moderation of the mid-Victorian labor movement, especially its trade union component, was due largely to the increased dominance of these skilled males therein, and to its having lain in the hands of “moderate and ‘responsible’ men who, whilst laying strong claims to the rights of male citizenship, wished to achieve a stake in society” (ibid, p. 11).
At least in terms of the third quarter of the 19th century, Kirk argues that Hobsbawm is correct to draw a close connection between the “distinct if modest” improvement in all but the environmental conditions of the working class and increased political moderation. The evidence points to a clear rise in the living standards of a significant section of the British working class from around 1860 and an increasing differential between many skilled and lesser- and unskilled male workers during that period (McClelland 2000, p. 104).
With some important qualifications and corrections, it is valid to posit “an overall link between economic improvement and reformism during the third quarter of the century” (Kirk 1985, p. 81). Thus cotton operatives were generally much better off in material terms in 1875 than they had been in 1850, with the post-1864 years being a period of substantial, indeed, in many cases, spectacular rises in money and real incomes. Given this overall improvement, Kirk argues, “it is surely not coincidental that reformism took increasingly deep root in the cotton towns” (ibid, p. 82). Certainly many labor leaders consciously attributed their newfound moderation to the material and institutional gains of the years after 1850. That there had been real improvements in the standard of living of the working class was explicitly vouched for in the analysis of working-class reformers and their allies at the time (see, for example, Ludlow and Jones, 1867).
Alongside structural changes in the capitalist mode of production (Stedman Jones 1975), rising living standards brought about by falling prices, and the ability of trade union organisations to ensure that wages did not fall concurrently (of which more below), Kirk accounts for working-class conservatism by highlighting conflicts following a massive and unprecedented increase in the level of Irish Roman Catholic immigration into the cotton districts. In the years after the catastrophic famine of the late 1840s, this led to tensions between sections of the immigrant and host communities. Kirk establishes that a “working class fragmented [we would emphasize, stratified—ZC and TL] along ethnic (and wider cultural) lines greatly facilitated the (re)-assertion of bourgeois control upon the working class, and helped to attach workers more firmly to the framework of bourgeois politics” (Kirk 1985, p. 310). Thus, “[ethnic] conflict operated, against the background of the apparent inevitability of capitalism, to restrict further the potential for class solidarity in Lancashire and Cheshire, and to provide sections of the bourgeoisie with the opportunity to assert their authority, in a fairly direct way, upon workers” (ibid, p. 335).
Stedman Jones (1971, pp. 241–2) argues that the extension of the franchise to part of the male working class in Britain with the Reform Act of 1867 (the Second Reform Act) was the means employed by the ruling class to forestall “an incipient alliance between the casual ‘residuum’ and the ‘respectable working class,’ as fear grew on a national level of a possible coalition between reformers, trade unions and the Irish.” Indeed, this analysis is borne out with the example of fiscal policy with respect to sugar duties:
“Government strategy was driven by a number of different elements, not least the fiscal problems of the state. It was necessary to increase revenues by imposing income tax, beginning to shift the burden of taxation from indirect to direct taxes and, at the same time, keeping income tax low through increasing revenues by lowering duties on consumption goods and thus boosting, in particular, working-class consumption. This has to be seen in the broader context of, on the one hand, dealing with the Chartist insurgency by attempting to attach the working class to the state through encouraging consumption and some measures of social reform and, on the other, of dealing with the interests of manufacturing and the effects of the economic depression of 1837–42 through attacking the Corn Law problem. The latter would also entail addressing the crisis in Ireland by moving towards free trade as the putative solution.
“Within the wider framework, [British Conservative Chancellor of the Exchequer and slave plantation owner Henry] Goulburn situated his aims so far as sugar was concerned. Sugar had become an essential element of working-class consumption so his aim was ‘to secure to the people of this country an ample supply of sugar.’ But he also wished to make that supply ‘consistent with a continued resistance to the Slave Trade, and with the encouragement of the abolition of slavery.’ Finally, he sought ‘to reconcile both with a due consideration to the interests of those who have vested their property in our Colonial possessions.’” (Hall et al 2014, p. 145)
However militant the labor aristocracy’s struggles against employers over the past century (and these are frequently and massively exaggerated), they were never directed against the division between oppressor and oppressed nations, against the imperialist system that guaranteed the amount of colonial loot to be divided amongst the warring parties.
The bargaining power of metropolitan wage labor improved as the outmigration of the unemployed to settler and non-settler colonies reduced the size of the reserve army of labor, and as the huge inflow of colonial transfers boosted domestically generated productivity, profits, and investment, thus serving to raise mass living standards.
The connection between labor reformism and colonialism was, however, even more direct. As primary wealth-creators, the major producer industries of the Victorian period were agriculture, textiles, coal, iron and steel, and engineering. These industries were also the major employers, the major export earners and, in the latter part of the century, the major targets for the newly emerging trades unions. In 1889 trades unions had 679,000 members, the majority of whom were in the primary industries. By 1900 there were over two million union members in Britain. Of equal importance was the diversification of industry in this period, along with the ever-increasing range of imported products. According to data compiled by Clegg et al (1964), the majority of the unionized workers in the late 19th century were in iron and steel, coal mining, and cotton and woollen textiles.
Clough (1993) explains how the economic and political benefits accruing to the skilled working class of Victorian England organized in these industries were directly attributable to their exceptional position in the international division of labor at the time, that is, to British colonial imperialism:
“If we look at the sectors where skilled workers and their organisation were strongest, we find them to be closely connected to Empire: textiles, iron and steel, engineering, and coal. Textiles because of the cheap cotton from Egypt, and a captive market in India; iron and steel because of ship-building and railway exports, engineering because of the imperialist arms industry, and coal because of the demands of Britain’s monopoly of world shipping. In a myriad of different ways, the conditions of the labor aristocracy were bound up with the maintenance of British imperialism. And this fact was bound to be reflected in their political standpoint.”
Meanwhile, Hatton et al (1994) have found that the effect of union membership on earnings at this time was of the order of 15–20% and that this effect was similar at different skill levels. A broadly similar pattern is observed for industry groups, although the difference in the impact of unions on earnings across industries was greater than across skill groups.
Socialist Internationalism and Anti-Imperialism Today
Since decolonisation, there has been a shift from value transfer based on colonial tribute to that based on imperialist rent, that is, “the above average or extra profits realized as a result of the inequality between North and South in the global capitalist system” dominated by Western monopolies (Higginbottom 2014, p. 24). The mass embourgeoisement of the metropolitan working classes via receipt of value transferred from the exploited nations and minority communities, and the attendant political pacification, are not admitted by socialists in the imperialist countries. The point to be grasped by the genuine left—those struggling to see an end to capitalism and imperialism alike—is that so long as imperialism functions, internationalist labor movements in the core imperialist countries will be strictly delimited.
In The Eighteenth Brumaire of Louis Bonaparte, Marx remarked that “[t]he Roman proletariat lived at the expense of society, while modern society lives at the expense of the proletariat” since almost all of Rome’s wealth derived from landlordism, slavery, and imperial tribute. The Roman proletariat (from the Latin proles, “offspring”) had little in common with the proletariat of capitalist society. It was considered useful for little but siring children to serve as soldiers or settlers for the empire. As such, the proletariat was a parasitic class that was maintained at the expense of the empire’s peasants, slaves, and colonized peoples. In that sense, it was much like the First World working class of today, which is largely maintained by the surplus labor of proletarians, peasants, and slaves in the exploited nations.
Fighting for higher wages and better living conditions for First World workers is reactionary outside of the struggle against imperialism. Government deficit spending, expanded welfare measures, and protected industry in the affluent countries are not necessarily socialist measures. Those groups, whether ostensibly left-wing or right-wing, which act to preserve the inequality of imperialist relations invariably promote national chauvinist solutions to problems of unemployment and declining living standards (Baran 1978, p. 247). The increasingly respectable fascist movement promises the highest levels of parasitism for white workers, national business interests unhappy with neo-liberalism, and the petty-bourgeoisie opposed to the fiscal requirements of globalized finance capital. The denial of gigantic imperialist value transfer adds fuel to the fire of right-wing populism.
Bibliography
Spoiler!
Edited by Constantignoble ()
#40
putting this here because i think it's an excellent piece by amber b, who i like, considering things most people don't
https://anti-imperialism.org/2018/08/09/practical-notes-concerning-service-workers-productive-and-unproductive-labor/ posted:
09/08/2018 — Amber B.
Practical Notes on Service Work: Implications of Unproductive Labor
As of 2016 more than 80% of active workers in the united $tates are employed by the service—or tertiary—sector, a large variety of jobs which are defined primarily against the traditional backdrop of “productive” labor employed in agriculture, industry or resource extraction. This is where a bulk of the so-called “unskilled” workforce is employed, particularly in retail and trade jobs, as salespersons, stockers and cashiers. With such prevalence—more than 15 million people being employed in retail trade alone—it demands more than just a stereotyped explanation of conditions and tasks. Even in the First World, we cannot, nor should we, rule out the importance of transforming workplace struggles into revolutionary ones. That said, we must understand exactly what peculiar relations exist here, to say the least, and what problems they pose/how we might solve them to transform the everyday cycle of economistic struggle into revolutionary class struggle on the side of the global proletariat.

There are a number of important factors to consider about the type of work and the environment when approaching the question of struggle among service sector workers. On its face, we must consider what their relationship to production is and how it is important to the politicization of their conditions. Service work is locked firmly in the tertiary sector, or that sector that Marx described as primarily involving the realization of commodities, rather than their production, wherein industrial capital sacrifices part of the surplus-value extracted in the production process to the sphere of circulation as the profit of the “merchant”. This ranges from the hourly waged workers in retail, to self-employed professionals who contract with advertising firms (although these are sometimes lumped into a quaternary sector dealing with information, nevertheless, they are parasitic outgrowths on productive capital). Of course these two jobs have little in common despite the sectoral similarity.
Nevertheless, it is clear that those in the service sector are not productive of surplus-value, but simply aid in its realization as profit. Therefore, despite being generally organized in a capitalistic setting, these workers are not productive in a capitalist sense, that is, of surplus-value. This may appear on its face to be a pedantic point, since to a low-wage service sector employee, their conflict with management remains constant. However, the division between unproductive and productive labor has a fundamental impact on our political practice. The problem is twofold: (1) Since the workers do not produce surplus-value, their contribution to the concentration of capital and the expansion of the capitalist system is external, indirect. Productive capital surrenders part of the surplus-value to pay for circulation, and their wages are mere faux frais of that sphere of distribution. (2) Since they do not produce surplus-value, they are not exploited in the strict Marxian sense, and therefore the politics of their contradictions with management are put into a different politico-economic context. So what does this mean for the reality of class struggle in a service/retail environment? Before we can answer this question more exactly, we must more precisely locate that section among which it would be valuable to agitate. There is very little to be done at this moment for the upper echelons of this sector—some self-employed, all comfortably petty bourgeois. These are the individuals employed as artists, proprietary salespeople, skilled craftspeople and consultants. Rather, for the purposes of this analysis we are interested in those employed regularly in a capitalistic setting, receiving a wage as the primary payment for their work. These are the cashiers, sales associates, fast food workers, janitorial staff, etc. who still have a contradictory relationship with a bureaucratic structure above them. 
Even still, a number of challenges face us in actually mobilizing these workers politically, and away from the economistic self-interest that tends to define most unconscious (or petty bourgeois-conscious) workers’ struggles. In the united $tates and the First World more broadly, one of these is certainly the relatively high wages of the workers employed in this sector. Many workplaces now pay far above the minimum wage, with Walmart (the united $tates’ largest private employer) offering a starting rate of $11 per hour to all its associates, and even higher depending on one’s department. Not extravagant by any means, but they certainly aren’t revolting any time soon.
The recent increases in wages, and the promise of further increases, have severely hampered the development of worker organization and class struggle in these firms. Simply put, high wages in some firms have placed negative incentives on struggle, even around serious issues, as workers simply do not see the cost of struggling for their resolution as “worth it” when measured against these high wages. This is by no means the deciding factor, but clearly it does play a role, especially when so many are living above just mere subsistence. The overall condition of the workers is not reducible to wages, but at the same time we cannot discount them. The ability for workers to live in relative comfort and security has a huge impact on the impetus for class struggle, and so long as imperialism functions as it does, emphasizing the consumptive power of First World people over their productive power, this is not likely to change. Even across sectors, we cannot deny the impact that high wages have had in diminishing the support for class struggle, and qualitatively changing the standard of living enjoyed in the First World in comparison to the global majority in the Third World.
But even for those low-wage service workers, who live significantly less comfortably, what impact does their unproductive status have on the prospects for struggle? Does the technical lack of exploitation mean anything in the face of the hardships they experience? We must answer yes, in fact it does. The bourgeoisie will always, in the last instance, move to protect production over the unproductive sectors in society, on the basis that it is only production which can lead to accumulation. So follows the old saying, reiterated by Smith and Marx, that one grows rich from workers, but poor from servants. In a general crisis, the heights of bourgeois society would rather terminate a vast majority of those employed in the tertiary sector than allow permanent damage to the productive chain. This is why Marx says that it is ultimately the proletariat, who produce all surplus-value in society, that has the power to destroy class society by first seizing the means of production. For service workers, their status in the eyes of the bourgeoisie as unproductive workers makes their labor much more expendable.
It is inevitable that some will read “unproductive” as a moral categorization. That is unfortunate. It does not mean that these workers are unimportant in society, but rather that they have been rendered “unproductive” in the eyes of the bourgeoisie, who place primary importance on surplus-value extraction as the engine of accumulation. For us, the support of these workers is still important when it can be developed. However, their position is not the same as those in the industrial or even the agricultural sector, and therefore we cannot approach their predicament in the same way. The wages of productive workers are pegged, in many ways, to the values they create, as well as the historically determined reproduction cost of their labor; the wages of unproductive workers, however, are pegged only to this historical determination, hovering alongside or below the wages of the productive sector, and are dependent upon surplus from the productive sector. So in terms of concrete political work, this complicates our demands. For productive workers in the abstract, the demand is greater control over their production, underscoring the deviation between their compensation and the value they produce. For unproductive workers, the issue is more complex.
Even so, there are many vectors through which revolutionaries can insert themselves at the forefront of workers’ grievances. The bourgeoisie in core countries, even while it pursues an overall policy of maintaining social peace, still comes into conflict with workers in their own countries on a regular basis. Class contradiction is restrained through the concessions to the First World working class. Its results are often greatly maligned by the bourgeoisie, but they have not been eliminated. One of the larger and more serious claims against the oppressive relations in the service sector is the outright theft of wages or guaranteed compensation. Even in the First World, this is extraordinarily common, and is one of the prime examples of ways in which management repays their privilege through strict service of the bourgeoisie’s interests.
There still remains a serious limitation, however. The propensity for these struggles to inform an economistic strategy is born out of the fact that capital makes no legal claim on stolen wages. Ultimately, bourgeois rights enforce the rights of individuals to be paid for their work, and it is one of the fundamental realizations of Marx that people can be exploited even while they are paid for their work. This is not truly the case with service workers and the petty bourgeoisie at large, so the issue of wage theft is further magnified in their political demands. The problem is that legal struggles can return the stolen funds, plus damages, and no higher ideological point is necessary. This is not the case with the general exploitation present in the capitalist system, and capital certainly does make legal claim on the surplus-value produced by the workers who have no legal right to demand it returned to them in any bourgeois court. So on the question of wage theft alone, a very serious campaign must be waged against simple “fair trade” appeals, that stop at the demands for compensation. We must illuminate to the workers what forces provide the impetus for wage theft among the management, who oftentimes do not directly pocket the money owed to their workers, but do so on behalf of the big bourgeois, who cycle down privileges to them.
Drawing further from this realization, another possibility for revolutionaries is underscoring the fact that the division between mental and manual labor—especially in larger firms—is oftentimes practically irrelevant and serves a primarily political purpose. For the most part, there is no specific task performed by management which even “unskilled” workers cannot do, especially when combined with the relative high-technology in most of these firms. Most of the complicated tasks handled by management are automatically performed by computers, and for the most part the skill level needed to interact with this technology is seriously overstated. For instance, Walmart’s “point of sale” software automatically reorders items when they are sold, and tracks sales, inventory and other statistics useful to the operation of the store. What’s more, these figures are available to any worker who cares to look at them. Primarily, the division of labor and abundant bureaucracies of these firms play a political role, providing the corporation overall with loyal representatives to carry out their interests against those of the workers. Their role has more to do with discipline than distribution.
This is definitely something that could be explored as the basis for the transition from economistic demands to something more revolutionary, but it is still dependent upon the leadership and solidarity of productive labor to push toward a final conclusion. This is for two reasons: firstly, despite handling the final distribution of commodities, they do not occupy the strategic engine of capitalist accumulation that is production. The takeover of the centers of distribution alone cannot suffice to actually put society in the hands of the proletariat. Only productive capital in the hands of revolutionaries is capable of transforming society in such a fundamental way, and this is not something which can be annexed by workers of the tertiary sector without leadership fundamentally emanating from it. In addition to this, there is the ideological gap existing between them, with the ideological impulses of retail workers especially more resembling those of the petty bourgeoisie and proprietor. Territory won by them in control over their firms puts them in greater charge of capital not produced by them, and profits earned through the sale of commodities produced elsewhere. Were the unproductive mass of First World workers elevated into control over their industries, they would merely oversee, collectively, value-chains beginning in south, east and southeast Asia, and ending in their own consumption. Their livelihoods, still dependent upon productive capital, help to build loyalty to the bourgeoisie without direct leadership and solidarity with the proletariat.
This is true even moving away from retail to other service work—cosmetic and medical work, for instance—where the dependency upon productive labor is the same. It is clear that a direct link must be formed throughout the process of struggle directly to workers in the productive sector, and in the First World that means emphasizing those links to the proletariat in the Third World, whose sweat produces the commodities First World workers consume, and whose surplus-labor forms the greater part of the wages that cycle down through the processes of super-exploitation and imperialist rent. That does not mean immaterial and esoteric calls for symbolic international action, but deep and serious work that ties labor action in these sectors to the productive sector.
In actuality, workers in distribution especially can have a tremendous effect in amplifying the consequences of strikes in the productive sector. Yet this cannot be merely coincidental. It must be the focus of our work, and we must continue to stress to the workers the necessity in actions organized across sectors, rather than confined ones that aim themselves only at the distinct (and in many cases petty bourgeois) interests of workers in only one sector. This also implies conflict between nativist, loyalist and racist workers, who are the majority, and progressives/revolutionaries. This is merely the microcosmic expression of internationalism, which takes the macrocosmic form of active national/land and anti-imperialist struggle. Building international ties, as well as uniting all who can be united, will be difficult. We do not discount the power of the subjective element.
Overall, the greatest single challenge to overcome in organizing workers in the service sector on a truly revolutionary basis is their relationship to imperialism and the international division of labor. Any effective struggle within this sector must be solidly linked to proletarian leadership in the productive sector, without question. In the First World this is complicated by the sheer size of the service sector in comparison to the productive sector, as the imperialist campaign for super-profits has led to the largest deliberate “deindustrialization” project in history.
The productive sector that remains in the First World has been fragmented by the labor aristocracy and their broad class control over the spheres of struggle among those still trapped in horrid working conditions, and even still this sector has been greatly reduced in size. Steadfast work must be done to counter labor aristocratic consciousness and class influence in the productive sector, work that can only be carried out by communists. If an organized and politically conscious working class in First World countries cannot be united with their Third World counterparts—that is, transformed from a parasitic enemy contingent into an active accomplice—then the effort has been wasted, and vital communist energies diverted to a deleterious project.
These notes by no means exhaustively answer the question of what deeper issues lie in organizing the service sector; they serve only to open discussion on the topic. Many more peculiarities of class need to be discussed and further elucidated to fully grasp the overall problems in organizing within the service sector. What has been underscored here, however, is the need to firmly weld the movement of service workers to the broad movement for social and internationalist control over production. It remains true that only the proletariat is capable of leading the whole of humanity to communism; therefore the leading role of the proletariat and its vanguard—as well as its location geographically—cannot be minimized.
my thoughts are that it lacks a discussion of how women fit into this great contraction of wages in the service sector (i.e. less surplus value sacrificed to the sphere of circulation), since a cursory glance over the gendered occupation structure shows that men maintain a stranglehold on productive jobs (and other traditionally high-paying non-productive jobs), excluding women while avoiding non-productive jobs themselves - what will this mean now and in the future?
Documentation/UserGuide/Problem Generators
# What is a Problem Generator?
The purposes of the problem generator are to
• set initial conditions for all variables, in a function that must be called problem and has the following prototype
void problem(DomainS *pDomain)
• enroll any problem-specific boundary conditions, if required. See Boundary Conditions.
• enroll any problem-specific user-defined history outputs, if required. See User-defined Output Variables.
• enroll any problem-specific physics controlled by function pointers, like forces due to a static gravitational potential, or optically-thin cooling.
It will probably be necessary to read the Programmer Guide in order to understand the data structures and Mesh in Athena well enough to write a new problem generator. The existing files in the /src/prob directory can be used as starting templates.
In addition, the file containing the problem() function must also contain a number of other required functions
• problem_write_restart() - writes problem-specific user data to restart files
• get_usr_expr() - sets pointer to expression for special output data
• get_usr_out_fun() - returns a user-defined output function pointer
• Userwork_in_loop() - problem-specific work IN the main loop
• Userwork_after_loop() - problem-specific work AFTER the main loop
In particular, see User-defined Output Variables for a description of how to use the function get_usr_expr() to add new user-defined output variables using one of the existing file formats, and see User-defined Output Formats for a description of how to use the function get_usr_out_fun() to add new user-defined output formats.
It may also be necessary to include special user-defined functions in the same file that contains the problem generator if new output variables, or new output formats, are used.
A large number of problem generators are included in the ./athena/src/prob directory.
# Parsing the Input File
As mentioned in the section on Input Files, data in the input file must be read using functions defined in a parser written for Athena, located in the file src/par.c. It is quite likely that every problem generator will require the input of at least a few parameters from the <problem> block in the input file. The following functions can be used in the problem generator to read data from the input file.
char *par_gets(char *block, char *name); /* reads a string called "name" from input block "block" */
int par_geti(char *block, char *name); /* reads an integer called "name" from input block "block" */
double par_getd(char *block, char *name); /* reads a double called "name" from input block "block" */
The following three functions do the same thing, except set the value to that given in the def, if the name cannot be found in the input block. This is useful for setting default values to parameters without having to always include them in the input file.
char *par_gets_def(char *block, char *name, char *def);
int par_geti_def(char *block, char *name, int def);
double par_getd_def(char *block, char *name, double def);
Most of the problem generators in the src/prob directory contain examples of the usage of the above functions. Below are some examples.
/* Read problem parameters. Note Omega_0 set to 10^{-3} by default */
Omega_0 = par_getd_def("problem","Omega",1.0e-3);
qshear = par_getd_def("problem","qshear",1.5);
amp = par_getd("problem","amp");
filename = par_gets("problem","fname");
Finally, sometimes it is useful to set parameters that are not already defined in a specific <input block>. The following functions can be used for this purpose.
void par_sets() - sets/adds a string
void par_seti() - sets/adds an integer
void par_setd() - sets/adds a Real | 2017-07-22 08:45:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4121447801589966, "perplexity": 3713.8877525401517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423927.54/warc/CC-MAIN-20170722082709-20170722102709-00035.warc.gz"} |
http://clay6.com/qa/18027/arrange-the-correct-sequence-of-enzymes-which-act-on-food-in-different-regi | # Arrange the correct sequence of enzymes which act on food in different regions of alimentary canal :
$\begin {array} {ll} (a)\;Pepsin & \quad (b)\;Ptyalin \\ (c)\;Dipeptidase & \quad (d)\;Carboxypeptidase \end {array}$
$\begin {array} {ll} (1)\;(a),\: (c),\: (b),\: (d) & \quad (2)\;(b),\: (a),\: (d),\: (c) \\ (3)\;(a),\: (d),\: (c),\: (b) & \quad (4)\;(b),\: (a),\: (c),\: (d) \end {array}$
Answer: option (2). Ptyalin acts first in the mouth (salivary amylase), pepsin next in the stomach, carboxypeptidase in the small intestine (secreted in pancreatic juice), and dipeptidase last at the intestinal brush border.
https://www.learnable.education/how-to-ace-your-hsc-physics-trial-exam-physics-exam-guide/hsc-physics-module-8-practice-questions/ | # HSC Physics Module 8 Practice Questions with Solutions
Can you solve these 10 Must Know HSC Physics Module 8 Exam questions? Test your exam-readiness with our Module 8 'From the Universe to the Atom' questions.
Module 8 ‘From The Universe to the Atom’ is the most conceptually challenging topic in HSC Physics. It explores the development of the atomic model and the origins of the universe.
A large number of students lose marks from Module 8 long-response questions because their written responses lack sufficient details.
This guide covers:
• 17 commonly asked question types from the HSC Physics Module 8 Syllabus
• 10 exam long-response questions you are likely to be asked in your trial or HSC exam for Module 8 ‘From The Universe to The Atom’. Detailed solutions are included.
## What are commonly asked exam question types in HSC Physics Module 8?
17 Commonly asked question types from the HSC Physics Module 8 Syllabus are listed below:
Origins of the Elements:
• Describing the processes that led to the transformation of radiation into matter that followed the ‘Big Bang’.
• Discussing the evidence that led to the discovery of the expansion of the Universe by Hubble.
• Analysing the key features of stellar spectra and describing how these are used to classify stars.
• Using the Hertzsprung-Russell diagram to determine the following about a star: characteristics and evolutionary stage, surface temperature, colour, luminosity.
• Identifying and discussing the types of nucleosynthesis reactions involved in Main Sequence and Post-Main Sequence stars.

Structure of the Atom:
• Assessing the experimental evidence supporting the existence and properties of the electron.
• Assessing the experimental evidence supporting the nuclear model of the atom.

Quantum Mechanical Nature of the Atom:
• Assessing the limitations of the Rutherford and Bohr atomic models.
• Examining the Balmer series in hydrogen quantitatively using the Rydberg equation.
• Discussing de Broglie’s matter waves and the experimental evidence that developed the formula $\lambda=\frac{h}{mv}$.
• Analysing the contribution of Schrödinger to the current model of the atom.

Properties of the Nucleus:
• Analysing the spontaneous decay of unstable nuclei, and the properties of the alpha, beta and gamma radiation emitted.
• Making quantitative predictions about the activity or amount of a radioactive sample.
• Explaining the process of nuclear fission, including the concepts of controlled and uncontrolled chain reactions, and accounting for the release of energy in the process.
• Analysing relationships that represent conservation of mass-energy in spontaneous and artificial nuclear transmutations, including alpha decay, beta decay, nuclear fission and nuclear fusion.
• Accounting for the release of energy in the process of nuclear fusion.
• Predicting quantitatively the energy released in nuclear decays or transmutations, including nuclear fission and nuclear fusion.
## 10 Must Know Questions for HSC Physics Module 8
### Question 1 (5 marks)
Bohr’s atomic model is known as the first quantum model of the atom.
(a) Explain why the spectroscope was important in the development of the Bohr model of the atom. (3 marks)
(b) Assess the limitations of the Bohr atomic model. (2 marks)
See Question 1 Solution
### Question 2 (2 marks)
Describe Schrödinger’s contributions to the quantum model of the atom.
See Question 2 Solution
### Question 3 (6 marks)
The mass of a helium atom has been found to be $4.00389 \ u$. The exact masses of proton, neutron and electrons in unified mass units are:
• $m_{proton} = 1.007276 \ u$
• $m_{neutron} = 1.008664 \ u$
• $m_{electron} = 0.0005486 \ u$
(a) Calculate the binding energy of the helium nucleus. (3 marks)
(b) Using the law of conservation of mass-energy, explain why the mass of the nucleons inside the helium nucleus differs from their rest mass $m_0$. (3 marks)
See Question 3 Solution
### Question 4 (5 marks)
The first atomic weapons were based on the uncontrolled fission of uranium (and plutonium). An example of a uranium fission reaction is shown below:
$^1_0n + \ ^{235}_{92}U \rightarrow \ ^{141}_{56}Ba + \ ^{92}_{36}Kr + 3\big( ^1_0n \big)$
The table below provides the mass of each particle within the reaction:
Particle | Mass (amu)
$^{235}_{92}U$ | $235.0439299$
$^{141}_{56}Ba$ | $140.9144$
$^{92}_{36}Kr$ | $91.92617$
$^{1}_{0}n$ | $1.00867$
(a) Explain what is meant by an uncontrolled fission chain reaction. (2 marks)
(b) Use the information in the table to calculate the energy (in Joules) released by the reaction. (3 marks)
See Question 4 Solution
### Question 5 (5 marks)
According to the Standard Model of Matter, all the particles in the universe can be grouped into just three “families” of elementary particles.
(a) State the three families of elementary particles in the standard model. (1 mark)
(b) Outline the composition of the He-2 nucleus in terms of fundamental particles. (2 marks)
(c) He-2 is an unstable isotope of helium. Its nucleus undergoes a positron decay. Write the nuclear reaction and hence state the force carrier particle responsible for the decay of a helium-2 nucleus. (2 marks)
See Question 5 Solution
### Question 6 (6 marks)
Particle accelerators are devices that accelerate particles to high velocities and energies. They are used to discover, investigate and understand the fundamental particles and physical laws that govern matter, energy and spacetime.
(a) Compare the operating principles of a cyclotron and synchrotron. (2 marks)
(b) Describe how the key features and components of the standard model of matter have been developed using accelerators as a probe. (4 marks)
See Question 6 Solution
### Question 7 (6 marks)
A sketch of a Hertzsprung-Russell diagram is shown below.
(a) Identify the stars that will evolve into a white dwarf. Provide a reason. (2 marks)
(b) Outline the TWO nuclear reactions that are occurring in star M. (2 marks)
(c) Explain the effect of a main sequence star’s mass on its life span. (2 marks)
See Question 7 Solution
### Question 8 (5 marks)
Energy can be produced in the cores of main sequence stars by two different nuclear processes.
(a) Identify the two nuclear processes. (1 mark)
(b) Describe ONE similarity and ONE difference between these two processes. (2 marks)
(c) Explain which feature of a star determines which of the two processes will be predominant in that star. (2 marks)
See Question 8 Solution
### Question 9 (2 marks)
What is the origin of the Cosmic Microwave Background radiation and why is the wavelength of the radiation in the microwave part of the spectrum?
See Question 9 Solution
### Question 10 (6 marks)
Hubble’s Law states that the velocity of a galactic object relative to Earth is proportional to its distance from Earth:
$v = H_0 d$
(a) Outline the significance of Hubble’s Law. (1 mark)
(b) Explain how Hubble used cosmic redshift observations and Cepheid variable stars to support Friedmann’s prediction of an expanding Universe. (5 marks)
See Question 10 Solution
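A quick numerical illustration of the proportionality may help before attempting part (b). The figures below are assumed, typical values (roughly $H_0 \approx 70 \ km \ s^{-1} \ Mpc^{-1}$), not data given in the question:

```latex
v = H_0 d \approx 70 \ \mathrm{km\,s^{-1}\,Mpc^{-1}} \times 100 \ \mathrm{Mpc}
  = 7000 \ \mathrm{km\,s^{-1}}
```

So a galaxy twice as far away recedes twice as fast, which is the linear relationship Hubble inferred from his redshift and distance measurements.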
## Solutions to HSC Physics Module 8 Questions
Question Solution 1 Part (a):For explain questions, students should use the CEO Framework (Cause, Effect, Outcome) to provide a logical and sequential response.The spectroscope allowed scientists to identify the characteristic spectral lines of different elements, including hydrogen. The Rutherford model of the atom had no electronic structure and was unable to explain the existence of atomic spectra. These spectral lines caused Bohr to hypothesise the existence of stable and fixed orbits for electrons, since the specific wavelengths of the lines suggested that there were fixed gaps between the available atomic electron energy levels. Without the spectroscope Bohr would not have made this hypothesis and would not have developed his model of the atom. Part (b):For assess questions, students should consider pros and cons and make a judgement.The Bohr atomic model was successful only in predicting spectral line wavelengths, and only of single electron species (e.g. $\textrm H, \textrm H\textrm e^+$).It was unable to explain the spectra of atoms larger than hydrogen, the relative intensities of spectral lines, Zeeman splitting or hyperfine splitting. This is due to Bohr’s model, with only one quantum number $n$, being an incomplete model of the atom.Back to Question 1 2 Schrödinger developed the wave function model of the electron, treating the electron as a three dimensional standing wave. 
This model leads to the first three of four quantum numbers, $n, l, m_l$, required to explain how electrons exist in atoms.Back to Question 2 3 Part (a):Step 1: Calculate the sum of masses of the components.$m_{protons} = 2 \times 1.007276 = 2.014552 \ u$$m_{neutrons} = 2 \times 1.008664 = 2.017328 \ u$$m_{electrons} = 2 \times 0.0005486 = 0.0010972 \ u$Step 2: Calculate the mass defect by subtracting the mass of the nucleus from the masses of the constituents.\begin{aligned} \Delta m &= m_{components} - m_{helium} \\\\ &= (2.014552 + 2.017328 + 0.0010972) - 4.00389 \\\\ &= 0.0290872 \ u \end{aligned}Step 3: Using the result obtained in Step 2, calculate the binding energy of the helium nucleus. \begin{aligned} E &= 0.0290872 \times 931.5 \\\\ &= 27.0947 \ MeV \end{aligned} Part (b):The Law of Conservation of mass-energy extends Einstein’s theory of mass-energy equivalence. Since mass is a type of energy, it can be transformed into other forms of energy. This means that mass on its own is no longer required to be conserved.When an atom is formed, the constituent nucleons each lose a small amount of mass. Consequently, the mass of the nucleus is less than the summed mass of the individual, separate particles. By the Law of Conservation of mass-energy, the “lost” mass is conserved and is converted into energy according to $E = \Delta mc^2$.Separating the nucleus into individual nucleons requires work, which corresponds to the binding energy of the nucleus.Back to Question 3 4 Part (a):An uncontrolled fission chain reaction refers to a fission chain reaction where the reproduction constant $K$ is greater than 1. This means that more than one neutron from each fission reaction will go on to trigger another fission reaction. Since each fission leads to more than one subsequent fission the reaction rate will increase exponentially, resulting in an uncontrolled fission chain reaction. 
Part (b):Step 1: Calculate the mass defect of the reaction.\begin{aligned} \Delta m &= m_{reactants} - m_{products} \\\\ &= (235.0439299 + 1.00867) - (140.9144 + 91.92617 + 3 \times 1.00867) \\\\ &= 0.1860199 \ u \end{aligned}Step 2: Apply $E = m \times 931.5$ to calculate the energy released in the reaction.\begin{aligned} E &= 0.1860199 \times 931.5 \\\\ &= 173.2775 \ MeV \end{aligned}Back to Question 4 5 Part (a):The three families of elementary particles in the standard model of matter are: Quarks, Leptons and Bosons. Part (b):A Helium-2 nucleus consists of 2 protons and no neutrons. Each proton consists of two up quarks and one down quark. Part (c):A Helium-2 nucleus undergoes a positron decay. This turns the isotope into deuterium.The nuclear reaction for this decay is expressed as:$^2_2He \rarr \ ^2_1D + \ ^0_1e + \ \nu_e + energy$The force carrier particle responsible for positron decay is the W Boson.Back to Question 5 6 Part (a):A cyclotron is a circular disk-shaped accelerator that uses electric fields to accelerate charged particles, and confines them in a circular path of increasing radius (an outward spiral) with the use of a constant magnitude magnetic field. As a charged particle gains energy, the radius of its orbit increases, and it spirals out to the limit of the magnetic pole area.In comparison, a synchrotron is a circular ring-shaped accelerator that uses electric fields to accelerate charged particles, and confines them in a circular path of fixed radius by synchronising the magnetic field strength with the energy of the accelerated particles. The magnetic field strength is increased as the energy of the particle is increased. The particles to travel in a circular loop, rather than a spiral. Part (b):The Standard Model of Matter describes the elementary particles of matter, and the forces by which they interact (excluding gravity). 
According to the Standard Model, all particles in the Universe can be grouped into three families of elementary particles: Quarks, Leptons and Bosons.

The first subatomic particle (an electron) was discovered using an accelerator known as a cathode ray tube. As electrons are elementary particles, this was the discovery of the first particle in the Standard Model. Rutherford additionally used alpha sources (natural accelerators) to bombard different materials, leading to the discoveries of nuclei, the proton and the neutron. This was the limit of what natural accelerators could identify due to the limited energies of the alpha particles.

In 1969 the Stanford Linear Accelerator achieved much higher energies than natural accelerators and discovered that protons were not fundamental, but were made of smaller particles. This saw the discovery of quarks, which led to a huge advancement in the understanding of matter and the development of the Standard Model.

The development of synchrotrons allowed even greater particle collision energies, producing particles that could not be identified under normal conditions. This played a crucial role in the discovery and understanding of the rest of the particles described in the Standard Model. This culminated in the Large Hadron Collider’s discovery of the Higgs Boson in 2012, which is the final particle to be verified in the Standard Model.

Back to Question 6

7

Part (a):

Stars T, U and P will evolve into White Dwarfs. Main Sequence stars will evolve into White Dwarfs if their mass is between 0.1 and 8 solar masses. This corresponds to spectral classes from M to B. From the diagram, stars T, U and P all satisfy this condition.

Part (b):

Star M is a red giant. This can be identified by analysing the HR diagram. Red giants undergo two nuclear reactions:

• Hydrogen shell burning,
• Core helium fusion.

Hydrogen shell burning involves the fusion of hydrogen into helium in a shell around the star’s helium core.
The helium produced sinks into the core. Core helium fusion involves the fusion of helium into heavier nuclei within the star’s core. This occurs inside the star’s hydrogen-burning shell.

Part (c):

Higher mass stars live shorter lifetimes, despite having more fuel for fusion. The time taken to consume core and shell fuel supplies (e.g. hydrogen) is inversely proportional to the mass, since higher mass results in stronger gravitational forces. This in turn leads to substantially higher fusion rates, and the larger fuel supplies are consumed in a shorter time.

Back to Question 7

8

Part (a):

The two nuclear processes are:

• The proton-proton chain, and
• The CNO cycle.

Part (b):

Similarity: Four hydrogen nuclei are progressively fused into a single helium-4 nucleus.

Difference: In the CNO cycle, carbon-12 nuclei act as nuclear catalysts to facilitate the reaction (and are ultimately not changed by it). In the proton-proton chain, there is no nuclear catalyst.

Part (c):

The mass of a main sequence star will determine which of the two nuclear processes will dominate in the star. In higher mass stars, the CNO cycle will dominate. This is due to:

• Higher mass stars have higher core temperatures. This allows the CNO cycle to occur.
• Higher mass stars have more carbon-12 readily available in the core. Lower mass stars have insufficient amounts of carbon-12.

Back to Question 8

9

Before atoms formed, the Universe consisted of free nuclei and electrons. These free charged particles constantly absorbed and emitted (i.e. scattered) light. Once the Universe expanded and cooled sufficiently, electrons combined with nuclei to form neutral atoms, which cause dramatically less scattering of light. The light that was suddenly free to travel the Universe has been propagating ever since, and due to cosmic expansion has been redshifted into microwave wavelengths.

Back to Question 9

10

Part (a):

Hubble’s Law is observational proof of the expansion of space.
Objects at larger distances are seen to recede at higher velocities, due to the larger amount of expanding space between them and Earth. This is the observation expected in an expanding Universe, hence this evidence directly supports the Big Bang theory.

Part (b):

Cepheid variable stars were seen to pulsate in intensity or brightness $B$ with consistent period $T$. By using parallax to measure the distance $r$ to nearby Cepheids, their luminosity $L$ could be calculated from the inverse square law, $B = \frac{L}{4\pi r^2}$. It was found that there was a clear proportionality between the luminosity of a Cepheid and its period of pulsation, establishing a period-luminosity relationship.

This information was then used with observations of very distant Cepheids that were too far away for parallax to provide distance. Their observed periods of pulsation were used with the period-luminosity relationship to determine their luminosities. The observed intensities and inferred luminosities then allowed their distances to be found.

Meanwhile, Hubble had observations of the visible spectra of the same distant galaxies in which Cepheids were observed. The spectra of those galaxies exhibited redshifted spectral lines, indicating they had velocities away from Earth. The amount of spectral line redshift is proportional to the velocity. The velocity of the light source (the galaxy) can be determined from the Doppler equation.

$f' = f \frac{ v_{wave} + v_{observer} }{ v_{wave} - v_{source} }$

Hubble then combined the velocities from spectral redshifts with the distances from Cepheid variable observations and discovered that the more distant galaxies had higher recession velocities. The recession velocity was directly proportional to the distance, $v = H_0 d$, now known as Hubble’s Law. This was direct evidence of cosmic expansion and thus supported Friedmann’s prediction of an expanding Universe.

Back to Question 10
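The mass-defect arithmetic in Questions 3 and 4 above follows a single pattern — sum the constituent masses, subtract the product mass, convert with 931.5 MeV/u — which can be sketched in a few lines of Python. The masses below are the ones quoted in the worked answers:

```python
# Sanity check of the mass-defect calculations from Questions 3 and 4.
# Masses in atomic mass units (u); 1 u = 931.5 MeV/c^2, as used above.
U_TO_MEV = 931.5

# Question 3: binding energy of helium-4
m_components = 2 * 1.007276 + 2 * 1.008664 + 2 * 0.0005486
delta_m_he = m_components - 4.00389        # mass defect in u
E_he = delta_m_he * U_TO_MEV               # binding energy, ~27.09 MeV

# Question 4: energy released per U-235 fission
m_reactants = 235.0439299 + 1.00867
m_products = 140.9144 + 91.92617 + 3 * 1.00867
delta_m_fission = m_reactants - m_products  # mass defect in u
E_fission = delta_m_fission * U_TO_MEV      # energy released, ~173.28 MeV

print(round(delta_m_he, 7), round(E_he, 4))
print(round(delta_m_fission, 7), round(E_fission, 4))
```

The same two-step recipe (mass defect, then conversion via $E = \Delta m \times 931.5$) reproduces both worked answers.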
## Access our library of HSC Physics Module 8 Exam Questions
Test your understanding of any HSC Physics Module 8 concepts in just 10 minutes with Learnable’s customisable quizzes with over 500+ questions for each module. Instant feedback provides immediate adjustments on your misconceptions. Try Learnable for free now.
### Written by DJ Kim
DJ is the founder of Learnable and has a passionate interest in education and technology. He is also the author of Physics resources on Learnable. | 2023-03-30 01:05:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 37, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7461807131767273, "perplexity": 991.1989700161819}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00113.warc.gz"} |
http://glennjlea.ca/latex/6-0-creating-chapters/ | 23. Formatting chapter sections
## A note about file structure
If a document consists of multiple chapters, then it is good practice to create one .tex file per chapter. For example:
• chapterone.tex
• chaptertwo.tex
• chapterthree.tex
• chapterfour.tex
Then you define the title and other information at the top of each chapter using specific commands. Furthermore, the document’s TOC (Table of Contents) is built using the chapter title and each chapter’s section headings. You can also add index entries for the chapter.
## Defining a chapter title
Each chapter begins with the following elements:
• The first line requires you to define the title of the chapter. This is used in the headers and in the Table of Contents.
• The second line is used for cross-referencing to this chapter. It serves as a marker or anchor.
• The third line is used by the index.
For example:
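The example itself did not survive extraction; the three lines described above are conventionally written with the standard LaTeX commands, shown here as an illustrative sketch (the label and index keys are made up for the example):

```latex
\chapter{Creating Chapters}        % chapter title, used in headers and the TOC
\label{chap:creating-chapters}     % marker/anchor for cross-referencing
\index{chapters!creating}          % index entry for the chapter
```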
## Section headings
Creating section headings is quite easy - use the `\section` command. Second- and third-level headings are just as easy. These levels use the `\subsection` and `\subsubsection` commands. For example:
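The stripped example presumably looked something like the following (standard LaTeX sectioning commands; the heading text is illustrative):

```latex
\section{First-level heading}
\subsection{Second-level heading}
\subsubsection{Third-level heading}
```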
Note: Three levels of headings are best. Any more and you may need to rewrite sections so they are at most three levels deep. If you must, then just use a bolded paragraph for a fourth-level heading.
## Paragraphs
Paragraphs are entered without markup tags. Adding a new paragraph simply requires a blank line between paragraphs.
## Comments
You can add comments to a .tex file as required using the % character.
## Paragraph line breaks
You can add a line break into text by adding a double backslash (`\\`) or using the `\newline` command. However, these two commands are not entirely identical: the double backslash accepts a star and an optional length parameter.
The following command tells LaTeX to start a new line: `\\`
The following command, `\\*`, tells LaTeX not to start a new page after the line by issuing a `\nobreak`.
The following command, `\\[<len>]`, specifies the vertical space <len> to be inserted before the next line. This value can also be negative.
Note: The above two can also be mixed. That is, using both the starred and optional-argument combination: `\\*[<len>]`.
The following command, `\newline`, is similar to the double backslash, but accepts no star or optional argument.
## Indenting paragraphs in lists
You can define the indentation value of paragraphs within lists in the stylesheet then apply the command where needed.
Then use `myindentpar` in the document flow to indent a paragraph based on the settings.
Note that you can adjust the indent in this command as required.
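The tutorial's original definition is not shown in this extract; one common way to define such an indented-paragraph environment in the preamble (an illustrative reconstruction, not necessarily the author's exact code) is:

```latex
% Indent a paragraph by a given length, e.g. \begin{myindentpar}{1cm} ... \end{myindentpar}
\newenvironment{myindentpar}[1]%
  {\begin{list}{}%
     {\setlength{\leftmargin}{#1}}%
     \item[]%
  }
  {\end{list}}
```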
Click to continue. | 2021-04-11 00:45:16 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9611501097679138, "perplexity": 2204.2639683467623}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00310.warc.gz"} |
https://phd.row1.ca/phd/ontology | Appendix A
PhD Thesis
# Frameworks & Ontologies
Rowan Cockett
University of British Columbia
A computational science framework provides a set of standards such that individual scientists can contribute software components, in this case components used in simulation or inversion routines, with the confidence that those components will work with other components in that framework. As such, the standards of the framework define the responsibilities of each component (or class of component) and the required interfaces of all components. The term framework is commonly used in the software community; however, the formal term for the component organization and its properties is an ontology. The use of ontologies in the sciences to formally describe domain knowledge has exploded in recent decades, especially in domains of artificial intelligence, chemistry, and biology, but also more recently in the geosciences. A computational ontology (rather than the underlying discipline of philosophical ontology) is a "formal explicit specification of a shared conceptualization". The purpose of computational ontologies has been summarized as enabling a shared understanding of the structure of information and systematically enabling knowledge and information reuse. Practically, ontologies can (a) provide access and discoverability to heterogeneous information; (b) act as a common language to lower the barrier to transfer of ideas; and (c) act as a specification for interoperability, for example, as a communication protocol or application programming interface. The techniques for building ontologies amount to capturing, synthesizing, organizing, and digitizing the relationships between concepts, conceptual inheritance patterns, and behaviour. Ontologies are most commonly used for storing and organizing data, for example, connecting genetic data with phenotypic data in bioinformatics. However, ontologies are also used in defining tasks, workflows, and problem-solving methods.
In more mature interdisciplinary fields, this research is becoming core to scientists' day-to-day research; for example, it has been noted that "all current biomedical cyberinfrastructure efforts use ontologies." As a result of successes in other fields, geoscience integration is currently the target of major funding initiatives across the world (e.g. EarthCube - 11-year NSF project, $35M in 2015; CIMIC Footprints Project - NSERC Project, 24 Universities, 30 Industry, $13M). Many of the current efforts are focused on computational science frameworks, formally describing geoscientific data (using ontologies), and formally describing methods of integrating disciplines. For example, a Common Component Architecture for high performance scientific computing has been used as the basis for coupled forward integration of a number of geoscience simulation tools written by different authors. The research into these domain-specific standards for interoperability is critical for sustainable interdisciplinary research.
The growth in complexity of geophysical data and analysis and the necessity for cross-disciplinary integrations is also coincident with the revolution of open source software communities, largely enabled through web-based interactions. Other research communities, for example Astropy in astronomy and SciPy in numerical computing, have embraced the open source approach for collaboration and research. These pioneering efforts are now complemented by easy-to-use, ubiquitous web-based repositories and version-control systems (e.g. GitHub), that have removed many of the barriers associated with management and collaboration. The growth of such systems, coupled with the maturity of individual geophysical subdisciplines (e.g. potential fields, electromagnetics), presents an opportunity to develop a computational framework and associated ontology for geophysical simulation and inversion. An ontology is an embodiment of concepts, relationships, and behaviours in a specific scientific domain and can be (a) captured in special purpose languages (e.g. Web Ontology Language, Resource Description Framework), or (b) captured in general purpose computer programming languages (e.g. Python, Java, C++). To research a geophysical simulation and inversion framework, I have chosen the latter approach for the purposes of utility, testing, and creating a framework/ontology that can be openly used and evolved by the geoscience community.
References
1. Sharman, R., Kishore, R., & Rames, R. (2004). Computational Ontologies and Information Systems: II. Formal Specification. Communications of AIS, 2004(14), 184–205.
2. Ma, X. (2011). Ontology Spectrum for Geological Data Interoperability [PhD thesis].
3. Gruber, T. R. (1993). A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2), 199–220.
4. Noy, N. F., & McGuinness, D. L. (2002). Ontology Development 101: A Guide to Creating Your First Ontology. In Stanford Medical Informatics Report. Stanford University.
5. Fensel, D., Motta, E., Decker, S., & Zdrahal, Z. (1997). Using ontologies for defining tasks, problem-solving methods and their mappings. Lecture Notes in Computer Science, 1319, 113–128. 10.1007/BFb0026781 | 2023-03-28 08:19:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.507439374923706, "perplexity": 3838.683851902327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00126.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-5-polynomials-and-factoring-5-6-factoring-a-general-strategy-5-6-exercise-set-page-344/74 | ## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$2t^2(s^3+2t)(s^3+3t)$
Factoring the $GCF= 2t^2$, then the given expression, $2s^6t^2+10s^3t^3+12t^4$ is equivalent to \begin{array}{l} 2t^2(s^6+5s^3t+6t^2) .\end{array} The two numbers whose product is $ac= 1(6)=6$ and whose sum is $b= 5$ are $\{ 2,3 \}$. Using these two numbers to decompose the middle term of the expression, $2t^2(s^6+5s^3t+6t^2) ,$ then the factored form is \begin{array}{l} 2t^2(s^6+2s^3t+3s^3t+6t^2) \\\\= 2t^2[(s^6+2s^3t)+(3s^3t+6t^2)] \\\\= 2t^2[s^3(s^3+2t)+3t(s^3+2t)] \\\\= 2t^2[(s^3+2t)(s^3+3t)] \\\\= 2t^2(s^3+2t)(s^3+3t) .\end{array} | 2018-06-22 02:10:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953941702842712, "perplexity": 2321.3033812397175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864337.41/warc/CC-MAIN-20180622010629-20180622030629-00035.warc.gz"} |
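The factorisation worked above can be checked numerically — since both sides are polynomials of low degree, exact agreement on an integer grid confirms the identity (a quick sketch, not part of the original solution):

```python
# Check: 2s^6t^2 + 10s^3t^3 + 12t^4  ==  2t^2(s^3 + 2t)(s^3 + 3t)
def lhs(s, t):
    return 2*s**6*t**2 + 10*s**3*t**3 + 12*t**4

def rhs(s, t):
    return 2*t**2 * (s**3 + 2*t) * (s**3 + 3*t)

# Exact integer arithmetic on a grid of test points.
assert all(lhs(s, t) == rhs(s, t)
           for s in range(-3, 4) for t in range(-3, 4))
print("factorisation checks out")
```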
https://www.linstitute.net/archives/616529 | # Edexcel IGCSE Maths Revision Notes 2.9.1 Substitution
#### What is substitution?
• Substitution is where we replace letters in a formula with their values
• This allows you to find one other value that is in the formula
#### How do we substitute?
• Write down the FORMULA if not clearly stated in question
• SUBSTITUTE the numbers given – use ( ) around negative numbers
• SIMPLIFY if you can
• REARRANGE if necessary – it is usually easier to substitute first
• Do the CALCULATION – use a calculator if allowed | 2022-08-09 22:30:10 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691396951675415, "perplexity": 2518.740330620297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00079.warc.gz"} |
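The substitution steps above can be illustrated with the distance formula, $d = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$, keeping brackets around the negative numbers (the values here are made up for the example):

```python
# Substituting values into a formula, with ( ) kept around negatives.
import math

x1, y1 = -2, 3   # first point
x2, y2 = 4, -1   # second point

# d = sqrt((x2 - x1)^2 + (y2 - y1)^2)
d = math.sqrt((x2 - x1)**2 + (y2 - y1)**2)
print(round(d, 3))
```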
https://asmedigitalcollection.asme.org/appliedmechanicsreviews/article/56/4/B53/463876/Hyperbolic-Systems-of-Conservation-Laws-The-Theory | 7R6. Hyperbolic Systems of Conservation Laws: The Theory of Classical and Nonclassical Shock Waves. - PG LeFloch (Center de Math Appliquees and CNRS, Ecole Polytechnique, Palaiseau, 91128, France). Birkhauser Verlag AG, Basel, Switzerland. 2002. 294 pp. Softcover. ISBN 3-7643-6687-7. $34.95.
Reviewed by J Novotny (Inst of Thermomech, Dolejskova 5, Prague, 182 00, Czech Republic).
The book presents a self-contained modern mathematical theory of hyperbolic systems of nonlinear partial differential equations of first order in divergence form, which are also called hyperbolic systems of conservation laws. These equations arise in many areas of continuum physics (compressible fluid dynamics, phase transition dynamics, nonlinear elastodynamics…), where fundamental balance laws are formulated for mass, momentum, total energy of fluid, or solid continuum.
Solutions to these systems may lead to singularities (shock waves) appearing even when smooth initial data are given. As established, weak solutions are not unique unless some entropy condition is imposed.
The text contains existence, uniqueness, and continuous dependence of classical (compressive) entropy solutions on initial data. The latest results of the author and his collaborators on uniqueness of entropy solutions with bounded variations and continuous dependence are included.
Part one of the book describes scalar conservation laws and part two, systems of conservation laws. The Riemann problem, classical and nonclassical Riemann solvers are studied. Also the developing theory of nonclassical (undercompressive) entropy solutions is presented. Existence theory for the Cauchy problem for classical entropy solutions, for both convex and general flux, and nonclassical entropy solutions are studied in detail. Continuous dependence of the solutions in the $L^1$ norm is proved.
The study of nonclassical shock waves is based on the concept of a kinetic relation introduced by the author for general hyperbolic systems and derived from singular limits of hyperbolic conservation laws with balanced diffusion and dispersion terms.
Basic courses of functional analysis and modern methods for partial differential equations are necessary for studying of this book. No preliminary knowledge of continuum physics is required, however, basic knowledge is useful for better understanding.
The book contains a number of pertinent figures completing well the theoretical explanations. The book does not contain a subject index, nevertheless it contains bibliographical notes to each chapter and a large bibliography.
Up to now, no book clearly presented the most important principles of classical and modern theory of hyperbolic conservation laws together with recent developments in this field. This book, Hyperbolic Systems of Conservation Laws: The Theory of Classical and Nonclassical Shock Waves, can be considered as a concise and comprehensive monograph and at the same time a textbook for graduate students. The book should be particularly suitable for graduate students, courses for PhD students, and also for researchers working in the fields of modern theory and numerical analysis of nonlinear hyperbolic partial differential equations, and in theoretical continuum physics. It is suitable especially for young researchers, who want to become familiar with the basic principles, the current state of knowledge, and the latest, most important results in the mathematical theory of hyperbolic conservation laws.
This book is recommended for purchase by university libraries, departments of mathematics and physics, and seriously interested individuals. | 2019-10-20 09:53:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.647190272808075, "perplexity": 572.5129214298087}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00311.warc.gz"} |
https://quant.stackexchange.com/questions?tab=votes&pagesize=15 | All Questions
14,391 questions
204k views
What data sources are available online?
What sources of financial and economic data are available online? Which ones are free or cheap? What has your experience been like with these data sources?
165k views
How can I go about applying machine learning algorithms to stock markets?
I am not very sure, if this question fits in here. I have recently begun, reading and learning about machine learning. Can someone throw some light onto how to go about it or rather can anyone share ...
12k views
What concepts are the most dangerous ones in quantitative finance work?
There are a few things that form the common canon of education in (quantitative) finance, yet everybody knows they are not exactly true, useful, well-behaved, or empirically supported. So here is the ...
17k views
Video lectures and presentations on quantitative finance
What are your favourite video lectures, presentations and talks available online? A few rules: Must be related to quantitative finance. No Economics 101 courses, please. Try to avoid DIY lectures ...
18k views
Innovative ways of visualizing financial data
Finance is drowning in a deluge of data. Humans are not very good at comprehending large amounts of data. One way out may be visualization. Traditional ways of visualizing patterns, complexities and ...
33k views
Efficiently storing real-time intraday data in an application agnostic way
What would be the best approach to handle real-time intraday data storage? For personal research I've always imported from flat files only into memory (historical EOD), so I don't have much ...
26k views
Is R being replaced by Python at quant desks?
I know the title sounds a little extreme but I wonder whether R is phased out by a lot of quant desks at sell side banks as well as hedge funds in favor of Python. I get the impression that with ...
91k views
I have a very basic data question: how to get a list of all common stocks traded on NYSE, NASDAQ and AMEX? I would need to be able to get the approximate list of common stocks as is available in ...
21k views
Switching from C++ to R - limitations/applications
I've only recently begun exploring and learning R (especially since Dirk recommended RStudio and a lot of people in here speak highly of R). I'm rather C(++) oriented, so it got me thinking - what are ...
30k views
How useful is the genetic algorithm for financial market forecasting?
There is a large body of literature on the "success" of the application of evolutionary algorithms in general, and the genetic algorithm in particular, to the financial markets. However, I feel ...
27k views
Building Financial Data Time Series Database from scratch
My company is starting a new initiative aimed at building a financial database from scratch. We would be using it in these ways: Time series analysis of: a company's financial data (ex: IBM's total ...
70k views
How to annualize Sharpe Ratio?
I have a basic question about annualized Sharpe Ratio Calculation: if I know the daily return of my portfolio, the thing I need to do is multiply the Sharpe Ratio by $\sqrt{252}$ to have it annualized....
5k views
Everyone seems to agree that the option prices predicted by the Black-Merton-Scholes model are inconsistent with what is observed in reality. Still, many people rely on the model by using "the wrong ...
55k views
73k views
A simple formula for calculating implied volatility?
We all know if you back out of the Black Scholes option pricing model you can derive what the option is "implying" about the underlyings future expected volatility. Is there a simple, closed form, ...
9k views
Machine Learning vs Regression and/or Why still use the latter?
I come from a different field (Machine learning/AI/data science), but aim to ask a philosophical question with the utmost respect: Why do quantitative financial analysts (analysts/traders/etc.) prefer ...
5k views
What are the popular methodologies to minimize data snooping?
Are there common procedures prior or posterior backtesting to ensure that a quantitative trading strategy has real predictive power and is not just one of the thing that has worked in the past by pure ...
9k views
What types of neural networks are most appropriate for trading?
What types of neural networks are most appropriate for forecasting returns? Can neural networks be the basis for a high-frequency trading strategy? Types of neural networks include: Radial Basis ...
12k views
How useful is Markov chain Monte Carlo for quantitative finance?
Naively, it seems that Bayesian modeling, structural models particularly, would be quite useful in finance because of their ability to incorporate market idiosyncrasies and produce accurate ...
70k views
Where to get long time historical intraday data?
I am looking for long time historical intraday day data on the S&P500 composite for a time horizon like 10 years with a - for example 10-minutes tick - or prices for call/put options on the S&...
4k views
Has high frequency trading (HFT) been a net benefit or cost to society?
Various studies have demonstrated the very large and growing influence of high frequency trading (HFT) on the markets. HFT firms are clearly making a great deal of money from somewhere, and it stands ...
992 views
How to show that this weak scheme is a cubature scheme?
Weak schemes, such as Ninomiya-Victoir or Ninomiya-Ninomiya, are typically used for discretization of stochastic volatility models such as the Heston Model. Can anyone familiar with Cubature on ...
21k views
Usage of NoSQL storage in Finance
I am wondering if anyone has used NoSQL (mongodb, cassandra, etc.) to store and analyze data. I tried searching the web but was not able to see if the financial firms had gotten in to using nosql ... | 2020-02-17 19:27:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31376123428344727, "perplexity": 2120.333293383635}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143079.30/warc/CC-MAIN-20200217175826-20200217205826-00094.warc.gz"} |
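The Sharpe-ratio annualization asked about above is conventionally done by multiplying the daily ratio by $\sqrt{252}$ (trading days per year). A minimal sketch, with made-up daily returns and the risk-free rate assumed to be zero:

```python
# Annualize a daily Sharpe ratio by scaling with sqrt(252).
import math
import statistics

daily_returns = [0.001, -0.002, 0.0015, 0.0005, 0.002, -0.001, 0.0012]
mean = statistics.mean(daily_returns)
stdev = statistics.stdev(daily_returns)

daily_sharpe = mean / stdev                 # risk-free rate assumed 0 here
annual_sharpe = daily_sharpe * math.sqrt(252)
print(round(annual_sharpe, 2))
```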
https://aatila.com/ | # Biography
Achraf Atila studied physics at Hassan II University in Casablanca, where he obtained a Bachelor of Science in physics and applications in 2015 and a Master of Science in physics and new technologies in 2017. After finishing his M.Sc., he moved to Germany and started a Ph.D. in the group of Prof. Dr.-Ing. Erik Bitzek at the Institute I: General Materials Properties, Department of Materials Science and Engineering, FAU Erlangen.
Achraf is a computational materials scientist working on amorphous materials, with a strong interest in the relationships between composition, mechanical behavior, and other physical properties. His research centers on the mechanical behavior and deformation mechanisms of oxide glasses.
### Interests
• Tailoring and Enhancing the Mechanical Properties of Glasses
• Relaxation Phenomena in Glasses
• Anisotropy in Oxide Glasses
### Education
• PhD in Materials Science and Engineering, Expected in 2022
Friedrich-Alexander-Universität Erlangen-Nürnberg
• MSc in Physics and New Technologies, 2017
Hassan II University Casablanca
• BSc Physics and Applications, 2015
Hassan II University Casablanca
# Experience
#### Friedrich-Alexander-Universität Erlangen-Nürnberg
Apr 2018 – Present Erlangen, Germany
“PhD thesis: Atomistic simulations of the mechanical behavior of anisotropic oxide glasses.”
• Analysis of glass structure and properties.
• Presenting scientific posters and talks in international conferences.
• Writing and publishing papers in top journals.
• Precourse on MATLAB/Octave and Linux.
• Introduction to atomistic simulation methods.
• Student supervision.
• Student exams supervision.
• System administrator for the group's Linux machines and contact person for the high-performance cluster.
#### Hassan II university
Feb 2017 – Jun 2017 Casablanca, Morocco
Master thesis: “Molecular dynamics simulation of thermodynamics and structural properties of calcium aluminosilicate glasses.”
• Molecular dynamics simulations of glass transition and glass structure
• Presenting scientific posters and giving talks at national and international conferences.
• Writing and publishing scientific papers.
# Projects
#### Bioactive Glasses
Study the bioactivity of glasses
#### Metallic Glasses
Investigate the properties of metallic glasses
#### Oxide Glasses
Study the properties of oxide glasses
# Recent Publications
### Atomistic insights into the mixed-alkali effect in phosphosilicate glasses
Oxide glasses have proven useful as bioactive materials, owing to their fast degradation kinetics and tunable properties. Hence, in recent years tailoring the properties of bioactive glasses through …
### Atomistic insights into the structure and elasticity of densified 45S5 bioactive glass
Glasses have applications in regenerative medicine due to their bioactivity, enabling interactions with hard and soft tissues. Soda-lime phosphosilicate glasses, such as 45S5, represent a model system …
### On the presence of nanoscale heterogeneity in Al$_{70}$Ni$_{15}$Co$_{15}$ metallic glass under pressure
We used molecular dynamics simulations to investigate the dependence of the atomic-scale structure on the temperature and pressure conditions of Al$_{70}$Ni$_{15}$Co$_{15}$ metallic glass. The effect of …
### Ionic Self-Diffusion and the Glass Transition Anomaly in Aluminosilicates
The glass transition temperature (T$_g$) is the temperature, after which the supercooled liquid undergoes a dynamical arrest. Usually, the glass network modifiers (e.g., Na$_2$O) affect the behavior … | 2022-05-24 13:19:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3029738664627075, "perplexity": 8955.856314244355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00162.warc.gz"} |
https://www.nature.com/articles/s41598-020-80506-8 | ## Introduction
Nursery areas have been shown to be important for many elasmobranch species1,2. These discrete areas have biotic and abiotic features that can be important for pupping and for enhancing the survival of neonates, and juveniles2. For an area to be considered an elasmobranch nursery, it must follow at least three criteria: (1) neonates, and juveniles are more commonly encountered within the area compared to adjacent areas, (2) individuals tend to remain or return to the area over weeks or months, and (3) the area is used in a similar manner repeatedly across years3,4.
While many studies have identified the importance of nursery areas for sharks3,5,6, little is known about nursery areas for batoids7,8,9. Indeed, only three important juvenile habitats for manta rays have been identified: in the Gulf of Mexico9,10 and in Florida11 (Mobula birostris and Mobula cf. birostris in both areas), and in Indonesia12 (Mobula alfredi). In addition, a potential pupping ground for Mobula mobular has been suggested in the Northern Gulf of California13, but more research is needed to confirm it.
Mobulids (manta and devil rays) are planktivorous filter feeders with vulnerable life histories14,15 that include the lowest fecundity of all elasmobranchs (one pup per litter)16,17, and delayed, aplacental viviparous matrotrophic reproduction cycles of 1–3 years18,19,20,21. Such low reproductive rates make mobulids extremely vulnerable to anthropogenic impacts including targeted small-scale fisheries18,22,23 and bycatch in small- and large-scale fisheries22,24. As a result, all mobulid species are IUCN Red list, Endangered or Vulnerable25, with all species experiencing population declines26,27.
Pygmy devil rays (5 of the 10 mobulid species)28, include the smaller species reaching < 1.3 m disc width as adults with more restricted distribution than the larger mobulid species15. Munk’s pygmy devil ray (Mobula munkiana) is endemic to the Eastern Pacific, found in neritic and coastal habitats that extend from the Gulf of California, Mexico to Peru29. In the Gulf of California, M. munkiana feed predominantly upon Mysidacea spp. with the euphausiid, Nyctiphanes simplex, as a second prey item18,30.
While size at birth remains unknown, estimations and comparisons with other pygmy devil rays indicate that disc width at birth could range from 35 cm18 to 42.3 cm31, reaching up to 112 cm as an adult32. Mobula munkiana is particularly known for its social behavior18, often congregating in large aggregations of thousands of individuals, presumably for mating purposes15. M. munkiana is currently classified as “Vulnerable” on the IUCN Red List of Threatened Species29. While the species is nationally protected in Mexican waters under the NOM-029-PESC-2006 and NOM-059-SEMARNAT-2010 regulations, illegal targeted fishing still exists in several areas in the Gulf of California33.
When M. munkiana was first described in the Southern Gulf of California, segregation by size was described18,34,35, leading to the potential for differential habitat use between juvenile and adult stages. Since 2013, local fishermen and tour operators in the Southern Gulf of California, have known of a well-established aggregation of M. munkiana in Ensenada Grande, a shallow bay with sandy bottom seafloor, located on the northwest side of the Espiritu Santo Archipelago (Fig. 1). These anecdotal observations prompted us to examine whether pygmy mobulid rays utilize nursery areas for mating, pupping, and foraging of juveniles.
Here we report the reproductive seasons (mating and parturition) for adults, and residency linked to environmental factors of early life stages of M. munkiana in a shallow bay at the Espiritu Santo Archipelago, Mexico. We used a combination of nonlethal methodologies including traditional tagging, passive acoustic telemetry, and environmental monitoring (zooplankton biovolume and water temperature) to examine the spatial use and foraging ecology of early life history stages of M. munkiana and to determine if M. munkiana utilize the shallow bay as a nursery area.
## Results
### Conventional tagging
A total of 95 Munk’s pygmy devil rays were captured at Ensenada Grande from August 2017 to June 2018 during five capture periods (Supplementary Information Table S1). Mobula munkiana catches and life stage varied seasonally, with greater captures occurring during late summer and fall than during winter, spring, and early summer (Fig. 2). Disc width was not normally distributed (W94 = 0.925, P = 0.0004), and we found no significant difference in size by sex (W93 = 905, P = 0.18). Juveniles (65%, n = 62) and neonates (19%, n = 18) dominated the sampled population with a 1:1 sex ratio (X2 = 0.05, P = 0.8) with 39 females and 41 males.
Neonates (n = 18) were identified by the presence of the umbilical scar on the ventral side below the gills (Supplementary Information Figure S1). Neonate size ranged from 49.5 to 56 cm disc width and were only captured inside Ensenada Grande during August, at depths between 2 and 5 m. Juveniles (n = 62) ranged from 49 to 85 cm disc width and were captured during all sampling months at Ensenada Grande. Neonates and juveniles were only caught with individuals of the same life stage, indicating size segregation of the schools. All neonate and juvenile males had undeveloped claspers without calcification or rotation (Supplementary Information Figure S1), while neonate and juvenile females showed no evidence of mating scars and the state of the cloaca was not distended.
Adults (15%, n = 14) and pregnant females (1%, n = 1) were only captured during spring and early summer (April and June) at > 15 m depth in Ensenada Grande. The adults (n = 4) captured in April 2018 were females with swollen distended cloaca evidenced with a reddish coloration indicating possible recent mating or parturition (Supplementary Information Figure S1) as it has been interpreted in other elasmobranch species36,37.
During June 2018, we captured a group of adults composed of one female and four males displaying courtship behavior at the surface (initiation and endurance) as described for Mobula alfredi and M. birostris38. All four males had developed claspers with sperm. Courtship behavior was also observed during April 2018, but those animals were not captured. A female in an advanced state of pregnancy was captured at Ensenada Grande during June 2018 showing distended abdominal region on both the dorsal and ventral surface (Supplementary Information Figure S1). Pregnancy was confirmed on another individual with the same characteristics captured at Espiritu Santo Archipelago in April 2018 using ultrasound techniques, with a single and well-developed term-embryo present (Ramírez-Macías unpub. data). This corroborated the estimation of the litter size of a single pup for M. munkiana39 and other mobulid species14,40.
During this study we had seven recaptures (6.23%) of six individuals, four juveniles and two neonates. The straight-line capture/recapture distance for all recaptured devil rays was between 0.1 and 0.5 km, with recapture durations ranging from 1 day to 8 months from initial capture.
### Acoustic telemetry
#### Detection summary
All seven acoustic tags deployed on M. munkiana (four neonates and three juveniles) (Table 1) were recorded by at least two receivers around the Espiritu Santo Archipelago. We recorded 38,275 detections for all individuals at five of the six receivers placed around Espiritu Santo Archipelago during the monitoring period (643 days) and no other detections were recorded on the rest of the acoustic array (n = 15) (La Paz Bay, Isla San Jose and Isla Cerralvo) (Fig. 3a).
Females accounted for 64.5% of detections (two neonates with 63.9% and two juveniles with 0.6% of total detections), while males accounted for 35.5% of detections (two neonates with 27% and one juvenile with 8.5% of total detections).
#### Residency
Overall residency indices for Espiritu Santo Archipelago-tagged individuals ranged from 1 to 99% (27 ± 33%, mean ± SD). The tracking duration for individual M. munkiana ranged from 151 to 631 days (435 ± 195 days, mean ± SD). Detections on consecutive days were found in receivers both within (maximum 145 consecutive days) and outside Ensenada Grande (maximum of three consecutive days). Neonates were present at Ensenada Grande during 26 to 145 successive days while juveniles were present from 1 to 17 successive days. There were no significant differences in the residency index between sexes (W6 = 4, P = 0.63), maturity stages (W6 = 2, P = 0.23) or sizes (S = 88.59, P = 0.17).
#### Habitat preference and spatial movements
Areas of high activity as determined by the number of detections of tagged Munk’s pygmy devil rays were in coastal waters inside Ensenada Grande where 98.6% of the validated receiver detections were registered (Fig. 3b). The other receivers around the Espiritu Santo Archipelago were categorized as offshore and accounted for just 1.4% of the detections, while no detections were registered in the remainder of the receiver array (Fig. 3a).
As a result, the Ensenada Grande receivers had a statistically greater residency index compared to other receivers placed around the Espiritu Santo Archipelago (W13 = 44, P = 0.01). Individuals moved throughout the Espiritu Santo Archipelago with a travelling minimum linear dispersal distance of 18.5 ± 7.6 km (mean ± SD) and a maximum of 21.4 km based on detections around the archipelago. One single individual (neonate, 50 cm disc width) was never detected outside of Ensenada Grande, and had a minimum linear dispersal distance of only 1.22 km.
#### Seasonality
Acoustic detections occurred at the Espiritu Santo Archipelago throughout the year for most devil rays, with no statistically significant differences in residency index between warm and cold seasons (W53 = 249, P = 0.07). The largest residency indices included September, October, November (warm season), and December (start of cold season) 2017 (Fig. 4). Detection rates for all tagged neonates and juveniles decreased during March and April when adults tend to be more frequent at Ensenada Grande and Espiritu Santo Archipelago. Larger juveniles also appear to recruit into the adult population sometime between April and June, supported by our field observation of a tagged (conventional tag) juvenile (≈ 85 cm disc width) swimming in the deeper part of Ensenada Grande (> 20 m) as part of a large school of M. munkiana adults.
#### Diel change
All detections at Ensenada Grande showed that the spatial distribution of Munk’s pygmy devil rays varied by time of the day (Fig. 5a). Tagged M. munkiana were detected by the shallow receiver, RS1 (5 m depth) during all hours, but detections were significantly more frequent during daytime (U = 359.6, P < 0.05). We found three peaks in detections: between 0400–0500 h (nighttime), 0700–0800 h (daytime) and 1600–1700 h (daytime). We also found significantly greater detections during the daytime at the receiver placed in a deeper area within Ensenada Grande, RS2 (26 m depth) (U = 359.39, P < 0.05) with almost no detections during nighttime when M. munkiana appear to move to shallower areas.
### Environmental factors
#### Temperature
Seawater temperature at Ensenada Grande was recorded from August 2017 until April 2018. Temperature values followed seasonal patterns previously described41 with maximum temperatures from June to November (24.1–29.6 °C) and minimum values from December to May (18.1–26.5 °C). We found a statistically significant correlation between water temperature and the mean monthly residency index of tagged M. munkiana at Ensenada Grande (S = 2.496e+09, P < 0.0001, rho = 0.643). Detections of tagged individuals were consistently greater (up to 145 days of consecutive detections) from August to April of the first year of the study (2017–2018), when water temperature ranged from 18.8 to 29.6 °C, suggesting that they may range less widely during those months of the year. About 77% of the detections in Ensenada Grande occurred when water temperature ranged 25.5–29.6 °C (total range 18.1 to 29.6 °C) (Fig. 4).
#### Zooplankton
Zooplankton was primarily composed of major taxonomic groups of holoplankton (Copepoda, Cladocera, Euphausiids, Chaetognatha, Mysidacea and Decapoda).
Zooplankton biovolume was significantly greater during the night compared to day (W126 = 1175, P = 0.0001549) (Fig. 5b) across all sampling months, with a peak value of 36.27 ± 8.25 mL 100 m−3 (mean ± SEM) during nighttime samples in December. We found a significantly greater mean zooplankton biovolume during the cold season (December to May) (W126 = 2454.5, P = 0.027) as well as between months (Kruskal–Wallis X2 = 23.1, df = 5, P = 0.0003), with maximum zooplankton biovolume values observed during December (31.12 ± 4.98 mL 100 m−3, mean ± SEM) and lowest values in June (10.91 ± 1.64 mL 100 m−3, mean ± SEM). We also found significant differences of zooplankton biovolume across our three sampling stations inside Ensenada Grande (Kruskal–Wallis X2 = 13.478, df = 2, P = 0.00118; Dunn test, P < 0.05). The deeper station had significantly greater nighttime zooplankton biovolume (29.13 ± 4.89 mL 100 m−3, mean ± SEM) even though we mainly detected devil rays during night hours at the shallower station where mean zooplankton biovolume values were lower (19.55 ± 5.08 mL 100 m−3, mean ± SEM). Nevertheless, mean monthly residency index and the zooplankton biovolume within Ensenada Grande were significantly positively correlated (S = 221,340, P = 5.046e−05, rho = 0.3516104).
## Discussion and conclusions
Our results indicate that M. munkiana utilize nursery areas following the definition proposed for elasmobranch nursery areas. The Ensenada Grande bay of the Espiritu Santo Archipelago can be considered a nursery area for M. munkiana following the three criteria:
1. 1.
Neonate and juvenile rays are more commonly encountered in Ensenada Grande than in other areas: their relative abundance was high (84%, n = 80) compared with other studies18,32, in which the proportions of neonates (8.3%, n = 2) and juveniles (15%, n = 22) captured in adjacent areas were much lower.
2. 2.
Neonates and juveniles exhibited greater residency indices in Ensenada Grande, being detected almost daily for up to 7 of the 22 months monitoring period in the bay. Individuals resided inside this inshore area from 1 to 145 consecutive days. Moreover, recapture data from traditional tagging demonstrated a site fidelity of 2 to 8 months inside Ensenada Grande for neonates and juveniles.
3. 3.
Mobula munkiana neonates and juveniles use Ensenada Grande as a nursery area across multiple years. Using anecdotal professional photographs from 2013 to 2016 (Fig. 6), there is evidence that since ecotourism activities started, sightings of M. munkiana, including juveniles and neonates, are common each year from September to December.
Furthermore, our results provide compelling evidence that M. munkiana use Ensenada Grande as a primary nursery area42 due to the presence of neonates and near-term pregnant females, and as a secondary nursery area42 due to the presence of juveniles (non-newborn). Therefore, overlapping primary and secondary nursery areas for pygmy devil ray species occurs, similar to that observed for other elasmobranch species3.
The Southern Gulf of California was previously thought to be a wintering ground for M. munkiana, with them disappearing from the region during the warmer season for mating and pupping18. However, we instead propose the use of shallow bays adjacent to high secondary production, such as Ensenada Grande, as a nursery area where neonates and juveniles likely remain throughout the year. We suggest that there are likely other similar, yet undiscovered, nursery areas elsewhere in the Gulf of California and Eastern Pacific for this species. We found that early life stage M. munkiana exhibited a higher residency index during warmer water temperatures. This warm temperature residency may provide an ecological advantage by accelerating the metabolic rates and thus growth of juveniles and thereby reducing the duration of these vulnerable life-history stages3,43. The habitat preference of one of its main prey Mysidacea spp., in shallower parts of the neritic zone44, combined with protection of M. munkiana early life stages from large predators, could partially explain the higher detection rate recorded at shallower receivers. As a result, there appears to be an advantage for M. munkiana neonates and juveniles to behave as residents with a high fidelity to shallow-coastal habitats in contrast to adults which range widely in oceanic waters.
We observed a clear ontogenetic spatio-temporal segregation among neonates, juveniles, and adults since these different life stages were all caught during different seasons and areas within Ensenada Grande. Size segregation appears to be a common feature for this and other species of mobulids18,45. Although sex segregation has been reported in the southern part of the Gulf of California across different years for primarily adult M. munkiana18,32,39, we found a 1:1 sex ratio for neonates and juveniles, a typical feature in elasmobranch nursery areas1,46. This suggests that M. munkiana does not segregate by sex during early stages but perhaps may initiate sex segregation when they reach sexual maturity.
Reproductive seasonality has been documented for several mobulid species38,40. Based on our information we suggest that the mating and pupping season for M. munkiana begins in April and ends in June when water temperatures range between 18 and 29 °C. Parturition for M. munkiana in La Paz Bay has been previously reported between May and June39, however based on our observations of near term pregnant females in April and June, females with signs of possible parturition in April, and the neonate sizes in August we believe that an extended pupping season is feasible.
This time frame coincides with a transition around June from the cold season, when the euphausiid N. simplex, one of the two main M. munkiana prey items30, attains its maximum abundance and reproductive period in the Gulf of California47,48,49. A gestation period of 10 to 12 months has been reported for another pygmy devil ray, Mobula eregoodootenkee31 (originally cited as M. kuhlii cf. eregoodootenkee), with a very similar body size15,22; therefore, it is very likely that the gestation period of Munk’s pygmy devil ray is the same. Indeed, we observed courtship and pregnancy in the same area and time period in La Paz Bay. The timing of parturition and mating is further supported by observations of M. alfredi in captivity50 and of wild individuals38.
This is the first description of a pygmy devil ray nursery area and the habitat used by neonates and juveniles within it. Individuals of early life stages displayed a high level of residency to the area, more correlated to warmer temperatures than to zooplankton abundance. Nursery and mating grounds for devil rays are highly likely to overlap in temporal and geographic space. Ultimately, since devil rays have the lowest fecundity of all elasmobranchs17, this information may be useful in the design of spatial and temporal management strategies to mitigate bycatch in artisanal fishing and to regulate ecotourism activities not only within the Southern Gulf of California, but elsewhere throughout their range. In addition, the information presented here will be useful in identifying nursery areas for other devil ray species world-wide.
## Methods
### Study area
The Espiritu Santo Archipelago is located in the south west region of the Gulf of California and is the eastern limit of La Paz Bay (Fig. 1). The Archipelago was declared a Marine National Park in 2007, only allowing artisanal fisheries and ecotourism activities in some restricted areas. The bathymetry of the eastern Espiritu Santo Archipelago is characterized by steep terracing, with water over 100 m occurring just a few meters from the shore, particularly off the eastern side of the archipelago. Our main study area at Ensenada Grande is located on the western coast of the Espiritu Santo Archipelago and comprises several sandy-bottom embayments (< 40 m depth) (Fig. 1c). Productivity of the Espiritu Santo Archipelago is influenced by the monsoonal wind pattern of the Gulf of California with northwesterly winds that cause strong upwelling events during the cold season (December to May), with primary production rates ranging between 1.16 and 1.91 g C m−2 d−151. Strong thermal stratification occurs during the warm season (June to November), when upwelling is weak along the east coast of Baja California peninsula52 with low primary production rates (0.39 to 0.49 g C m−2 d−1)51.
### Data collection
Mobula munkiana, were caught between August 2017 and June 2018 at Ensenada Grande during 5 capture trips. Individuals were captured with encircling surface cotton twine nets 150 m long, 15 m deep, with 25 cm mesh. Captured individuals were maintained in the water, allowing water to pass over their gills to reduce stress levels before transferring them into a holding tank onboard the boat. Individuals were sexed, measured (total length and disc width), and evaluated for mating scars on pectoral fins, cloacal state (females) and development state of claspers (males). Release was typically completed < 5 min after capture and all devil rays were released in good condition.
### Life stages description
Mobula munkiana maturity was classified into four states according to estimates of disc width at maturity: neonate (< 97 cm female or < 98 cm male disc width, umbilical scar present), juvenile (< 97 cm female or < 98 cm male disc width, no umbilical scar), or adult (> 97 cm female or > 98 cm male disc width)32. Adult females with a noticeably distended abdominal region on both the dorsal and ventral surfaces were classified as likely pregnant females20.
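The size-at-maturity rule above can be expressed as a small helper. This is an illustrative sketch, not code from the study; the function name and the decision to place the threshold size itself on the non-adult side are our assumptions.

```python
def classify_life_stage(disc_width_cm, sex, umbilical_scar):
    """Classify M. munkiana maturity from disc width (cm), sex ('F'/'M'),
    and presence of an umbilical scar, per the size-at-maturity estimates
    (97 cm for females, 98 cm for males)."""
    threshold = 97 if sex == "F" else 98
    if disc_width_cm > threshold:
        return "adult"
    # Below the maturity threshold, the umbilical scar separates
    # neonates from juveniles.
    return "neonate" if umbilical_scar else "juvenile"
```

For example, a 50 cm female with an umbilical scar classifies as a neonate, while an 85 cm male without one classifies as a juvenile.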
### Conventional tagging
Individuals were tagged with conventional fish tags (FLOY TAG & Mfg., Inc.) in the dorsal part of the pectoral fin with a special applier for future identification purposes.
The data collected from captures and conventional tagging were used to characterize the overall size and demographic composition of the population captured in Ensenada Grande. A X2 test was used to test for skewed sex ratios in captured juveniles and neonates in Ensenada Grande.
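The 1:1 sex-ratio test can be reproduced from the counts reported in the Results (39 females, 41 males). A minimal dependency-free sketch, using the identity P(χ² > x) = erfc(√(x/2)) for one degree of freedom; the function name is ours.

```python
import math

def chi_square_1to1(n_female, n_male):
    """Pearson chi-square statistic and p-value against an expected
    1:1 sex ratio (df = 1)."""
    expected = (n_female + n_male) / 2
    stat = ((n_female - expected) ** 2 + (n_male - expected) ** 2) / expected
    # Chi-square (df = 1) survival function via the complementary error function.
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

stat, p = chi_square_1to1(39, 41)  # the study reports X2 = 0.05, P = 0.8
```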
The size data set did not meet the normality assumption according to the Shapiro–Wilk test (n = 95, W = 0.92567, P = 4.446e−05); therefore, a nonparametric Wilcoxon test was performed to compare disc width distributions between sexes. Capture locations were plotted using Surface Mapping System (Golden Software, Inc., 1993–2012, https://www.goldensoftware.com/products/surfer) and the coastline data was extracted from GEODAS-NG (National Geophysical Data Center, 2000).
### Acoustic telemetry
Mobula munkiana were fitted with internal acoustic transmitters (Vemco Ltd. V13; n = 7) with an expected battery life of 991 days in August 2017 at Ensenada Grande. Transmitters were coated with a beeswax/paraffin wax mixture and internally placed by surgically inserting them into a 3 cm incision in the abdominal cavity. The incision was closed with synthetic surgical sutures. Transmitters operated at 69 kHz and were coded to pulse randomly once every 40–80 s allowing the simultaneous monitoring of multiple individuals without continuous signal overlap. Acoustic receivers (model VR2w and VR2Tx Vemco Ltd; n = 6) were moored at depths between 5 and 26 m at locations previously known to be frequented by Munk’s pygmy devil rays within the Espiritu Santo Archipelago as part of a larger receiver array (n = 21 receivers) installed within La Paz Bay, Cerralvo Island, and San Jose Island, providing a much greater coverage of our main Ensenada Grande study site and adjacent areas (Fig. 1). We tested acoustic array range and found a maximum detection range of 350 m for the receivers at the Espiritu Santo Archipelago. Receivers recorded the transmitter code, time, and date of tagged M. munkiana that swam within the detection range of the receivers. Movements of neonates and juveniles M. munkiana were monitored on the array between August 2017 and May 2019.
Receiver data in this network were downloaded and batteries were changed at least annually, and data were processed using the VUE Software (Vemco Inc., https://support.vemco.com/s/downloads). We filtered the data to retain only detections occurring as runs of two or more consecutive detections, to avoid false positives that could arise from background noise53.
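The single-ping filter might be sketched as follows. This is an illustrative reimplementation, not the VUE workflow; the one-hour pairing window and the tuple data layout are assumptions.

```python
from datetime import datetime, timedelta

def drop_isolated_detections(detections, window=timedelta(hours=1)):
    """Keep only detections that have at least one neighbouring detection
    of the same tag at the same receiver within `window`; isolated single
    pings are treated as likely false positives.
    `detections` is a list of (tag_id, receiver_id, timestamp) tuples."""
    by_key = {}
    for i, (tag, rec, t) in enumerate(detections):
        by_key.setdefault((tag, rec), []).append((t, i))
    keep = set()
    for times in by_key.values():
        times.sort()
        # Any pair of adjacent detections closer than `window` keeps both.
        for (t1, i1), (t2, i2) in zip(times, times[1:]):
            if t2 - t1 <= window:
                keep.update((i1, i2))
    return [d for i, d in enumerate(detections) if i in keep]
```

A tag heard twice within minutes passes the filter; a lone detection of another tag is dropped.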
The distribution and residency of detections throughout the receiver array were visualized and analyzed using the package “VTrack” (https://CRAN.R-project.org/package=VTrack) in R (https://www.r-project.org/). A residency index54 for each individual captured in the Espiritu Santo Archipelago was calculated with the formula (1).
$$Residency\, Index (\%) = \frac{No.\, of\, days\, detected}{No.\, of\, days\, between\, first\, and\, last \, detection}$$
(1)
The sequential series of detections over time throughout the receiver array from the first detection to the last is referred to as the “track” for each individual.
Daily presence data were analyzed to determine the number of consecutive days that an individual was resident (continuous presence) at a location. Since the acoustic data set did not meet the normality assumptions according to the Shapiro–Wilk test (n = 7; W = 0.78852, P = 0.03148) a nonparametric Spearman correlation and Wilcoxon tests were carried out to determine whether residency indices differed significantly with disc width, sex, and maturity stage of tagged M. munkiana. Habitat preference was studied by grouping the acoustic receivers of Ensenada Grande (n = 2) as inside-bay receivers and the rest of the Espiritu Santo Archipelago acoustic array (n = 4) as offshore receivers. A Wilcoxon test was used to compare the residency index found inside-bay receivers versus offshore. Differences in residency between seasons was examined by comparing monthly residences of warm months (June to November) against cold months (December to May) using a Wilcoxon test. To quantify diel changes in the M. munkiana presence of Ensenada Grande we produced circular plots of the number of detections during daytime (0600–1900 h) versus nighttime (1900–0600 h); limits of diel times were determined using defined cutoffs for dawn and dusk for the Ensenada Grande location. We used Rao’s test to analyze the uniformity of the detections for the receivers inside Ensenada Grande. We calculated the minimum linear dispersal distance for each individual defined as the distance between the two furthest receivers at which an individual was ever detected using Surface Mapping System (Golden Software, Inc., 1993–2012, https://www.goldensoftware.com/products/surfer).
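Two quantities above — the diel classification (daytime 0600–1900 h) and the minimum linear dispersal (distance between the two furthest receivers at which an individual was detected) — can be sketched as below. The haversine great-circle distance stands in for whatever distance Surfer computes, and the coordinates in the test are hypothetical, not real receiver positions.

```python
import math
from itertools import combinations

def diel_period(hour):
    """Daytime 0600-1900 h, nighttime otherwise (cutoffs used in the study)."""
    return "day" if 6 <= hour < 19 else "night"

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) positions in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def min_linear_dispersal(receiver_positions):
    """Distance between the two furthest receivers with detections of an individual."""
    return max(haversine_km(a, b) for a, b in combinations(receiver_positions, 2))
```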
### Environmental factors
Water temperature data was collected every 2 h by a temperature logger (Onset HOBO Water Temperature, Pendant 64 k) deployed at Ensenada Grande at 13 m depth during 9 months from August 2017 to April 2018. Temperature records were averaged over each day of the study period and aligned with the acoustic detection data to examine temperature effects on mobulid presence/absence.
Zooplankton was sampled during day and night at three locations inside Ensenada Grande (Fig. 5a). A total of 125 zooplankton samples were collected from August 2017 to June 2018 (25 samples per monitored month). Zooplankton was collected during a three-minute oblique tow with a 60 cm mouth-diameter zooplankton net (300 μm mesh), equipped with a calibrated flow meter (G.O. 2030R) mounted in the mouth of the net to estimate the filtered seawater volume [55]. Samples were preserved with 4% formalin. Zooplankton biovolume (mL 100 m−3) was estimated for each sample using the displacement volume method [56].
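A sketch (not from the paper) of how a flow-meter reading converts to filtered volume, and how biovolume is normalized to 100 m³; the rotor constant and sample values below are illustrative only:

```python
import math

def filtered_volume_m3(revolutions, rotor_constant_m, mouth_diameter_m):
    """Filtered seawater volume: net mouth area times distance towed."""
    distance_m = revolutions * rotor_constant_m      # flow-meter calibration
    area_m2 = math.pi * (mouth_diameter_m / 2) ** 2
    return area_m2 * distance_m

def biovolume_ml_per_100m3(displaced_ml, volume_m3):
    """Displacement-volume biovolume, normalized to 100 m^3 of seawater."""
    return 100.0 * displaced_ml / volume_m3

# Illustrative numbers: 10 000 revolutions at 0.01 m/rev, 60 cm mouth
vol = filtered_volume_m3(10_000, 0.01, 0.6)   # ~28.27 m^3 filtered
bv = biovolume_ml_per_100m3(5.0, vol)         # 5 mL displaced in that tow
```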
Temperature (n = 3466; W = 0.91839, P = 1.848e−05) and zooplankton biovolume (n = 128; W = 0.75869, P = 3.424e−11) data sets did not meet the normality assumptions according to the Shapiro–Wilk test; therefore, nonparametric Wilcoxon tests were used to compare seawater temperatures among seasons and zooplankton biovolume between day/night and between warm/cold seasons. We tested the correlation of the seawater temperature and the zooplankton biovolume with the mean monthly residency index at Ensenada Grande using Spearman correlations. Kruskal–Wallis non-parametric tests were used to compare the zooplankton biovolume among months and sampling stations, and post hoc Dunn tests were used to determine which months and sampling stations differed significantly.
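In practice these tests come from a statistics package (e.g. scipy.stats.spearmanr); purely to illustrate what the Spearman correlation computes — the Pearson correlation of the ranks — here is a pure-Python sketch:

```python
def rank(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rho = Pearson correlation of the ranks of xs and ys."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

rho = spearman([1, 2, 3, 4, 5], [2, 4, 5, 8, 9])  # perfectly monotone data
```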
### Ethical approval
The methods were approved under the research permit PPF/DGOPA-133/17 issued by Comisión Nacional de Acuacultura y Pesca with authorization of Comisión Nacional de Áreas Naturales Protegidas. The tagging and surgical procedures followed the Institutional Animal Care and Use Committee of the University of California, Davis (IACUC, Protocol No. 16022).
# Re: \label in \footnote(s) in LaTeX
Dave Walden wrote:
Hi,
Can someone point me at the spec for the scope of a \label declaration
in a \footnote{}. Sometimes this works for me and sometimes it
doesn't. Let's say I have
\footnote{\label{fnfoo} text}
and I later refer to it via $^{\ref{fnfoo}}$ in the text, i.e., I want
to refer to the same footnote number again. This works when the \ref
is in the main text but not when it is in another footnote. On the
other hand, if I turn the footnotes into endnotes, then the \ref in
the endnote works. I can get the equivalent of the \ref in the later
footnote to work by doing the following in the first footnote:
\footnote{\label{fnfoo} text}\gdef\fnzoo{\ref{fnfoo}}
and then later using $^{\fnzoo}$ in the second footnote, assuming the
footnotes have been defined in the right order. Is there some more
"official" way for doing what I want to do?
Best regards, Dave
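For reference, the \gdef workaround described above, assembled into a compilable document (a sketch; the label and macro names come from the post, and LaTeX must be run twice so the references resolve):

```latex
\documentclass{article}
\begin{document}
Some text.\footnote{\label{fnfoo} First footnote text.}%
\gdef\fnzoo{\ref{fnfoo}}
Referring back from the body works directly: $^{\ref{fnfoo}}$.
Later text.\footnote{A second footnote referring to the first: $^{\fnzoo}$.}
\end{document}
```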
well, it might help if you would provide a minimal example, so that we can see the problems for ourselves
This works fine
\documentclass[a4paper]{memoir}
\begin{document}
test\footnote{test\label{a}}
test\footnote{Via footnote $^{\ref{a}}$}
\end{document}
--
/daleif (remove RTFSIGNATURE from email address)
LaTeX FAQ: http://www.tex.ac.uk/faq
LaTeX book: http://www.imf.au.dk/system/latex/bog/ (in Danish)
Remember to post minimal examples, see URL below
http://www.tex.ac.uk/cgi-bin/texfaq2html?label=minxampl
http://www.minimalbeispiel.de/mini-en.html
## The Gradient Vector
Introduction
In this post we introduce two important concepts in multivariate calculus: the gradient vector and the directional derivative. These both extend the idea of the derivative of a function of one variable, each in a different way. The aim of this post is to clarify what these concepts are, how they differ and show that the directional derivative is maximised in the direction of the gradient vector.
The gradient vector, is, simply, a vector of partial derivatives. So to find this, we can 1) find the partial derivatives 2) put them into a vector. So far so good. Let’s start this on some familiar territory: a function of 2 variables.
That is, let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ be a function of 2 variables, x,y. Then the gradient vector can be written as:
$\nabla f(x,y) = \left [ {\begin{array}{c} \frac{\partial f(x,y)}{\partial x} \\ \frac{\partial f(x,y)}{\partial y} \\ \end{array} } \right]$
For a more tangible example, let $f(x,y) = x^2 + 2xy$, then:
$\nabla f(x,y) = \left [ {\begin{array}{c} 2x + 2y \\ 2x \\ \end{array} } \right]$
So far, so good. Now we can generalise this for a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ taking in a vector $\mathbf{x} = (x_1, x_2, x_3, \dots, x_n)$.…
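Not from the post, but a quick sanity check on the example above: a finite-difference sketch in Python of the gradient of $f(x,y) = x^2 + 2xy$ (function names are mine):

```python
def grad_numeric(f, x, y, h=1e-6):
    """Central-difference approximation to the gradient of f at (x, y)."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

f = lambda x, y: x**2 + 2 * x * y    # the example from the post
gx, gy = grad_numeric(f, 1.0, 2.0)   # analytic gradient: (2x + 2y, 2x) = (6, 2)
```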
## p-values (part 2): p-Hacking, or why drinking red wine is not the same as exercising
What is p-hacking?
You might have heard about a reproducibility problem with scientific studies. Or you might have heard that drinking a glass of red wine every evening is equivalent to an hour’s worth of exercise.
Part of the reason that you might have heard about these things is p-hacking: ‘torturing the data until it confesses’. The reason for doing this is mostly pressure on researchers to find positive results (as these are more likely to be published), but it may also arise from misapplication of statistical procedures or bad experimental design.
Some of the content here is based on a more serious video from Veritasium: https://www.youtube.com/watch?v=42QuXLucH3Q. John Oliver has also spoken about this on Last Week Tonight, for those who are interested in some more examples of science that makes its way onto morning talk shows.
p-hacking can be done in a number of ways- basically anything that is done either consciously or unconsciously to produce statistically significant results where there aren’t any.…
## A quick argument for why we don’t accept the null hypothesis
Introduction
When doing hypothesis testing, an often-repeated rule is ‘never accept the null hypothesis’. The reason for this is that we aren’t making probability statements about true underlying quantities; rather, we are making statements about the observed data, given a hypothesis.
We reject the null hypothesis if the observed data is unlikely to be observed given the null hypothesis. In a sense we are trying to disprove the null hypothesis and the strongest thing we can say about it is that we fail to reject the null hypothesis.
That is because observing data that is not unlikely given that a hypothesis is true does not make that hypothesis true. That is a bit of a mouthful, but basically what we are saying is that if we make some claim about the world and then we see some data that does not disprove this claim, we cannot conclude that the claim is true.…
## p-values: an introduction (Part 1)
The starting point
This is the first of (at least) 3 posts on p-values. p-values are everywhere in statistics- especially in fields that require experimental design.
They are also pretty tricky to get your head around at first. This is because of the nature of classical (frequentist) statistics. So to motivate this I am going to talk about a non-statistical situation that will hopefully give some intuition about how to think when interpreting p-values and doing hypothesis testing.
My New Car
I want to buy a car. So I go down to the second hand car dealership to get one. I walk around a bit until I find one that I like.
I think to myself: ‘this is a good car’.
Now because I am at a second-hand car dealership I find it appropriate to gather some data. So I chat to the lady there (looks like a bit of a scammer, but I am here for a deal) about the car.…
## R-squared values for linear regression
What we are talking about
Linear regression is a common and useful statistical tool. You will have almost certainly come across it if your studies have presented you with any sort of statistical problems.
The pros of regression are that it is relatively easy to implement and that the relationship between inputs and outputs is linear (it’s in the name, but this simplifies the interpretation of the relationship significantly). On the downside, it relies fairly heavily on frequentist interpretation of probability (which is a little counterintuitive) and it’s very easy to draw erroneous conclusions from different models.
This post will deal with a measure of how good a model is: $R^2$. First, I will go through what this value means and what it measures. Then, I will discuss an example of how reliance on $R^2$ is a dangerous game when it comes to linear models.
What you should know
Firstly, let’s establish a bit of context.…
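As a concrete anchor for what follows, here is a pure-Python sketch (mine, not the post's) of what $R^2$ computes for a simple least-squares line, namely $R^2 = 1 - SS_{res}/SS_{tot}$:

```python
def r_squared(xs, ys):
    """R^2 of a simple least-squares line fit y = a + b*x (pure Python)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx                      # slope
    a = mean_y - b * mean_x            # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

r2_perfect = r_squared([1, 2, 3, 4], [2, 4, 6, 8])        # exactly on a line
r2_noisy = r_squared([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])  # nearly on a line
```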
## The definite integral
I realise now, in all the excitement of the FTC that I hadn’t written a post about the definite integral…that’s shocking! ok, here we go…the plan for this post:
• Look at our Riemann sums and think about taking a limit of them
• Define the definite integral
• Look at a couple of theorems about the definite integral
• Do an example
• Look at properties of definite integrals
That’s quite a lot, but we are more or less going to follow along with Stewart. Stewart just has a slightly different style to mine, so I recommend reading his for more detail, and mine for potentially a bit more intuition.
So, let’s begin…
We have seen in previous lectures/sections/semesters/lives that we can approximate the area under a curve by splitting it up into rectangular regions. Here are examples of splitting up one function into rectangles (and, in the last way trapezoids, but you don’t have to worry about this).…
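The rectangle idea can be sketched in a few lines of Python (my example, not Stewart's): summing rectangle areas approximates the area under the curve, and the approximation improves as the rectangles narrow.

```python
def riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

approx = riemann_sum(lambda x: x**2, 0.0, 1.0, 100_000)  # exact area is 1/3
```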
## The Fundamental Theory of Calculus part 2 (part ii)
OK, get ready for some Calculus-Fu!
We have now said that rather than taking pesky limits of Riemann sums to calculate areas under curves (ie. definite integrals), all we need is to find an antiderivative of the function that we are looking at.
As a reminder, to calculate the definite integral of a continuous function, we have:
$\int_a^b f(x)dx=F(b)-F(a)$
where $F$ is any antiderivative of $f$
Remember that to calculate the area under the curve of $f(x)=x^4$ from, let’s say 2 to 5, we had to write:
$\int_2^5 x^4 dx=\lim_{n\rightarrow \infty}\sum_{i=1}^n f(x_i)\Delta x=\lim_{n\rightarrow \infty}\sum_{i=1}^n f\left(2+\frac{3i}{n}\right)\frac{3}{n}=\lim_{n\rightarrow\infty}\sum_{i=1}^n\frac{3}{n}\left(2+\frac{3i}{n}\right)^4$
And at that point we had barely even started because we still had to actually evaluate this sum, which is a hell of a calculation…then we have to calculate the limit. What a pain.
Now, we are told that all we have to do is to find any antiderivative of $f(x)=x^4$ and we are basically done.
Can we find a function which, when we take its derivative gives us $x^4$?…
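The payoff is easy to check numerically (a sketch of mine, not part of the original post): taking $F(x) = x^5/5$, the value $F(5) - F(2)$ agrees with a brute-force Riemann sum.

```python
def right_riemann(f, a, b, n):
    """Right-endpoint Riemann sum of f over [a, b] with n rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x**4
exact = (5**5 - 2**5) / 5              # F(5) - F(2) with F(x) = x^5 / 5
approx = right_riemann(f, 2.0, 5.0, 200_000)
```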
## The Fundamental Theory of Calculus part 2 (part i)
OK, now we come onto the part of the FTC that you are going to use most. We are finally going to show the direct link between the definite integral and the antiderivative. I know that you’ve been holding your breaths until this moment. Get ready to breath a sign of relief:
The Fundamental Theorem of Calculus, Part 2 (also known as the Evaluation Theorem)
If $f$ is continuous on $[a,b]$ then
$\int_a^b f(x) dx=F(b)-F(a)$
where $F$ is any antiderivative of $f$. Ie any function such that $F'=f$.
————-
This means that, very excitingly, now to calculate the area under the curve of a continuous function we no longer have to do any ghastly Riemann sums. We just have to find an antiderivative!
OK, let’s prove this one straight away.
We’ll define:
$g(x)=\int_a^x f(t)dt$
and we know from the FTC part 1 how to take derivatives of this. It’s just $g'(x)=f(x)$. This says that $g$ is an antiderivative of $f$.…
## The Fundamental Theorem of Calculus part 1 (part iii)
So, we are now ready to prove the FTC part 1. We’re going to follow the proof in Stewart and add in some discussion as we go along to motivate what we are doing. What we are going to prove is that:
$\frac{d}{dx} \int_a^x f(t) dt=f(x)$
for $x\in [a,b]$ when $f$ is continuous on $[a,b]$.
Proof:
we define $g(x)=\int_a^x f(t)dt$ and we want to find the derivative of $g$. We will do this by using the fundamental definition of the derivative, so let’s look at calculating this function at $x$ and $x+h$ – ie. how much does it change when we change $x$ by a little bit?
$g(x+h)-g(x)=\int_a^{x+h}f(t) dt-\int_a^x f(t) dt$
But remember that the definite integral is just the area, so this difference is the area between a and x+h minus the area between a and x. Which is just the area between x and x+h. Using the properties of integrals, we can write this formally as:
$g(x+h)-g(x)=\int_a^{x+h}f(t) dt-\int_a^x f(t) dt=\left(\int_a^{x}f(t)dt+\int_x^{x+h}f(t)dt\right)-\int_a^{x}f(t)dt=\int_x^{x+h}f(t)dt$
and we can write, for $h\ne 0$:
$\frac{g(x+h)-g(x)}{h}=\frac{1}{h}\int_x^{x+h}f(t)dt$
Restated, we can think of this as the area between x and x+h divided by h.…
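Numerically (my sketch, not the post's, taking $f(t) = t^2$ and $a = 0$), the difference quotient of $g$ really does approach $f(x)$:

```python
def g(x, n=100_000):
    """g(x) = integral from 0 to x of f(t) dt, via a right Riemann sum."""
    f = lambda t: t * t
    dx = x / n
    return sum(f(i * dx) * dx for i in range(1, n + 1))

x, h = 2.0, 1e-3
quotient = (g(x + h) - g(x)) / h   # FTC part 1: this tends to f(x) = x^2 = 4
```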
A&A 401, 483-489 (2003)
DOI: 10.1051/0004-6361:20030138
## HI observations of nearby galaxies V. Narrow (HI) line galaxies
W. K. Huchtmeier 1 - I. D. Karachentsev 2 - V. E. Karachentseva 3
1 - Max-Plank-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
2 - Special Astrophysical Observatory, Russian Academy of Sciences, N.Arkhyz, KChR 369167, Russia
3 - Astronomical Observatory of Kiev University, Observatorna, 3, 04053 Kiev, Ukraine
Received 17 December 2002 / Accepted 22 January 2003
Abstract
In this paper we present new HI observations with high velocity resolution for 104 nearby narrow-line galaxies with half power line widths smaller than 50 km s-1, most of which are well approximated by a Gaussian. The FWHM line width of 30 of these integrated HI profiles is less than 25 km s-1. We present global parameters of these nearby galaxies and discuss the size dependence, the luminosity dependence, and the dependence of these parameters on the observed line width. Our sample essentially is a subsample of the Local Volume (i.e. galaxies within 10 Mpc) with declinations ≥ -30° and only a few galaxies at greater distances. It is described by the following median values of global physical parameters: total absolute blue magnitude ; linear diameter A0 = 1.63 kpc (this corresponds to the de Vaucouleurs diameter D25); half power line width km s-1; total HI mass ; distance D = 5.1 Mpc; HI mass-to-luminosity ratio ; total mass-to-luminosity ratio .
Key words: galaxies: distances and redshifts - galaxies: dwarf - galaxies: general
### 1 Introduction
Since the early days of extragalactic spectroscopy it has been known that disk galaxies are dominated by rotation. A spread in observed line widths, i.e. half power line widths of integrated HI profiles or profiles from slit spectroscopy along the major axis of galaxies, from smaller than 50 km s-1 to several hundred km s-1, has been observed. However, the number of observed narrow lines has been small in the past. For example, the first systematic catalog of nearby galaxies ( km s-1) by Kraan-Korteweg & Tammann (1979) contained 54 galaxies with line widths smaller than 50 km s-1 out of 179 galaxies. These numbers do not contain the faint dwarf spheroidals which are not accessible to spectroscopic emission line observations due to their low gas content.
In our HI line search for nearby dwarf galaxy candidates (Paper I to Paper IV) we used a velocity resolution of 6.2 km s-1 (or 10.4 km s-1 after Hanning smoothing if applied). This resolution resulted from a setup of the autocorrelator optimized for velocity coverage and sensitivity. For the narrowest profiles observed, this yielded only two or three data points per profile, which does not allow a determination of the profile shape. Spikes of radio interference may be mistaken for narrow emission profiles. In order to confirm the detection of these narrow lines and to study their line shapes we repeated HI observations of those galaxies with half power line widths smaller than 50 km s-1. Using a velocity resolution of 1.4 km s-1 (2.4 km s-1 after Hanning smoothing if applied) we have at least 10 channels with HI emission to describe the line shape; this allows us to simulate the line shape numerically. It turned out that nearly all galaxies in this sample could be approximated by a Gauss-like function. Hence, the Gauss fit was used to derive the systemic velocity and the half power line width for all profiles. All narrow line features observed earlier with coarse velocity resolution could be confirmed and their profile parameters could be measured with much better precision.
In this paper we present the HI observations for 104 narrow-line galaxies in Sect. 2. The data and derived parameters are discussed in Sect. 3. Deriving the total mass will be discussed in Sect. 4 followed by a discussion in Sect. 5.
### 2 Observations
Observations were performed with the 100-m radio telescope at Effelsberg which has a half power beam width (HPBW) of 9.3' at the wavelength of 21-cm. A dual channel receiver was followed by a 1024-channel autocorrelator which was split into four banks of 256 channels each. A bandwidth of 1.56 MHz yielded a channel separation of 1.2 km s-1 (a velocity resolution of 1.4 km s-1, or 2.4 km s-1 after Hanning smoothing). The system noise was around 30 K. Using such a narrow bandwidth resulted in rather flat baselines even in daytime. Observations were performed in the total power mode (ON-OFF) combining the source position with an empty field generally 5 min away in RA.
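As a consistency check (not in the paper's text), the quoted channel separation follows from the Doppler relation Δv = c·Δf/f₀ with the 21-cm rest frequency:

```python
C_KMS = 299_792.458          # speed of light in km/s
F_HI_HZ = 1.420405751e9      # rest frequency of the HI 21-cm line, Hz

bandwidth_hz = 1.56e6        # 1.56 MHz, as stated above
n_channels = 256             # one autocorrelator bank
df_hz = bandwidth_hz / n_channels    # Hz per channel
dv_kms = C_KMS * df_hz / F_HI_HZ     # ~1.29 km/s, quoted as 1.2 km/s
```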
The Toolbox software of the MPIfR has been used for the data reduction and presentation. The resulting HI profiles are presented in Fig. 1.
Figure 1: HI profiles of the dwarf galaxies of our narrow line sample and their Gauss fits. The profiles are arranged in ascending RA starting at the top left corner. The flux scale is in Jy, the heliocentric radial velocity (optical convention) in km s-1. The velocity resolution of most spectra is 1.4 km s-1. The profiles of KK 17, KKSG 6, D564-08, D565-06, KKH51, U5186, KK151, and KK152 have been Hanning smoothed (velocity resolution 2.4 km s-1).
In general the signal-to-noise ratio is high for the observed HI profiles. The ON-OFF observing procedure reduced the local HI emission around 0 km s-1 to a weak and narrow residual spike. However, for galaxies in the zone of avoidance in the Galactic plane the HI emission itself and its changes are stronger than elsewhere. Here the higher velocity resolution helped in separating the HI emission of a given galaxy from the Local HI. There is still some blending of the Local and the galactic HI emission on the low velocity side of a few galaxies like DDO 53, Cassiopeia 1, KKH 12, Leo A, and KKR 55.
Some low level radio frequency interference (RFI) is present at the Effelsberg site producing occasionally spikes in the observed 21-cm band. Even if the RFI occurs within the velocity range of the HI emission of a given galaxy this is not essential for deriving observational parameters, see e.g. KK 148, KK 152, UGC 7584. These spikes generally disappear after Hanning filtering.
### 3 The data
In Table 1 we give the galaxy name in Col. 1 followed by the 1950.0 position in Col. 2. The integrated HI flux (line integral in Jy km s-1) in Col. 3 is followed by the maximum flux density and its error in mJy (Col. 4), the heliocentric systemic velocity and its error (Col. 5), and the half power line width and its error (Col. 6). The distance and a code for the method used are given in Col. 7; here the code (with the estimated error of the method in parentheses) means c = Cepheids (10%), r = red giant branch stars (12%), s = surface brightness fluctuations (15%), m = group membership (20%), b = bright stars (25%), t = Tully-Fisher relation (30%), h = Hubble distance (ca. 30%) assuming H0=70 km s-1 Mpc-1. The absolute magnitude follows in Col. 8 (the total blue apparent magnitude was corrected for Galactic absorption following Schlegel et al. 1998). The relative HI content normalized by the total blue luminosity, in solar units, follows in Col. 9. The integrated HI mass and the total mass were calculated as in Paper III, i.e.
M_HI = 2.36 × 10^5 D^2 ∫ S dv  [M_⊙]   (1)
where D is the distance in Mpc and the line integral (integrated HI flux) in Jy km s-1, and
(2)
where a is the optical diameter (D25) in arcmin, D the distance in Mpc and the observed half power line width corrected for instrumental broadening but not for inclination.
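The integrated HI mass above follows the standard single-dish relation M_HI = 2.36 × 10^5 D² S; as a sketch (the distance and flux values here are illustrative, not taken from Table 1):

```python
def hi_mass_msun(distance_mpc, flux_jy_kms):
    """Standard single-dish HI mass: M_HI = 2.36e5 * D^2 * S (solar masses)."""
    return 2.36e5 * distance_mpc**2 * flux_jy_kms

# A hypothetical galaxy at the sample's median distance of 5.1 Mpc
m_hi = hi_mass_msun(5.1, 10.0)   # ~6.1e7 solar masses for 10 Jy km/s
```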
The "total" mass-to-luminosity ratio in solar units M/LB follows in Col. 10. Comments are given in Col. 11.
We admitted galaxies with line widths slightly larger than 50 km s-1 when this limit was within 3 times the r.m.s. error of the line width. Tifft & Huchtmeier (1990) compared HI data from the 100-m radio telescope at Effelsberg and the 91-m radio telescope at Green Bank and discussed systematic errors due to the observing procedure and the data reduction software. For narrow profiles they deduce systematic errors smaller than 0.1 km s-1, the same is true for pointing errors. Values in Cols. 4 to 6 of Table 1 are the result of a Gauss-fit procedure with stable results in the presence of noise.
Viewing the HI profiles in Fig. 1 it is obvious that most of them are fitted well by a Gaussian curve. A few exceptions are noticed among the brighter galaxies in our sample. Asymmetries with respect to a Gauss curve are observed for the galaxies UGC 1176, Cassiopeia 1, DDO 165 (UGC 8201), and in the faint Pegasus dwarf irregular galaxy DDO 216. This might partially be due to structure in the HI distribution.
Significant deviations from Gaussian curves were also noted in KK41 (CamA), UGC 5889, UGC 8651 where we observe a rudimentary double peaked profile typical of spiral galaxies.
For most galaxies the integrated HI flux given in Col. 3 of Table 1 should be the correct value as the median apparent blue diameter of this sample is only 1.2 arcmin, which is small compared to the main beam (9.3 arcmin HPBW). The range of apparent blue diameter is from 0.4 arcmin to 5.1 arcmin. There are nine galaxies with apparent blue diameter larger than 2.4 arcmin, up to about 5 arcmin for the Pegasus dwarf irregular, Leo A, and Sextans B. For these galaxies our observations will not provide the total integrated HI flux (Broeils & Rhee 1997).
Only few of the galaxies in this sample have been observed before with higher velocity resolution and higher spatial resolution. VLA observations of LGS-3 (Young & Lo 1997) with a velocity resolution of 1.28 km s-1 and a spatial resolution from 36'' to 64'' yield an HI extend of 6 arcmin. Hence the tapering of the Effelsberg beam yields an integrated HI flux about 30% lower (i.e. 1.7 Jy km s-1) compared to 2.7 Jy km s-1 observed with the VLA. The agreement of the derived systemic radial velocities is excellent, km s-1 and km s-1 (VLA).
VLA observations of faint dwarf galaxies by Lo et al. (1993) yield similar integrated fluxes for the galaxies in common except for Leo A for which they derive an HI extent of 7 arcmin and an HI flux of 68.3 Jy km s-1 compared with the 48.3 Jy km s-1 observed with the 100-m radio telescope.
### 4 Total mass
In their study of the HI structure of nine intrinsically faint dwarf galaxies based on VLA observations, Lo et al. (1993) found that tiny dwarf galaxies fainter than about show little sign of rotation, typically km s-1. In these galaxies the rotational velocity is less than the three-dimensional random velocity dispersion. More massive dwarfs, like Holmberg I and Cas 1, definitely are dominated by rotation. In the case of a system with chaotic motion dominating rotation, such as the HI in faint dwarf galaxies, one can apply the Virial theorem to the mass determination. Here we assume the HI gas to be in a steady state within the gravitational potential.
From their HI maps Lo et al. could derive the velocity dispersion at different points in the galaxy and measure the HI extent to derive the total mass applying the Virial theorem. For the present data we only have the integrated HI profiles and cannot separate rotation from the velocity dispersion. We therefore normalized the expression for the total mass based on the square of the observed line width [km s-1], the angular extent a ['] of the galaxy and its distance D [Mpc] by the total masses derived by Lo et al. (1993) which yields MVT in solar masses:
(3)
This global value for the total mass is uncertain by a factor of 2 at least. In view of this error and the uncertainty of the inclination we used the same relation for the total mass we have been using in previous papers of this series (see Eq. (2)).
Figure 2: The linear extent A0 in kpc (corresponding to the de Vaucouleurs diameter D25) is plotted versus the absolute blue magnitude of the galaxies of our sample. It is obvious that the smaller the galaxies, the fainter they are.
### 5 Discussion
Given the relative resolution into stars of their optical image and their different optical radial velocities, eight (i.e. KKH 1, KKH 26, KKH 37, KKSG13, KKH 70, KKH 75, KKSG 37, and KKH 90) of the 104 observed HI profiles - mostly with negative radial velocities - have been considered to be of local origin (Galactic foreground), leaving 96 galaxies to be discussed further.
Most of the galaxies in our sample are classified as dwarf irregulars. There are a few late-type spirals and five dwarf spheroidals classified as dSph or dSph/Irr. Thirty galaxies have line widths smaller than 25 km s-1. Global parameters like , , , and A0 correlate with each other as known from many studies of spiral and irregular galaxies. In Fig. 2 we present a plot of the linear diameter versus absolute blue magnitude. This shows the range of size and luminosity of galaxies in our sample.
For the tiny galaxies in our sample it is impossible to correct for inclination as inclinations derived from axial ratios are extremely uncertain for galaxies with "irregular'' structure. In addition the intrinsic axial ratio probably increases with decreasing mass; this value is not well defined either.
For the larger galaxies in our sample the inclination may be derived from the shape of the HI distribution assuming rotation to be the dominant motion, as in Cas1. Here the mass-to-luminosity ratio increases by a factor of 2 when the inclination is taken into account.
It is evident that very narrow line widths of the integrated HI profiles are associated with small and faint galaxies. This is demonstrated in Figs. 3 to 5 where the line width is plotted versus the linear extent A0 (Fig. 3),
Figure 3: The linear extent A0 is plotted versus the HI half power line width (corrected for instrumental broadening). On average, small galaxies are characterized by narrow lines.
the absolute blue magnitude (Fig. 4), and the total HI mass (Fig. 5) of the galaxies in our sample. The definite correlation of the line width with the other global parameters yields the conclusion that the galaxies with the smallest line widths in our sample are on average also the smallest in linear size (Fig. 3), the faintest in absolute magnitude (Fig. 4) and those with the smallest HI mass (Fig. 5).
The total mass-to-light ratio is not correlated with inclination, linear extent A0, HI mass or total mass .
Since the earliest compilation of nearby galaxies (Kraan-Korteweg & Tammann 1979) the number of known nearby galaxies has doubled, and the number of gas-rich galaxies at the faint end of the luminosity function has increased by about the same factor. As the actual sensitivity for faint galaxies (optically and in HI) is limited, our sample is far from complete for the local volume (10 Mpc).
Figure 4: The HI half power line width (corrected for instrumental broadening) is plotted versus absolute blue magnitude for the galaxies in our narrow line sample. On average faint galaxies have narrow line widths.
Figure 5: The HI half power line width (corrected for instrumental broadening) is plotted versus the integrated HI mass of the galaxies in our narrow line sample. On average galaxies with narrow lines have small HI masses.
### 6 Conclusion
In this paper we presented HI observations with high velocity resolution for 96 galaxies with line widths smaller than 50 km s-1. These dwarf galaxies have small apparent blue diameters (median: 1.2 arcmin). The median values of the linear extent (A0 = 1.63 kpc), the absolute blue magnitude, and the integrated HI mass show that they are indeed dwarfish galaxies. Most profiles (all but 5) could be well approximated by a Gaussian function. The correlations between global parameters that have been determined for spiral galaxies are seen to extend to the faint end of the galaxy luminosity function.
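The quoted line widths are corrected for instrumental broadening. A common approximation for such a correction (assumed here for illustration; the paper does not spell out its exact procedure) subtracts the instrumental resolution in quadrature, which is exact for Gaussian profiles:

```python
import math

def correct_line_width(w_obs, w_inst):
    """Correct an observed HI line width (km/s) for instrumental
    broadening by quadrature subtraction; exact for Gaussian profiles."""
    if w_obs <= w_inst:
        return 0.0  # line unresolved at this spectral resolution
    return math.sqrt(w_obs ** 2 - w_inst ** 2)

# e.g. an observed 25 km/s width with a 10 km/s instrumental width
w_true = correct_line_width(25.0, 10.0)
```

For very narrow lines the correction matters most, which is one reason high velocity resolution is needed for this sample.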
Acknowledgements
This paper is based on observations with the 100-m radio telescope of the MPIfR (Max-Planck-Institut für Radioastronomie) at Effelsberg. We have made extensive use of the NASA/IPAC Extragalactic Database (NED, which is operated by the Jet Propulsion Laboratory, Caltech, under contract with the National Aeronautics and Space Administration), and the Digitized Sky Survey (DSS-1) produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. Our project is partially supported by DFG grant No 436 RUS 113/470/0.
## References
• Broeils, A. H., & Rhee, M.-H. 1997, A&A, 324, 877
• Huchtmeier, W. K., Karachentsev, I. D., & Karachentseva, V. E. 2000, A&AS, 141, 469 (Paper II)
• Huchtmeier, W. K., Karachentsev, I. D., & Karachentseva, V. E. 2001, A&A, 377, 801 (Paper IV)
• Huchtmeier, W. K., Karachentsev, I. D., Karachentseva, V. E., & Ehle, M. 2000, A&AS, 141, 469 (Paper I)
• Karachentsev, I. D., Karachentseva, V. E., & Huchtmeier, W. K. 2001, A&A, 366, 428 (Paper III)
• Kraan-Korteweg, R. C., & Tammann, G. A. 1979, Astron. Nachr., 300, 181
• Lo, K. Y., Sargent, W. L. W., & Young, K. 1993, AJ, 106, 507
• Makarov, D. I., & Karachentsev, I. D. 2003, A&A, submitted
• Marvel, K. B., & Wilcots, E. M. 2000, AJ, 120, 2038
• Sargent, W. L. W., Sancisi, R., & Lo, K. Y. 1983, ApJ, 265, 711
• Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
• Tifft, W. G., & Huchtmeier, W. K. 1990, A&AS, 84, 47
• Young, L. M., & Lo, K. Y. 1997, ApJ, 490, 710
https://math.rutgers.edu/news-events/seminars-colloquia-calendar/icalrepeat.detail/2022/11/29/11729/133/adm-mass-for-metrics-and-distortion-under-ricci-deturck-flow | # Seminars & Colloquia Calendar
Geometric Analysis Seminar
## ADM mass for metrics and distortion under Ricci-DeTurck flow
#### Paula Burkhardt-Guim
Location: Hill Center Room 705
Date & time: Tuesday, 29 November 2022 at 2:50PM - 3:50PM
Abstract: We show that there exists a quantity, depending only on $C^0$ data of a Riemannian metric, that agrees with the usual ADM mass at infinity whenever the ADM mass exists, but has a well-defined limit at infinity for any continuous Riemannian metric that is asymptotically flat in the $C^0$ sense and has nonnegative scalar curvature in the sense of Ricci flow. Moreover, the $C^0$ mass at infinity is independent of the choice of $C^0$-asymptotically flat coordinate chart, and the $C^0$ local mass has controlled distortion under Ricci-DeTurck flow when coupled with a suitably evolving test function.
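For orientation, recall the classical definition the abstract refers to: for a metric that is asymptotically flat in the usual (derivative-dependent) sense, the ADM mass is the boundary flux integral (standard formula, not taken from the talk):

```latex
m_{\mathrm{ADM}}(g) \;=\; \lim_{r\to\infty} \frac{1}{16\pi}
\int_{S_r} \sum_{i,j=1}^{3} \bigl(\partial_j g_{ij} - \partial_i g_{jj}\bigr)\,\nu^{i}\, dA
```

This expression uses first derivatives of $g$, which is precisely what a quantity depending only on $C^0$ data must avoid.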
https://www.semanticscholar.org/paper/Charged-particle-multiplicities-in-pp-interactions-Collaboration/7f426c2a4b2489d59939393c7090856504f4406e | # Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC
@inproceedings{Collaboration2010ChargedparticleMI,
  title={Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC},
  author={The ATLAS Collaboration},
  year={2010}
}
The ATLAS Collaboration · Published 2010 · Physics
• Measurements are presented from proton–proton collisions at centre-of-mass energies of √s = 0.9, 2.36 and 7 TeV recorded with the ATLAS detector at the LHC. Events were collected using a single-arm minimum-bias trigger. The charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity, and the relationship between the mean transverse momentum and charged-particle multiplicity are measured. Measurements in different regions of phase space are shown, providing diffraction…
https://www.ms.u-tokyo.ac.jp/seminar/topology/past_12.html
## Tuesday Seminar on Topology
Schedule: Tuesdays 17:00-18:30, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba). Organizers: Toshitake Kohno, Nariya Kawazumi, Takahiro Kitayama, Takuya Sakasai. http://park.itc.u-tokyo.ac.jp/MSF/topology/TuesdaySeminar/index.html Tea: 16:30 - 17:00, Common Room
### Tuesday, October 23, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
A geometric approach to the Johnson homomorphisms (JAPANESE)
[ Abstract ]
We describe the Johnson homomorphisms in terms of the Torelli group mapping into the completed Goldman-Turaev Lie bialgebra ...
### Tuesday, October 16, 2012
17:10-18:10, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:50 - 17:10, Common Room
Analytic torsion of log-Enriques surfaces (JAPANESE)
[ Abstract ]
Log-Enriques surfaces are rational surfaces with nowhere vanishing
pluri-canonical forms. We report the recent progress on the computation
of analytic torsion of log-Enriques surfaces.
### Tuesday, October 9, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
The growth series of pure Artin groups of dihedral type (JAPANESE)
[ Abstract ]
In this talk, I consider the kernel of the natural projection from
the Artin group of dihedral type to the corresponding Coxeter group,
that we call a pure Artin group of dihedral type,
and present rational function expressions for both the spherical and
geodesic growth series
of the pure Artin group of dihedral type with respect to a natural
generating set.
Also, I show that their growth rates are Pisot numbers.
This talk is partially based on a joint work with Takao Satoh.
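As an aside on the Pisot property mentioned above (an algebraic integer greater than 1 whose other conjugates all have modulus less than 1), here is a minimal numerical check for the golden ratio, a root of x^2 - x - 1; this illustration is ours, not from the talk:

```python
import math

# The two roots of x^2 - x - 1 = 0
phi = (1 + math.sqrt(5)) / 2    # golden ratio, ~1.618
conj = (1 - math.sqrt(5)) / 2   # its Galois conjugate, ~-0.618

# Pisot condition: greater than 1, all other conjugates inside the unit circle
is_pisot = (phi > 1) and (abs(conj) < 1)
```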
### Tuesday, October 2, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Geometric flows and their self-similar solutions (JAPANESE)
[ Abstract ]
In the first half of this expository talk we consider the Ricci flow and its self-similar solutions, namely the Ricci solitons. We then specialize to the Kähler case and discuss the Kähler-Einstein problem. In the second half of this talk we consider the mean curvature flow and its self-similar solutions, and see common aspects of the two geometric flows.
### Tuesday, September 4, 2012
17:00-18:00, Room 002, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:30 - 17:00, Common Room
Piotr Nowak (Institute of Mathematics, Polish Academy of Sciences)
Poincare inequalities, rigid groups and applications (ENGLISH)
[ Abstract ]
Kazhdan’s property (T) for a group G can be expressed as a
fixed point property for affine isometric actions of G on a Hilbert
space. This definition generalizes naturally to other normed spaces. In
this talk we will focus on the spectral (aka geometric) method for
proving property (T), based on the work of Garland and studied earlier
by Pansu, Zuk, Ballmann-Swiatkowski, Dymara-Januszkiewicz
(“lambda_1>1/2” conditions) and we generalize it to the setting of
all reflexive Banach spaces.
As applications we will show estimates of the conformal dimension of the
boundary of random hyperbolic groups in the Gromov density model and
present progress on Shalom’s conjecture on vanishing of 1-cohomology
with coefficients in uniformly bounded representations on Hilbert spaces.
### Tuesday, July 24, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Greg McShane (Institut Fourier, Grenoble)
Orthospectra and identities (ENGLISH)
[ Abstract ]
The orthospectra of a hyperbolic manifold with geodesic
boundary consists of the lengths of all geodesics perpendicular to the
boundary.
We discuss the properties of the orthospectra, asymptotics, multiplicity
and identities due to Basmajian, Bridgeman and Calegari. We will give
a proof that the identities of Bridgeman and Calegari are the same.
### Tuesday, July 17, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Contact structure of mixed links (JAPANESE)
[ Abstract ]
A strongly non-degenerate mixed function has a Milnor open book structure on a sufficiently small sphere. We introduce the notion of a *holomorphic-like* mixed function and we will show that a link defined by such a mixed function has a canonical contact structure. Then we will show that this contact structure for a certain holomorphic-like mixed function is carried by the Milnor open book.
### Tuesday, July 10, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Marcus Werner (Kavli IPMU)
Topology in Gravitational Lensing (ENGLISH)
[ Abstract ]
General relativity implies that light is deflected by masses
due to the curvature of spacetime. The ensuing gravitational
lensing effect is an important tool in modern astronomy, and
topology plays a significant role in its properties. In this
talk, I will review topological aspects of gravitational lensing
theory: the connection of image numbers with Morse theory; the
interpretation of certain invariant sums of the signed image
magnification in terms of Lefschetz fixed point theory; and,
finally, a new partially topological perspective on gravitational
light deflection that emerges from the concept of optical geometry
and applications of the Gauss-Bonnet theorem.
### Tuesday, June 19, 2012
17:10-18:10, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:50 - 17:10, Common Room
On the universal degenerating family of Riemann surfaces over the D-M compactification of moduli space (JAPANESE)
[ Abstract ]
It is usually understood that over the Deligne-
Mumford compactification of moduli space of Riemann surfaces of
genus > 1, there is a family of stable curves. However, if one tries to
construct this family precisely, he/she must first take a disjoint union
of various types of smooth families of stable curves, and then divide
them by their automorphisms to paste them together. In this talk we will
show that once the smooth families are divided, the resulting quotient
family contains not only stable curves but virtually all types of
degeneration of Riemann surfaces, becoming a kind of universal
degenerating family of Riemann surfaces.
### Tuesday, June 12, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Topological interpretation of the quandle cocycle invariants of links (JAPANESE)
[ Abstract ]
Carter et al. introduced many quandle cocycle invariants, combinatorially constructed from link diagrams. For connected quandles of finite order, we give a topological meaning of the invariants, modulo some torsion parts. Precisely, this invariant equals the sum of the "knot colouring polynomial" and a Z-equivariant part of the Dijkgraaf-Witten invariant. Moreover, our approach yields applications to computing "good" torsion subgroups of the 3rd quandle homologies and the 2nd homotopy groups of rack spaces.
### Tuesday, June 5, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
A generalization of Dehn twists (JAPANESE)
[ Abstract ]
We introduce a generalization
of Dehn twists for loops which are not
necessarily simple loops on an oriented surface.
Our generalization is an element of a certain
enlargement of the mapping class group of the surface.
A natural question is whether a generalized Dehn twist is
in the mapping class group. We show some results related to this question.
This talk is partially based on a joint work
with Nariya Kawazumi (Univ. Tokyo).
### Tuesday, May 29, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Triple linking numbers and triple point numbers of torus-covering $T^2$-links (JAPANESE)
[ Abstract ]
The triple linking number of an oriented surface link was defined as an analogue of the linking number of a classical link. A torus-covering $T^2$-link $\mathcal{S}_m(a,b)$ is a surface link in the form of an unbranched covering over the standard torus, determined from two commutative $m$-braids $a$ and $b$.
In this talk, we consider $\mathcal{S}_m(a,b)$ when $a$, $b$ are pure $m$-braids ($m \geq 3$), which is a surface link with $m$ components. We present the triple linking number of $\mathcal{S}_m(a,b)$ by using the linking numbers of the closures of $a$ and $b$. This gives a lower bound for the triple point number. In some cases, we can determine the triple point numbers, each of which is a multiple of four.
### Tuesday, May 22, 2012
17:10-18:10, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:50 - 17:10, Common Room
Gamma Integral Structure in Gromov-Witten theory (JAPANESE)
[ Abstract ]
The quantum cohomology of a symplectic manifold underlies a certain integral local system defined by the Gamma characteristic class. This local system originates from the natural integral local system on the B-side under mirror symmetry. In this talk, I will explain its relationships to the problem of analytic continuation of Gromov-Witten theory (potentials), including the crepant resolution conjecture, the LG/CY correspondence, and modularity in higher genus theory.
### Tuesday, May 8, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Infinite examples of non-Garside monoids having fundamental elements (JAPANESE)
[ Abstract ]
The Garside group, as a generalization of Artin groups,
is defined as the group of fractions of a Garside monoid.
To understand the elliptic Artin groups, which are the fundamental
groups of the complement of discriminant divisors of the semi-versal
deformation of the simply elliptic singularities E_6~, E_7~ and E_8~,
we need to consider another generalization of Artin groups.
In this talk, we will study the presentations of fundamental groups
of the complement of complexified real affine line arrangements
and consider the associated monoids.
It turns out that, in some cases, they are not Garside monoids.
Nevertheless, we will show that they satisfy the cancellation condition
and carry certain particular elements similar to the fundamental elements
in Artin monoids.
As a result, we will show that the word problem can be solved
and that their centers are determined.
### Tuesday, May 1, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Minimal models, formality and hard Lefschetz property of solvmanifolds with local systems (JAPANESE)
[ Abstract ]
For a simply connected solvable Lie group G with a cocompact discrete subgroup $\Gamma$, we consider the space of differential forms on the solvmanifold $G/\Gamma$ with values in a certain flat bundle, so that this space has the structure of a differential graded algebra (DGA). We construct Sullivan's minimal model of this DGA. This result is an extension of Nomizu's theorem for ordinary coefficients in the nilpotent case. By using this result, we refine Hasegawa's result on the formality of nilmanifolds and Benson-Gordon's result on the hard Lefschetz property of nilmanifolds.
### Tuesday, April 24, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Dylan Thurston (Columbia University)
Combinatorial Heegaard Floer homology (ENGLISH)
[ Abstract ]
Heegaard Floer homology is a powerful invariant of 3- and 4-manifolds.
In 4 dimensions, Heegaard Floer homology (together with the
Seiberg-Witten and Donaldson equations, which are conjecturally
equivalent), provides essentially the only technique for
distinguishing smooth 4-manifolds. In 3 dimensions, it provides much
geometric information, like the simplest representatives of a given
homology class.
In this talk we will focus on recent progress in making Heegaard Floer
homology more computable, including a complete algorithm for computing
it for knots.
### Tuesday, April 17, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Eriko Hironaka (Florida State University)
Pseudo-Anosov mapping classes with small dilatation (ENGLISH)
[ Abstract ]
A mapping class is a homeomorphism of an oriented surface
to itself modulo isotopy. It is pseudo-Anosov if the lengths of essential
simple closed curves under iterations of the map have exponential growth
rate. The growth rate, an algebraic integer of degree bounded with
respect to the topology of the surface, is called the dilatation of the
mapping class. In this talk we will discuss the minimization problem
for dilatations of pseudo-Anosov mapping classes, and give two general
constructions of pseudo-Anosov mapping classes with small dilatation.
### Tuesday, April 10, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
On homology of symplectic derivation Lie algebras of the free associative algebra and the free Lie algebra (JAPANESE)
[ Abstract ]
We discuss homology of symplectic derivation Lie algebras of
the free associative algebra and the free Lie algebra
with particular stress on their abelianizations (degree 1 part).
Then, by using a theorem of Kontsevich,
we give some applications to rational cohomology of the moduli spaces of
Riemann surfaces and metric graphs.
This is a joint work with Shigeyuki Morita and Masaaki Suzuki.
### Tuesday, February 21, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Property (TT)/T and homomorphism superrigidity into mapping class groups (JAPANESE)
[ Abstract ]
It is known that the mapping class group of a compact orientable surface (possibly with punctures) has many mysterious properties: in some respects it behaves like a higher-rank lattice (that is, an irreducible lattice in a higher-rank algebraic group), while in other respects it behaves like a rank-one lattice. The following theorem, called Farb-Kaimanovich-Masur superrigidity, is a striking example of rank-one-like behavior of the mapping class group: "Every group homomorphism from a higher-rank lattice (for example, SL(3,Z) or a cocompact lattice in SL(3,R)) to a mapping class group has finite image."
### Tuesday, January 17, 2012
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
On the Johnson cokernels of the mapping class group of a surface (joint work with Naoya Enomoto) (JAPANESE)
[ Abstract ]
In general, the Johnson homomorphisms of the mapping class group of a surface are used to investigate graded quotients of the Johnson filtration of the mapping class group. These graded quotients are considered as a sequence of approximations of the Torelli group. Now, there is a broad range of remarkable results for the Johnson homomorphisms.
In this talk, we concentrate our focus on the cokernels of the Johnson homomorphisms of the mapping class group. By a work of Shigeyuki Morita and Hiroaki Nakamura, it is known that an Sp-irreducible module [k] appears in the cokernel of the k-th Johnson homomorphism with multiplicity one if k=2m+1 for any positive integer m. In general, however, determining the Sp-structure of the cokernel is quite a difficult problem.
Our goal is to show that we have detected new irreducible components in the cokernels. More precisely, we will show that there appears an Sp-irreducible module [1^k] in the cokernel of the k-th Johnson homomorphism with multiplicity one if k=4m+1 for any positive integer m.
### Tuesday, December 20, 2011
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
On leafwise symplectic structures on Lawson's foliation of the 5-sphere (JAPANESE)
[ Abstract ]
We are going to show that Lawson's foliation on the 5-sphere admits a smooth leafwise symplectic structure. Historically, Lawson's foliation was the first codimension-one foliation constructed on the 5-sphere. It is obtained by modifying the Milnor fibration associated with the Fermat-type cubic polynomial in three variables.
Alberto Verjovsky proposed the question of whether Lawson's foliation, or slightly modified ones, admits a leafwise smooth symplectic structure and/or a leafwise complex structure. As Lawson's foliation has a Kodaira-Thurston nil 4-manifold as a compact leaf, the question cannot be solved simultaneously for both the symplectic and the complex cases.
The main part of the construction is to show that the Fermat type
cubic surface admits an `end-periodic' symplectic structure, while the
natural one as an affine surface is conic at the end. Even though for
the other two families of the simple elliptic hypersurface singularities
almost the same construction works, at present little seems to be
known about which Stein manifolds admit an end-periodic symplectic
structure. If time allows, we also discuss the existence of such structures on
globally convex symplectic manifolds.
### Tuesday, December 13, 2011
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Mircea Voineagu (IPMU, The University of Tokyo)
Remarks on filtrations of the singular homology of real varieties. (ENGLISH)
[ Abstract ]
We discuss various conjectures about filtrations on the singular homology of real and complex varieties. We prove that a conjecture relating the niveau filtration on Borel-Moore homology of real varieties and the image of generalized cycle maps from reduced Lawson homology is false. In the end, we discuss a certain decomposition of the Borel-Haefliger cycle map. This is a joint work with J. Heller.
### Tuesday, November 29, 2011
17:00-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:40 - 17:00, Common Room
Athanase Papadopoulos (IRMA, Univ. de Strasbourg)
Mapping class group actions (ENGLISH)
[ Abstract ]
I will describe and present some rigidity results on mapping
class group actions on spaces of foliations on surfaces, equipped with various topologies.
### Tuesday, November 22, 2011
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Quantum and homological representations of braid groups (JAPANESE)
[ Abstract ]
Homological representations of braid groups are defined as
the action of homeomorphisms of a punctured disk on
the homology of an abelian covering of its configuration space.
These representations were extensively studied by Lawrence,
Krammer and Bigelow. In this talk we show that specializations
of the homological representations of braid groups
are equivalent to the monodromy of the KZ equation with
values in the space of null vectors in the tensor product
of Verma modules when the parameters are generic.
To prove this we use representations of the solutions of the
KZ equation by hypergeometric integrals due to Schechtman,
Varchenko and others.
In the case of special parameters these representations
are extended to quantum representations of mapping
class groups. We describe the images of such representations
and show that the images of any Johnson subgroups
contain non-abelian free groups if the genus and the
level are sufficiently large. The last part is a joint
work with Louis Funar.
### Tuesday, November 15, 2011
16:30-18:00, Room 056, Graduate School of Mathematical Sciences Bldg. (Komaba)
Tea: 16:00 - 16:30, Common Room
Francois Laudenbach (Univ. de Nantes)
Singular codimension-one foliations and twisted open books in dimension 3 (joint work with G. Meigniez) (ENGLISH)
[ Abstract ]
The allowed singularities are those of functions. According to A. Haefliger (1958), such structures on manifolds, called $\Gamma_1$-structures, are objects of a cohomological theory with a classifying space $B\Gamma_1$. The problem of cancelling the singularities (or regularization problem) arises naturally.
For a closed manifold, it was solved by W. Thurston in a famous paper (1976), with a proof relying on Mather's isomorphism (1971): Diff$^\infty(\mathbb{R})$ as a discrete group has the same homology as the based loop space $\Omega B\Gamma_1^+$.
For further extension to contact geometry, it is necessary to solve the regularization problem without using Mather's isomorphism. That is what we have done in dimension 3. Our result is the following.
*Every $\Gamma_1$-structure $\xi$ on a 3-manifold $M$ whose normal bundle embeds into the tangent bundle to $M$ is $\Gamma_1$-homotopic to a regular foliation carried by a (possibly twisted) open book.*
The proof is elementary and relies on the dynamics of a (twisted) pseudo-gradient of $\xi$. All the objects will be defined in the talk, in particular the notion of twisted open book, which is a central object in the reported paper.
https://stats.stackexchange.com/questions/417653/non-linear-dimensionality-reduction-for-detecting-coordinate-systems | # Non-linear dimensionality reduction for detecting coordinate systems [closed]
I am trying to find a way to automatically find the appropriate coordinate system for a physical problem.
For example, in the case of a simple pendulum, polar coordinates are the most appropriate ones. I have the data for the x,y cartesian coordinates of the pendulum at various times. I would like to be able to jump from this to the angular data at a given time.
I feel that this is a form of dimensionality reduction. Could we use auto-encoders for this or are methods such as ISOMAP or CDA more appropriate?
I feel that PCA would not work well for this problem as I need to perform a non-linear dimensionality reduction.
• Is this task a learning exercise or are you really pondering about using autoencoders to obtain phase information from pendulum simulations? – Firebug Jul 18 '19 at 0:19
• Because there's an analytical solution to that, no need for fancy machine learning. – Firebug Jul 18 '19 at 0:19
• How is this dimensionality reduction? You start with two parameters (angle and length) and you end up with two (x and y)> What is it that you want to automate? – Peter Flom Jul 18 '19 at 11:34
• Yes but the problem has one degree of freedom. Ok dimensionality reduction may not be the correct word. But I feel that autoencoders could work for this. – TriposG Jul 18 '19 at 13:32
For example, in the case of a simple pendulum, polar coordinates are the most appropriate ones. I have the data for the x,y cartesian coordinates of the pendulum at various times. I would like to be able to jump from this to the angular data at a given time. I feel that this is a form of dimensionality reduction. Could we use auto-encoders for this or are methods such as ISOMAP or CDA more appropriate?
And you're correct. An autoencoder could work here, since it's a nonlinear problem, but it would not simply tell you that the latent coordinate of the two-dimensional position distribution is $$\theta=\arctan((x-\bar x)/(y-\bar y))$$. It would only allow you to estimate the intrinsic dimensionality of your problem, which is one, since the radius is invariant.
I feel that PCA would not work well for this problem as I need to perform a non-linear dimensionality reduction.
Your intuition is right, since PCA is a linear dimensionality reduction method.
• Ok, I understand that. So it isn't possible to get the value of the angular positions just from the x,y cartesian coordinates? Perhaps by varying the architecture of the autoencoder neural network. – TriposG Jul 17 '19 at 14:09
• @TriposG it's not uniquely defined. A 1-D space where $t=\theta$ is just as good at explaining your data as a 1-D space where $t = 1000 \theta$ – Firebug Jul 17 '19 at 23:02
• Right, but at least the ratios would be the same? – TriposG Jul 17 '19 at 23:26
• @TriposG no, the autoencoder could learn a nonlinear transformation of the phase as well, such as $t=\theta^3$, which has equal explanatory power to all the other approximations – Firebug Jul 17 '19 at 23:28
• OK. Do you have any suggestions on how to go from x,y coordinates to theta using autoencoders? Or is this a very difficult task? – TriposG Jul 17 '19 at 23:46
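As a minimal sketch of the analytic route mentioned in the comments (the rod length and sample angles below are made up): for a pendulum with x = L sin θ and y = -L cos θ, the angle is recovered directly with atan2, no learned model needed:

```python
import math

L = 2.0                      # assumed rod length
thetas = [0.1, 0.5, -0.3]    # hypothetical angular positions

# Cartesian observations of the bob: x = L*sin(theta), y = -L*cos(theta)
xy = [(L * math.sin(t), -L * math.cos(t)) for t in thetas]

# Analytic recovery of the angle; atan2 handles all quadrants
recovered = [math.atan2(x, -y) for (x, y) in xy]
```

This is the sense in which the latent coordinate is not unique: any monotone reparametrization of the recovered angle explains the data equally well.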
https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/8445 | Notes on Latitudes and Longitudes | Grade 6 > Social Studies > Our Earth | KULLABS.COM
• Note
• Things to remember
• Videos
• Exercise
• Quiz
We can see several horizontal and vertical lines drawn on globes and maps. These are imaginary lines, also called latitudes and longitudes, and they are used for locating places and countries on the map of the world.
Latitudes
Latitude is the angular distance of a place north or south of the equator. It is measured from the equatorial reference plane, the plane that passes through the centre of the sphere and contains the great circle representing the equator. Latitudes are imaginary lines drawn parallel to the equator, running from east to west. Latitude is denoted by the symbol phi (φ) and gives the angle between the line joining a point to the Earth's centre and the equatorial plane. It is specified in degrees, starting from 0° at the equator and going up to 90° on either side, giving northern and southern latitudes. The equator is the line of 0° latitude.
The Equator is an imaginary line drawn around the middle of the Earth from east to west. It divides the Earth into two equal halves and is the line of 0° latitude. There are 90 degrees of latitude north of the equator, called north latitudes, and 90 degrees of latitude south of it, called south latitudes. The latitudes of 90° are just points: the North Pole in the Northern Hemisphere and the South Pole in the Southern Hemisphere.
Latitudes are expressed in degree, minute and second.
The five important latitudes are:
1. Equator (0°)
2. Tropic of Cancer (23$$\frac{1}{2}$$° N)
3. Tropic of Capricorn (23$$\frac{1}{2}$$° S)
4. Arctic Circle (66$$\frac{1}{2}$$° N)
5. Antarctic Circle (66$$\frac{1}{2}$$° S)
Longitudes
Longitude, denoted by the symbol lambda (λ), is the other angular coordinate defining the position of a point on the surface of the Earth. It is the angle measured east or west from the Greenwich Meridian, which is taken as the Prime Meridian (0° longitude), up to a maximum of 180° east and 180° west. The Prime Meridian passes through Greenwich in London and divides the Earth into two equal parts: the Eastern Hemisphere to its east and the Western Hemisphere to its west. Longitudes are vertical semicircles, and all lines of longitude meet at the two poles of the Earth.
Both latitude and longitude are measured in degrees, which are in turn divided into minutes and seconds. For example, the tropical zone, which extends to the south and to the north of the Equator, is bounded by 23°26'13.7'' S and 23°26'13.7'' N. Similarly, Nepal lies between latitudes 26°22' N and 30°27' N and between longitudes 80°4' E and 88°12' E.
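As a small sketch (not part of the original notes), a degrees/minutes/seconds reading such as 23°26'13.7'' can be converted to decimal degrees with the rule decimal = degrees + minutes/60 + seconds/3600:

```python
def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    """Convert a degrees/minutes/seconds coordinate to decimal degrees."""
    return degrees + minutes / 60.0 + seconds / 3600.0

# The Tropic of Cancer at 23 degrees 26' 13.7'' N:
print(round(dms_to_decimal(23, 26, 13.7), 4))  # 23.4371
```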
http://mymathforum.com/topology/345707-exercise-tangent-bundle.html | My Math Forum Exercise on Tangent bundle
Topology Topology Math Forum
February 2nd, 2019, 09:24 AM #1 Newbie Joined: Jan 2019 From: italy Posts: 6 Thanks: 0 Exercise on Tangent bundle Hi everybody, I have to verify that $S^1\times S^2$ doesn't have a frame. This is what I would do: $S^1$ has a frame, so $T_{S^1}= S^1\times \mathbb R$. $S^2$ doesn't have a frame, so $T_{S^2}\ne S^2 \times \mathbb {R}^2$. $S^1\times S^2$ has a frame if and only if $T_{S^1\times S^2}= S^1\times S^2 \times \mathbb {R}^3$. But since $T_{S^2}\ne S^2 \times \mathbb {R}^2$, that's impossible. Am I right? thank you all
https://sidhartharya.me/braindump/20210402102641-text_classification/ | # Text Classification
Text classification is the task of assigning a sentence or document an appropriate category. The categories depend on the chosen dataset.
## Formal Definition
Input:
• a document d
• A set of classes $C = \{c_1, c_2, c_3, \ldots, c_j\}$
Output: a predicted class $c \in C$
## Classification methods
### Hand-coded rules
spam: black list address OR (“dollars” AND “have been selected”)
• High accuracy
• Building, maintaining, and scaling these rules is expensive
### Supervised Machine Learning
Input:
• a document d
• A set of classes $C = \{c_1, c_2, c_3, \ldots, c_j\}$
• a training set of $m$ hand-labelled documents
Output: a predicted class $c \in C$ | 2021-10-24 03:15:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8811796307563782, "perplexity": 10785.220749153605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585837.82/warc/CC-MAIN-20211024015104-20211024045104-00267.warc.gz"} |
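The supervised setting above can be sketched with a toy learner. The documents, classes, and counts below are made up for illustration; multinomial Naive Bayes is just one standard choice of classifier:

```python
import math
from collections import Counter, defaultdict

# Toy hand-labelled training set (hypothetical documents and classes).
train = [
    ("win dollars now", "spam"),
    ("you have been selected dollars", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch meeting tomorrow", "ham"),
]

class_docs = defaultdict(list)
for doc, c in train:
    class_docs[c].append(doc)

priors = {c: len(docs) / len(train) for c, docs in class_docs.items()}
word_counts = {c: Counter(w for d in docs for w in d.split())
               for c, docs in class_docs.items()}
vocab = {w for d, _ in train for w in d.split()}

def predict(doc):
    # argmax over classes of log P(c) + sum_w log P(w|c), add-one smoothing.
    def score(c):
        total = sum(word_counts[c].values())
        return math.log(priors[c]) + sum(
            math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            for w in doc.split() if w in vocab)
    return max(priors, key=score)

print(predict("dollars selected"))        # spam
print(predict("agenda for the meeting"))  # ham
```

With real data one would also hold out labelled documents to estimate accuracy rather than judging the model on its own training vocabulary.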
http://physics.stackexchange.com/tags/tensor-calculus/new | # Tag Info
1
When it comes to nonsymmetric tensors, the order of indices matter, even between covariant and contravariant indices. Let us take the difference between $T^a{}_b$ and $T_b{}^a$, multiplied by the metric $g_{ab}$ to raise and lower indices: $$g_{ac}(T^a{}_b-T_b{}^a)=g_{ac}T^a{}_b-g_{ac}T_b{}^a=T_{cb}-T_{bc},$$ which is zero only if $T_{bc}=T_{cb}$, i.e., if ...
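The identity in this answer is easy to check numerically. The sketch below (my own, using a random symmetric matrix as the "metric") verifies that $g_{ac}(T^a{}_b-T_b{}^a)$ equals $T_{cb}-T_{bc}$:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.normal(size=(4, 4))
g = g + g.T                             # symmetric "metric" g_ab
g_inv = np.linalg.inv(g)                # g^ab
T = rng.normal(size=(4, 4))             # nonsymmetric T_ab

Tud = np.einsum('ac,cb->ab', g_inv, T)  # T^a_b = g^{ac} T_{cb}
Tdu = np.einsum('ac,bc->ba', g_inv, T)  # T_b^a = g^{ac} T_{bc}, indexed [b, a]

# g_ac (T^a_b - T_b^a), with free indices (c, b)
lhs = np.einsum('ac,ab->cb', g, Tud - Tdu.T)
rhs = T - T.T                           # T_cb - T_bc as a matrix in (c, b)
print(np.allclose(lhs, rhs))            # True
```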
0
Gradient is covariant. Let's consider the gradient of a scalar function. The reason is that such a gradient is the change in the function per unit distance in the direction of the basis vector. We often treat the gradient as a usual vector because we often transform from one orthonormal basis into another orthonormal basis, and in this case matrix transpose and ...
1
The components of $\text{Ric}$ transform during a coordinate change $x^\mu\mapsto \tilde{x}^\mu$ as $\tilde{R}_{\mu\nu}=\frac{\partial x^\sigma}{\partial \tilde{x}^\mu}\frac{\partial x^\rho}{\partial \tilde{x}^\nu}R_{\sigma\rho}$. This is just the usual transformation rule for coordinate-components of tensors. Contracting over the two indices gives the scalar curvature $\tilde{R}$ ...
2
Note that you can use the entanglement entropy to calculate the amount of entanglement in a bipartite pure state, but this is not a good measure for a general bipartite (mixed) state. In the general case there are several different entanglement measures currently used, which have certain desiderata: https://quantiki.org/wiki/axiomatic-approach. Invariance ...
12
The answer is no: whether or not the state can be written as a product state does not depend on the basis. And you are precisely correct: there is indeed a basis-independent invariant that characterizes the entanglement. It is called the "entanglement spectrum": the eigenvalue spectrum of the reduced density matrix produced by taking the partial trace over ...
4
No, the entanglement (yes/no) doesn't depend on the basis of the two subsystems, only on the way the two subsystems are separated from one another. A non-entangled state is a state of the form $|j\rangle \otimes |\alpha\rangle$ for some states $|j\rangle,|\alpha\rangle$ of the two subsystems; all other states in the composite Hilbert space are entangled....
0
We can set $0=i$ in the single form equation $$\epsilon^{ijk}\partial_j F_{0k} + \epsilon^{ijk}\partial_0 F_{jk} + \epsilon^{ijk}\partial_i F_{jk} =0$$ and because $\epsilon^{ijk}$ permutes we can write $$\epsilon^{jik}\partial_j F_{ik} + \epsilon^{ijk}\partial_i F_{jk} + \epsilon^{kij}\partial_k F_{ij} =0$$ From this we can divide out $\epsilon^{ijk}$...
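A numerical sketch (my own, with a randomly chosen two-qubit state) of the basis-independence claim: the entanglement spectrum, i.e. the squared Schmidt coefficients of a bipartite pure state, is unchanged under local basis changes $U \otimes V$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-qubit pure state, written as its 2x2 coefficient matrix m_ij in |i>|j>.
psi = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
psi /= np.linalg.norm(psi)

def entanglement_spectrum(m):
    # Squared singular values = eigenvalues of the reduced density matrix.
    s = np.linalg.svd(m, compute_uv=False)
    return np.sort(s ** 2)[::-1]

def random_unitary(n, rng):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q  # unitary by construction

# A local basis change U (x) V maps the coefficient matrix m to U m V^T.
U, V = random_unitary(2, rng), random_unitary(2, rng)
print(np.allclose(entanglement_spectrum(psi),
                  entanglement_spectrum(U @ psi @ V.T)))  # True
```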
3
$\Lambda_{\mu\nu} = {\Lambda_\mu}^\sigma\eta_{\sigma\nu}$. It doesn't "do" anything. $\delta_{\mu\nu}$ and $\delta^{\mu\nu}$ are not tensors, as I explain at length in this answer of mine. The matrix elements of the identity are $\delta_\mu^\nu$, which you could have determined by thinking about the fact that the identity must send vectors $v^\mu$ to other ...
Top 50 recent answers are included | 2016-07-24 22:21:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9155344367027283, "perplexity": 502.2797493722326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824185.14/warc/CC-MAIN-20160723071024-00147-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://apboardsolutions.in/ap-board-8th-class-maths-solutions-chapter-1-intext-questions/ | # AP Board 8th Class Maths Solutions Chapter 1 Rational Numbers InText Questions
## AP State Syllabus 8th Class Maths Solutions 1st Lesson Rational Numbers InText Questions
AP State Syllabus AP Board 8th Class Maths Solutions Chapter 1 Rational Numbers InText Questions and Answers.
### 8th Class Maths 1st Lesson Rational Numbers InText Questions and Answers
Do this
Question 1.
Consider the following collection of numbers 1, $$\frac{1}{2}$$, -2, 0.5, 4$$\frac{1}{2}$$, $$\frac{-33}{7}$$, 0, $$\frac{4}{7}$$, $$0 . \overline{3}$$, 22, -5, $$\frac{2}{19}$$, 0.125. Write these numbers under the appropriate category. [A number can be written in more than one group] (Page No. 2)
i) Natural numbers 1, 22
ii) Whole numbers 0, 1, 22
iii) Integers 0, 1, 22, -5, -2
iv) Rational numbers 1, $$\frac{1}{2}$$, -2, 0.5, 4$$\frac{1}{2}$$, $$\frac{-33}{7}$$, 0, $$\frac{4}{7}$$, $$0 . \overline{3}$$, 22, -5, $$\frac{2}{19}$$, 0.125 etc.
Would you leave out any of the given numbers from rational numbers? No
Is every natural number, whole number and integer is a rational number? Yes
Question 2.
Fill the blanks in the table. (Page No. 6)
Question 3.
Complete the following table. (Page No. 9)
Question 4.
Complete the following table. (Page No. 13)
Question 5.
Complete the following table. (Page No. 16)
Question 6.
Complete the following table. (Page No. 17)
Question 7.
Represent – $$\frac{13}{5}$$ on the number line. (Page No. 22)
Representing – $$\frac{13}{5}$$ on the number line.
Try These
Question 1.
Hamid says $$\frac{5}{3}$$ is a rational number and 5 is only a natural number. Shikha says both are rational numbers. With whom do you agree? (Page No. 3)
I would not agree with Hamid's argument. $$\frac{5}{3}$$ is indeed a rational number, but '5' is not only a natural number; it is also a rational number, since every natural number is a rational number. So, as Shikha says, both $$\frac{5}{3}$$ and 5 are rational numbers.
∴ I agree with Shikha's opinion.
Question 2.
Give an example to satisfy the following statements. (Page No.3)
i) All natural numbers are whole numbers but all whole numbers need not be natural numbers.
ii) All whole numbers are integers but all integers are not whole numbers.
iii) All integers are rational numbers but all rational numbers need not be integers.
i) ‘0’ is not a natural number.
∴ Not every whole number is a natural number. (∵ N ⊂ W)
ii) -2, -3, -4 are not whole numbers.
∴ Not all integers are whole numbers. (∵ W ⊂ Z)
iii) $$\frac{2}{3}$$, $$\frac{7}{4}$$ are not integers.
∴ Not every rational number is an integer. (∵ Z ⊂ Q)
Question 3.
If we exclude zero from the set of integers is it closed under division? Check the same for natural numbers. (Page No. 6)
If '0' is excluded from the set of integers, the set becomes Z – {0}.
Closure property under division on integers:
Ex: -4 ÷ 2 = -2 is an integer.
3 ÷ 5 = $$\frac{3}{5}$$ is not an integer.
∴ Even after excluding zero, the set of integers doesn't satisfy the closure property under division.
Closure property under division on natural numbers:
Ex: 2 ÷ 4 = $$\frac{1}{2}$$ is not a natural number.
∴ The set of natural numbers doesn't satisfy the closure property under division either.
Question 4.
Find using distributivity. (Page No. 16)
A) $$\left\{\frac{7}{5} \times\left(\frac{-3}{10}\right)\right\}+\left\{\frac{7}{5} \times\left(\frac{9}{10}\right)\right\}$$
B) $$\left\{\frac{9}{16} \times 3\right\}+\left\{\frac{9}{16} \times-19\right\}$$
Distributive law: a × (b + c) = ab + ac
A) $$\frac{7}{5} \times\left(\frac{-3}{10}+\frac{9}{10}\right) = \frac{7}{5} \times \frac{6}{10} = \frac{42}{50} = \frac{21}{25}$$
B) $$\frac{9}{16} \times(3+(-19)) = \frac{9}{16} \times(-16) = -9$$
Question 5.
Write the rational number for the points labelled with letters, on the number line. (Page No. 22)
i)
ii)
i) A = $$\frac{1}{5}$$, B = $$\frac{4}{5}$$, C = $$\frac{5}{5}$$ = 1, D = $$\frac{7}{5}$$, E = $$\frac{8}{5}$$, F = $$\frac{10}{5}$$ = 2.
ii) S = $$\frac{-6}{4}$$, R = $$\frac{-6}{4}$$, Q = $$\frac{-3}{4}$$, P = $$\frac{-1}{4}$$
Think, discuss and write
Question 1.
If a property holds good with respect to addition for rational numbers, whether it holds good for integers? And for whole numbers? Which one holds good and which doesn’t hold good? (Page No. 15)
The properties of addition that hold for rational numbers (closure, commutativity, associativity) also hold for integers and for whole numbers. The existence of an additive inverse, however, holds for integers but not for whole numbers: for example, there is no whole number x such that 2 + x = 0.
Question 2.
Write the numbers whose multiplicative inverses are the numbers themselves. (Page No. 15)
The numbers 1 and -1 are the multiplicative inverses of themselves.
∵ 1 × 1 = 1 and (-1) × (-1) = 1
∴ The multiplicative inverse of 1 is 1, and the multiplicative inverse of -1 is -1.
Question 3.
Can you find the reciprocal of ‘0’ (zero)? Is there any rational number such that when it is multiplied by ‘0’ gives ‘1’?
(Page No. 15)
The reciprocal of '0' would be $$\frac{1}{0}$$, but the value of $$\frac{1}{0}$$ is not defined.
∴ There is no rational number which, when multiplied by '0', gives 1,
∵ 0 × (any number) = 0.
∴ '0' has no reciprocal.
Question 4.
Express the following in decimal form. (Page No. 28)
i) $$\frac{7}{5}$$, $$\frac{3}{4}$$, $$\frac{23}{10}$$, $$\frac{5}{3}$$,$$\frac{17}{6}$$,$$\frac{22}{7}$$
ii) Which of the above are terminating and which are non-terminating decimals?
iii) Write the denominators of above rational numbers as the product of primes.
iv) If the denominators of the above simplest rational numbers has no prime divisors other than 2 and 5 what do you observe?
i) $$\frac{7}{5}$$ = 1.4,
$$\frac{3}{4}$$ = 0.75,
$$\frac{23}{10}$$ = 2.3,
$$\frac{5}{3}$$ = 1.66… = $$1 . \overline{6}$$,
$$\frac{17}{6}$$ = 2.833… = $$2.8 \overline{3}$$,
$$\frac{22}{7}$$ = 3.142857… = $$3 . \overline{142857}$$
ii) From the above decimals $$\frac{7}{5}$$, $$\frac{3}{4}$$, $$\frac{23}{10}$$ are terminating decimals.
While $$\frac{5}{3}$$,$$\frac{17}{6}$$,$$\frac{22}{7}$$ are non-terminating decimals
iii) Writing the denominators of the above rational numbers as products of primes: 5 = 5, 4 = 2 × 2, 10 = 2 × 5, 3 = 3, 6 = 2 × 3, 7 = 7.
iv) If the denominator of a rational number (in its simplest form) has no prime factors other than 2 and 5, then its decimal form is terminating; otherwise it is non-terminating.
Question 5.
Convert the decimals $$0 . \overline{9}$$, $$14 . \overline{5}$$ and $$1.2 \overline{4}$$ to rational form. Can you find any easy method other than formal method? (Page No. 31)
Let x = $$0 . \overline{9}$$
⇒ x = 0.999 ……. (1)
The periodicity of the above equation is ‘1’. So it is to be multiplied by 10 on both sides.
⇒ 10 × x = 10 × 0.999
10x = 9.999 …….. (2)
Subtracting (1) from (2): 10x − x = 9.999… − 0.999… ⇒ 9x = 9 ⇒ x = 1
∴ x = 1, i.e., $$0 . \overline{9}$$ = 1
Second Method:
$$0 . \overline{9}$$ = 0 + $$0 . \overline{9}$$
= 0 + $$\frac{9}{9}$$
= 0 + 1
= 1
Let x = $$14 . \overline{5}$$
⇒ x = 14.55 …….. (1)
The periodicity of the equation (1) is 1.
So it should be multiplied by 10 on both sides.
⇒ 10 × x = 10 × 14.55
10x = 145.55 …….. (2)
Subtracting (1) from (2): 10x − x = 145.55… − 14.55… ⇒ 9x = 131 ⇒ x = $$\frac{131}{9}$$, i.e., $$14 . \overline{5}$$ = $$\frac{131}{9}$$
Second Method:
$$14 . \overline{5}$$ = 14 + $$0 . \overline{5}$$ = 14 + $$\frac{5}{9}$$ = $$\frac{126 + 5}{9}$$ = $$\frac{131}{9}$$
Let x = $$1.2 \overline{4}$$
⇒ x= 1.244 …….. (1)
Here periodicity of equation (1) is 1. So it should be multiplied by 10 on both sides.
⇒ 10 × x = 10 × 1.244
10 x = 12.44 …….. (2)
Subtracting (1) from (2): 10x − x = 12.44… − 1.244… ⇒ 9x = 11.2 ⇒ x = $$\frac{11.2}{9}$$ = $$\frac{112}{90}$$ = $$\frac{56}{45}$$, i.e., $$1.2 \overline{4}$$ = $$\frac{56}{45}$$
Second Method:
$$1.2 \overline{4}$$ = 1.2 + $$0.0 \overline{4}$$ = $$\frac{12}{10}$$ + $$\frac{4}{90}$$ = $$\frac{108 + 4}{90}$$ = $$\frac{112}{90}$$ = $$\frac{56}{45}$$
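The formal method used in these conversions can be written as a short routine (my own sketch, not part of the textbook): for a number with $n$ non-repeating decimal digits followed by a repeating block of length $p$, multiplying by $10^{n+p}$ and by $10^{n}$ and subtracting cancels the repeating tail.

```python
from fractions import Fraction

def recurring_to_fraction(integer_part, non_repeating, repeating):
    """x = integer_part . non_repeating (repeating)(repeating)... as a Fraction."""
    n, p = len(non_repeating), len(repeating)
    # 10^(n+p) x - 10^n x is an integer, since the repeating tails cancel.
    numerator = (int(integer_part + non_repeating + repeating)
                 - int(integer_part + non_repeating))
    denominator = 10 ** (n + p) - 10 ** n
    return Fraction(numerator, denominator)

print(recurring_to_fraction("0", "", "9"))   # 1
print(recurring_to_fraction("14", "", "5"))  # 131/9
print(recurring_to_fraction("1", "2", "4"))  # 56/45
```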
http://prereleases-origin.llvm.org/6.0.0/rc2/tools/clang/tools/extra/docs/clang-tidy/checks/misc-throw-by-value-catch-by-reference.html | # misc-throw-by-value-catch-by-reference¶
“cert-err09-cpp” redirects here as an alias for this check. “cert-err61-cpp” redirects here as an alias for this check.
Finds violations of the rule “Throw by value, catch by reference” presented for example in “C++ Coding Standards” by H. Sutter and A. Alexandrescu.
Exceptions:
• Throwing string literals will not be flagged despite being a pointer. They are not susceptible to slicing and the usage of string literals is idiomatic.
• Catching character pointers (char, wchar_t, unicode character types) will not be flagged to allow catching string literals.
• Moved named values will not be flagged as not throwing an anonymous temporary. In this case we can be sure that the user knows that the object can’t be accessed outside catch blocks handling the error.
• Throwing function parameters will not be flagged as not throwing an anonymous temporary. This allows helper functions for throwing.
• Re-throwing caught exception variables will not be flagged as not throwing an anonymous temporary. Although this can usually be done by just writing throw; it happens often enough in real code.
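A hypothetical before/after sketch (mine, not taken from the check's documentation) of the rule this check enforces:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Compliant: throw an anonymous temporary by value, catch by const reference,
// so the exception object is neither sliced nor needlessly copied.
std::string describe_failure() {
    try {
        throw std::runtime_error("disk full");  // throw by value
    } catch (const std::exception& e) {         // catch by reference
        return e.what();
    }
}

// What the check would flag (illustrative only, never called):
//   throw new std::runtime_error("leak");   // throwing a pointer
//   catch (std::exception e) { ... }        // catching by value slices the object
```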
## Options¶
CheckThrowTemporaries
Triggers detection of violations of the rule Throw anonymous temporaries. Default is 1. | 2020-09-21 07:38:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26367777585983276, "perplexity": 4315.168498657364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198942.13/warc/CC-MAIN-20200921050331-20200921080331-00330.warc.gz"} |
http://crypto.stackexchange.com/questions?page=2&sort=active&pagesize=30 | # All Questions
50 views
### Is it safe to use PBEWithMD5AndDES?
I'm encrypting some of my (not especially important) files with PBEWithMD5AndDES. Is it a strong method, or can it be decrypted easily without knowing my selected ...
23 views
### Key Confirmation Attack [Key Distribution Center (KDC)]
Okay this is how I understood it according to this: Alice $A$ establishes a connection to $KDC$ and prepares for session key Exchange $k_{ses}$ $A$ encrypts the request with her key $k_A(A, B)$ ...
42 views
### Is it safe to encrypt data by simply randomizing chunks based on a permutation?
Say I want to encrypt the following text: "Today is March the 22nd, 2015". And say I want to encrypt 8-byte blocks of data using the following permutation [6,3,0,1,5,7,2,4] to swap characters at those ...
68 views
### credit card number encryption using aes-ctr mode
I want to encrypt credit card numbers. I want to apply AES-CTR mode. Is it suitable for that? How can I store nonce and counter values for an individual credit card number? How can I send the ...
30 views
### security of pairing based cryptography
While going through some papers on cryptographic accumulators, I found the following statement: "several pairing-based accumulators have been proposed in the past. However, due to continuous and recent ...
75 views
### Why would using a random seed with other variables be bad for ecrypting if you can't guess the key?
I asked this on stack exchange with it being code based, but someone suggested to ask here. Basically, I've done a script in python and wanted an option to encrypt the input, but I couldn't get any ...
8 views
### Using ssh agent to sign a string [migrated]
I am trying to learn more about asymmetric keys. I am writing a program that uses ssh-agent server to sign a message. This works well, but it generates a RSA-SHA1 signature of the message, which I am ...
23k views
### How can I use SSL/TLS with Perfect Forward Secrecy?
I'm new to the field of cryptography, but I want to make the web a better web by setting up the sites that I host with Perfect Forward Secrecy. I have a list of questions regarding the setup of ...
4k views
### Is there a simple hash function that one can compute without a computer?
I am looking for a hash function that is computable by hand (in reasonable time). The function should be at least a little bit secure: There should be no trivial way to find a collision (by hand). For ...
54 views
### How are RSA and ElGamal compatible in PGP?
I'm starting to play with PGP and I don't understand how if your key pair is RSA you can encrypt a message for someone whose key is for example ElGamal. How does the asymmetric key exchange work if ...
434 views
### SHA-1: Is there any mathematical result that gives us the minimum number of 1's in a 160-bit SHA-1 hash output?
Is there any mathematical result that gives us the minimum number of 1's in a 160-bit SHA-1 hash output? What is the probability that a 160-bit SHA-1 hash output contains at least 128 1's?
27 views
### chaining multiple key derivation functions together
I was looking at PBKDF2, bcrypt and scrypt as options for key derivation; and would like to try using them all together in order to get the cryptographic strength of the strongest one (which seems to ...
71 views
### How do you implement a cipher as one lookup table?
I am reading up on whitebox cryptography and have trouble understanding: how are ciphers implemented as one lookup table? Assuming my plaintext is just 4 bits, the size of my lookup table should be ...
271 views
### Practical Attack on RSA
Currently I am designing an RSA based application, and I am thinking of how long should the key be in order to be secure against attacks. I know that RSA 4096 bit key can be recovered using Sound ...
357 views
### Purpose of expanding then shrinking in SHA-1
What is the purpose of expanding then shrinking in SHA-1? Does it serves any security purposes?
18 views
### Choosing a cipher in SSH
When and how do the client/server agree on what cipher and MAC will be used during an SSH connection?
46 views
### Replacing the PRF in PBKDF2 with Keccak
I am unable to find a reliable, tested library for a decent password based key derivation function e.g. Scrypt in the programming language I am using, but I have a reliable library for PBKDF2 (which ...
27 views
### What does invertibility of the cryptographic primitive mean?
source: http://www.cosic.esat.kuleuven.be/publications/thesis-152.pdf ,page 32 Note that key recovery implies invertibility of the cryptographic primitive (because due to the Kerckhoffs’ ...
675 views
### Estimating random number entropy for input into 256 bit hash
Assuming a random number generation process outputs lots of numbers between 0-9. First I gathered up a bunch of the numbers, converted them to binary and created a bitmap. Not so random as you can ...
84 views
### Naive implementation of Rainbow Table and/or Hellman's trade-off
Is there any naive implementation of Hellman's cryptanalytic time memory tradeoff in C and/or a naive implementation of Oechslin's rainbow table algorithm in C as well? I have seen some ...
32 views
### AES product function
Small description: consider a polynomial in GF($2^8$), which has the form $$f(x) = b_7x^7 + b_6x^6 + b_5x^5 + b_4x^4 + b_3x^3 + b_2x^2 + b_1x + b_0$$ If we multiply by $x$, we have $x \cdot f(x)$ = ...
122 views
### How many attempts does it take to crack a 32-bit password hash with this scenario?
How many attempts does it take to crack (match) a 32-bit password hash from a database of 4 million password hashes? Correct me if I'm wrong, but to crack a 32-bit password hash would take roughly ...
31 views
### About the Security of PCBC Encryption Mode
We are designing an encrypted file system. We plan to use the PCBC (Plaintext Cipher Block Chaining) encryption mode for encryption. This is because we desire the feature ("small changes in the ...
91 views
### Moral dilemma on releasing a new E2EE Method [closed]
Please forgive me if this is not an appropriate question, but I wanted the opinion of those working in this field as I kind of stumbled into this problem. I'm not a cryptographer but I think it's fair ...
56 views
### Are factorization algorithms parallelizable?
I was reading about the Blum-Blum-Shub random number generator, and its security depends on the hardness of factoring very large numbers (like many things in crypto do). I'm just wondering, if I have ...
20 views
### Security of simple Skein PBKDF mentioned in the paper
From the Skein 1.3 paper section 4.8, Skein as a Password-Based Key Derivation Function (PBKDF), it mentions the following as a simple PBKDF (S = seed and P = password): An even simpler PBKDF is ...
46 views
### Is this an example of a zero knowlege proof?
My Math Structures professor mentioned the strange concept of a zero knowledge proof to me after class one day, and I decided to do some reading about it. After reading the relatively famous "How to ...
54 views
### Coding of unsigned int to prevent guessing next ID
Suppose we are assigning records an unsigned integer ID from a N-bit space (say 32-bit) in a sequential manner. Is there a way we can code this ID before showing it to the public such that someone ...
51 views
### Is it safe to prefix the a key with a known value?
Almost every encryption algorithm is based on a secret key. I wonder: when an algorithm is considered safe, does that also imply that I can prefix (or suffix) the key with a known value, as long as ...
35 views
### Creating a new cipher! [closed]
http://flashbackcipher.blogspot.ae/2015/03/introducing-flashback.html Hey guys, the above is the blog for a new cipher. If you can take a look and suggest problems, it would be of great help! ...
46 views
### RC4 , Is it possible to find the key if we know the plaintext and ciphertext?
Is it possible to find the key if we know the plaintext and ciphertext with RC4? How should I write the algorithm?
83 views
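For the RC4 question above, a sketch of the usual answer: known plaintext plus ciphertext immediately yields the *keystream* (their XOR), but recovering the key means inverting the key schedule, which is the genuinely hard part. Below is a plain-Python RC4 (standard KSA followed by the PRGA); the key and message are arbitrary examples.

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    # Standard RC4: key-scheduling algorithm (KSA), then the PRGA.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

plaintext = b"Attack at dawn"
keystream = rc4_keystream(b"Secret", len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# Known plaintext + ciphertext hands you the keystream for free ...
assert bytes(p ^ c for p, c in zip(plaintext, ciphertext)) == keystream
# ... but running the KSA backwards from keystream to key is the hard part.
```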
### Rijndael S-boxes: Where do the $\mu$ and $\nu$ polynomial ring elements come from?
I've asked some other questions before about Rijndael's S-boxes, and step by step I'm coming to an understanding; but those steps often guide me to new questions. I did some lines of code to ...
49 views
### Information-theoretic bound on leakage by timing measurement
I'm looking for an information-theoretic upper bound on leakage by timing measurement. I'm assuming that an attacker wants to leak out of a black box a secret key of $k$ bits that is secretly ...
28 views
### Open and Closed Hashing Algorithm [closed]
I have been looking for both open and closed hashing algorithms for over two weeks now. I can't seem to find any that are general and could be applied to any language. I would really appreciate it if ...
57 views
### How can ECDSA signatures be shortened (to be used as a product key)?
So I made my own serial key generation software, using ECDSA, for use in my own applications and it works great so far! To keep the serial key short enough I use a 128 bit EC curve. My final signature ...
58 views
### How dangerous is it to encrypt with AES 256 if the end user knows the unencrypted value?
I am implementing a bit of security in a system that was originally built without encryption on a specific piece of data. The plan was to encrypt this piece of data and include it as part of the ...
131 views
### Why is diffie-hellman-group1-sha1 used instead of plain Diffie-Hellman?
For SSH, why is diffie-hellman-group1-sha1 used instead of just Diffie-Hellman? In other words, why is the hash function used?
3k views
### Encrypting small values with RSA private key
I'm looking for best practices when it comes to encrypting small (< 128 bytes) amounts of data with the RSA private key. Signing it would make the resulting payload too large.
197 views
### What is the most lightweight symmetric cipher that's still useful?
I need a very lightweight encryption scheme which costs almost nothing in performance during encryption (while decryption may be slow). I found 5 algorithms: RC4, DES, LED, PRESENT, and Piccolo. But how do I ...
64 views
### MAC in SSH packet encryption, benefits to not including it?
What are the benefits to not including MAC in a SSH packet encryption? I understand what the MAC is there for, but if it was not included would there be an advantage? Is the MAC somewhat redundant ...
15 views
### Behaviour of LDPC code as density of check matrix increases [closed]
My assignment is to implement a Loopy Belief Propagation algorithm for Low-density Parity-check Code. This code uses a parity-check matrix H which is rather sparse (say 750-by-1000 binary matrix with ...
47 views
### Do asymmetric signatures require constant-time verification?
To avoid a timing attack, HMAC signatures are usually compared in constant-time (every byte is compared, and the results aggregated). Is the same necessary for asymmetric signature algorithms such ...
22 views
### Constructing a rainbow table to crack DES
I'm facing a project to build a rainbow table for cracking DES. I've already collected information about the Hellman, distinguished point (DP), and rainbow table methods. Finally, I chose the rainbow table. I've read topic ...
17 views
### Substitution cipher with hexadecimal key and encrypted plaintext? [duplicate]
This is a challenge for a past CTF that I was unable to find a writeup for. The challenge gives two ciphers (http://pastebin.com/raw.php?i=Fq98trFw). The "key" has a length of 32, and the flag has a ...
163 views
### What happens when an RC4 stream gets corrupted?
I want to encrypt a large file using RC4. But what happens if the encrypted file gets corrupted (bytes modified or lost)? Can I still decrypt the rest of the file correctly? If not, what is the best ... | 2015-03-27 19:17:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5543432831764221, "perplexity": 1958.1624793584654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296603.6/warc/CC-MAIN-20150323172136-00130-ip-10-168-14-71.ec2.internal.warc.gz"} |
https://escholarship.org/uc/item/4413j82b | Open Access Publications from the University of California
## Uniqueness/nonuniqueness for nonnegative solutions of second-order parabolic equations of the form $u_t = Lu + Vu - \gamma u^p$ in $\mathbb{R}^n$
• Author(s): Englander, Janos
• Pinsky, Ross G
Abstract
The authors investigate the uniqueness and nonuniqueness of nonnegative solutions of the Cauchy problem (with nonnegative initial data $f$) for the second-order parabolic differential equation $-u_t + a_{ij}D_{ij}u + b_i D_i u + Vu - \gamma u^p = 0$ when the coefficients $a_{ij}$, $b_i$, $V$, and $\gamma$ are Hölder continuous, with $\gamma > 0$ and $p > 1$. A key step is to prove the existence of maximal and minimal solutions of this Cauchy problem and then to derive comparison principles. Some conditions on $a_{ij}$ and $b_i$ are given which imply uniqueness for suitable data, and some connections are given to the uniqueness (or nonuniqueness) of bounded solutions to the Cauchy problem with vanishing $\gamma$.
http://www.map.mpim-bonn.mpg.de/index.php?title=Parametric_connected_sum&diff=15039&oldid=2000 | # Parametric connected sum
## 1 Introduction
Parametric connected sum is an operation on compact connected $n$-manifolds $M$ and $N$ equipped with codimension-0 embeddings $\phi: T \to M$ and $\psi : T \to N$ of a compact connected manifold $T$. It generalises the usual connected sum operation, which is the special case when $T = D^n$ is the $n$-disc. The parametric connected sum operation is more complicated than the usual connected sum operation since the isotopy classes of the embeddings of $T$ into $M$ may be significantly more complicated than the isotopy classes of embeddings of $n$-discs needed for connected sum: these last are determined by (local) orientations.
## 2 Connected sum along k-spheres
We saw above that to define connected sum for connected $n$-manifolds $M$ and $N$ it is sufficient to equip them with an isotopy class of embeddings of the $n$-disc. Moreover, the disjoint union $D^n \sqcup D^n$ is the unique thickening of $S^0$. This motivates the following
Definition 2.1.
A manifold with an $S^k$-thickening, an $S^k$-thickened manifold for short, is a pair $(M, \phi)$ where $M$ is a compact connected manifold and $\phi : S^k \times D^{n-k} \to \mathrm{int}(M)$ is an embedding.
Definition 2.2. Let $M = (M, \phi)$ and $N = (N, \psi)$ be $S^k$-thickened manifolds. Define
$\displaystyle M \sharp_k N = \bigl(M - \phi(S^k \times \{ 0 \})\bigr) \cup \bigl(N - \psi(S^k \times \{ 0 \})\bigr)/\simeq$
where $\simeq$ is defined via the embeddings $\phi$ and $\psi$.
It is clear that we have the following
Observation 2.3.
The diffeomorphism type of $M \sharp_k N$ depends only upon the isotopy classes of the embeddings $\phi$ and $\psi$ (which of course includes the diffeomorphism types of $M$ and $N$).
### 2.1 Applications
The operation of $S^k$-connected sum was used in [Ajala1984] and [Ajala1987] to describe the set of smooth structures on the product of spheres $\Pi_{i=1}^r S^{n_i}$. This construction also appears in [Sako1981].
The analogue of such a construction for embeddings, the $S^k$$S^k$-parametric connected sum of embeddings, is used
• to define, for $m\ge 2p+q+3$, a group structure on the set $E^m(S^p \times S^q)$ of (smooth or PL) isotopy classes of embeddings $S^p \times S^q\to \Rr^m$ [Skopenkov2006], \S3.4, [Skopenkov2006a], \S3, [Skopenkov2015a].
• to construct an action of this group on the set of isotopy classes of embeddings of certain $(p+q)$-manifolds into $\Rr^m$ [Skopenkov2014], 1.2.
• to estimate the set of isotopy classes of embeddings [Cencelj&Repovš&Skopenkov2007], [Cencelj&Repovš&Skopenkov2008], [Skopenkov2007], [Skopenkov2010], [Skopenkov2015], [Skopenkov2015a], [Crowley&Skopenkov2016] and unpublished paper [Crowley&Skopenkov2016a].
## 3 Parametric connected sum along thickenings
Let $B$ be a stable fibred vector bundle. A foundational theorem of modified surgery is
Theorem 3.1 Stable classification: [Kreck1985, Theorem 2.1, p 19], [Kreck1999], [Kreck2016, Theorem 6.2].
$\displaystyle NSt_{2n}(B) \cong \Omega_{2n}^B.$
In particular, $NSt_{2n}(B)$ has the structure of an abelian group. The question of whether there is a geometric definition of this group structure is taken up in [Kreck1985, Chapter 2, pp 25-6] where it is shown how to use parametric connected sum along thickenings to define an addition of stable diffeomorphism classes of closed $2n$-$B$-manifolds. This is described in more detail (for $n>2$) in [Kreck2016, Section 6] and uses Wall's theory of thickenings, developed in [Wall1966a]. More precisely, it depends on Wall's embedding theorem [Wall1966a, p 76] for the existence of (unique up to concordance) embedded thickenings of the $(n-1)$-skeleton of $B$, and Wall's classification of thickenings in the stable range [Wall1966a, Proposition 5.1] to ensure that two such embedded thickenings are diffeomorphic as $B$-manifolds, so that one may cut out their interiors and glue the resulting $B$-manifolds along the boundaries of the embedded thickenings. The special case of $n=2$ is discussed separately in [Kreck2016, Section 5] under the name "connected sum along the $1$-skeleton".
| 2022-09-24 23:31:11 | {"extraction_info": {"found_math": true, "script_math_tex": 51, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 46, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9977347254753113, "perplexity": 3964.223555745476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00124.warc.gz"} |
https://physicstravelguide.com/advanced_notions/quantum_field_theory/instantons?rev=1525513962&do=diff | ### Site Tools
advanced_notions:quantum_field_theory:instantons [2018/04/09 10:59]tesmitekle advanced_notions:quantum_field_theory:instantons [2018/05/05 11:52] (current)jakobadmin ↷ Links adapted because of a move operation Both sides previous revision Previous revision 2018/05/05 11:52 jakobadmin ↷ Links adapted because of a move operation2018/04/09 10:59 tesmitekle 2018/03/24 14:48 ↷ Links adapted because of a move operation2018/03/17 15:59 jakobadmin [Student] 2018/03/17 15:54 jakobadmin [Student] 2018/03/17 15:52 jakobadmin [Student] 2018/03/17 15:51 jakobadmin [Layman] 2018/03/17 15:49 jakobadmin [Student] 2018/03/17 15:48 jakobadmin [Student] 2018/03/17 15:47 jakobadmin [Student] 2018/03/17 15:40 jakobadmin [Student] 2018/03/17 15:40 jakobadmin [Researcher] 2018/03/17 15:39 jakobadmin [Researcher] 2018/03/17 15:39 jakobadmin [Student] 2018/03/17 15:38 jakobadmin [Student] 2018/03/17 15:38 jakobadmin [Student] 2018/03/17 15:36 jakobadmin [Student] 2018/03/17 15:36 jakobadmin [Student] 2018/03/17 15:35 jakobadmin [Student] 2018/03/17 15:34 jakobadmin [Student] 2018/03/12 17:15 jakobadmin [Why is it interesting?] 
2018/03/12 16:51 jakobadmin [Student] 2017/12/04 08:01 external edit2017/11/17 14:36 jakobadmin [Student] 2017/11/17 14:35 jakobadmin [Student] 2017/11/15 09:48 jakobadmin 2017/11/08 16:48 jakobadmin [Researcher] 2018/05/05 11:52 jakobadmin ↷ Links adapted because of a move operation2018/04/09 10:59 tesmitekle 2018/03/24 14:48 ↷ Links adapted because of a move operation2018/03/17 15:59 jakobadmin [Student] 2018/03/17 15:54 jakobadmin [Student] 2018/03/17 15:52 jakobadmin [Student] 2018/03/17 15:51 jakobadmin [Layman] 2018/03/17 15:49 jakobadmin [Student] 2018/03/17 15:48 jakobadmin [Student] 2018/03/17 15:47 jakobadmin [Student] 2018/03/17 15:40 jakobadmin [Student] 2018/03/17 15:40 jakobadmin [Researcher] 2018/03/17 15:39 jakobadmin [Researcher] 2018/03/17 15:39 jakobadmin [Student] 2018/03/17 15:38 jakobadmin [Student] 2018/03/17 15:38 jakobadmin [Student] 2018/03/17 15:36 jakobadmin [Student] 2018/03/17 15:36 jakobadmin [Student] 2018/03/17 15:35 jakobadmin [Student] 2018/03/17 15:34 jakobadmin [Student] 2018/03/12 17:15 jakobadmin [Why is it interesting?] 2018/03/12 16:51 jakobadmin [Student] 2017/12/04 08:01 external edit2017/11/17 14:36 jakobadmin [Student] 2017/11/17 14:35 jakobadmin [Student] 2017/11/15 09:48 jakobadmin 2017/11/08 16:48 jakobadmin [Researcher] 2017/11/08 16:39 jakobadmin [Researcher] 2017/11/08 16:37 jakobadmin created Line 15: Line 15: Such processes cannot be described by [[advanced_notions:quantum_field_theory:perturbation_theory|perturbation theory]], but instead only with the help of [[advanced_tools:non-perturbative_qft|non-perturbative methods]]. This follows since the wave function of tunnel processes is proportional to $e^{1/x}$ or $e^{1/x^2}$ and the Taylor expansion of such functions vanishes. Hence such effects do not appear in a perturbative expansion also, of course, these effects exist. 
Such processes cannot be described by [[advanced_notions:quantum_field_theory:perturbation_theory|perturbation theory]], but instead only with the help of [[advanced_tools:non-perturbative_qft|non-perturbative methods]]. This follows since the wave function of tunnel processes is proportional to $e^{1/x}$ or $e^{1/x^2}$ and the Taylor expansion of such functions vanishes. Hence such effects do not appear in a perturbative expansion also, of course, these effects exist. - The [[advanced_notions:quantum_field_theory:qcd_vacuum|ground state]] of, for example, [[models:qcd|QCD]] consists of an infinite number of degenerate states that are separated by a finite energy barrier. An instanton is a description how the field tunnels (not meant in a spatial sense) through one of these barriers into another vacuum. During the tunnel process the field, also in the ground state at the beginning and end of the process, goes continuously through a set of field configurations that do not correspond to a ground state, i.e. non-zero field energy. This is meant when we say that an instanton "has" finite field energy. + The [[advanced_notions:quantum_field_theory:qcd_vacuum|ground state]] of, for example, [[models:standard_model:qcd|QCD]] consists of an infinite number of degenerate states that are separated by a finite energy barrier. An instanton is a description how the field tunnels (not meant in a spatial sense) through one of these barriers into another vacuum. During the tunnel process the field, also in the ground state at the beginning and end of the process, goes continuously through a set of field configurations that do not correspond to a ground state, i.e. non-zero field energy. This is meant when we say that an instanton "has" finite field energy. A detailed discussion of instantons written with the needs of students in mind can be found [[http://jakobschwichtenberg.com/demystifying-the-qcd-vacuum-part-1/|here]]. 
A detailed discussion of instantons written with the needs of students in mind can be found [[http://jakobschwichtenberg.com/demystifying-the-qcd-vacuum-part-1/|here]]. | 2020-07-10 11:35:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7889555096626282, "perplexity": 14817.884714095759}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655908294.32/warc/CC-MAIN-20200710113143-20200710143143-00080.warc.gz"} |
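The vanishing Taylor expansion can be checked numerically. A small sketch: the standard example is $e^{-1/x^2}$, which falls to zero at the origin faster than any power of $x$, so every Taylor coefficient there is zero and a power-series (perturbative) expansion sees nothing.

```python
import math

# exp(-1/x^2) is "flat" at the origin: f(x)/x^n -> 0 for every fixed n,
# so a perturbative expansion around x = 0 cannot detect it.
f = lambda x: math.exp(-1.0 / x**2)
for x in (0.2, 0.1, 0.05):
    print(x, f(x) / x**20)   # the ratio collapses as x -> 0, even against x**20
```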
http://www.flapw.de/MaX-6.0/future/F3/ | # First FLEUR run
As a first example of a FLEUR run we will calculate bulk Cu. As you already learned, we start with generating an inp.xml file by using the input generator. The corresponding basic input file is already provided in the CuBulk directory.
cd CuBulk
By now you already know how to inspect and interpret the basic input for the input generator. In this case the example is particularly simple and contains only a title, the specification of the lattice, and a list of atoms with only a single element:
cat CuBulk.txt
Running inpgen we now create an input for FLEUR:
inpgen -f CuBulk.txt
## Running FLEUR
One of the most basic things to learn about using FLEUR is that the fleur or fleur_MPI executable always reads its input from an inp.xml file in the current directory.
The most basic call thus is simply
fleur_MPI
If all went well, the program should stop after some time with the message "all done". This is the message FLEUR usually outputs after a successful run. In practice DFT is implemented as an iterative algorithm that starts with a first guess for the ground-state electron density and ends after several iterations with a self-consistent ground-state density. In FLEUR, by default up to 15 iterations of the self-consistency loop are performed (this can be changed in inp.xml via the itmax parameter). The output of the FLEUR calculation is available in the 'out' file and also in the 'out.xml' file.
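The self-consistency idea can be caricatured in a few lines. This is a toy fixed-point iteration standing in for the density update — not anything FLEUR actually computes — just to show the loop structure that itmax controls:

```python
def toy_scf(tol=1e-6, itmax=15):
    # Iterate until the "distance" between input and output drops below tol,
    # mimicking a self-consistency loop with an iteration cap like itmax.
    n = 1.0                          # initial guess for the "density"
    for it in range(1, itmax + 1):
        n_new = 0.5 * (n + 2.0 / n)  # stand-in update step (converges to sqrt 2)
        if abs(n_new - n) < tol:
            return it, n_new
        n = n_new
    return itmax, n

print(toy_scf())  # converges well within 15 iterations
```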
You can observe the development of the distance between the input and output densities of each iteration in the terminal output. Alternatively this can also be obtained after the calculation by invoking 'grep dist out' to find the respective entries in the generated out file.
grep "distance of" out
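If you prefer to post-process in Python rather than grep, the same information can be pulled out with a regular expression. The sample text below is a mock-up of the relevant out-file lines, not real FLEUR output:

```python
import re

# Illustrative stand-in for the lines grep finds in a FLEUR 'out' file.
sample = """\
   distance of charge densities for it=    1:     12.345678
   distance of charge densities for it=    2:      3.210987
   distance of charge densities for it=    3:      0.042100
"""
pattern = r"distance of charge densities for it=\s*\d+:\s*([0-9.]+)"
dists = [float(x) for x in re.findall(pattern, sample)]
print(dists)  # should decrease towards self-consistency
```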
The most important quantity that is directly obtainable from a DFT calculation is the total energy of the unit cell's ground state. Although this quantity cannot be measured, total-energy differences can be used to calculate many measurable quantities. It is written out in each iteration of the self-consistency cycle but is only meaningful for the self-consistent ground-state density. How large is the total energy for the Cu example ('grep "total energy" out' to obtain the respective values for all iterations)?
grep "total energy" out
## FLEUR output
In general the most important output files of FLEUR are:
- out: This is the most comprehensive output. It is meant to be human readable but sometimes contains a slightly overwhelming amount of information.
- out.xml: This file is probably best suited for automatic processing. It contains a subset of the information in out.
- cdn.hdf: This file contains the calculated charge densities. This is the key quantity you will need to run additional FLEUR calculations on top of your converged results or to restart a not-yet-converged calculation. (If you use a version compiled without the HDF5 library you will find cdn* files instead.)
- usage.json, juDFT_times.json: Files with technical info about the last run.
Inspect which files you now have in the directory.
ls
## Summary of your first FLEUR calculation
In this notebook you have learned how to
* run FLEUR with an inp.xml file specifying the setup and parameters
* restart FLEUR in case you have not achieved convergence
* understand the key output files
https://www.physicsforums.com/threads/question-about-electronvolts.242977/ | 1. Jul 1, 2008
pzlded
Let there be three electronvolts of energy between a tube’s cathode and plate with a voltage of one volt. Let thermionic emission release one electron at the cathode. From the definition of electronvolt (the energy a point charge gains when it travels through one volt), that electron gains one electronvolt of kinetic energy on its journey to the plate and two electronvolts of voltage’s energy remain. Release of two more electrons will convert the remainder of the tube’s energy to electron kinetic energy. No voltage’s energy will remain in the tube. Perhaps this shows the difference between voltage's energy and energy due to attraction between masses?
If the electron bounces (with perfect elasticity) off the plate instead of landing on the plate, will that electron bounce until it gains all three electronvolts of energy? When the bouncing electron travels between the cathode and the plate, how does that change the point charges that cause the voltage?
2. Jul 1, 2008
cmos
I may be misinterpreting you, but if not then your line of thinking is flawed:
If there is 3 eV between the cathode and your 1 V plate, then that means the cathode is at -2 V. Minus, because you say the electrons go from the cathode to the 1 V plate. So if you assume that electrons are being released by some mechanism from the -2 V cathode, then you have simply generated a current.
Just to clear up the definition of the electron-volt:
1 eV is the potential energy that an electron has when placed in an electric potential of 1 V.
If want like to think in terms of kinetic energy:
1 eV is the kinetic energy gained by an electron when accelerated from rest, and displaced a distance of 1 m, by a uniform electric field of 1 V/m.
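For scale, one can check what 1 eV of kinetic energy means for an electron. A quick non-relativistic sketch using CODATA constants (the resulting speed is about 0.2% of c, so the classical formula is adequate):

```python
e_charge = 1.602176634e-19    # C, elementary charge
m_e      = 9.1093837015e-31   # kg, electron mass

# 1 eV of kinetic energy: (1/2) m v^2 = e * (1 V)
v = (2 * e_charge * 1.0 / m_e) ** 0.5
print(v)   # roughly 5.9e5 m/s
```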
3. Jul 1, 2008
pzlded
Yes if you mean the energy required to bring an electron from infinity to the 1V spot. For example: a 1.5 volt battery does not have one contact point on an electron and another contact point near the electron. As with any unit of energy, electronvolts can be independent of any particular voltage.
An electronvolt of energy is U=QV. The energy required to charge a capacitor is U = ½ QV. Is there a difference in charging energy per charge in a capacitor vs acceleration energy per charge of an electronvolt?
A high capacitance capacitor charged with 1V has more energy than a low capacitance capacitor charged with 1V.
Last edited: Jul 1, 2008
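The QV-versus-½QV contrast raised above is just bookkeeping over how the voltage behaves while the charge moves — a quick numeric sketch (values chosen for illustration):

```python
Q = 1.602e-19   # charge moved, C (one elementary charge)
V = 1.0         # volts

# A charge crossing a *fixed* potential difference V gains U = Q*V (one eV here),
u_fixed_gap = Q * V

# but charging a capacitor up to V stores only U = Q*V/2, because the
# voltage ramps from 0 to V as the charge accumulates (average V/2).
u_capacitor = 0.5 * Q * V

print(u_fixed_gap, u_capacitor)
```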
4. Jul 1, 2008
pzlded
The energy unit ELECTRONVOLT differs from eV (eV is volts, or energy per charge). Wikipedia explains the energy unit 'electronvolt'.
http://en.wikipedia.org/wiki/Electronvolt
5. Jul 1, 2008
ZapperZ
Staff Emeritus
eV IS electronvolt. It is not different. It cannot be "volts" because it is a unit of energy (volts isn't a unit of energy). So that wikipedia link you gave contradicts what you said here.
Zz.
6. Jul 2, 2008
7. Jul 2, 2008
Cthugha
Ehm...who told you that eV is Volts? V is in Volts and e is the charge of an electron, which is given in Coulomb. So eV is Coulomb times Volts, which is Joule again.
8. Jul 2, 2008
ZapperZ
Staff Emeritus
"electronvolt" and "eV" are the same thing. I've been a physicist for years and have used this extensively. If you don't wish to listen to this, then there's nothing else to be said.
Zz.
Last edited: Jul 2, 2008
9. Jul 2, 2008
malawi_glenn
The electronvolt (symbol eV)
Is what I read on that wiki-article
10. Jul 2, 2008
pzlded
Sir, I think there is a problem with your units.
Electronvolts do not equal volts
$$qV \neq V$$
An ElectronVolt $$qV$$ is the amount of voltage's energy that converts to kinetic energy when an electron passes through 1 volt (V) of electric potential difference.
In a synchrotron, each Volt adds one electronvolt to either + and - charges; accelerating the charges AWAY from the voltage gradient. Volts caused by + charges will always attract electrons, instead of accelerating them away from the voltage gradient.
11. Jul 2, 2008
ZapperZ
Staff Emeritus
Who here actually said that "electronvolts equal volts"? From what I had read, it was YOU who are insisting that "eV" has units of volts (a post that you had edited but the original post is still preserved in a quoted comment). All of us here have been trying to tell you that "electronvolts" and "eV" are the SAME thing, and both are units of energy.
Why don't you sit down and do a dimensional analysis and satisfy yourself that eV is a unit of energy?
So are you still insisting that "electronvolt" is NOT the same as "eV", and that "eV" isn't a unit of energy?
There are WAY too many other places in physics to trip on; this is one of the most puzzling places to get stuck at.
Zz.
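The dimensional analysis takes only a couple of lines; a minimal sketch in Python, using the SI value of the elementary charge:

```python
# Dimensional check that eV is a unit of energy:
# volt = joule/coulomb, so charge (C) x potential (V) has units of joules.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs (exact in the 2019 SI)

def electronvolts_to_joules(energy_ev):
    return energy_ev * ELEMENTARY_CHARGE

print(electronvolts_to_joules(1.0))  # -> 1.602176634e-19 (joules)
```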
12. Jul 2, 2008
Cthugha
Are you just not reading what others write, or do you just want to mock us?
No one here thinks that electronvolts are volts, but you are the only one who thinks that eV refers to volts. eV is NOT volts and it is NOT energy per charge.
It is just your $$qV$$ mentioned above, where e, the charge of one electron, is used for q.
Just to make it clear again: the e in eV is the charge of an electron and NOT a prefix like in kV, MV, GV or mV.
13. Jul 2, 2008
pzlded
Sorry, I put in a draft version instead of the 'real' one. PF is a great forum and my desire is to contribute, not to detract.
I did not mean to include the first sentence (big mistake). The sentence was an overview to help me focus on providing a response to your post. I fully intended to remove the sentence, I used the word sir to mean me (as if you were calling me sir), to remind myself to keep this professional. I shall try to never make such a mistake again.
eV can correctly be used to mean either volts or electronvolts, but electronvolts and volts have very different meaning. I try to distinguish the two by only using 'electronvolts' to mean qV.
14. Jul 2, 2008
malawi_glenn
pzlded: Give up
eV is the symbol of the electronvolt, which is a unit of energy. Just as Js is the symbol for joule-seconds.
Thats it, no less no more.
A person who uses eV as a unit/symbol for volts is out of his mind.
15. Jul 2, 2008
ZapperZ
Staff Emeritus
Your question has been sufficiently answered many times. It is up to you to either accept or reject it, at your own risk.
Zz.
https://zbmath.org/?q=an:06842115

# zbMATH — the first resource for mathematics
Patterns and coherence resonance in the stochastic Swift-Hohenberg equation with Pyragas control: the Turing bifurcation case. (English) Zbl 1380.93230
Summary: We provide a multiple time scales analysis for the Swift-Hohenberg equation with delayed feedback via Pyragas control, with and without additive noise. An analysis of the pattern formation near onset indicates both the possibility of either standing waves (rolls) or traveling waves via Turing or Turing-Hopf bifurcations, respectively, depending on the product of the strength of the feedback and the length of the delay. The remainder of the paper is focused on Turing bifurcations, where the delay can drive the appearance of an additional time scale, intermediate to the usual slow and fast time scales observed in the modulation of rolls without delay. In the deterministic case, a Ginzburg-Landau-type modulation equation is derived that inherits Pyragas control terms from the original equation. The Eckhaus stability criteria is obtained for the rolls, with the intermediate time scale observed in the transients. In the stochastic context, slow modulation equations are derived for the amplitudes of the primary modes that are coupled to a fast Ornstein-Uhlenbeck-type equation with delay for the zero mode driven by the additive noise. By deriving an averaging approximation for the amplitude of the primary mode, we show how the interaction of noise and delay influences the existence and stability range for the noisy roll-type patterns. Furthermore, approximations for the spectral densities of the primary and zero modes show that oscillations on the intermediate times scale are sustained through the phenomenon of coherence resonance. These dynamics on the intermediate time scale are sustained through the interaction of noise and delay, in contrast to the deterministic context where dynamics on the intermediate times scale are transient.
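The setup in the summary can be made concrete with a minimal one-dimensional sketch. The script below integrates the deterministic Swift-Hohenberg equation with a Pyragas feedback term by a semi-implicit pseudospectral scheme; the parameter values and the discretization are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# 1-D Swift-Hohenberg with Pyragas feedback (illustrative parameters):
#   u_t = r*u - (1 + d^2/dx^2)^2 u - u^3 + gamma*(u(t - tau) - u(t))
L, N = 32 * np.pi, 256          # domain length, grid points
r, gamma = 0.2, 0.05            # bifurcation parameter, feedback strength
dt, tau = 0.05, 1.0             # time step, delay
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = r - (1 - k**2) ** 2 - gamma     # linear symbol, including -gamma*u(t)

u = 1e-3 * np.random.default_rng(0).standard_normal(N)  # small random IC
delay_steps = int(round(tau / dt))
history = [u.copy()] * (delay_steps + 1)                # buffer for u(t - tau)

for _ in range(2000):
    nonlin = -u**3 + gamma * history[0]   # history[0] approximates u(t - tau)
    # semi-implicit Euler: linear part treated implicitly in Fourier space
    u_hat = (np.fft.fft(u) + dt * np.fft.fft(nonlin)) / (1 - dt * lin)
    u = np.real(np.fft.ifft(u_hat))
    history.pop(0)
    history.append(u.copy())

print(np.max(np.abs(u)))   # saturated roll amplitude, roughly O(sqrt(r))
```

Near onset the rolls saturate at an amplitude on the order of the square root of the distance from threshold, which the printed value can be compared against.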
##### MSC:
- 93E03 Stochastic systems in control theory (general)
- 93C23 Control/observation systems governed by functional-differential equations
- 34K35 Control problems for functional-differential equations
- 93B52 Feedback control
https://ask.libreoffice.org/en/question/173725/change-size-of-inserted-image/?comment=173749

Change size of inserted image
Hello, Each time I insert an image in a Writer document it is very large. How can I change the default image setting? And can I also create a setting that gives each image the same border? Ton
Right-click on the image in LO. In the menu you can e.g. select Crop and/or Compress.... When compressing you can make various settings; then first click the button Calculate New Size. If you want, click OK.
To get a border around the image, right-click on the image. Select Properties.... In the dialog box "Image" select the tab "Outline". Select your border and click OK.
EDIT_1_20181125-13.30h
It will be better if you use a graphics program, e.g. Gimp or similar. Some of these programs have a batch process.
Hi, thanks, although this I know. What I would like to change is the standard size; now I need to resize every image, and as I want them all the same smaller size I was hoping to make a setting for that. Not sure if that is possible, Ton
( 2018-11-25 10:48:38 +0100 )
Ton means width and height, not file size. Haven't done that kind of thing for a long time, so can't help out here.
( 2018-11-25 13:56:39 +0100 )
My bad, I meant width and height. Every picture is automatically inserted covering the whole page and then needs to be made smaller in the way you described. It's that action I would like to see changed for a whole document. Ton
( 2018-11-26 08:05:21 +0100 )
Sorry, I'm not aware of that. Maybe someone else can help you.
( 2018-11-26 08:17:19 +0100 )
You can do both things simultaneously... sort of. First, define a Frame style with the needed width and border. Then, insert an empty frame and apply to it that style. Finally, with the cursor inside the frame, insert the picture.
In principle you can just insert the image and then apply to it the frame style, but... try it, it's weird.
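A scripted alternative for resizing every image in one pass can be sketched as a LibreOffice Python macro. The UNO names used here (getGraphicObjects, Width/Height in 1/100 mm, the XSCRIPTCONTEXT macro context) are my best understanding of the API and should be checked against the documentation; scaled_size itself is plain Python:

```python
def scaled_size(width, height, target_width):
    """New (width, height) preserving the aspect ratio."""
    return target_width, int(round(height * target_width / width))

def resize_all_images(doc, target_width=8000):      # 8000 = 8 cm in 1/100 mm
    """Give every image in a Writer document `doc` the same width."""
    graphics = doc.getGraphicObjects()
    for i in range(graphics.getCount()):
        img = graphics.getByIndex(i)
        img.Width, img.Height = scaled_size(img.Width, img.Height, target_width)

# Inside LibreOffice this would be run as a macro, roughly:
#   resize_all_images(XSCRIPTCONTEXT.getDocument())
```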
https://mathematica.stackexchange.com/questions/209484/fourier-transform-gives-wrong-result

# Fourier transform gives wrong result
The expression
I InverseFourierTransform[FourierTransform[1/t, t, w]/w, w, x]//FullSimplify
gives
EulerGamma + Log[Abs[x]]
while the correct result should be
EulerGamma - I Pi + Log[x]
the same as of the following:
f[n_, s_] := ((-1)^n n!)/s^(n + 1)
Limit[1/2 f[-1 + h, s] + 1/2 f[-1 - h, s], h -> 0]
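The limiting value claimed here can be cross-checked numerically outside Mathematica. In the sketch below, Python's power operator evaluates (-1)**n for non-integer n on the principal branch, matching Mathematica's (-1)^n, and n! becomes Gamma(n+1):

```python
import cmath, math

EULER_GAMMA = 0.5772156649015329

def f(n, s):
    # (-1)**n: principal branch for non-integer n; n! -> Gamma(n + 1)
    return (-1) ** n * math.gamma(n + 1) / s ** (n + 1)

s, h = 2.0, 1e-5
avg = 0.5 * (f(-1 + h, s) + f(-1 - h, s))
target = EULER_GAMMA - 1j * math.pi + cmath.log(s)
print(abs(avg - target) < 1e-6)   # -> True: the 1/h poles cancel in the average
```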
• I am not an expert on this, but looking at this question, it seems to me that the additive constant is irrelevant in any setting where the Fourier transform exists at all. – Lukas Lang Nov 12 '19 at 15:42
• Also as antiderivatives, Log[x] and Log[Abs[x]] differ by piecewise constants on the real line. – Daniel Lichtblau Nov 12 '19 at 16:06
Using the additional Assumptions -> option in the script leads to
I*InverseFourierTransform[FourierTransform[1/t, t, w]/w, w, x, Assumptions -> x>=0]//FullSimplify
giving the result
EulerGamma + Log[x] + I*Pi*(1 - Sign[x])/2
The last portion of the result can, essentially, be ignored.
• It is essentially the same result as in the question, which is wrong – Anixx Nov 13 '19 at 7:06
http://www.numdam.org/articles/10.1051/ro:2002007/

Generalized characterization of the convex envelope of a function
RAIRO - Operations Research - Recherche Opérationnelle, Tome 36 (2002) no. 1, pp. 95-100.
We investigate the minima of functionals of the form
$\int_{[a,b]} g(\dot{u}(s))\,\mathrm{d}s$
where $g$ is strictly convex. The admissible functions $u:[a,b]\to\mathbb{R}$ are not necessarily convex and satisfy $u\le f$ on $[a,b]$, $u(a)=f(a)$, $u(b)=f(b)$, where $f$ is a fixed function on $[a,b]$. We show that the minimum is attained by $\overline{f}$, the convex envelope of $f$.
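This characterization is easy to test numerically: build the discrete convex envelope of a sampled function as the lower convex hull of its graph, and compare the two costs for a strictly convex integrand. In the sketch below g(v) = v^2 and the test function f is an arbitrary choice:

```python
import numpy as np

def lower_convex_hull(x, y):
    """Discrete convex envelope: lower hull of the points (x_i, y_i),
    evaluated back on the grid x (Andrew's monotone chain, lower part)."""
    hull = []                                   # indices of hull vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord from i0 to i
            cross = (x[i1] - x[i0]) * (y[i] - y[i0]) - (y[i1] - y[i0]) * (x[i] - x[i0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(x, x[hull], y[hull])

def cost(x, u):                 # discrete version of the integral of g(u')
    du = np.diff(u) / np.diff(x)
    return float(np.sum(du**2 * np.diff(x)))    # g(v) = v**2, strictly convex

x = np.linspace(0.0, 3.0, 400)
f = np.sin(3 * x) + 0.5 * x                     # arbitrary test function
f_bar = lower_convex_hull(x, f)                 # candidate minimizer

print(cost(x, f_bar) <= cost(x, f))             # -> True
```

The envelope stays below f, matches it at both endpoints, and achieves a cost no larger than any sampled admissible competitor, consistent with the stated result.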
DOI : https://doi.org/10.1051/ro:2002007
Keywords: convex envelope, optimization, strict convexity, cost function
@article{RO_2002__36_1_95_0,
title = {Generalized characterization of the convex envelope of a function},
journal = {RAIRO - Operations Research - Recherche Op\'erationnelle},
pages = {95--100},
publisher = {EDP-Sciences},
volume = {36},
number = {1},
year = {2002},
doi = {10.1051/ro:2002007},
zbl = {1003.49016},
mrnumber = {1920381},
language = {en},
url = {http://www.numdam.org/articles/10.1051/ro:2002007/}
}
Kadhi, Fethi. Generalized characterization of the convex envelope of a function. RAIRO - Operations Research - Recherche Opérationnelle, Tome 36 (2002) no. 1, pp. 95-100. doi : 10.1051/ro:2002007. http://www.numdam.org/articles/10.1051/ro:2002007/
https://robotics.stackexchange.com/questions/6953/how-to-calculate-euler-angles-from-gyroscope-output

# How to calculate Euler Angles from gyroscope output?
I am using a tri-axis accelerometer and tri-axis gyroscope to measure the linear acceleration of a body. I need to get the orientation of the body in Euler-angle form in order to rotate the accelerometer readings from the body frame into the earth frame. Please help, I'm so stuck.
Look into a complementary filter. It isn't the correct way to go about this, but it will give you usable data for attitudes around level. It's also worth mentioning that you will not be able to track yaw. There is no way to account for bias/noise with the two sensors you've listed.
complementary filter: http://www.pieter-jan.com/node/11
First you need to integrate the output from the gyro to get the actual X, Y and Z angles.
angleX = gyroAngleX + gyroInputX
angleY = gyroAngleY + gyroInputY
However this value will drift over time, so you will need to use a complementary filter or Kalman filter. Personally, I would recommend a complementary filter because it is much simpler to implement.
First you must find the angle from the accelerometer using a little bit of trigonometry.
accelAngleX = atan2(accelY, accelZ) * 180/M_PI;
accelAngleY = atan2(-accelX, sqrt(accelY*accelY + accelZ*accelZ)) * 180/M_PI;
Then get the actual angle using this formula.
angleX = 0.98*angleX + 0.02*accelAngleX
angleY = 0.98*angleY + 0.02*accelAngleY
The variables above must be the same variables used when calculating the gyro angle. The 0.98 and 0.02 can be tuned to get the best output, but they should always add up to one.
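Putting the steps above together, a minimal sketch of the full filter loop; the blend weight and the sample period are tuning assumptions, and the gyro inputs are taken as rates (so they are multiplied by the sample period before accumulating):

```python
import math

# Sketch of the complementary filter described above, pitch/roll only.
ALPHA, DT = 0.98, 0.01          # blend weight, 100 Hz sample period (s)

def complementary_step(angle_x, angle_y, gyro_x, gyro_y, ax, ay, az):
    """One filter update; gyro rates in deg/s, accel in any consistent unit."""
    # 1) integrate the gyro rates
    angle_x += gyro_x * DT
    angle_y += gyro_y * DT
    # 2) tilt angles implied by gravity in the accelerometer reading
    accel_angle_x = math.degrees(math.atan2(ay, az))
    accel_angle_y = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    # 3) blend: trust the gyro short-term, the accelerometer long-term
    angle_x = ALPHA * angle_x + (1 - ALPHA) * accel_angle_x
    angle_y = ALPHA * angle_y + (1 - ALPHA) * accel_angle_y
    return angle_x, angle_y

# With zero rates and gravity purely on +Z, drifted angles decay toward 0:
roll, pitch = 10.0, -5.0
for _ in range(500):
    roll, pitch = complementary_step(roll, pitch, 0.0, 0.0, 0.0, 0.0, 1.0)
print(abs(roll) < 0.01 and abs(pitch) < 0.01)   # -> True
```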
## Using the Gyroscope
First thing to note is that the gyroscope is reporting angular rates with respect to the sensor. So if the sensor is rotating with some rate, the data it's outputting will be in the frame of the rotating sensor, NOT with respect to the fixed global frame. In other words, the X-Y-Z coordinate axes of the sensor are spinning with the sensor, whereas the global X-Y-Z axes remain fixed.
So the first task (before integrating), is to convert the angular rates from the sensor-body frame to angular rates in the global frame.
Sections 9.1, 9.2, and 9.3 in this OCW pdf lays out how to do this pretty well: https://ocw.mit.edu/courses/mechanical-engineering/2-017j-design-of-electromechanical-robotic-systems-fall-2009/course-text/MIT2_017JF09_ch09.pdf
Once you have the angular rates in the global frame, you can then integrate and accumulate the angles as mentioned in other answers.
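The body-rate to Euler-rate conversion those sections describe has a standard closed form for the common Z-Y-X (yaw-pitch-roll) convention; the exact sign conventions should be checked against the linked notes. A minimal sketch:

```python
import math

# Body angular rates (p, q, r) -> Euler angle rates, Z-Y-X convention.
def euler_rates(phi, theta, p, q, r):
    """phi = roll, theta = pitch (radians); p, q, r = body-frame rates."""
    t, c = math.tan(theta), math.cos(theta)
    phi_dot = p + math.sin(phi) * t * q + math.cos(phi) * t * r
    theta_dot = math.cos(phi) * q - math.sin(phi) * r
    psi_dot = (math.sin(phi) * q + math.cos(phi) * r) / c  # singular at theta = +/-90 deg
    return phi_dot, theta_dot, psi_dot

# Sanity check: at zero roll and pitch, body rates equal Euler rates.
print(euler_rates(0.0, 0.0, 0.1, 0.2, 0.3))   # -> (0.1, 0.2, 0.3)
```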
## Using the Accelerometer
Pitch and roll Euler angles in the global frame can also be calculated with the accelerometer.
When the sensor is sitting at rest, it will sense the force due to gravity in the negative Z direction as 9.8 m/s^2. Using this knowledge, we can find how much the sensor has pitched or rolled, by calculating what component of the gravity vector has moved from the negative Z axis, to other axes.
Equations that you can use:
Pitch Angle $$= \arctan(\frac{A_y}{\sqrt{A_x^2+A_z^2}})$$
Roll Angle $$= \arctan(\frac{A_x}{\sqrt{A_y^2+A_z^2}})$$
Note that this won't give exactly correct results when the sensor is being accelerated by an external force (when you are pushing or rotating it), because there are other unknown forces besides gravity acting on the sensor.
https://stats.stackexchange.com/questions/365925/testing-the-validity-of-models-for-adjusting-bookmaker-odds-into-probabilities-o

# Testing the validity of models for adjusting bookmaker odds into probabilities of real-world events
I'm looking at the use of bookmaker odds to predict the outcome of sporting events in which only two results are possible. A problem with using bookmaker odds to predict outcomes is that they include some vigorish, i.e. a profit margin, typically around 5%.
So for instance a bookmaker might offer decimal odds of 1.2 ($$\frac{1}{1.2} = 0.833$$ probability) for Team A to win, and decimal odds of 4.5 ($$\frac{1}{4.5} = 0.222$$ probability) for Team B to win. $$0.833 + 0.222 = 1.055$$, so here the margin is 5.5%. Since we can't know how bookmakers actually decide their margins, I had to model the margins and come up with estimated probabilities that sum to one.
I have four models. So for instance one model adjusted Team A's decimal odds from 1.2 to 1.2390 (0.8071 probability), and adjusted Team B's decimal odds from 4.5 to 5.1833 (0.1929 probability). Now $$0.8071 + 0.1929 = 1$$.
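For concreteness, the simplest kind of margin model, scaling the implied probabilities proportionally so they sum to one, can be sketched as follows (this is only one possible model, not necessarily any of the four models above):

```python
# Proportional de-vigging: divide each implied probability by the overround
# so the probabilities sum to one.
def normalize_proportional(decimal_odds):
    implied = [1.0 / o for o in decimal_odds]
    overround = sum(implied)                  # e.g. 1.055 for a 5.5% margin
    probs = [p / overround for p in implied]
    fair_odds = [1.0 / p for p in probs]
    return probs, fair_odds

probs, fair = normalize_proportional([1.2, 4.5])
print([round(p, 4) for p in probs])   # -> [0.7895, 0.2105]
```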
I then obtained bookmaker odds for about 1400 games, and ran a simulation. For each game in the simulation, \$1 is bet on each team. Thus in the example I gave above, \$1 would be bet on Team A at odds of 1.2390, and \$1 would be bet on Team B at odds of 5.1833. If Team A won the game then \$1 × 1.2390 - \$1 = \$0.2390 would be gained, but the \$1 bet on Team B would be lost, and thus there would be an overall loss of \$0.7610. If Team B won the game then \$1 × 5.1833 - \$1 = \$4.1833 would be gained, but the \$1 bet on Team A would be lost, and thus there would be an overall profit of \$3.1833. This process was repeated for about 1400 games, and the results are as follows:

[figure: cumulative profit over the ~1400 games for each of the four models]

Having done that simulation, I'm unsure how to interpret the results and assess the models. It seems the purple Model 4 is giving odds for the underdog team that are too short. Although Model 4 is also giving rather generously long odds for the favorites, the amount lost betting with poor odds on the underdogs causes the overall total profit to be negative. Meanwhile, it seems like the yellow Model 3 is doing the opposite. It's giving generously high odds for underdogs. Although this is counterbalanced to some extent by the fact that it gives relatively short odds for the favorites, the amount won by winning big on the underdogs causes the overall total profit to be strongly positive. Model 1 and Model 2 are somewhere in between and thus seem to be doing a better job of estimating the true probabilities.

1. How can I decide which model is best? It seems Model 1 (in blue) and Model 2 (in red) are best at representing true odds since they're closest to \$0 profit, but how can I work out how much data I need to properly establish this? Are there confidence intervals or things of that sort that can be applied to this data? A friend suggested I try some sort of bootstrapping approach, although it's not obvious to me how I would implement that.
2. Is my approach an appropriate way of validating the models? What would be other or better ways of doing that?
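To make the bootstrap idea my friend suggested concrete, here is a minimal sketch: resample the ~1400 games with replacement and recompute a model's total profit each time, giving a percentile confidence interval. The `games` list is a tiny made-up placeholder, and `game_profit`/`bootstrap_profit_ci` are names I invented for this sketch, not code from any library.

```python
import random

def game_profit(model_odds, winner):
    # $1 on each team: the winning bet pays (odds - 1), the losing bet costs 1
    return (model_odds[winner] - 1.0) - 1.0

def bootstrap_profit_ci(games, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for total profit over a list of games.

    Each game is (model_odds, winner) where model_odds = (odds_A, odds_B)
    and winner is 0 for Team A, 1 for Team B.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_boot):
        sample = [games[rng.randrange(len(games))] for _ in range(len(games))]
        totals.append(sum(game_profit(o, w) for o, w in sample))
    totals.sort()
    lo = totals[int((alpha / 2) * n_boot)]
    hi = totals[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Made-up example: three games scored under one model
games = [((1.2390, 5.1833), 0), ((1.2390, 5.1833), 1), ((2.0, 2.0), 0)]
ci = bootstrap_profit_ci(games)
```

If a model's interval comfortably straddles \$0 while another's does not, that is evidence (not proof) that the first is better calibrated; with only ~1400 games the intervals may be wide.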
• I could be totally mistaken, but it sounds like you have not understood how bookmakers work: they set the odds so that the \$ amounts bet on either side offset, so they make money either way; i.e., the idea is that by setting high enough odds, unpopular teams will attract enough bets to counteract the bets on the favourite. Sep 8 '18 at 14:23
• I'm aware that bookmakers usually respond to incoming bets by adjusting the odds they offer so as to achieve the outcome you described. Are you able to explain the conflict you perceive between that and what I have written? Sep 8 '18 at 22:45
• You need to do multiple runs for each model. These are more or less random walks. Oct 11 '18 at 10:26
• Also, do you have a dataset of real-life games for which you know both the outcome and the odds? You can test your model by using the adjusted probabilities to predict the game outcome directly, using a proper scoring rule (1, 2, 3). Oct 11 '18 at 10:31
• I do have a dataset of such games, and so I could easily follow the Merkle & Steyvers paper you kindly linked and rank the models according to some of the proper scoring rules they mentioned (Brier Score, Logarithmic Score, etc), and then select the model which overall does the best. However, I was a little unsure how that related to your earlier comment about needing to do multiple runs for each model? Oct 11 '18 at 11:49
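For reference, the proper scoring rules mentioned in that comment take only a few lines. A hedged sketch (my own function names, made-up probabilities): the Brier score, where lower is better, and the logarithmic score, where higher (closer to zero) is better.

```python
import math

def brier_score(probs, outcomes):
    # Mean squared error between forecast P(win) and the 0/1 outcome; lower is better.
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_score(probs, outcomes):
    # Mean log-probability assigned to the realized outcome; higher is better.
    return sum(math.log(p if o == 1 else 1.0 - p)
               for p, o in zip(probs, outcomes)) / len(probs)

# Made-up example: each model's P(Team A wins) vs. whether Team A actually won
probs = [0.8071, 0.8071, 0.5]
outcomes = [1, 0, 1]
model_brier = brier_score(probs, outcomes)
model_log = log_score(probs, outcomes)
```

Ranking the four models by either score on the full dataset sidesteps the profit simulation entirely, and the same bootstrap resampling can be applied to the scores to get uncertainty on the ranking.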
So for the bookies to be rational bookies and for the players to be rational players, it follows that $$O_{stated}\le\frac{1-p_{calibrated}}{p_{calibrated}},$$ where $$O$$ is the stated odds and $$p$$ is a well-calibrated probability.
You should do a Bayesian regression, but constrain the coefficient on the stated odds to be 1. You know that the stated odds should be a scaling of the true odds, so $$O_{stated}\pi=O_{calibrated},$$ where $$\pi = 1 + \text{profit margin}$$ and $$\pi>1$$.
If you assume that the margin is constant, then this is a non-simple logistic regression. I say it is not simple because you have two restrictions.
First, your scaling coefficient is just your regression constant, but it is bound between zero and one. Second, your coefficient against your data has to be conditioned to unity. So, if you would think of this as $$Y=\beta_1O_{stated}+\beta_0,$$ it must be the case that $$\beta_1=1$$ and $$\beta_0>0$$. Since you believe the margin is around 5%, your prior expectation should be around $$\beta_0\approx{\log(1.05)}$$, where $$Y$$ is the binary for a win or loss.
If you relaxed these restrictions, it would imply that someone is using something other than well-calibrated odds. From the research on parimutuel betting, that would be a surprise. You could then do a model selection process to see which method produced a more probable result.
You should be using a Bayesian logistic regression.
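As I read this answer, pinning $$\beta_1=1$$ turns the stated log-odds into an offset, leaving only the constant $$\beta_0$$ to estimate. Below is my own non-Bayesian sketch of that restricted fit (Newton's method on the intercept alone); the data, numbers, and function names are made up. A Bayesian version would simply put the $$\beta_0\approx\log(1.05)$$ prior on this constant.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logodds_from_decimal(d):
    # Decimal odds d imply P(win) = 1/d, so log-odds = log((1/d)/(1 - 1/d)) = -log(d - 1)
    return -math.log(d - 1.0)

def fit_intercept(logodds, y, iters=50):
    """MLE of b0 in P(y=1) = sigmoid(x + b0): the slope on x is fixed at 1 (offset model)."""
    b0 = 0.0
    for _ in range(iters):
        p = [sigmoid(x + b0) for x in logodds]
        grad = sum(yi - pi for yi, pi in zip(y, p))   # score for the intercept
        hess = -sum(pi * (1.0 - pi) for pi in p)      # always negative: concave problem
        b0 -= grad / hess                             # Newton step
    return b0

# Made-up toy data: stated log-odds of a win, and 0/1 outcomes
xs = [-1.5, -0.5, 0.0, 0.5, 1.5] * 40
ys = [0, 0, 1, 1, 1] * 40
b0_hat = fit_intercept(xs, ys)
```

With real data, `xs` would come from `logodds_from_decimal` applied to each bookmaker price, and the fitted intercept estimates the (assumed-constant) log margin.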
• Thanks for the thoughtful response (+1). You mention that "If you assume that the margin is constant, then this is a non-simple logistic regression." As it happens the margins are expected to vary. Among other things, bookmakers will take bigger markets on particular events, and will also take bigger margins for games in which one side is a big underdog. How would that impact your suggested solution? Sep 10 '20 at 12:21
• @user1205901-ReinstateMonica I would add any regressors that you believe would cause a change in the spread, such as the prime lending rate. I would also model non-linearly in stated odds. I may add a threshold where if the odds are large enough, an additional margin would be created. The first thing I would do is run the above regression and see if I saw patterns in the graph I may not have accounted for. I might partition the sample by time as well. Sep 10 '20 at 13:20
http://www.etiquettehell.com/smf/index.php?topic=46148.msg3016677 | ### Author Topic: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests) (Read 672302 times)
#### jedikaiti
• Swiss Army Nerd
• Hero Member
• Posts: 2715
• A pie in the hand is worth two in the mail.
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2910 on: September 17, 2013, 11:40:55 PM »
How did she take it?
What part of $v_e = \sqrt{\frac{2GM}{r}}$ don't you understand? It's only rocket science!
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture
#### Doll Fiend
• Member
• Posts: 987
• The Dolls are in the Garden and in my Head.
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2911 on: September 18, 2013, 12:26:05 AM »
that's gorgeous!
• Hero Member
• Posts: 1842
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2913 on: September 18, 2013, 04:26:15 AM »
I told her that the cost for each would start $100 a piece and unless it was Oct 2014, there was no way I could make them in time. She hasn't responded yet. For the curious, [Link=http://www.gourmetcrochet.com/index_files/Page563.htm]here is the pattern.[/link]

that's gorgeous!

Wow, I really really wish I could crochet now! It's lovely.

#### Queen of Clubs

• Hero Member

• Posts: 1796

##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)

« Reply #2914 on: September 18, 2013, 08:19:48 AM »

I told her that the cost for each would start $100 a piece and unless it was Oct 2014, there was no way I could make them in time. She hasn't responded yet.
1 month to make 5 of those?! You have to let us know if she responds.
Also, do you actually know the bride or is she just a friend of the friend who shared the pic?
It is a gorgeous pattern though.
#### PastryGoddess
• Hero Member
• Posts: 4627
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2915 on: September 18, 2013, 09:32:13 AM »
That's a really nice pattern. I bet a bamboo/silk blend would drape really nicely
#### VorFemme
• Super Hero!
• Posts: 12750
• Strolls with scissors! Too tired to run today!
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2916 on: September 18, 2013, 10:03:26 AM »
Oh, wow, that is gorgeous!!
Let sleeping dragons be.......morning breath......need I say more?
#### Doll Fiend
• Member
• Posts: 987
• The Dolls are in the Garden and in my Head.
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2917 on: September 18, 2013, 11:03:26 AM »
She is the friend of a friend. She did reply. Tried to weasel the impossible out of me. Gave her the "I'm sorry, but that is just impossible." line.
The FoF messaged as well. Apologizing. Turns out Bride is a crocheter and she thought to pass on the pattern, not have me make some for her.
#### Queen of Clubs
• Hero Member
• Posts: 1796
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2918 on: September 18, 2013, 12:28:26 PM »
That the bride does crochet herself just makes it worse, IMO. She ought to know how much work it is and how much that work would be worth. Sheesh!
#### PastryGoddess
• Hero Member
• Posts: 4627
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2919 on: September 18, 2013, 12:32:52 PM »
That the bride does crochet herself just makes it worse, IMO. She ought to know how much work it is and how much that work would be worth. Sheesh!
I think it was the FOF who asked not the bride. I could be wrong
#### Carotte
• Hero Member
• Posts: 1114
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2920 on: September 18, 2013, 12:48:52 PM »
That the bride does crochet herself just makes it worse, IMO. She ought to know how much work it is and how much that work would be worth. Sheesh!
I think it was the FOF who asked not the bride. I could be wrong
What I understood is that FoF passed the picture to Bride just as a "hey, look, I know you crochet, you might be interested in this".
Bride understood "hey, Doll Fiend made one, I'll ask her to make me 5 more at an outrageous price and deadline".
Then Bride might have complained to FoF "your friend is ruining my wedding! she doesn't want to make me 5 thingies for tomorrow!".
Then FoF apologized to Doll Fiend and told her she never intended for Bride to request anything and she's sorry for her friend's behaviour.
So yeah, like Queen of clubs said, that Bride actually does crochet makes it so much worse!
#### Doll Fiend
• Member
• Posts: 987
• The Dolls are in the Garden and in my Head.
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2921 on: September 18, 2013, 04:02:01 PM »
Thank you Carotte. That is basically it. FoF and Friend are connected through craft groups. I am not sure if FoF does crochet or not.
I haven't heard from Bride yet, but FoF is now a Friend.
#### TootsNYC
• A Pillar of the Forum
• Posts: 30476
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2922 on: September 18, 2013, 11:53:23 PM »
There are many theories about who wrote the plays we attribute to Shakespeare. As far as I know, Edgar Allen Poe has never been a candidate.
After H.G. Wells invented the time machine and accidentally allowed Jack The Ripper to escape into the future, he went back in time and asked Poe for help containing the situation. Poe agreed, but he had some demands of his own. Once Wells finally convinced him that there was no way they could remake the Earth into a hollow sphere, even with a time machine, Poe decided to settle for being William Shakespeare. It was dead easy, too, since they just had to bring the complete works back in time with them and have Poe copy them out long-hand. What with all the excitement they clean forgot about Jack The Ripper, but he made the mistake of stopping in Chicago during the Capone era and brought a knife to a gunfight, so happy endings all around.
But then who is buried up in Baltimore in Poe's grave?
Ulysses S. Grant. Which leaves us with the perennial question: Who's buried in Grant's tomb?
Grant and Mrs. Grant.
#### Mediancat
• Member
• Posts: 579
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2923 on: September 19, 2013, 08:14:34 AM »
No one -- no one's buried in a tomb. They're entombed there, but that's not quite the same thing.
Rob
"In all of mankind's history, there has never been more damage done than by someone who 'thought they were doing the right thing'." -- Lucy, Peanuts
#### TootsNYC
• A Pillar of the Forum
• Posts: 30476
##### Re: Not Going To Happen 'Cause I'm Not Harry Potter (Impossible Patron Requests)
« Reply #2924 on: September 19, 2013, 09:30:48 AM »
In all my years, I've never heard that part of the riddle/joke!
Thanks, Rob!
https://physics.stackexchange.com/questions/516203/will-a-falling-rod-stay-in-contact-with-the-frictionless-floor | # Will a falling rod stay in contact with the frictionless floor?
## Question
A uniform rod of mass $$M$$ is placed almost vertically on a frictionless floor. Since it is not perfectly vertical, it will begin to fall down when released from rest.
I have seen solutions online for this problem and while solving this problem, it is assumed that the end point of the rod that is in contact with the floor will continue to stay in contact with the floor till the rod, in its entirety, hits the floor horizontally. It is this assumption that lets us determine the normal force from the floor. However, how does one show that this assumption is true? Or is it taken to be an additional constraint of the problem?
Check the figure in D1 to verify if you've got the right setup in mind.
Duplicates in SE:
I believe the OP in D1 has asked the same question (along with other questions) but it has been closed as off-topic. Simon Robinson, one of the answerers in D2, has also expressed concerns about this. I ask this question because it hasn't been addressed properly on SE. I don't feel that the answer to this question is specific only to this vertical rod problem. Instead, I feel that this question is onto something basic that I don't yet understand regarding the necessary constraints that need to be specified in a physics problem.
## My Attempt
The problem with this question is that I feel like I have given all the information that's necessary to predict the entire dynamics of the rod's motion after its release. I'm unable to accept the idea that "rod-cannot-lose-contact" constraint must be specified as an additional piece of information to solve this problem. If we accept that it's not an additional constraint, then we should be able to show that the rod's end point cannot lose contact. But, that's the problem. I've been thinking about it for days and I can't seem to find a way to show that.
I'm unable to see anything "violated" if it loses contact at some point during its fall. After it loses contact, it simply rotates about the center of mass with constant angular velocity [see $$(1)$$] and the rod's COM falls down with acceleration $$\mathbf{g}$$: $$\frac{d\mathbf{L}_{CM}}{dt} = \boldsymbol{\tau}_{CM} = \mathbf{0} \;\Rightarrow\; L_{CM}=I_{CM}\,\omega \text{ is constant.} \tag{1}$$
Thanks for taking the time to read this question. I apologize if I have violated any code of conduct.
Any insight that addresses my question would be greatly appreciated.
### Further Clarification, If Needed
Clarifications which will hopefully help PhySE users to better understand my question are made here. Reading the following information is not necessary to answer my question.
1. It is important to note that even if the rod's bottom end point loses contact with the floor at some point during the fall, the centre of mass of the rod will continue to fall vertically straight down just as before (but now with acceleration $$\mathbf{g}$$). So, the fact that the COM falls vertically straight down cannot be used to prove that the rod's bottom end point doesn't lose contact with the floor.
COM falls vertically straight down $$\not\Rightarrow$$ the rod's bottom end point doesn't lose contact with the floor
• Generally related answer that shows the treatment of a transition from contact to the ground to no contact to the ground. – ja72 Nov 26 '19 at 18:54
• A bit ironical that while D1 was closed as off topic, this one isn't and instead has 19 upvotes as of now (within a day of posting). Curious to know why the moderators thought so... – Vivek Nov 27 '19 at 8:33
• As you probably know, "Since it is not perfectly vertical, it will begin to fall down when released from rest" that is only true if the tilt is enough to place the cog outside of the base point. – Fattie Nov 28 '19 at 15:21
The technique to use in problems like this is to assume that the rod remains in contact with the table, and to then try to figure out whether the normal force ever switches sign for some angle $$\theta$$ as the rod falls. If it does, then the rod's lower tip will have to leave the table, as a "frictionless table" cannot pull the rod downward; it can only push it upwards. Similar techniques are used in the solution to the classic "puck slides down a frictionless hemisphere" problem, as well as the "toppling ruler" problem.
Actually doing this is something of a mess, but here's a rough sketch. Let $$L$$ be the length of the rod and $$m$$ be its mass. Let $$I = \frac{1}{4} \beta m L^2$$ be the rod's moment of inertia about its center of mass; note that $$\beta = \frac{1}{3}$$ for a rod of uniform density, while $$\beta = 1$$ if the mass is concentrated at the tips. This is done to provide a bit more generality; I will assume, however, that the mass distribution is symmetric, so that the center of mass is at the geometric center of the rod.
The ingredients you'll need are:
• Geometric constraints: The vertical position of the center of mass of the rod will be $$z = \frac{1}{2} L \cos \theta$$ (taking positive $$z$$ to be upwards.) Differentiating this twice, we obtain for the velocity and acceleration of the center of mass $$v = - \frac{L}{2} \omega \sin \theta, \\ a = - \frac{L}{2} ( \alpha \sin \theta + \omega^2 \cos \theta),$$ where $$\alpha$$ is the angular acceleration of the rod.
• Conservation of energy: Since the table does no work on the tip of the rod, the mechanical energy of the rod is conserved. This gives a relationship between $$v$$ and $$\omega$$.
• Newton's Second Law (translational): Using Newton's second law, you can relate $$a$$ and $$N$$.
• Newton's Second Law (rotational): Calculating the torque about the center of mass of the rod, you can find a relationship between $$N$$ and $$\alpha$$.
This gives us a system of five equations and five unknowns $$\{N, v, a, \omega, \alpha \}$$ which can be solved. After going through it, I find that the normal force as a function of $$\theta$$ is $$N = \frac{mg \beta (\beta + (1- \cos \theta)^2)}{(\beta + \sin^2 \theta)^2}$$ which is manifestly positive for any value of $$\theta$$. Thus, the tip of the rod does not leave the table; the table continually maintains an upward normal force as it falls.
• Thank you for the neat answer. I've calculated the normal reaction for the uniform rod case ($\beta=1/3$) and it matches with your result (as expected). Also, I further calculated and plotted $N(\theta)$ for the more general question : the vertical rod is flicked so as to give an initial rotational kinetic energy when it's released (with the additional constraint that the COM of the rod can only move vertically). (contd.) – Ajay Mohan Nov 27 '19 at 7:25
• As Vivek has mentioned, I found that the rod either loses contact at the beginning (when $\omega_{\text{imparted}} > \omega_{\text{critical}}$) or it doesn't lose contact at all (when $\omega_{\text{imparted}} \leq \omega_{\text{critical}}$). [$\omega_{\text{critical}}=\sqrt{\frac{2g}{L}}$] This comment doesn't warrant a response. I just wanted to add more details about the general problem for those interested. – Ajay Mohan Nov 27 '19 at 7:30
• @AjayMohan You don't really need to impose that the initial horizontal component of velocity of COM is zero, because it's conserved here. If it's non zero, just shift to the inertial frame that's moving horizontally with the same velocity and in this frame the COM will fall vertically downwards (& the same analysis will work out). – Vivek Nov 27 '19 at 8:15
• @Vivek I gave that constraint (COM can only move vertically) so that it is easier to explain and imagine the experiment. But, I agree, the same analysis will work out without that additional constraint. – Ajay Mohan Nov 27 '19 at 8:21
• @AjayMohan Actually, I said it because would be easier practically to flick the pencil at one end (rather than have two opposite impulses at different lever arms do the job) ;-) – Vivek Nov 27 '19 at 8:24
To see what happens you have to write the equations of motion and then simulate them.

We have two generalized coordinates: $$x$$, the translation along the floor, and $$\varphi$$, the rotation of the rod.

Starting with the position vector of the center of mass you get:
$$\vec{R}=\left[ \begin {array}{c} l\sin \left( \varphi \right) +x \\ l\cos \left( \varphi \right) \end {array} \right] \tag 1$$
From equation (1) you can obtain the kinetic energy $$T=\frac{m}{2}\vec{\dot{R}}^T\,\vec{\dot{R}}+\frac{I_{cm}}{2}\dot{\varphi}^2$$ and the potential energy $$U=m\,g\,\vec{R}_y$$.
$$\Rightarrow$$
The equations of motion:
$$\ddot{\varphi}+\frac{m\,l^{2}\cos \varphi \,\sin \varphi \;\dot{\varphi}^{2}-m\,g\,l\sin \varphi }{m\,l^{2}+I_{cm}-m\,l^{2}\cos ^{2}\varphi }=0\tag 3$$

$$\ddot{x}+\frac{m\,l^{2}\,g\cos \varphi \,\sin \varphi -\left( m\,l^{2}+I_{cm}\right) l\sin \varphi \;\dot{\varphi}^{2}}{m\,l^{2}+I_{cm}-m\,l^{2}\cos ^{2}\varphi }=0\tag 4$$
We also have to obtain the normal force (the contact force between rod and floor). To calculate the normal force $$N$$, I add an additional degree of freedom $$y$$ in the direction of the normal force, so the position vector is now:
$$\vec{R}= \left[ \begin {array}{c} l\sin \left( \varphi \right) +x \\ l\cos \left( \varphi \right) +y\end {array} \right]$$
the "new" equations of motion are $$\ddot{\varphi}=\ldots\,,\ddot{x}=\ldots$$ and $$\ddot{y}=\ldots$$ but we also have the holonomic constraint equation (Lagrange multiplier) .
$$y=0\quad \Rightarrow\quad \dot{y}=0\,,\ddot{y}=0$$
thus we have enough equations to calculate the contact force $$N$$
$$N=\frac{m\,I_{cm}\,g-m\,l\,I_{cm}\cos \varphi \;\dot{\varphi}^{2}}{m\,l^{2}+I_{cm}-m\,l^{2}\cos ^{2}\varphi }\tag 5$$

(With this sign convention, $$N(\varphi=0,\dot\varphi=0)=m\,g$$, as it must be.)
Simulation
I start the simulation with the initial conditions:

$$x(0)=0,\;\dot{x}(0)=0,\;\varphi(0)=0.1,\;\dot{\varphi}(0)=0.3$$

I stop the simulation when the rotation of the rod reaches 90 degrees.
You see that the contact force $$N$$ stays greater than zero, so the rod keeps contact with the floor; you could only avoid this by applying an external torque to the rod.
Compare the normal force with Michael Seifert's normal force:
with:
$$\varphi(0)=0$$ and $$Icm=\frac{1}{4}\,\beta\,m\,(2\,l)^2$$
$$N={\frac {mg\beta\, \left( \beta+ \left( 1-\cos \left( \varphi \right) \right) ^{2} \right) }{ \left( \beta+ \left( \sin \left( \varphi \right) \right) ^{2} \right) ^{2}}} \tag 6$$
The red plot is the normal force from equation (5) and the blue plot is the normal force from equation (6): we get the same results!
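Since the original simulation code isn't shown, here is a self-contained sketch (my own, not Eigen's code) that integrates equation (3) with a hand-rolled RK4 and compares the contact force from the multiplier calculation, written with the sign convention $$N(\varphi=0,\dot\varphi=0)=+m\,g$$, against the closed form (6). Units are $$m=l=1$$, $$I_{cm}=1/3$$ (uniform rod), starting at rest almost vertical so that (6) applies.

```python
import math

g, m, l, Icm = 9.81, 1.0, 1.0, 1.0 / 3.0
beta = Icm / (m * l * l)            # = 1/3 for the uniform rod

def phi_ddot(phi, phidot):
    # Equation (3) solved for phi'':
    D = m * l * l + Icm - m * l * l * math.cos(phi) ** 2
    return (m * g * l * math.sin(phi)
            - m * l * l * math.cos(phi) * math.sin(phi) * phidot ** 2) / D

def N_multiplier(phi, phidot):
    # Contact force from the constraint calculation (sign chosen so N(0, 0) = +m g)
    D = m * l * l + Icm - m * l * l * math.cos(phi) ** 2
    return m * Icm * (g - l * math.cos(phi) * phidot ** 2) / D

def N_seifert(phi):
    # Equation (6), valid for release from rest at phi = 0
    c, s = math.cos(phi), math.sin(phi)
    return m * g * beta * (beta + (1.0 - c) ** 2) / (beta + s * s) ** 2

# Classic RK4 on (phi, phidot), from almost vertical and at rest
phi, phidot, dt = 1e-4, 0.0, 1e-4
max_diff = 0.0
while phi < math.pi / 2:
    max_diff = max(max_diff, abs(N_multiplier(phi, phidot) - N_seifert(phi)))
    k1 = (phidot, phi_ddot(phi, phidot))
    k2 = (phidot + 0.5 * dt * k1[1],
          phi_ddot(phi + 0.5 * dt * k1[0], phidot + 0.5 * dt * k1[1]))
    k3 = (phidot + 0.5 * dt * k2[1],
          phi_ddot(phi + 0.5 * dt * k2[0], phidot + 0.5 * dt * k2[1]))
    k4 = (phidot + dt * k3[1],
          phi_ddot(phi + dt * k3[0], phidot + dt * k3[1]))
    phi += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    phidot += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
```

The two force expressions track each other along the whole trajectory, reproducing the red/blue agreement described above.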
@MichaelSeifert has a very nice answer.
I just want to describe it from a different angle here.
### A Calculation
If you only want to investigate if contact is lost at some angle $$\theta$$, then in this problem it can also be done in the following way: Only the lower end of the rod is in contact with the ground. So for the rod to leave contact with the ground after rotating through an angle $$\theta$$, the (upwards) vertical acceleration of the point of contact (POC) due to all forces except the normal force should become non-negative, at the very least. One can then imagine that the rod is no longer "falling into the floor" through the POC in this case (it is actually ready to fly away), and so the ground will not act with a non-zero normal force on the rod to slow it down; if it did, because of the geometry of the problem it would only enhance the vertically upward acceleration of the POC, which is inconsistent with the constraint.
Now note that acceleration of the POC in the vertical direction at this point due to all forces except the normal force would simply be $$\frac{\Omega^2 L}{2} \cos \theta - g$$.
$$\Bigg[$$ We also know $$\Omega^2$$ in terms of $$\theta$$ from energy conservation principle (as long as the constraint is obeyed). A quick way to write down the kinetic energy is to note that the rotating rod is instantaneously rotating about an axis perpendicular to the plane of the rod, that passes through the intersection of the vertical through POC and the horizontal line through COM. This would give a kinetic energy of $$\frac{1}{2}mL^2\Big[\frac{1}{12}+\frac{\sin^2\theta}{4} \Big] \Omega^2$$, which is obtained after a fall of COM by height $$\frac{L}{2}(1-\cos\theta)$$. $$\Bigg]$$
If you now really calculate the quantity $$\frac{\Omega^2 L}{2} \cos \theta - g$$, you will find it to be like the numerator of the expression for $$N$$ found by @MichaelSeifert, except that it would have a negative sign $$-$$ this means it can never be positive and so contact can never be lost.
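A numeric companion to that claim (my addition): evaluate $$\frac{\Omega^2 L}{2} \cos \theta - g$$ from the energy expression above over a grid of angles and check that it never becomes positive; the second helper checks the flick threshold $$\omega_{critical}=\sqrt{2g/L}$$ quoted in the comments under Michael Seifert's answer.

```python
import math

def poc_upward_acc(theta, g=9.81, L=1.0):
    # Omega^2 from the energy balance quoted above (uniform rod released from rest):
    #   (1/2) m L^2 [1/12 + sin^2(theta)/4] Omega^2 = m g (L/2) (1 - cos(theta))
    s2 = math.sin(theta) ** 2
    omega2 = g * (1.0 - math.cos(theta)) / (L * (1.0 / 12.0 + s2 / 4.0))
    # Upward acceleration of the contact point with the normal force excluded:
    return 0.5 * omega2 * L * math.cos(theta) - g

def initial_poc_acc(omega0, g=9.81, L=1.0):
    # The same quantity at theta = 0 if the rod is flicked with angular speed omega0
    return 0.5 * omega0 ** 2 * L - g

grid = [i * (math.pi / 2) / 200 for i in range(1, 200)]
worst = max(poc_upward_acc(t) for t in grid)
```

`worst` stays well below zero for the release-from-rest case, while `initial_poc_acc` changes sign exactly at $$\omega_0=\sqrt{2g/L}$$, matching the two cases described in the Intuition section below.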
### Intuition
We now understand what is responsible for loss of contact $$-$$ it's the angular velocity of the rod! The larger its magnitude, the greater is the chance of losing contact from the floor. But what happens if you give the rod some initial angular velocity at the start $$-$$ will contact be lost now? Two cases arise:
1. Either contact will be lost at the top itself.
2. Or the contact will never be lost.
You should of course work this out mathematically. But there is an intuitive way to understand $$-$$ assume contact is lost at some point at an angle $$\theta \neq 0$$ (at least for a small amount of time), then the point of contact has zero velocity in the vertical direction at this moment. From here on, the rod continues to rotate further for an infinitesimal moment without any change in $$\Omega$$, but then $$\frac{\Omega^2 L}{2} \cos \theta - g$$ (which was hitherto non-positive) will again become negative because $$\theta$$ is going to increase a moment later. As soon as that happens, the rod will be falling into the ground through the POC, and the ground will not take kindly to it & exert a normal force in response. And that's a contradiction!
However, if you rotate the rod too fast at the start itself, it's going to lose contact because $$\frac{\Omega^2 L}{2} \cos \theta - g$$ will be $$>0$$ at the start itself, and won't become negative an instant later.
This is actually what gives rise to the intuition that the rod will not lose contact for the problem originally posed by you $$-$$ viz., that since in the original case contact is not lost at the start, it is in fact never lost (as long as the other end of the rod doesn't hit the ground)!
Warning: Do not use this idea in any general problem, because in general the point of contact may not be the same point (for example a rolling disc on a flat plane). So, the general way of course is to implement the constraint and make sure that $$N\geq0$$ for the constraint assumption to be self-consistent in such problems.
http://physics.stackexchange.com/questions/67145/photon-gas-kinetic-theory | # Photon gas kinetic theory
Suppose a black body is an enclosure of volume $V$ with a hole of cross-section $A$. In the interior there is a photon gas whose energy density $u$ at temperature $T$ is

$$u=cT^4$$

(here $c$ denotes a constant of proportionality, not the speed of light).
How can I show that the energy emitted per second is
$$E=\sigma A T^4$$
You just have to use the Stefan-Boltzmann Law. S-B Law is
$j=\sigma T^{4}$
The irradiance $j$ has dimensions of energy per area per time. So, to find the power (energy per time) you just have to multiply it by the surface area of emission, $A$:
$P=j A= A\sigma T^{4}$
Hi, so is it possible to derive the Stefan–Boltzmann law for a photon gas whose energy density is $u=cT^4$? Because the answer you gave doesn't use this fact – Jorge Jun 5 '13 at 18:08
It is a calculation in a continuous version of the grand-canonical ensemble, that is: $\bar E = \frac{2V}{(2 \pi)^3}\int \large \frac{|\vec k|~ d^3 \vec k}{e^{\beta |\vec k|} - 1}$ (in units $\hbar = c = 1$). So you will easily see that $\bar E$ is proportional to $T^4$: $\frac{\bar E}{V} = \sigma T^4$ – Trimok Jun 5 '13 at 18:56
Hi Trimok, thanks. What you did shows that $u \propto T^4$, but that was given. My claim is that the result of Nijankowski V. seems to be independent of that fact. – Jorge Jun 5 '13 at 21:46
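Substituting $x=\beta|\vec k|$ in Trimok's integral pulls a factor $T^4$ out front and leaves the pure number $\int_0^\infty x^3/(e^x-1)\,dx=\pi^4/15\approx 6.494$. A quick numeric check of that number (my addition, plain trapezoid rule):

```python
import math

def planck_integral(upper=60.0, n=100_000):
    # Trapezoid rule for the dimensionless integral int_0^inf x^3 / (e^x - 1) dx.
    # The integrand behaves like x^2 near 0 and like x^3 e^{-x} in the tail,
    # so truncating at x = 60 is harmless.
    h = upper / n
    def f(x):
        return 0.0 if x == 0.0 else x ** 3 / math.expm1(x)
    total = 0.5 * (f(0.0) + f(upper))
    total += sum(f(i * h) for i in range(1, n))
    return h * total

value = planck_integral()
```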
Consider a volume of section $A$ and height $L=c_0\Delta t$ next to the hole. Here, $\Delta t$ is an arbitrarily small time.
In this volume $AL$, there is a photon energy $ALu\propto Au$.

During $\Delta t$, all the photons in this volume whose velocities are aligned with the section normal will go through the hole, so their flux will also be $\propto Au$. You can then repeat this for all photon directions, which means there is always proportionality with $Au$. Hence your result. Now, if you want to find the coefficient exactly, you have to carry out the integral over angles ...
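The "integral over angles" works out to a factor of 1/4 for an isotropic gas, so the emitted power is $E = \frac{c_0 u}{4} A = \frac{c_0 c}{4} A T^4$, identifying $\sigma = c_0 c/4$ (using the answer's $c_0$ for the speed of light). A quick numeric check of the geometric 1/4 (my addition): the solid-angle-weighted average of $\cos\theta$ over the outgoing hemisphere, relative to the full sphere.

```python
import math

def effusion_factor(n=20_000):
    # (1 / 4pi) * int_0^{2pi} dphi int_0^{pi/2} cos(t) sin(t) dt, via trapezoid rule.
    # This is the fraction of the isotropic flux that escapes through the hole.
    h = (math.pi / 2) / n
    def f(t):
        return math.cos(t) * math.sin(t)
    polar = 0.5 * (f(0.0) + f(math.pi / 2))
    polar += sum(f(i * h) for i in range(1, n))
    polar *= h                           # equals 1/2 analytically
    return (2.0 * math.pi) * polar / (4.0 * math.pi)

factor = effusion_factor()
```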
https://secure.sky-map.org/starview?object_type=1&object_id=1440&object_name=Dolones+Secundus&locale=DE | SKY-MAP.ORG
ψβ Aur (Dolones Secundus)
Related articles
CHARM2: An updated Catalog of High Angular Resolution Measurements
We present an update of the Catalog of High Angular Resolution Measurements (CHARM, Richichi & Percheron \cite{CHARM}, A&A, 386, 492), which includes results available until July 2004. CHARM2 is a compilation of direct measurements by high angular resolution methods, as well as indirect estimates of stellar diameters. Its main goal is to provide a reference list of sources which can be used for calibration and verification observations with long-baseline optical and near-IR interferometers. Single and binary stars are included, as are complex objects from circumstellar shells to extragalactic sources. The present update provides an increase of almost a factor of two over the previous edition. Additionally, it includes several corrections and improvements, as well as a cross-check with the valuable public release observations of the ESO Very Large Telescope Interferometer (VLTI). A total of 8231 entries for 3238 unique sources are now present in CHARM2. This represents an increase of a factor of 3.4 and 2.0, respectively, over the contents of the previous version of CHARM. The catalog is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/431/773

Local kinematics of K and M giants from CORAVEL/Hipparcos/Tycho-2 data. Revisiting the concept of superclusters
The availability of the Hipparcos Catalogue has triggered many kinematic and dynamical studies of the solar neighbourhood. Nevertheless, those studies generally lacked the third component of the space velocities, i.e., the radial velocities. This work presents the kinematic analysis of 5952 K and 739 M giants in the solar neighbourhood which includes for the first time radial velocity data from a large survey performed with the CORAVEL spectrovelocimeter. It also uses proper motions from the Tycho-2 catalogue, which are expected to be more accurate than the Hipparcos ones.
An important by-product of this study is the observedfraction of only 5.7% of spectroscopic binaries among M giants ascompared to 13.7% for K giants. After excluding the binaries for whichno center-of-mass velocity could be estimated, 5311 K and 719 M giantsremain in the final sample. The UV-plane constructed from these datafor the stars with precise parallaxes (σπ/π≤20%) reveals a rich small-scale structure, with several clumpscorresponding to the Hercules stream, the Sirius moving group, and theHyades and Pleiades superclusters. A maximum-likelihood method, based ona Bayesian approach, has been applied to the data, in order to make fulluse of all the available stars (not only those with precise parallaxes)and to derive the kinematic properties of these subgroups. Isochrones inthe Hertzsprung-Russell diagram reveal a very wide range of ages forstars belonging to these groups. These groups are most probably relatedto the dynamical perturbation by transient spiral waves (as recentlymodelled by De Simone et al. \cite{Simone2004}) rather than to clusterremnants. A possible explanation for the presence of younggroup/clusters in the same area of the UV-plane is that they have beenput there by the spiral wave associated with their formation, while thekinematics of the older stars of our sample has also been disturbed bythe same wave. The emerging picture is thus one of dynamical streamspervading the solar neighbourhood and travelling in the Galaxy withsimilar space velocities. The term dynamical stream is more appropriatethan the traditional term supercluster since it involves stars ofdifferent ages, not born at the same place nor at the same time. Theposition of those streams in the UV-plane is responsible for the vertexdeviation of 16.2o ± 5.6o for the wholesample. Our study suggests that the vertex deviation for youngerpopulations could have the same dynamical origin. 
The underlyingvelocity ellipsoid, extracted by the maximum-likelihood method afterremoval of the streams, is not centered on the value commonly acceptedfor the radial antisolar motion: it is centered on < U > =-2.78±1.07 km s-1. However, the full data set(including the various streams) does yield the usual value for theradial solar motion, when properly accounting for the biases inherent tothis kind of analysis (namely, < U > = -10.25±0.15 kms-1). This discrepancy clearly raises the essential questionof how to derive the solar motion in the presence of dynamicalperturbations altering the kinematics of the solar neighbourhood: doesthere exist in the solar neighbourhood a subset of stars having no netradial motion which can be used as a reference against which to measurethe solar motion?Based on observations performed at the Swiss 1m-telescope at OHP,France, and on data from the ESA Hipparcos astrometry satellite.Full Table \ref{taba1} is only available in electronic form at the CDSvia anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/430/165} High-Precision Near-Infrared Photometry of a Large Sample of Bright Stars Visible from the Northern HemisphereWe present the results of 8 yr of infrared photometric monitoring of alarge sample of stars visible from Teide Observatory (Tenerife, CanaryIslands). The final archive is made up of 10,949 photometric measuresthrough a standard InSb single-channel photometer system, principally inJHK, although some stars have measures in L'. The core of this list ofstars is the standard-star list developed for the Carlos SánchezTelescope. A total of 298 stars have been observed on at least twooccasions on a system carefully linked to the zero point defined byVega. We present high-precision photometry for these stars. The medianuncertainty in magnitude for stars with a minimum of four observationsand thus reliable statistics ranges from 0.0038 mag in J to 0.0033 magin K. 
Many of these stars are faint enough to be observable with arraydetectors (42 are K>8) and thus to permit a linkage of the bright andfaint infrared photometric systems. We also present photometry of anadditional 25 stars for which the original measures are no longeravailable, plus photometry in L' and/or M of 36 stars from the mainlist. We calculate the mean infrared colors of main-sequence stars fromA0 V to K5 V and show that the locus of the H-K color is linearlycorrelated with J-H. The rms dispersion in the correlation between J-Hand H-K is 0.0073 mag. We use the relationship to interpolate colors forall subclasses from A0 V to K5 V. We find that K and M main-sequence andgiant stars can be separated on the color-color diagram withhigh-precision near-infrared photometry and thus that photometry canallow us to identify potential mistakes in luminosity classclassification. A catalogue of calibrator stars for long baseline stellar interferometryLong baseline stellar interferometry shares with other techniques theneed for calibrator stars in order to correct for instrumental andatmospheric effects. We present a catalogue of 374 stars carefullyselected to be used for that purpose in the near infrared. Owing toseveral convergent criteria with the work of Cohen et al.(\cite{cohen99}), this catalogue is in essence a subset of theirself-consistent all-sky network of spectro-photometric calibrator stars.For every star, we provide the angular limb-darkened diameter, uniformdisc angular diameters in the J, H and K bands, the Johnson photometryand other useful parameters. Most stars are type III giants withspectral types K or M0, magnitudes V=3-7 and K=0-3. Their angularlimb-darkened diameters range from 1 to 3 mas with a median uncertaintyas low as 1.2%. The median distance from a given point on the sky to theclosest reference is 5.2degr , whereas this distance never exceeds16.4degr for any celestial location. 
The catalogue is only available inelectronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr(130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/393/183 CHARM: A Catalog of High Angular Resolution MeasurementsThe Catalog of High Angular Resolution Measurements (CHARM) includesmost of the measurements obtained by the techniques of lunaroccultations and long-baseline interferometry at visual and infraredwavelengths, which have appeared in the literature or have otherwisebeen made public until mid-2001. A total of 2432 measurements of 1625sources are included, along with extensive auxiliary information. Inparticular, visual and infrared photometry is included for almost allthe sources. This has been partly extracted from currently availablecatalogs, and partly obtained specifically for CHARM. The main aim is toprovide a compilation of sources which could be used as calibrators orfor science verification purposes by the new generation of largeground-based facilities such as the ESO Very Large Interferometer andthe Keck Interferometer. The Catalog is available in electronic form atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/386/492, and from theauthors on CD-Rom. Lick Spectral Indices for Super-Metal-rich StarsWe present Lick spectral indices for a complete sample of 139 candidatesuper-metal-rich stars of different luminosity classes (MK type from Ito V). For 91 of these stars we were able to identify, in anaccompanying paper, the fundamental atmosphere parameters. This confirmsthat at least 2/3 of the sample consists of stars with [Fe/H] in excessof +0.1 dex. Optical indices for both observations and fiducialsynthetic spectra have been calibrated to the Lick system according toWorthey et al. and include the Fe I indices of Fe5015, Fe5270, andFe5335 and the Mg I and MgH indices of Mg2 and Mg b at 5180Å. 
The internal accuracy of the observations is found to beσ(Fe5015)=+/-0.32 Å, σ(Fe5270)=+/-0.19 Å,σ(Fe5335)=+/-0.22 Å, σ(Mg2)=+/-0.004 mag,and σ(Mg b)=+/-0.19 Å. This is about a factor of 2 betterthan the corresponding theoretical indices from the synthetic spectra,the latter being a consequence of the intrinsic limitations in the inputphysics, as discussed by Chavez et al. By comparing models andobservations, we find no evidence for nonstandard Mg versus Fe relativeabundance, so [Mg/Fe]=0, on the average, for our sample. Both theWorthey et al. and Buzzoni et al. fitting functions are found tosuitably match the data and can therefore confidently be extended forpopulation synthesis application also to supersolar metallicity regimes.A somewhat different behavior of the two fitting sets appears, however,beyond the temperature constraints of our stellar sample. Its impact onthe theoretical output is discussed, as far as the integratedMg2 index is derived from synthesis models of stellaraggregates. A two-index plot, such as Mg2 versus Fe5270, isfound to provide a simple and powerful tool for probing distinctiveproperties of single stars and stellar aggregates as a whole. The majoradvantage, over a classical CM diagram, is that it is both reddeningfree and distance independent. Based on observations collected at theInstituto Nacional de Astrofísica, Optica y Electrónica(INAOE) G. Haro'' Observatory, Cananea (Mexico). Catalogue of Apparent Diameters and Absolute Radii of Stars (CADARS) - Third edition - Comments and statisticsThe Catalogue, available at the Centre de Données Stellaires deStrasbourg, consists of 13 573 records concerning the results obtainedfrom different methods for 7778 stars, reported in the literature. Thefollowing data are listed for each star: identifications, apparentmagnitude, spectral type, apparent diameter in arcsec, absolute radiusin solar units, method of determination, reference, remarks. Commentsand statistics obtained from CADARS are given. 
The Catalogue isavailable in electronic form at the CDS via anonymous ftp tocdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/cgi-bin/qcar?J/A+A/367/521 Rotation and lithium in single giant starsIn the present work, we study the link between rotation and lithiumabundance in giant stars of luminosity class III, on the basis of alarge sample of 309 single stars of spectral type F, G and K. We havefound a trend for a link between the discontinuity in rotation at thespectral type G0III and the behavior of lithium abundances around thesame spectral type. The present work also shows that giant starspresenting the highest lithium contents, typically stars earlier thanG0III, are those with the highest rotation rates, pointing for adependence of lithium content on rotation, as observed for otherluminosity classes. Giant stars later than G0III present, as a rule, thelowest rotation rates and lithium contents. A large spread of about fivemagnitudes in lithium abundance is observed for the slow rotators.Finally, single giant stars with masses 1.5 < M/Msun<=2.5 show a clearest trend for a correlation between rotational velocityand lithium abundance. Based on observations collected at theObservatoire de Haute -- Provence (France) and at the European SouthernObservatory, La Silla (Chile). Table 2 is only available electronicallywith the On-Line publication athttp://link.springer.de/link/service/00230/ Revision and Calibration of MK Luminosity Classes for Cool Giants by HIPPARCOS ParallaxesThe Hipparcos parallaxes of cool giants are utilized in two ways in thispaper. First, a plot of reduced parallaxes of stars brighter than 6.5,as a function of spectral type, for the first time separates members ofthe clump from stars in the main giant ridge. A slight modification ofthe MK luminosity standards has been made so that luminosity class IIIbdefines members of the clump, and nearly all of the class III stars fallwithin the main giant ridge. 
Second, a new calibration of MK luminosityclasses III and IIIb in terms of visual absolute magnitudes has beenmade. Spectral Irradiance Calibration in the Infrared. X. A Self-Consistent Radiometric All-Sky Network of Absolutely Calibrated Stellar SpectraWe start from our six absolutely calibrated continuous stellar spectrafrom 1.2 to 35 μm for K0, K1.5, K3, K5, and M0 giants. These wereconstructed as far as possible from actual observed spectral fragmentstaken from the ground, the Kuiper Airborne Observatory, and the IRAS LowResolution Spectrometer, and all have a common calibration pedigree.From these we spawn 422 calibrated spectral templates'' for stars withspectral types in the ranges G9.5-K3.5 III and K4.5-M0.5 III. Wenormalize each template by photometry for the individual stars usingpublished and/or newly secured near- and mid-infrared photometryobtained through fully characterized, absolutely calibrated,combinations of filter passband, detector radiance response, and meanterrestrial atmospheric transmission. These templates continue ourongoing effort to provide an all-sky network of absolutely calibrated,spectrally continuous, stellar standards for general infrared usage, allwith a common, traceable calibration heritage. The wavelength coverageis ideal for calibration of many existing and proposed ground-based,airborne, and satellite sensors, particularly low- tomoderate-resolution spectrometers. We analyze the statistics of probableuncertainties, in the normalization of these templates to actualphotometry, that quantify the confidence with which we can assert thatthese templates truly represent the individual stars. Each calibratedtemplate provides an angular diameter for that star. These radiometricangular diameters compare very favorably with those directly observedacross the range from 1.6 to 21 mas. 
A catalog of rotational and radial velocities for evolved starsRotational and radial velocities have been measured for about 2000evolved stars of luminosity classes IV, III, II and Ib covering thespectral region F, G and K. The survey was carried out with the CORAVELspectrometer. The precision for the radial velocities is better than0.30 km s-1, whereas for the rotational velocity measurementsthe uncertainties are typically 1.0 km s-1 for subgiants andgiants and 2.0 km s-1 for class II giants and Ib supergiants.These data will add constraints to studies of the rotational behaviourof evolved stars as well as solid informations concerning the presenceof external rotational brakes, tidal interactions in evolved binarysystems and on the link between rotation, chemical abundance and stellaractivity. In this paper we present the rotational velocity v sin i andthe mean radial velocity for the stars of luminosity classes IV, III andII. Based on observations collected at the Haute--Provence Observatory,Saint--Michel, France and at the European Southern Observatory, LaSilla, Chile. Table \ref{tab5} also available in electronic form at CDSvia anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Catalogs of temperatures and [Fe/H] averages for evolved G and K starsA catalog of mean values of [Fe/H] for evolved G and K stars isdescribed. The zero point for the catalog entries has been establishedby using differential analyses. Literature sources for those entries areincluded in the catalog. The mean values are given with rms errors andnumbers of degrees of freedom, and a simple example of the use of thesestatistical data is given. For a number of the stars with entries in thecatalog, temperatures have been determined. A separate catalogcontaining those data is briefly described. 
Catalog only available atthe CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or viahttp://cdsweb.u-strasbg.fr/Abstract.html Determination of the temperatures of selected ISO flux calibration stars using the Infrared Flux MethodEffective temperatures for 420 stars with spectral types between A0 andK3, and luminosity classes between II and V, selected for a fluxcalibration of the Infrared Space Observatory, ISO, have been determinedusing the Infrared Flux Method (IRFM). The determinations are based onnarrow and wide band photometric data obtained for this purpose, andtake into account previously published narrow-band measures oftemperature. Regression coefficients are given for relations between thedetermined temperatures and the photometric parameters (B2-V1), (b-y)and (B-V), corrected for interstellar extinction through use ofHipparcos parallaxes. A correction for the effect of metallicity on thedetermination of integrated flux is proposed. The importance of aknowledge of metallicity in the representation of derived temperaturesfor Class V, IV and III stars by empirical functions is discussed andformulae given. An estimate is given for the probable error of eachtemperature determination. Based on data from the ESA HipparcosAstrometry Satellite. Towards a fundamental calibration of stellar parameters of A, F, G, K dwarfs and giantsI report on the implementation of the empirical surface brightnesstechnique using the near-infrared Johnson broadband { (V-K)} colour assuitable sampling observable aimed at providing accurate effectivetemperatures of 537 dwarfs and giants of A-F-G-K spectral-type selectedfor a flux calibration of the Infrared Space Observatory (ISO). Thesurface brightness-colour correlation is carefully calibrated using aset of high-precision angular diameters measured by moderninterferometry techniques. 
The stellar sizes predicted by thiscorrelation are then combined with the bolometric flux measurementsavailable for a subset of 327 ISO standard stars in order to determineone-dimensional { (T, V-K)} temperature scales of dwarfs and giants. Theresulting very tight relationships show an intrinsic scatter induced byobservational photometry and bolometric flux measurements well below thetarget accuracy of +/- 1 % required for temperature determinations ofthe ISO standards. Major improvements related to the actual directcalibration are the high-precision broadband { K} magnitudes obtainedfor this purpose and the use of Hipparcos parallaxes for dereddeningphotometric data. The temperature scale of F-G-K dwarfs shows thesmallest random errors closely consistent with those affecting theobservational photometry alone, indicating a negligible contributionfrom the component due to the bolometric flux measurements despite thewide range in metallicity for these stars. A more detailed analysisusing a subset of selected dwarfs with large metallicity gradientsstrongly supports the actual bolometric fluxes as being practicallyunaffected by the metallicity of field stars, in contrast with recentresults claiming somewhat significant effects. The temperature scale ofF-G-K giants is affected by random errors much larger than those ofdwarfs, indicating that most of the relevant component of the scattercomes from the bolometric flux measurements. Since the giants have smallmetallicities, only gravity effects become likely responsible for theincreased level of scatter. The empirical stellar temperatures withsmall model-dependent corrections are compared with the semiempiricaldata by the Infrared Flux Method (IRFM) using the large sample of 327comparison stars. One major achievement is that all empirical andsemiempirical temperature estimates of F-G-K giants and dwarfs are foundto be closely consistent between each other to within +/- 1 %. 
However,there is also evidence for somewhat significant differential effects.These include an average systematic shift of (2.33 +/- 0.13) % affectingthe A-type stars, the semiempirical estimates being too low by thisamount, and an additional component of scatter as significant as +/- 1 %affecting all the comparison stars. The systematic effect confirms theresults from other investigations and indicates that previousdiscrepancies in applying the IRFM to A-type stars are not yet removedby using new LTE line-blanketed model atmospheres along with the updatedabsolute flux calibration, whereas the additional random component isfound to disappear in a broadband version of the IRFM using an infraredreference flux derived from wide rather than narrow band photometricdata. Table 1 and 2 are only available in the electronic form of thispaper Classification and Identification of IRAS Sources with Low-Resolution SpectraIRAS low-resolution spectra were extracted for 11,224 IRAS sources.These spectra were classified into astrophysical classes, based on thepresence of emission and absorption features and on the shape of thecontinuum. Counterparts of these IRAS sources in existing optical andinfrared catalogs are identified, and their optical spectral types arelisted if they are known. The correlations between thephotospheric/optical and circumstellar/infrared classification arediscussed. A catalogue of [Fe/H] determinations: 1996 editionA fifth Edition of the Catalogue of [Fe/H] determinations is presentedherewith. It contains 5946 determinations for 3247 stars, including 751stars in 84 associations, clusters or galaxies. The literature iscomplete up to December 1995. The 700 bibliographical referencescorrespond to [Fe/H] determinations obtained from high resolutionspectroscopic observations and detailed analyses, most of them carriedout with the help of model-atmospheres. 
The Catalogue is made up of three formatted files: File 1: field stars; File 2: stars in galactic associations and clusters, and stars in SMC, LMC, M33; File 3: numbered list of bibliographical references. The three files are only available in electronic form at the Centre de Donnees Stellaires in Strasbourg, via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5), or via http://cdsweb.u-strasbg.fr/Abstract.html

Transformations from Theoretical Hertzsprung-Russell Diagrams to Color-Magnitude Diagrams: Effective Temperatures, B-V Colors, and Bolometric Corrections
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1996ApJ...469..355F&db_key=AST

H-alpha measurements for cool giants
The H-alpha line in a cool star is usually an indication of the conditions in its chromosphere. I have collected H-alpha spectra of many northern G-M stars, which show how the strength and shape of the H-alpha line change with spectral type. These observations detect surprisingly little variation in absorption-line depth (Rc approximately 0.23 +/- 0.08), linewidth (FWHD approximately 1.44 +/- 0.22 A), or equivalent width (EW approximately 1.12 +/- 0.17 A) among G5-M5 III giants. Lines in the more luminous stars tend to be broader and stronger by 30%-40% than in the Class III giants, while the H-alpha absorption tends to weaken among the cooler M giants. Velocities of H-alpha and nearby photospheric lines are the same to within 1.4 +/- 4.4 km/s for the whole group. To interpret these observations, I have calculated H-alpha profiles, Ly-alpha strengths, and (C II) strengths for a series of model chromospheres representing a cool giant star like alpha Tau. Results are sensitive to the mass of the chromosphere, to chromospheric temperature, to clumping of the gas, and to the assumed physics of line formation.
The ubiquitous nature of H-alpha in cool giants and the greatdepth of observed lines argue that chromospheres of giants cover theirstellar disks uniformly and are homogeneous on a large scale. This isquite different from conditions on a small scale: To obtain a highenough electron density with the theoretical models, both to explain theexitation of hydrogen and possibly also to give the observed C IImultiplet ratios, the gas is probably clumped. The 6540-6580 A spectraof 240 stars are plotted in an Appendix, which identifies the date ofobservation and marks positions of strong telluric lines on eachspectrum. I assess the effects of telluric lines and estimates that thestrength of scattered light is approximately 5% of the continuum inthese spectra. I give the measurements of H-alpha as well as equivalentwidths of two prominent photospheric lines, Fe I lambda 6546 and Ca Ilambda 6572, which strengthen with advancing spectral type. Vitesses radiales. Catalogue WEB: Wilson Evans Batten. Subtittle: Radial velocities: The Wilson-Evans-Batten catalogue.We give a common version of the two catalogues of Mean Radial Velocitiesby Wilson (1963) and Evans (1978) to which we have added the catalogueof spectroscopic binary systems (Batten et al. 1989). For each star,when possible, we give: 1) an acronym to enter SIMBAD (Set ofIdentifications Measurements and Bibliography for Astronomical Data) ofthe CDS (Centre de Donnees Astronomiques de Strasbourg). 2) the numberHIC of the HIPPARCOS catalogue (Turon 1992). 3) the CCDM number(Catalogue des Composantes des etoiles Doubles et Multiples) byDommanget & Nys (1994). For the cluster stars, a precise study hasbeen done, on the identificator numbers. Numerous remarks point out theproblems we have had to deal with. A revised effective-temperature calibration for the DDO photometric systemA revised effective-temperature calibration for the David DunlapObservatory (DDO) photometric system is presented. 
Recently publishedphotometric and spectroscopic observations of field and open-cluster Gand K stars allow a better definition of the solar-abundance fiducialrelation in the DDO C0(45-48) vs. C0(42-45)diagram. The ability of the DDO system to predict MK spectral types of Gand K giants is demonstrated. The new DDO effective temperaturecalibration reproduces satisfactorily the infrared temperature scale ofBell and Gustafsson (1989). It is shown that Osborn's (1979) calibrationunderestimates the effective temperatures of K giants by approximately170 K and those of late-type dwarfs by approximately 150 K. A critical appraisal of published values of (Fe/H) for K II-IV stars'Primary' (Fe/H) averages are presented for 373 evolved K stars ofluminosity classes II-IV and (Fe/H) values beween -0.9 and +0.21 dex.The data define a 'consensus' zero point with a precision of + or -0.018 dex and have rms errors per datum which are typically 0.08-0.16dex. The primary data base makes recalibration possible for the large(Fe/H) catalogs of Hansen and Kjaergaard (1971) and Brown et al. (1989).A set of (Fe/H) standard stars and a new DDO calibration are given whichhave rms of 0.07 dex or less for the standard star data. For normal Kgiants, CN-based values of (Fe/H) turn out to be more precise than manyhigh-dispersion results. Some zero-point errors in the latter are alsofound and new examples of continuum-placement problems appear. Thushigh-dispersion results are not invariably superior to photometricmetallicities. A review of high-dispersion and related work onsupermetallicity in K III-IV star is also given. Effect of improved H(-) opacity on the infrared flux method temperature scale and derived angular diameters - Use of a self-consistent calibrationThe present study uses the infrared flux method (IRFM) to derive thestellar temperatures and angular diameters derived by Blackwell et al.(1990). The more accurate calculations of the H(-) opacity recommendedby John (1988) are applied. 
A Vega self-consistent infrared calibration is derived using the IRFM. Relations are given to allow temperatures to be derived from measurements of V-K and B-V. The original temperatures are increased by up to 1.3 percent, and the angular diameters are decreased by up to 2.7 percent. The effect of uncertainties in the H(-) opacity and convection on determined values of angular diameter and Te is assessed. The chief remaining uncertainty arises from the absence of a well-established infrared calibration for Vega.

Photoelectric photometry of G-M stars in the Vilnius system
Not Available

High-resolution spectroscopic survey of 671 GK giants. I - Stellar atmosphere parameters and abundances
A high-resolution spectroscopic survey of 671 G and K field giants is described. Broad-band Johnson colors have been calibrated against recent, accurate effective temperature, T(eff), measurements for stars in the range 3900-6000 K. A table of polynomial coefficients for 10 color-T(eff) relations is presented. Stellar atmosphere parameters, including T(eff), log g, Fe/H, and microturbulent velocity, are computed for each star, using the high-resolution spectra and various published photometric catalogs. For each star, elemental abundances for a variety of species have been computed using a LTE spectrum synthesis program and the adopted atmosphere parameters.

Determination of temperatures and angular diameters of 114 F-M stars using the infrared flux method (IRFM)
Abstract image available at: http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1990A&A...232..396B&db_key=AST

Third preliminary catalogue of stars observed with the photoelectric astrolabe of the Beijing Astronomical Observatory.
Not Available

A search for lithium-rich giant stars
Lithium abundances or upper limits have been determined for 644 bright G-K giant stars selected from the DDO photometric catalog.
Two of thesegiants possess surface lithium abundances approaching the 'cosmic' valueof the interstellar medium and young main-sequence stars, and eight moregiants have Li contents far in excess of standard predictions. At leastsome of these Li-rich giants are shown to be evolved to the stage ofhaving convectively mixed envelopes, either from the direct evidence oflow surface carbon isotope ratios, or from the indirect evidence oftheir H-R diagram positions. Suggestions are given for the uniqueconditions that might have allowed these stars to produce or accrete newlithium for their surface layers, or simply to preserve from destructiontheir initial lithium contents. The lithium abundance of the remainingstars demonstrates that giants only very rarely meet the expectations ofstandard first dredge-up theories; the average extra Li destructionrequired is about 1.5 dex. The evolutionary states of these giants andtheir average masses are discussed briefly, and the Li distribution ofthe giants is compared to predictions of Galactic chemical evolution. The Perkins catalog of revised MK types for the cooler starsA catalog is presented listing the spectral types of the G, K, M, and Sstars that have been classified at the Perkins Observatory in therevised MK system. Extensive comparisons have been made to ensureconsistency between the MK spectral types of stars in the Northern andSouthern Hemispheres. Different classification spectrograms have beengradually improved in spite of some inherent limitations. In thecatalog, the full subclasses used are the following: G0, G5, G8, K0, K1,K2, K3, K4, K5, M0, M1, M2, M3, M4, M5, M6, M7, and M8. Theirregularities are the price paid for keeping the general scheme of theoriginal Henry Draper classification. Stellar integrated fluxes in the wavelength range 380 NM - 900 NM derived from Johnson 13-colour photometryPetford et al. 
(1988) have reported measured integrated fluxes for 216stars with a wide spread of spectral type and luminosity, and mentionedthat a cubic-spline integration over the relevant Johnson 13-colormagnitudes, converted to fluxes using Johnson's calibration, is inexcellent agreement with those measurements. In this paper a list of thefluxes derived in this way, corrected for a small dependence on B-V, isgiven for all the 1215 stars in Johnson's 1975 catalog with completeentries. Narrow band 1 micron-4 micron infrared photometry of 176 starsObservations of 176 stars have been obtained by filter photometry overthe 1-4 micron range at the Observatorio del Teide in Tenerife.Measurements for Jn, Kn, and Ln relative to Vega are presented, alongwith the probable errors of those stars observed for several nightsduring two of the three observing sessions. Mean quoted probable errorsof 0.018 m for Jn, 0.016 for Kn, and 0.027 for Ln are found.Transformations between the present narrow band magnitudes and Johnsonmagnitudes are presented.
• - No Links Found - | 2019-11-18 14:40:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6527435779571533, "perplexity": 6665.1784351954575}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00170.warc.gz"} |
https://www.expii.com/t/multiplying-scientific-notation-examples-practice-4446 | Expii
Multiplying Scientific Notation — Examples & Practice - Expii
We multiply two numbers in scientific notation by multiplying the coefficients and adding the exponents. | 2022-10-07 20:01:47 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9429003596305847, "perplexity": 1591.000951390059}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00615.warc.gz"} |
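As a quick illustration of the rule, here is a small Python sketch (my own, not from Expii) that multiplies two numbers in scientific notation by multiplying the coefficients and adding the exponents, then renormalizes the coefficient into [1, 10):

```python
def mul_sci(c1, e1, c2, e2):
    """Multiply (c1 x 10^e1) * (c2 x 10^e2): multiply coefficients, add exponents."""
    coeff = c1 * c2
    exp = e1 + e2
    # Renormalize so the coefficient lies in [1, 10).
    while abs(coeff) >= 10:
        coeff /= 10
        exp += 1
    while 0 < abs(coeff) < 1:
        coeff *= 10
        exp -= 1
    return coeff, exp

# (3 x 10^4) * (2 x 10^3) = 6 x 10^7
print(mul_sci(3, 4, 2, 3))   # -> (6.0, 7)
# (4 x 10^5) * (5 x 10^-2) = 20 x 10^3, which renormalizes to 2 x 10^4
print(mul_sci(4, 5, 5, -2))  # -> (2.0, 4)
```

The second example shows why the renormalization step matters: multiplying the coefficients can push the result outside [1, 10).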
https://civil.gateoverflow.in/1749/gate-civil-2021-set-1-question-25 | A signalized intersection operates in two phases. The lost time is $3$ seconds per phase. The maximum ratios of approach flow to saturation flow for the two phases are $0.37$ and $0.40$. The optimum cycle length using the Webster's method (in seconds, round off to one decimal place) is ______________ | 2022-10-03 20:31:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6416299343109131, "perplexity": 962.7289427726412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00287.warc.gz"} |
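For reference, Webster's optimum cycle length is C0 = (1.5L + 5) / (1 - Y), where L is the total lost time per cycle and Y is the sum of the critical flow ratios. A short Python sketch of the computation for the values in this question (the function name is my own):

```python
def webster_optimum_cycle(lost_time_per_phase, critical_ratios):
    """Webster's optimum cycle length: C0 = (1.5*L + 5) / (1 - Y)."""
    L = lost_time_per_phase * len(critical_ratios)  # total lost time per cycle (s)
    Y = sum(critical_ratios)                        # sum of critical flow ratios
    return (1.5 * L + 5) / (1 - Y)

c0 = webster_optimum_cycle(3, [0.37, 0.40])
print(round(c0, 1))  # -> 60.9
```

With L = 2 x 3 = 6 s and Y = 0.37 + 0.40 = 0.77, this gives C0 = 14 / 0.23, roughly 60.9 s.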
https://brilliant.org/problems/is-this-a-theorem-2/ | # Is this a theorem?
Geometry Level 3
A line intersects the sides $$AB,BC,CA$$ of a triangle $$ABC$$ at $$P,Q$$ and $$R$$, respectively.
What is the value of $$\dfrac{AP}{BP} \cdot \dfrac{BQ}{CQ} \cdot \dfrac{CR}{RA}$$?
× | 2017-05-24 10:04:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7159939408302307, "perplexity": 444.5786798230415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607811.15/warc/CC-MAIN-20170524093528-20170524113528-00115.warc.gz"} |
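By Menelaus's theorem, the product of the three unsigned ratios cut by a transversal line is 1 (with signed ratios it is -1). A quick numerical check in Python, using a triangle and transversal of my own choosing:

```python
from math import dist

# Assumed triangle and transversal (chosen only for this check).
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
# The line y = x - 0.5 meets line AB at P, line BC at Q,
# and line CA (extended beyond A) at R.
P = (0.5, 0.0)
Q = (0.75, 0.25)
R = (0.0, -0.5)

product = (dist(A, P) / dist(B, P)) * (dist(B, Q) / dist(C, Q)) * (dist(C, R) / dist(R, A))
print(round(product, 9))  # -> 1.0
```

Note that a line can only meet all three sides if at least one intersection lies on a side's extension, which is why R falls outside segment CA here.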
https://cs.stackexchange.com/questions/108451/3-dnf-proves-the-algorithm-is-in-p-class | # 3-DNF proves the algorithm is in P class
After reading the link, we will take a look at how we recover our solutions to a constrained Sudoku puzzle.
If we assume that a sudoku puzzle was generated with this procedure we can now create a "semi"-solver. I say "semi" because we need the $$3 \times 3$$ grid $$M_{2,2}$$ already solved for us. Let's assume we have this. As an example I will assume we are provided:
$$\begin{bmatrix} 5 & 9 & 6\\ 1 & 2 & 4\\ 3 & 7 & 8 \end{bmatrix}$$
Now we will flatten it into: $$[5,9,6,1,2,4,3,7,8]$$ and permute as follows:
[8, 5, 9, 6, 1, 2, 4, 3, 7]-----list 1
[7, 8, 5, 9, 6, 1, 2, 4, 3]-----list 2
[3, 7, 8, 5, 9, 6, 1, 2, 4]-----list 3
[4, 3, 7, 8, 5, 9, 6, 1, 2]-----list 4
[2, 4, 3, 7, 8, 5, 9, 6, 1]-----list 5
[1, 2, 4, 3, 7, 8, 5, 9, 6]-----list 6
[6, 1, 2, 4, 3, 7, 8, 5, 9]-----list 7
[9, 6, 1, 2, 4, 3, 7, 8, 5]-----list 8
[5, 9, 6, 1, 2, 4, 3, 7, 8]-----list 9
Now, for each list, we will turn it into a $$3 \times 3$$ grid using the same mapping as in step 2 above. For example, list 1 would get mapped to
$$\begin{bmatrix} 8 & 5 & 9 \\ 6 & 1 & 2 \\ 4 & 3 & 7 \end{bmatrix}$$
Now we position these in the game board the same way we did as step 3 above. For example our layout would be as follows:
**list1** **list4** **list7**
**list2** **list5** **list8**
**list3** **list6** **list9**
In the prior example this would give us the correct solution:
$$M = \begin{bmatrix} 8 & 5 & 9 & 4 & 3 & 7 & 6 & 1 & 2\\ 6 & 1 & 2 & 8 & 5 & 9 & 4 & 3 & 7\\ 4 & 3 & 7 & 6 & 1 & 2 & 8 & 5 & 9\\ 7 & 8 & 5 & 2 & 4 & 3 & 9 & 6 & 1\\ 9 & 6 & 1 & 7 & 8 & 5 & 2 & 4 & 3\\ 2 & 4 & 3 & 9 & 6 & 1 & 7 & 8 & 5\\ 3 & 7 & 8 & 1 & 2 & 4 & 5 & 9 & 6\\ 5 & 9 & 6 & 3 & 7 & 8 & 1 & 2 & 4\\ 1 & 2 & 4 & 5 & 9 & 6 & 3 & 7 & 8\\ \end{bmatrix}$$
Then list 9 (our input) will always give you the correct solution in quadratic time.
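The construction described above can be sketched in a few lines of Python (my own paraphrase of the procedure, not the author's code): rotate the seed row right k times to get list k, lay the lists out column by column as 3x3 blocks, and verify that the result is a valid Sudoku solution:

```python
def build_grid(seed):
    # rot(k): seed cyclically shifted right k times; list k in the post is rot(k),
    # and list 9 = rot(9) = the seed itself.
    rot = lambda k: seed[-(k % 9):] + seed[:-(k % 9)]
    grid = [[0] * 9 for _ in range(9)]
    for C in range(3):        # block column (lists 1-3, 4-6, 7-9)
        for R in range(3):    # block row within that column
            block = rot(3 * C + R + 1)
            for r in range(3):
                for c in range(3):
                    grid[3 * R + r][3 * C + c] = block[3 * r + c]
    return grid

def is_valid_sudoku(grid):
    units = []
    units += grid                                                  # rows
    units += [[grid[i][j] for i in range(9)] for j in range(9)]    # columns
    units += [[grid[3*R + r][3*C + c] for r in range(3) for c in range(3)]
              for R in range(3) for C in range(3)]                 # 3x3 boxes
    return all(sorted(u) == list(range(1, 10)) for u in units)

M = build_grid([5, 9, 6, 1, 2, 4, 3, 7, 8])
print(M[0])                # -> [8, 5, 9, 4, 3, 7, 6, 1, 2]
print(is_valid_sudoku(M))  # -> True
```

The same check passes for any seed that is a permutation of 1-9, which matches the claim that the procedure always produces valid grids.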
For further illustration, I intend to prove in two ways that the aforementioned algorithm is in the class P.
Here, we'll take a look at 3-DNF.
(L1 ∧ L2 ∧ L3) | (L4 ∧ L5 ∧ L6) | (L7 ∧ L8 ∧ L9)
Let L1=list1, L2 = list2,...
**list1** **list4** **list7**
**list2** **list5** **list8**
**list3** **list6** **list9**
Therefore, the algorithm generates grids and recovers correct solutions easily.
Now, let's say I want to check the satisfiability of the algorithm's circular shifts. Here, I generate 3 more grids to show that there is a 3x3 positive 3-satisfying set of permutes.
l = [8, 5, 9, 6, 1, 2, 4, 3, 7]
[5, 9, 6, 1, 2, 4, 3, 7, 8]-l1
[9, 6, 1, 2, 4, 3, 7, 8, 5]-l2
[6, 1, 2, 4, 3, 7, 8, 5, 9]-l3
[1, 2, 4, 3, 7, 8, 5, 9, 6]-l4
[2, 4, 3, 7, 8, 5, 9, 6, 1]-l5
[4, 3, 7, 8, 5, 9, 6, 1, 2]-l6
[3, 7, 8, 5, 9, 6, 1, 2, 4]-l7
[7, 8, 5, 9, 6, 1, 2, 4, 3]-l8
[8, 5, 9, 6, 1, 2, 4, 3, 7]-l9
x = [5, 9, 6, 1, 2, 4, 3, 7, 8]
[9, 6, 1, 2, 4, 3, 7, 8, 5]-x1
[6, 1, 2, 4, 3, 7, 8, 5, 9]-x2
[1, 2, 4, 3, 7, 8, 5, 9, 6]-x3
[2, 4, 3, 7, 8, 5, 9, 6, 1]-x4
[4, 3, 7, 8, 5, 9, 6, 1, 2]-x5
[3, 7, 8, 5, 9, 6, 1, 2, 4]-x6
[7, 8, 5, 9, 6, 1, 2, 4, 3]-x7
[8, 5, 9, 6, 1, 2, 4, 3, 7]-x8
[5, 9, 6, 1, 2, 4, 3, 7, 8]-x9
y = [9, 6, 1, 2, 4, 3, 7, 8, 5]
[6, 1, 2, 4, 3, 7, 8, 5, 9]-y1
[1, 2, 4, 3, 7, 8, 5, 9, 6]-y2
[2, 4, 3, 7, 8, 5, 9, 6, 1]-y3
[4, 3, 7, 8, 5, 9, 6, 1, 2]-y4
[3, 7, 8, 5, 9, 6, 1, 2, 4]-y5
[7, 8, 5, 9, 6, 1, 2, 4, 3]-y6
[8, 5, 9, 6, 1, 2, 4, 3, 7]-y7
[5, 9, 6, 1, 2, 4, 3, 7, 8]-y8
[9, 6, 1, 2, 4, 3, 7, 8, 5]-y9
Here, I demonstrate that the 3x3 shift meets satisfiability for the 9! Sudoku grids generated by the algorithm. At the end of the question I prove that the expression always meets satisfiability when given the correct inputs.
(l1 ∨ x9 ∨ y8) ∧ (l2 ∨ x1 ∨ y9)
l1 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
x9 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
y8 = [5, 9, 6, 1, 2, 4, 3, 7, 8]
All the listed elements above have their defined variables within these expressions. All the expressions hold true.
(𝑙1∨𝑥9∨𝑦8)∧(𝑙2∨𝑥1∨𝑦9)∧(𝑙3∨𝑥2∨𝑦1)∧(𝑙4∨𝑥3∨𝑦2)∧(𝑙5∨𝑥4∨𝑦3)∧(𝑙6∨𝑥5∨𝑦4)∧(𝑙7∨𝑥6∨𝑦5)∧(𝑙8∨𝑥7∨𝑦6)∧(𝑙9∨𝑥8∨𝑦7)∧(𝑙1∨𝑥9∨𝑦8)∧(𝑙2∨𝑥1∨𝑦9)
Here is a chart showing the 3-satisfiability of the algorithm, proving that the 3x3 shift overlaps all 9! valid grids that the algorithm can generate.
Overall, are these proofs correct that constrained Sudoku is in P class? | 2019-12-05 23:45:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4506829082965851, "perplexity": 371.408242486663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540482284.9/warc/CC-MAIN-20191205213531-20191206001531-00488.warc.gz"} |
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-27-problems-page-1043/56 | ## College Physics (4th Edition)
$\lambda = 2.43\times 10^{-12}~m$
We can find the wavelength of each photon: $E = \frac{hc}{\lambda}$ $\lambda = \frac{hc}{E}$ $\lambda = \frac{(6.626\times 10^{-34}~J~s)(3.0\times 10^8~m/s)}{(511\times 10^3~eV)(1.6\times 10^{-19}~J/eV)}$ $\lambda = 2.43\times 10^{-12}~m$ | 2021-05-10 23:06:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8953897953033447, "perplexity": 1652.428667123196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00403.warc.gz"} |
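The same computation as a short Python sketch, using the constants quoted above:

```python
h = 6.626e-34          # Planck constant (J s)
c = 3.0e8              # speed of light (m/s)
E = 511e3 * 1.6e-19    # photon energy: 511 keV converted to joules

lam = h * c / E        # wavelength from E = hc / lambda
print(f"{lam:.2e} m")  # -> 2.43e-12 m
```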
https://math.stackexchange.com/questions/3086455/bounded-gradient-implies-lipschitz-proof-with-the-mean-value-theorem | # Bounded Gradient implies Lipschitz proof with the mean value theorem
Let $$f:\mathbb{R}^n \to \mathbb{R}$$ with $$|| \nabla f(x)|| \leq M$$ (say it is the Euclidean norm), then f is Lipschitz.
I have seen proofs that do this for the case where $$f:\mathbb{R} \to \mathbb{R}$$ by applying the mean value theorem. I am wondering if there is a proof available that shows how the mean value theorem is applied to the problem for a function from $$f:\mathbb{R}^n \to \mathbb{R}$$?
I don't understand what your problem is. This is exactly the same proof in higher dimensions.
By the mean value theorem we have :
$$\| f(x) - f(y) \| \leq \sup_{x \in \mathbb{R}^n} \| \nabla f(x) \| \|x -y \| \leq M \| x - y\|$$
• I wasn't sure where the first inequality came from, now I see it is from considering the line $(1-t)x + ty$ and then applying the Cauchy-Schwarz inequality – geo17 Jan 24 at 23:03
Here's an approach using the fundamental theorem of calculus and an integral estimate in lieu of the mean value theorem:
Let
$$x, y \in \Bbb R^n; \tag 1$$
let
$$\gamma:[0, 1] \to \Bbb R^n \tag 2$$
be given by
$$\gamma(t) = x + t(y - x); \tag 3$$
then $$\gamma(t)$$ is a line segment 'twixt
$$\gamma(0) = x \; \text{and} \; \gamma(1) = y; \tag 4$$
then
$$f(y) - f(x) = f(\gamma(1)) - f(\gamma(0))$$ $$= \displaystyle \int_0^1 \dfrac{df(\gamma(t))}{dt} \; dt = \int_0^1 \nabla f(\gamma(t)) \cdot \dot \gamma(t) \; dt = \int_0^1 \nabla f(\gamma(t)) \cdot (y - x) \; dt \tag 5$$
therefore, $$\vert f(y) - f(x) \vert = \left \vert \displaystyle \int_0^1 \nabla f(\gamma(t)) \cdot (y - x) \; dt \right \vert \le \displaystyle \int_0^1 \vert \nabla f(\gamma(t)) \vert \vert y - x \vert \; dt \le \vert y - x \vert \int_0^1 M \; dt = M\vert y - x \vert, \tag 6$$
which shows that $$f(x)$$ is in fact globally Lipschitz continuous with Lipschitz constant $$M$$. $$OE\Delta$$.
It will be observed that there is more than one similarity 'twixt this and the MVT approach; both are based on "one-dimensionalizing" the problem by restriction to a path joining $$x$$ and $$y$$, and both exploit the global bound $$\vert \nabla f(x) \vert \le M$$ to obtain the global Lipschitz constant $$M$$. Formally, by way of the mean value theorem we would write
$$f(y) - f(x) = f(\gamma(1)) - f(\gamma(0)) = (f(\gamma(r))'(1 - 0) = (f(\gamma(r))', 0 < r < 1; \tag 7$$
and note that
$$f(\gamma(r))' = \nabla f(\gamma(r)) \cdot \dot \gamma(r) = \nabla f(\gamma(r)) \cdot (y - x); \tag 8$$
if we combine (7) and (8) and take norms, the desired result is obtained. | 2019-04-22 15:59:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9665120840072632, "perplexity": 148.63606704679628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00363.warc.gz"} |
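As a numerical sanity check of the bound (my own example, not from the answers): take f(x) = sin(x1) + sin(x2), whose gradient (cos x1, cos x2) has Euclidean norm at most sqrt(2), and test |f(x) - f(y)| <= sqrt(2) ||x - y|| on random pairs:

```python
import math
import random

def f(x):
    return math.sin(x[0]) + math.sin(x[1])

M = math.sqrt(2)  # sup of ||grad f|| = ||(cos x1, cos x2)|| <= sqrt(2)

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10), random.uniform(-10, 10)]
    y = [random.uniform(-10, 10), random.uniform(-10, 10)]
    # Lipschitz bound from the bounded gradient, with a tiny float tolerance.
    assert abs(f(x) - f(y)) <= M * math.dist(x, y) + 1e-12
print("bound held on all sampled pairs")
```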
http://math.stackexchange.com/questions/181267/determing-if-two-k-subsets-are-disjoint-given-the-product-of-their-elements?answertab=oldest | # Determing if two k subsets are disjoint given the product of their elements
Consider the following problem (phrased with the use of a black box).
You choose $n$ numbers $X = \{x_1,\ldots,x_n\}$ and pass it to a black box that returns a list $Y = \{y_1,\ldots,y_m\}$ where each of the elements is a product of the elements of some $k-subset$ of $X$. That is for every $1 \leq i \leq m$ the black box picks an element $Y_i \in {X \choose k}$ and sets $$y_i = \prod_{p \in Y_i} p.$$
Given that I am able to choose the elements of $X$ I am wondering how should I pick them in order to be able to compute the number of intersecting $(Y_i,Y_j)$ pairs as efficiently as possible.
One thing is to take $X$ to contain only pairwise coprime elements and then check whether the pairs $(y_i,y_j)$ are coprime or not.
I am wondering if there is something even slicker that can be done that would then allow computing the number of intersecting pairs in, say, $O(m)$, or at least faster than $O(m^2)$?
I'm not sure if this answers your question, but you could try making each $x_i=2^{2^{i-1}}$. That is, $X=\{2,4,16,256,\ldots\}$, and each element is the square of the previous one. Now, each of the $y_i$ will be a power of 2, where the binary representation of the exponent will indicate which of the $x_i$ make up the product. I'm not sure what the order of the computation would be to evaluate this, but I imagine that it would be better than prime factorisation.
Thats a neat answer - thanks! However I am afraid that the magnitude of the elements in $X$ would bloat out the complexity of the algorithm :( – Jernej Aug 11 '12 at 9:10 | 2015-08-01 10:29:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8587470650672913, "perplexity": 125.9637082440556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988650.53/warc/CC-MAIN-20150728002308-00154-ip-10-236-191-2.ec2.internal.warc.gz"} |
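The encoding in the answer can be sketched in Python (the helper names are mine): with $x_i = 2^{2^{i-1}}$, every product $y$ is a power of two whose exponent's binary representation records the chosen subset, so two subsets intersect iff the ANDed exponents are nonzero:

```python
def encode(subset_indices):
    # x_i = 2**(2**(i-1)); the product of a subset is 2**mask, where
    # bit (i-1) of mask marks membership of x_i.
    x = lambda i: 2 ** (2 ** (i - 1))
    prod = 1
    for i in subset_indices:
        prod *= x(i)
    return prod

def subsets_intersect(y_i, y_j):
    # Each y is an exact power of two, so its exponent is bit_length() - 1.
    mask_i = y_i.bit_length() - 1
    mask_j = y_j.bit_length() - 1
    return (mask_i & mask_j) != 0

a = encode([1, 2, 3])  # {x1, x2, x3} -> 2**(1+2+4) = 2**7
b = encode([3, 4])     # {x3, x4}     -> 2**(4+8)  = 2**12
c = encode([4, 5])     # {x4, x5}     -> 2**(8+16) = 2**24
print(subsets_intersect(a, b))  # -> True  (both contain x3)
print(subsets_intersect(a, c))  # -> False
```

This illustrates the downside raised in the comment as well: the $x_i$ grow doubly exponentially, so the bit-lengths of the products blow up quickly with $n$.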
https://mersenneforum.org/showthread.php?s=59cc33f6fee8f2ce34c123eecaee5ed7&t=18734 | mersenneforum.org For science!
2013-10-23, 04:27 #1 firejuggler "Vincent" Apr 2010 Over the rainbow 22×7×103 Posts For science! Planetary system containing 7 planet candidates found in Kepler's data http://arxiv.org/pdf/1310.5912v1.pdf Code: We report the discovery of 14 new transiting planet candidates in the Kepler field from the Planet Hunters citizen science program. None of these candidates overlap with Kepler Objects of Interest (KOIs), and five of the candidates were missed by the Kepler Transit Planet Search (TPS) algorithm. The new candidates have periods ranging from 124 − 904 days, eight residing in their host star’s habitable zone (HZ) and two (now) in multiple planet systems. We report the discovery of one more addition to the six planet candidate system around KOI-351, marking the first seven planet candidate system from Kepler. Additionally, KOI-351 bears some resemblance to our own solar system, with the inner five planets ranging from Earth to mini-Neptune radii and the outer planets being gas giants; however, this system is very compact, with all seven planet candidates orbiting ≲ 1 AU from their host star. We perform a numerical integration of the orbits and show that the system remains stable for over 100 million years. A Hill stability test also confirms the feasibility for the dynamical stability of the KOI-351 system. *KOI => Kepler Object of Interest, mainly stars. Last fiddled with by firejuggler on 2013-10-23 at 04:29
2013-10-23, 17:25 #2 Brian-E "Brian" Jul 2007 The Netherlands 2·11·149 Posts One of my strongest lifetime wishes is to be still around when we encounter the first evidence of intelligent life elsewhere.
2013-10-23, 17:50 #3
R.D. Silverman
"Bob Silverman"
Nov 2003
North of Boston
1D5416 Posts
Quote:
Originally Posted by Brian-E One of my strongest lifetime wishes is to be still around when we encounter the first evidence of intelligent life elsewhere.
I'd settle for evidence of intelligent life here on Earth.
2013-10-23, 17:54 #4
chalsall
If I May
"Chris Halsall"
Sep 2002
2·72·113 Posts
Quote:
Originally Posted by R.D. Silverman I'd settle for evidence of intelligent life here on Earth.
2013-10-23, 17:58 #5
R.D. Silverman
"Bob Silverman"
Nov 2003
North of Boston
22×1,877 Posts
Quote:
I don't exist. I am only a figment of my imagination.
2013-10-23, 18:26 #6
davar55
May 2004
New York City
5·7·112 Posts
Quote:
Originally Posted by R.D. Silverman I don't exist. I am only a figment of my imagination.
So is the rest of the Universe. Nothing exists.
2013-10-23, 19:40 #7
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
Quote:
Originally Posted by davar55 So is the rest of the Universe. Nothing exists.
Ah! You're getting it!
(Insert shameless ROFLMAO @ own cleverness here.)
2013-10-23, 23:12 #8
Uncwilly
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
10,891 Posts
Quote:
Originally Posted by Brian-E One of my strongest lifetime wishes is to be still around when we encounter the first evidence of intelligent life elsewhere.
Wish not granted. Referencing the corrupt a wish thread.
2013-10-23, 23:20 #9 firejuggler "Vincent" Apr 2010 Over the rainbow 1011010001002 Posts Wish granted. However, whatever your opinion is on any subject, they have the opposite idea.
2013-10-24, 05:53 #10
LaurV
Romulan Interpreter
"name field"
Jun 2011
Thailand
22×7×367 Posts
Quote:
Originally Posted by firejuggler Wish granted. However, whatever your opinion is on any subject, they have the opposite idea.
Well, my idea will be "let's fight against each other!"
2013-10-24, 13:01 #11 kladner "Kieren" Jul 2011 In My Own Galaxy! 2·3·1,693 Posts [QUOTE=LaurV;357260]Well, my idea will be "let's fight against each other!" /QUOTE] Very clever! (I would not have caught the white text if I had not quoted.)
Tue Jan 31 16:57:17 UTC 2023 up 166 days, 14:25, 0 users, load averages: 2.16, 1.62, 1.33 | 2023-01-31 16:57:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24875910580158234, "perplexity": 12253.691966775898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499888.62/warc/CC-MAIN-20230131154832-20230131184832-00062.warc.gz"} |
https://www.gamedev.net/forums/topic/341813-does-anyone-know-a-way-around-virtual-templates/ | # Does anyone know a way around 'virtual templates'?
## Recommended Posts
Let me explain with (much simplified but hopefully still applicable) code:
using namespace std;
//these daft struct represent a type of archive class implementing static(compile-time) polymorphism
template <class T>
struct somebase
{
T i;
somebase() : i(5) { }
};
struct someobject : somebase<int>
{
};
//The following is what I want to achieve but obviously it's not valid
struct image_loader
{
    template <class T> //virtual template
    virtual void load(T& ar, image& img) = 0;
};

struct bitmap_loader : image_loader
{
    template <class T>
    void load(T& ar, image& img);
};

//elsewhere in the land of oZ
Now I apologize for using struct everywhere (it's been a long day). Ok, let me clarify: the somebase and someobject represent a special type of file archive that can be serialized to/from. They use static polymorphism and hence require that they be passed as a template parameter to the image_loader's 'load' member. Basically what I want is to be able to have a stl map (as demonstrated) of image loaders but be able to pass both an image and the archive (as a template param) so the image can be loaded by the correct loader (in this case a bitmap loader). I can't think of an elegant way around the problem (I have tried, honest, but I keep getting back to 'if only virtual templates were allowed' [wink]). Now like I said it's been a long day so I might well be overlooking something silly, easy and obvious as to the solution. Anyway I'm sooo tired so any questions can be answered tomoz [smile] Any help is welcome Thanks Dave
##### Share on other sites
Instead of doing what you're doing, why not try template specialization?
struct image_loader
{
    template <class T> //virtual template
    void load(T& ar, image& img); // this doesn't have to be virtual!
};

template <typename T, typename DerivedType>
void load(DerivedType &d, T &ar, image &img);

// now specialize!!
template<>
void image_loader::load<int, my_image_loader>(my_image_loader &mil, int &ar, image &img)
{
    // in here, use the "mil" reference
}
##### Share on other sites
BTW you don't have to give a definition for your templated function... until you try to use it! In other words, it should be in the same translation unit as the place you use it...
IOW... define the template specialization in the same .cpp file as the place where you use it.
You can do many template specializations... as many as you need.
##### Share on other sites
You have a few options. Let's take a look at some extremely simplified code:
#include <iostream>

struct implementation1 {
    int foo () { return 1; }
};

struct implementation2 {
    int foo () { return 2; }
};

struct interface {
    template < typename implementation_type >
    virtual void bar( implementation_type & i ) = 0;
};

struct interface_implementor : interface {
    template < typename implementation_type >
    virtual void bar( implementation_type & i ) {
        using namespace std;
        cout << i.foo() << endl;
    }
};

int main () {
    interface_implementor ii;
    implementation1 i1;
    implementation2 i2;
    ii.bar( i1 );
    ii.bar( i2 );
}
There are a few ways to refactor this code into something that compiles.
First, we could make all the implementation# classes derive from an interface base. In this case, such a class would look something like:
struct implementation_base {
    virtual int foo () = 0;
};
And then of course our implementation# classes would need to derive from this class:
struct implementation1 : implementation_base {
    int foo () { return 1; }
};
All the interface-related functions are updated into non-templatized versions:
virtual void bar( implementation_base & i ) = 0;
Second, we could approach the problem from the other end - and constrain the interface types instead. Example:
struct interface {
    virtual void do_something_with( int ) = 0;

    template < typename implementation_type >
    void bar( implementation_type & i ) {
        do_something_with( i.foo() );
    }
};

struct interface_implementor : interface {
    void do_something_with( int i ) {
        using namespace std;
        cout << i << endl;
    }
};
There are more options, but they start getting complex. When both classes need to be very flexible, you enter the realm of needing multiple dispatch. Multiple dispatch can be a pain in languages which only support single dispatch directly, especially if they're statically typed. Go figure that C++ would be a statically typed language with no direct support for multiple dispatch.
There are workarounds, but a full coverage of multiple dispatch... we'll call it "out of the scope of this post" and direct you to google for more information on those situations :-).
Let's take your (actual) example now, your image loaders. We can apply either of the two mentioned refactorings to these classes.
Assuming we want to keep the main classes looking the same, we can assume we're going to need to pass our archive through a virtual function at some point. In order to deal with this, we need to make all our archives look "the same" - by which I mean, manipulable through a single base. Assuming we can't control all the archive types passed to *_loader, we need to make a wrapper. Here's an example one - it consists of two parts, an interface, and a templatized implementation of that interface. For this example, I'll assume the only operations we perform on "ar" are reading bytes. Here's the end product:
struct archive_interface {
    virtual char get_me_a_byte() = 0;
};

template < typename T >
class archive : public archive_interface {
    T & ar;
public:
    archive( T & ar ) : ar( ar ) {}
    virtual char get_me_a_byte() {
        char byte;
        ar >> byte;
        return byte;
    }
};
For *_loader, we put in their first "real" function - "load_impl", only with slightly different parameters - and let's make it protected. This is the "ugly" version which does everything, there's no real reason to make it available to the public.
Here's what this version can look like:
class image_loader {
public:
    /* Doesn't compile yet:
     * template < class T >
     * virtual void load( T & ar , image & img ) = 0;
     */
protected:
    virtual void load_impl( archive_interface & ar , image & img ) = 0;
};

class dummy_example_image_loader : public image_loader {
public:
    /* Doesn't compile yet:
     * template < class T >
     * virtual void load( T & ar , image & img ) {}
     */
protected:
    virtual void load_impl( archive_interface & ar , image & img ) {
        int width  = (ar.get_me_a_byte() << 8) + (ar.get_me_a_byte() << 0);
        int height = (ar.get_me_a_byte() << 8) + (ar.get_me_a_byte() << 0);
        for ( .. ; a_long_time ; .. ) {
            /* add data from ar to img */
        }
    }
};
Okay, so now what? We still don't have anything for the general public!
This part's easy. We've implemented all the per-class stuff within load_impl - for both image_loader and all its children, the function "load" will look the same - it will simply create an archive wrapper to pass to load_impl.
So does it need to be virtual? No! It will call virtual functions to take care of that. Remove the commented version of load found within dummy_example_image_loader, uncomment the one in image_loader, and implement it:
template < class T >
void load( T & ar , image & img ) {
    archive< T > ar_w( ar );
    load_impl( ar_w , img );
}
[Edited by - MaulingMonkey on August 27, 2005 11:42:10 PM]
##### Share on other sites
I'm not sure exactly what you are asking, because I haven't programmed much lately. However I get the idea that it might be the problem that I had once, and somehow solved.
I had a file that stored data, some of it was even nested using braces. I needed a way to read it into objects. Not only that but I wanted to be able to have the data format flexible because I didn't want to add lots of stuff when default parameters could be used.
Ok, I wanted to be able to load up a struct S whenever one was found in the file. Ok, one is found; now I need to read in its attributes, but not all are present. To make it simpler, everything was in this format: % name value
I'm not sure if the % was really needed or if it was just to help with the nesting, anyway.
I had a base struct Attribute with no template stuff, just a string for the attribute name and a virtual Load function. Then I had a templated derived class. It had a default value of type T, a current value of T (actually a stack, as it supported nesting) and a conversion operator to type T. It implemented Load by using the input stream operator of T into the current value.
So I would have TemplatedAttribute<A> AttributeA(defaultA); // and similar for others. The constructor added it to a std::map or something like that.
Then in the constructor for S I would do this:
a=AttributeA;
b=AttributeB;
c=AttributeC;
Somewhere in between there is a state machine that handles calling Load on the right Attribute, by matching its stored string against the name of the attribute found in the file.
So in summary: two classes, one derived from the other. The base class has a virtual function, then the derived implements it in terms of T. So the std::map never interacts with T because it operates through pointers to the base, but you never have to do any casting, because where you use the data you have access to the actual TemplatedAttribute object. Hope that this was relevant.
##### Share on other sites
Great solutions people!!!
I did actually get a working solution last night but scrapped it - something involving static function variables, or static constants, or something; anyway it wasn't very nifty (damn ugly too). The use of templates meant the compiler was creating too many specializations, creating unnecessary bloat - or something like that.[smile]
@Verg - If I understand you correctly, that means creating a new specialization for each type, which would prevent the user adding new types (which is half the point of doing things this way).
@MaulingMonkey - woah! What a lot to read[wink], but well worth it. The class structures are pretty fixed (there is some flexibility) but they are based on older code that is pretty integrated under-the-hood.
I am very impressed with your wrapper idea (since it allows the class structures to remain the same) and have adapted the idea without any trouble to my actual code - 'et voila, one v-happy man!'
I made a complete wrapper implementing all the C++ primitive data types and moved the wrapper into my code base, as I think it complements my archive/stream/serialization library very well.
@Glak - Interesting to read although I was unable to draw anything I could apply to my code (you seem to have had a more involved problem).
I did some searching on [google], search term 'c++ virtual-templates', and found some interesting info on the problem. I read that possibly the reason C++ doesn't support them is the lack of ideas about how to implement them (obviously it would be o-so-difficult to add efficiently).
Appreciate all the help
Dave
##### Share on other sites
Looks more like a problem for runtime polymorphism than compile time polymorphism. Specifically, if the archive type was itself polymorphic, you could take it by base pointer instead of template parameter, and the whole thing would be terribly simple.
##### Share on other sites
True, that would be easier, although it has two downsides:
First, it's less efficient: with compile-time polymorphism the compiler may optimize and inline calls down to a simple stream operation such as 'ofstream << t'.
Second, ideally the archive class would need 'virtual templates' (the very problem I had above) in order to serialize class types too, and static polymorphism allowed me to implement that easily.
edit: although I'm aware that using a runtime polymorphic wrapper impacts performance (minimal though it is), the archive classes weren't actually intended for this sort of use.
The downside of course to using polymorphism at compile time is that there is no non-template base class to pass to functions.
##### Share on other sites
Quote:
Original post by dmatter
@MaulingMonkey - woah! What a lot to read[wink], but well worth it. The class structures are pretty fixed (there is some flexibility) but they are based on older code that is pretty integrated under-the-hood.

I am very impressed with your wrapper idea (since it allows the class structures to remain the same) and have adapted the idea without any trouble to my actual code - 'et voila, one v-happy man!'

I made a complete wrapper implementing all the C++ primitive data types and moved the wrapper into my code base as I think it complements my archive/stream/serialization library very well.
Glad to hear it's been of help :-).
Quote:
I did some searching on [google], search term 'c++ virtual-templates', and found some interesting info on the problem. I read that possibly the reason C++ doesn't support them is the lack of ideas about how to implement them (obviously it would be o-so-difficult to add efficiently).
I'm aware of at least one way it could be implemented that wouldn't be too hard I believe (specializations may make things a pain), although it would necessitate linking together the linker and compiler stages more than most compiler implementors are probably ready to accept, unless standardized.
Also, the possibility of introducing templatized virtuals is at least being evaluated by the standards committee, although AFAIK no version has been decided upon as of yet.
I've got a lot of information shelved on multiple dispatch (and potential implementations within C++), I'm hoping to implement some wrappers to implement the concept within a library. It's complex enough that a single method is not going to be enough, I'm convinced. In order to make my code nice and extensible, I've been delving into techniques for variable numbers of template arguments - and revamping my development environment at the same time has delayed all this.
https://www.x-mol.com/paper/phys/tag/70 | • J. Plant Growth. Regul. (IF 2.179) Pub Date : 2020-01-27
Jiayang Xu, Yuyi Zhou, Zicheng Xu, Zheng Chen, Liusheng Duan
Abstract Coronatine (COR) is a phytotoxin produced by Pseudomonas syringae and is a functional analogue of the bioactive hormone JA-Ile, which is widely involved in plant defence responses. In this study, we explored the effects of exogenous applications of COR on tobacco plants under polyethylene glycol-induced drought stress. Compared with control (CK), COR-treated tobacco plants exhibited higher leaf relative water content and better photosynthetic performance under drought exposure. Ultrastructural examination revealed that drought led to stomatal closure and disorganization of granum stacking in the chloroplasts (with obvious accumulation of plastoglobuli), and mitochondria in the CK samples presented injured cristae. In the leaf tissue of the COR-treated plants, regularly stacked granum thylakoids, few plastoglobuli and intact mitochondrial membranes and cristae were observed. Totals of 1803 and 6207 differentially expressed genes (DEGs) were identified between the samples from the COR-treated and CK plants under well-watered and drought conditions. Functional annotation analysis revealed that these DEGs were involved mainly in plant hormone signal transduction, cellular carbohydrate metabolic processes and photosynthesis processes. Six hundred forty transcription factor genes were also identified among the DEGs. This study provides a global view of COR-induced drought stress tolerance in tobacco from both physiological and transcriptional aspects.
Updated: 2020-01-27
• J. Plant Growth. Regul. (IF 2.179) Pub Date : 2020-01-27
Ambekar Nareshkumar, Sindhu Subbarao, Amarnatha Reddy Vennapusa, Vargheese Ashwin, Reema Banarjee, Mahesh J. Kulkarni, Vemanna S. Ramu, Makarla Udayakumar
Abstract Detoxification of reactive carbonyl compounds (RCC) is crucial to sustain cellular activity to improve plant growth and development. Seedling growth is highly affected by accumulation of RCC under stress. We report non-enzymatic, enzymatic mechanisms of detoxification of RCC in the cucumber, tobacco and rice seedling systems exposed to glucose, NaCl, methyl viologen (MV) induced oxidative stress. The cucumber seedlings exposed to carbonyl stress had higher levels of malondialdehyde (MDA), protein carbonyls (PCs) and advanced glycation end-product N-carboxymethyl-lysine (AGE-CML) that negatively affected the seedling growth. The overexpression of enzyme encoding aldo-keto reductase-1 (AKR1) in tobacco and rice showed detoxification of RCC, MDA and methylglyoxal (MG) with improved seedling growth under glucose, NaCl and MV-induced oxidative stress. Further, small molecules like acetylsalicylic acid (ASA), aminoguanidine (AG), carnosine (Car), curcumin (Cur) and pyridoxamine (PM) showed detoxification of RCC non-enzymatically and rescued the cucumber seedling growth from glucose, NaCl and MV-stress. In autotrophically grown rice seedlings these molecules substantially improved seedling growth under MV-induced oxidative stress. Seedlings treated with the small molecules sustained higher guaiacol peroxidase (GPX) enzyme activity signifying the role of small molecules in reducing carbonyl stress-induced protein inactivation and AGE-CML protein modifications. The results showed that besides enzymatic detoxification of RCC, the small molecules also could reduce cytotoxic effect of RCC under stress. The study demonstrates that small molecules are attractive compounds to improve the seedling growth under stress conditions.
Updated: 2020-01-27
• Plant Cell Rep. (IF 3.499) Pub Date : 2020-01-27
Priyanka Jha, Sergio J. Ochatt, Vijay Kumar
Abstract Key message This review summarizes recent knowledge on functions of WUS and WUS-related homeobox (WOX) transcription factors in diverse signaling pathways governing shoot meristem biology and several other aspects of plant dynamics. Abstract Transcription factors (TFs) are master regulators involved in controlling different cellular and biological functions as well as diverse signaling pathways in plant growth and development. WUSCHEL (WUS) is a homeodomain transcription factor necessary for the maintenance of the stem cell niche in the shoot apical meristem, the differentiation of lateral primordia, plant cell totipotency and other diverse cellular processes. Recent research about WUS has uncovered several unique features including the complex signaling pathways that further improve the understanding of vital network for meristem biology and crop productivity. In addition, several reports bridge the gap between WUS expression and plant signaling pathway by identifying different WUS and WUS-related homeobox (WOX) genes during the formation of shoot (apical and axillary) meristems, vegetative-to-embryo transition, genetic transformation, and other aspects of plant growth and development. In this respect, the WOX family of TFs comprises multiple members involved in diverse signaling pathways, but how these pathways are regulated remains to be elucidated. Here, we review the current status and recent discoveries on the functions of WUS and newly identified WOX family members in the regulatory network of various aspects of plant dynamics.
Updated: 2020-01-27
• J. High Energy Phys. (IF 5.833) Pub Date : 2020-01-21
Patrick Draper, Szilard Farkas
Abstract The swampland distance conjecture (SDC) addresses the ability of effective field theory to describe distant points in moduli space. It is natural to ask whether there is a local version of the SDC: is it possible to construct local excitations in an EFT that sample extreme regions of moduli space? In many cases such excitations exhibit horizons or instabilities, suggesting that there are bounds on the size and structure of field excitations that can be achieved in EFT. Static bubbles in ordinary Kaluza-Klein theory provide a simple class of examples: the KK radius goes to zero on a smooth surface, locally probing an infinite distance point, and the bubbles are classically unstable against radial perturbations. However, it is also possible to stabilize KK bubbles at the classical level by adding flux. We study the impact of imposing the Weak Gravity Conjecture (WGC) on these solutions, finding that a rapid pair production instability arises in the presence of charged matter with q/m ≳ 1. We also analyze 4d electrically charged dilatonic black holes. Small curvature at the horizon imposes a bound log(M_BH) ≳ |∆𝜙|, independent of the WGC, and the bound can be strengthened if the particle satisfying the WGC is sufficiently light. We conjecture that quantum gravity in asymptotically flat space requires a general bound on large localized moduli space excursions of the form |∆𝜙| ≲ |log(RΛ)|, where R is the size of the minimal region enclosing the excitation and Λ^{-1} is the short-distance cutoff on local EFT. The bound is qualitatively saturated by the dilatonic black holes and Kaluza-Klein monopoles.
Updated: 2020-01-27
• J. High Energy Phys. (IF 5.833) Pub Date : 2020-01-21
Jesse F. Giron, Richard F. Lebed, Curtis T. Peterson
Abstract We incorporate fine-structure corrections into the dynamical diquark model of multiquark exotic hadrons. These improvements include effects due to finite diquark size, spin-spin couplings within the diquarks, and most significantly, isospin-dependent couplings in the form of pionlike exchanges assumed to occur between the light quarks within the diquarks. Using a simplified two-parameter interaction Hamiltonian, we obtain fits in which the isoscalar J^PC = 1^{++} state — identified as the X(3872) — appears naturally as the lightest exotic (including all states that are predicted by the model but have not yet been observed), while the closed-charm decays of Zc(3900) and Zc(4020) prefer J/𝜓 and hc modes, respectively, in accord with experiment. We explore implications of this model for the excited tetraquark multiplets and the pentaquarks.
Updated: 2020-01-27
• Plant Mol. Biol. (IF 3.928) Pub Date : 2020-01-27
Wenfang Guo, Gangqiang Li, Nan Wang, Caifeng Yang, Yanan Zhao, Huakang Peng, Dehu Liu, Sanfeng Chen
Key message Overexpression of K2-NhaD in transgenic cotton resulted in phenotypes with strong salinity and drought tolerance in greenhouse and field experiments, increased expression of stress-related genes, and improved regulation of metabolic pathways, such as the SOS pathway. Abstract Drought and salinity are major abiotic stressors which negatively impact cotton yield under field conditions. Here, a plasma membrane Na+/H+ antiporter gene, K2-NhaD, was introduced into upland cotton R15 using an Agrobacterium tumefaciens-mediated transformation system. Homozygous transgenic lines K9, K17, and K22 were identified by PCR and glyphosate-resistance. TAIL-PCR confirmed that T-DNA carrying the K2-NhaD gene in transgenic lines K9, K17 and K22 was inserted into chromosome 3, 19 and 12 of the cotton genome, respectively. Overexpression of K2-NhaD in transgenic cotton plants grown in greenhouse conditions and subjected to drought and salinity stress resulted in significantly higher relative water content, chlorophyll, soluble sugar, proline levels, and SOD, CAT, and POD activity, relative to non-transgenic plants. The expression of stress-related genes was significantly upregulated, and this resulted in improved regulation of metabolic pathways, such as the salt overly sensitive pathway. K2-NhaD transgenic plants growing under field conditions displayed strong salinity and drought tolerance, especially at high levels of soil salinity and drought. Seed cotton yields in transgenic line were significantly higher than in wild-type plants. In conclusion, the data indicate that K2-NhaD transgenic lines have great potential for the production of stress-tolerant cotton under field conditions.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-11-21
Dahang Yu, Zhe Wang, Kellye A. Cupp-Sutton, Xiaowen Liu, Si Wu
Abstract Post-translational modifications (PTMs) play critical roles in biological processes and have significant effects on the structures and dynamics of proteins. Top-down proteomics methods were developed for and applied to the study of intact proteins and their PTMs in human samples. However, the large dynamic range and complexity of human samples makes the study of human proteins challenging. To address these challenges, we developed a 2D pH RP/RPLC-MS/MS technique that fuses high-resolution separation and intact protein characterization to study the human proteins in HeLa cell lysate. Our results provide a deep coverage of soluble proteins in human cancer cells. Compared to 225 proteoforms from 124 proteins identified when 1D separation was used, 2778 proteoforms from 628 proteins were detected and characterized using our 2D separation method. Many proteoforms with critically functional PTMs including phosphorylation were characterized. Additionally, we present the first detection of intact human GcvH proteoforms with rare modifications such as octanoylation and lipoylation. Overall, the increase in the number of proteoforms identified using 2DLC separation is largely due to the reduction in sample complexity through improved separation resolution, which enables the detection of low-abundance PTM-modified proteoforms. We demonstrate here that 2D pH RP/RPLC is an effective technique to analyze complex protein samples using top-down proteomics.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-09-11
Kevin A. Janssen, Mariel Coradin, Congcong Lu, Simone Sidoli, Benjamin A. Garcia
Abstract The analysis of histone post-translational modifications (PTMs) by mass spectrometry (MS) has been critical to the advancement of the field of epigenetics. The most sensitive and accurate workflow is similar to the canonical proteomics analysis workflow (bottom-up MS), where histones are digested into short peptides (4-20 aa) and quantitated in extracted ion chromatograms. However, this limits the ability to detect even very common co-occurrences of modifications on histone proteins, preventing biological interpretation of PTM crosstalk. By digesting with GluC rather than trypsin, it is possible to produce long polypeptides corresponding to intact histone N-terminal tails (50-60 aa), where most modifications reside. This middle-down MS approach is used to study distant PTM co-existence. However, the most sensitive middle-down workflow uses weak cation exchange-hydrophilic interaction chromatography (WCX-HILIC), which is less robust than conventional reversed-phase chromatography. Additionally, since the buffer systems for middle-down and bottom-up proteomics differ substantially, it is cumbersome to toggle back and forth between both experimental setups on the same LC system. Here, we present a new workflow using porous graphitic carbon (PGC) as a stationary phase for histone analysis where bottom-up and middle-down sized histone peptides can be analyzed simultaneously using the same reversed-phase buffer setup. By using this protocol for middle-down sized peptides, we identified 406 uniquely modified intact histone tails and achieved a correlation of 0.85 between PGC and WCX-HILIC LC methods. Together, our method facilitates the analysis of single and combinatorial histone PTMs with much simpler applicability for conventional proteomics labs than the state-of-the-art middle-down MS.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-12-02
Zhijie Wu, Yutong Jin, Bifan Chen, Morgan K. Gugger, Chance L. Wilkinson-Johnson, Timothy N. Tiambeng, Song Jin, Ying Ge
Abstract Reversible phosphorylation plays critical roles in cell growth, division, and signal transduction. Kinases which catalyze the transfer of γ-phosphate groups of nucleotide triphosphates to their substrates are central to the regulation of protein phosphorylation and are therefore important therapeutic targets. Top-down mass spectrometry (MS) presents unique opportunities to study protein kinases owing to its capabilities in comprehensive characterization of proteoforms that arise from alternative splicing, sequence variations, and post-translational modifications. Here, for the first time, we developed a top-down MS method to characterize the catalytic subunit (C-subunit) of an important kinase, cAMP-dependent protein kinase (PKA). The recombinant PKA C-subunit was expressed in Escherichia coli and successfully purified via his-tag affinity purification. By intact mass analysis with high resolution and high accuracy, four different proteoforms of the affinity-purified PKA C-subunit were detected, and the most abundant proteoform was found containing seven phosphorylations with the removal of N-terminal methionine. Subsequently, the seven phosphorylation sites of the most abundant PKA C-subunit proteoform were characterized simultaneously using tandem MS methods. Four sites were unambiguously identified as Ser10, Ser11, Ser18, and Ser30, and the remaining phosphorylation sites were localized to Ser2/Ser3, Ser358/Thr368, and Thr[215-224]Tyr in the PKA C-subunit sequence with a 20mer 6xHis-tag added at the N-terminus. Interestingly, four of these seven phosphorylation sites were located at the 6xHis-tag. Furthermore, we have performed dephosphorylation reaction by Lambda protein phosphatase and showed that all phosphorylations of the recombinant PKA C-subunit phosphoproteoforms were removed by this phosphatase.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-08-19
Xin Yan, Lingjun Li, Chenxi Jia
Abstract Methylation of proteins has considerable impacts on physiological processes including signal transduction, DNA damage repair, transcriptional regulation, gene activation, and inhibition of gene expression. However, the traditional proteomics-based approach suffers from limited identification rates of these critical methylation sites on endogenous peptides. In this work, a peptidomics-based workflow was established to discover and characterize the global methylome of endogenous peptides in human cells. The reliability of our strategy was validated by methyl-SILAC labeling, resulting in 83% true-positive identifications in the HeLa cell line. We applied this approach to seven human cell lines, and 700 methylated forms on 646 putative methylation sites were identified in total, with over 61% of the methylation sites being newly identified. This study provides a complementary strategy for a traditional proteomics-based approach that enables identification of missing methylation sites and creates a first methylome draft of endogenous peptides of human cell lines, offering a valuable resource for in-depth studies of biological functions of methylated endogenous peptides.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-05-30
Matthew V. Holt, Tao Wang, Nicolas L. Young
Abstract Histone post-translational modifications (PTMs) have been intensively investigated due to their essential function in eukaryotic genome regulation. Histone modifications have been effectively studied using modified bottom-up proteomics approaches; however, the methods often do not capture single-molecule combinations of PTMs (proteoforms) that mediate known and expected biochemical mechanisms. Both middle-down mass spectrometry (MS) and top-down MS quantitation of H4 proteoforms present viable access to this important information. Histone H4 middle-down has previously avoided GluC digestion due to complex digestion products and interferences; however, the common AspN digestion cleaves at amino acid 23, disconnecting K31ac from other PTMs. Here, we demonstrate the effective use of GluC-based middle-down quantitation and compare it to top-down-based quantitation of proteoforms. Despite potential interferences in the m/z space, the proteoforms arising from all three GluC products (E52, E53, and E63) and intact H4 are chromatographically resolved and successfully analyzed in a single LC–MS analysis. Quantitative results and associated analytical metrics are compared between the different analytes of a single sample digested to different extents to reveal general concordance as well as the relative biases and complementarity of each approach. There is moderate proteoform discordance between digestion products (e.g., E52 and E53); however, each digestion product exhibits high concordance, regardless of digestion time. Under the conditions used, the GluC products are better chromatographically resolved yet show greater variance than the top-down quantitation that are more extensively sampled for MS2. GluC-based middle-down of H4 is thus viable. Both top-down and middle-down approaches have comparable quantitation capacity and are complementary.
Updated: 2020-01-27
• J. Am. Soc. Mass Spectrom. (IF 3.202) Pub Date : 2019-07-08
Yusi Cui, Ka Yang, Dylan Nicholas Tabang, Junfeng Huang, Weiping Tang, Lingjun Li
Abstract Simultaneous enrichment of glyco- and phosphopeptides will benefit the studies of biological processes regulated by these posttranslational modifications (PTMs). It will also reveal potential crosstalk between these two ubiquitous PTMs. Unlike custom-designed multifunctional solid phase extraction (SPE) materials, operating strong anion exchange (SAX) resin in electrostatic repulsion-hydrophilic interaction chromatography (ERLIC) mode provides a readily available strategy to analytical labs for enrichment of these PTMs for subsequent mass spectrometry (MS)-based characterization. However, the choice of mobile phase has largely relied on empirical rules from hydrophilic interaction chromatography (HILIC) or ion-exchange chromatography (IEX) without further optimization and adjustments. In this study, ten mobile phase compositions of ERLIC were systematically compared; the impact of multiple factors including organic phase proportion, ion pairing reagent, pH, and salt on the retention of glycosylated and phosphorylated peptides was evaluated. This study demonstrated good enrichment of glyco- and phosphopeptides from the nonmodified peptides in a complex tryptic digest. Moreover, the enriched glyco- and phosphopeptides elute in different fractions by orthogonal retention mechanisms of hydrophilic interaction and electrostatic interaction in ERLIC, maximizing the LC-MS identification of each PTM. The optimized mobile phase can be adapted to the ERLIC HPLC system, where the high resolution in separating multiple PTMs will benefit large-scale MS-based PTM profiling and in-depth characterization.
Updated: 2020-01-27
• Anal. Bioanal. Chem. (IF 3.286) Pub Date : 2020-01-27
Carmen Gondhalekar, Eva Biela, Bartek Rajwa, Euiwon Bae, Valery Patsekin, Jennifer Sturgis, Cole Reynolds, Iyll-Joon Doh, Prasoon Diwakar, Larry Stanker, Vassilia Zorba, Xianglei Mao, Richard Russo, J. Paul Robinson
Abstract This study explores the adoption of laser-induced breakdown spectroscopy (LIBS) for the analysis of lateral-flow immunoassays (LFIAs). Gold (Au) nanoparticles are standard biomolecular labels among LFIAs, typically detected via colorimetric means. A wide diversity of lanthanide-complexed polymers (LCPs) are also used as immunoassay labels but are inapt for LFIAs due to lab-bound detection instrumentation. This is the first study to show the capability of LIBS to transition LCPs into the realm of LFIAs, and one of the few to apply LIBS to biomolecular label detection in complete immunoassays. Initially, an in-house LIBS system was optimized to detect an Au standard through a process of line selection across acquisition delay times, followed by determining limit of detection (LOD). The optimized LIBS system was applied to Au-labeled Escherichia coli detection on a commercial LFIA; comparison with colorimetric detection yielded similar LODs (1.03E4 and 8.890E3 CFU/mL respectively). Optimization was repeated with lanthanide standards to determine if they were viable alternatives to Au labels. It was found that europium (Eu) and ytterbium (Yb) may be more favorable biomolecular labels than Au. To test whether Eu-complexed polymers conjugated to antibodies could be used as labels in LFIAs, the conjugates were successfully applied to E. coli detection in a modified commercial LFIA. The results suggest interesting opportunities for creating highly multiplexed LFIAs. Multiplexed, sensitive, portable, and rapid LIBS detection of biomolecules concentrated and labeled on LFIAs is highly relevant for applications like food safety, where in-field food contaminant detection is critical. Graphical abstract
Updated: 2020-01-27
• Anal. Bioanal. Chem. (IF 3.286) Pub Date : 2020-01-27
Jia Liu, Olga Chesnokova, Irina Oleinikov, Yuhao Qiang, Andrew V. Oleinikov, E Du
Abstract Sequestration of Plasmodium falciparum–infected erythrocytes (IEs) is responsible for the pathophysiology of placental malaria, leading to serious complications such as intrauterine growth restriction and low birth weight. However, it is an experimental challenge to study the biology of human placenta. Conventional cell culture–based in vitro placental models rely on immunostaining techniques and high-magnification microscopy is limited in providing real-time quantitative analysis. Impedimetric sensing in combination with cell culture may offer a useful tool. In this paper, we report that real-time label-free measurement of cellular electrical impedance using xCELLigence technology can be used to quantify the proliferation, syncytial fusion, and long-term response of BeWo cells to IEs cytoadhesion. Specifically, we optimized key experimental parameters of cell seeding density and concentration of forskolin, a compound used to promote cell syncitiation, based on electrical signals and immunostaining results. Prolonged time of infection with IEs that led to cell-cell junction vanishment in BeWo cells and release of inflammatory cytokines were monitored in real time by continuous change in electrical impedance. The results suggest that the impedimetric technique is sensitive and can offer new opportunities for the study of cellular responses of trophoblast cells to IEs. The developed system can provide potentially a high-throughput screening tool of anti-adhesion or anti-inflammatory drugs for placental malaria infections.
Updated: 2020-01-27
• Anal. Bioanal. Chem. (IF 3.286) Pub Date : 2020-01-27
Cristina Muñoz-San Martín, María Pedrero, Maria Gamella, Ana Montero-Calle, Rodrigo Barderas, Susana Campuzano, José M. Pingarrón
Abstract Proteases are involved in cancer, taking part in immune (dis)regulation, malignant progression and tumour growth. Recently, it has been found that expression levels of one of the members of the serine protease family, trypsin, are upregulated in human cancer cells of several organs, and trypsin is therefore considered a specific cancer biomarker. Given the great attention that electrochemical peptide sensors currently receive, in this work we propose a novel electroanalytical strategy for the determination of this important biomolecule. It involves the immobilization of a short synthetic peptide sequence, dually labelled with fluorescein isothiocyanate (FITC) and biotin, onto neutravidin-modified magnetic beads (MBs), followed by peptide digestion with trypsin. Upon peptide disruption, the modified MBs were incubated with a specific fluorescein Fab fragment antibody labelled with horseradish peroxidase (HRP-antiFITC) and magnetically captured on the surface of a screen-printed carbon electrode (SPCE), where amperometric detection was performed using the hydroquinone (HQ)/HRP/H2O2 system. The biosensor exhibited good reproducibility (RSD 3.4%, n = 10) and specificity against other proteins and proteases commonly found in biological samples. This work reports the first quantitative data on trypsin expression in human cell lysates. The developed bioplatform was used for the direct determination of this protease in lysates from pancreatic cancer, cervix carcinoma and kidney cells in only 3 h and 30 min using low amounts (~ 0.1 μg) of raw extracts. Graphical abstract
Updated: 2020-01-27
• Anal. Bioanal. Chem. (IF 3.286) Pub Date : 2020-01-27
Zhenqing Li, Xiaoxiao Wang, Jin Chen, Chunxian Tao, Dawei Zhang, Yoshinori Yamaguchi
Abstract Fluorescent microspheres (FMs) are widely employed in diagnostics and life sciences research. Here, we investigated the effects of capillary coating, polymer concentration, electric field strength, and sample concentration on the separation performance of 1.0 μm FMs in hydroxyethyl cellulose (HEC) by capillary electrophoresis (CE). Results showed that (1) capillary coating could enhance the fluorescence signal. (2) For HEC of the same molecular weight, the higher the HEC concentration, the later the first peak appears in the electropherogram. (3) When FMs are diluted, increasing the electric field strength can enhance the migration speed and reduce the aggregation of FMs. (4) The number of FMs calculated is close to the theoretical value when the sample is diluted 10,000 times. The optimum conditions for CE were as follows: 6 cm/8 cm effective and total length of the coated capillary, 0.3% HEC (1300 k), and 300 V/cm electric field strength. Such a study is helpful for the development of an FM counting system. Graphical abstract
Updated: 2020-01-27
• Anal. Bioanal. Chem. (IF 3.286) Pub Date : 2020-01-27
Oleg L. Bodulev, Konstantin M. Burkin, Eugene E. Efremov, Ivan Yu. Sakharov
Abstract Nowadays, considerable efforts are focused on advancing DNA detection methods, which are extremely important in clinical diagnostics, pathogen determination, gene therapy, and forensic analysis. A one-pot sensitive microplate-based chemiluminescent assay coupled with catalytic hairpin assembly (CHA) amplification for detection of a 35-mer DNA oligonucleotide was developed. To improve the assay sensitivity, a triple amplification strategy based on the application of CHA (1), streptavidin-polyperoxidase conjugate (Stp-polyHRP) (2), and an enhanced chemiluminescent reaction (3) was used. The one-pot format of the assay, where all steps of the DNA determination are performed in the same well without transfer of samples from one test tube to another, increased its precision. The proposed assay detected the target DNA in the fM range and distinguished the target DNA from related DNAs, demonstrating its high sensitivity and high selectivity. Moreover, the assay was applied successfully for the quantitative determination of the target in spiked samples of human plasma. A microplate format of the assay was convenient for the analysis of a large number of samples. This study provides a prospective tool for DNA detection. Graphical abstract
Updated: 2020-01-27
• Sports Med. (IF 7.583) Pub Date : 2019-06-28
Scott J. Dankel, Jeremy P. Loenneke
Abstract It is commonly stated that individuals respond differently to exercise even when the same exercise intervention is performed. This has led many researchers to conduct exercise interventions and subsequently categorize individuals into different responder categories to determine what causes individuals to respond differently. Some methods by which differential responders are categorized include percentile ranks, standard deviations from the mean, and cluster analyses. Notably, each of these methods will result in the presence of differential responders even in the absence of an exercise intervention, indicating that individuals may be categorized based on the presence of random error as opposed to true differences in the exercise response. Here we propose a method by which differential responders can be classified after accounting for the presence of random error that is quantified from a time-matched control group. Individuals who exceed random error from the mean response of the intervention group can be confidently labelled as high and low responders. Importantly, the number of differential responders will be proportional to the ratio of variance in the exercise and control groups. We provide easy-to-follow steps and examples to demonstrate how this technique can identify differential responders to exercise. We also detail the flaws in other classification methods by demonstrating the number of differential responders who would have been classified using the same data set. Our hope is that this method will help to avoid misclassifying individuals based on random error and, in turn, increase the replicability of differential responder studies.
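The classification logic described in this abstract can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' published code: the function name, the use of the control group's change-score standard deviation as the random-error bound, and the symmetric one-error threshold around the intervention-group mean are assumptions for demonstration.

```python
import statistics

def classify_responders(intervention_changes, control_changes):
    """Label each intervention participant as a high, low, or typical responder.

    Random error is estimated as the standard deviation of pre-to-post change
    scores in a time-matched control group, in which no true adaptation is
    expected. Participants whose change exceeds the intervention-group mean
    by more than this error are labelled high or low responders.
    """
    error = statistics.stdev(control_changes)          # random-error estimate
    mean_change = statistics.fmean(intervention_changes)
    labels = []
    for change in intervention_changes:
        if change > mean_change + error:
            labels.append("high")
        elif change < mean_change - error:
            labels.append("low")
        else:
            labels.append("typical")
    return labels
```

For example, with control change scores [1, -1, 2, -2, 0] (SD ≈ 1.58) and intervention change scores [5, 5, 5, 10, 0], only the participants at 10 and 0 fall outside the error band around the mean of 5 and would be labelled high and low responders, respectively.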
Updated: 2020-01-27
• Sports Med. (IF 7.583) Pub Date : 2020-01-27
Hassane Zouhal, Amri Hammami, Jed M. Tijani, Ayyappan Jayavel, Maysa de Sousa, Peter Krustrup, Zouita Sghaeir, Urs Granacher, Abderraouf Ben Abderrahman
Abstract Background Small-sided soccer games (SSSG) are a specific exercise regime with two small teams playing against each other on a relatively small pitch. There is evidence from original research that SSSG exposure provides performance and health benefits for untrained adults. Objectives The aim of this systematic review was to summarize recent evidence on the acute and long-term effects of SSSG on physical fitness, physiological responses, and health indices in healthy untrained individuals and clinical populations. Methods This systematic literature search was conducted in four electronic databases (PubMed, Web of Science, SPORTDiscus) from inception until June 2019. The following key terms (and synonyms searched for by the MeSH database) were included and combined using the operators “AND”, “OR”, “NOT”: ((soccer OR football) AND (“soccer training” OR “football training” OR “soccer game*” OR “small-sided soccer game*”) AND (“physical fitness” OR “physiological adaptation*” OR “physiological response*” OR health OR “body weight” OR “body mass” OR “body fat” OR “bone composition” OR “blood pressure”)). The search syntax initially identified 1145 records. After screening of titles, abstracts, and full texts, 41 studies remained that examined the acute (7 studies) and long-term effects (34 studies) of SSSG-based training on physical fitness, physiological responses, and selected health indices in healthy untrained individuals and clinical populations. Results No training-related injuries were reported in the 41 acute and long-term SSSG studies. Typically, a single session of SSSG lasted 12–20 min (e.g., 3 × 4 min with 3 min rest or 5 × 4 min with 4 min rest) involving 4–12 players (2 vs. 2 to 6 vs. 6) at an intensity ≥ 80% of HRmax. Following a single SSSG session, high cardiovascular and metabolic demands were observed. Specifically, based on the outcomes, the seven acute studies reported average heart rates (HR) ≥ 80% of HRmax (165–175 bpm) and mean blood lactate concentrations exceeding 5 mmol/l (4.5–5.9 mmol/l) after single SSSG sessions. Based on the results of 34 studies (20 with healthy untrained individuals, 10 with unhealthy individuals, and 4 with individuals with obesity), SSSG training lasted between 12 and 16 weeks and was performed 2–3 times per week. SSSG had positive long-term effects on physical fitness (e.g., Yo–Yo IR1 performance), physiological responses including maximal oxygen uptake (VO2max) [+ 7 to 16%], and many health-related markers such as blood pressure (reductions in systolic [− 7.5%] and diastolic [− 10.3%] blood pressure), body composition (decreased fat mass [− 2 to − 5%]), indices of bone health (bone mineral density [+ 5 to 13%]; bone mineral content [+ 4 to 5%]), metabolic profile (LDL-cholesterol [− 15%]), and cardiac function (left-ventricular internal diastolic diameter [+ 8%], end diastolic volume [+ 21%], left-ventricular mass index [+ 18%], and left-ventricular ejection fraction [+ 8%]). Irrespective of age or sex, these health benefits were observed in both untrained individuals and clinical populations. Conclusions In conclusion, findings from this systematic review suggest that acute SSSG may elicit high cardiovascular and metabolic demands in untrained healthy adults and clinical populations. Moreover, this type of exercise is safe, with positive long-term effects on physical fitness and health indices. Future studies are needed examining the long-term effects on physical fitness and physiological adaptations of different types of SSSG training (e.g., 3 vs. 3; 6 vs. 6) in comparison to continuous or interval training in different cohorts.
Updated: 2020-01-27
• Space Sci. Rev. (IF 8.142) Pub Date : 2020-01-27
J. Seon, K.-S. Chae, G. W. Na, H.-K. Seo, Y.-C. Shin, J. Woo, C.-H. Lee, W.-H. Seol, C.-A. Lee, S. Pak, H. Lee, S.-H. Shin, D. E. Larson, K. Hatch, G. K. Parks, J. Sample, M. McCarthy, C. Tindall, Y.-J. Jeon, J.-K. Choi, J.-Y. Park
Abstract The Particle Detector (PD) experiment aboard the geostationary satellite GEO-KOMPSAT-2A (GK2A) measures populations of electrons and positive ions in the Earth’s geostationary orbit at a geographic longitude of $128.2^{\circ}\mathrm{E}$, inclination of $0^{\circ}$ and a mean orbital radius of 6.6 Earth radii ($R_{E}$). The PD experiment consists of three sensors with different viewing angles relative to the spacecraft. Each sensor consists of two telescopes that are mechanically configured back-to-back with a field-of-view of $20^{\circ}\times 20^{\circ}$ and measures electrons and ions, using silicon detectors equipped with foils and magnets for the separation of ions and electrons. The energy ranges of the sensor for electrons and ions are 100–3800 keV and 148–22500 keV, respectively. A particular emphasis on electron measurement is given by allocating 48 energy bins in the measured energy range, whereas 22 energy bins are allocated for ion measurements. This unprecedented energy resolution of $\Delta E/E$ in the range 5–25% for the electron and ion flux measurements is acquired every three seconds with cyclic polling of each sensor every second to provide an effective temporal resolution of one second. Together with the magnetometer aboard the spacecraft, the PD experiment will provide quantitative observations that will enable improved understanding of the adiabatic and nonadiabatic dynamics of the Earth’s magnetosphere for space weather studies at geostationary orbits from the vantage point of a far-east longitude.
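As a rough consistency check on the quoted resolution, if the 48 electron energy bins were spaced logarithmically between 100 and 3800 keV (an assumption made here for illustration; the actual bin layout is not given in the abstract), the fractional bin width ΔE/E would be constant across the range and can be computed directly:

```python
def log_bin_resolution(e_min_kev, e_max_kev, n_bins):
    """Fractional width dE/E of logarithmically spaced energy bins."""
    ratio = (e_max_kev / e_min_kev) ** (1.0 / n_bins)
    return ratio - 1.0

# Electron channel: 100-3800 keV split into 48 bins (log spacing assumed)
resolution = log_bin_resolution(100.0, 3800.0, 48)  # ~0.079, i.e. ~8%
```

The resulting ~8% per-bin width falls inside the 5–25% ΔE/E range quoted for the sensors.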
Updated: 2020-01-27
• Asia Pac. J. Manag. (IF 2.737) Pub Date : 2020-01-27
Updated: 2020-01-27
• J Neurooncol. (IF 3.129) Pub Date : 2020-01-27
Masaaki Yamamoto, Toru Serizawa, Osamu Nagano, Kyoko Aoyagi, Yoshinori Higuchi, Yasunori Sato, Hidetoshi Kasuya, Bierta E. Barfod
Abstract Purpose This study aimed to validate whether the recently-proposed prognostic grading system, initial brain metastasis velocity (iBMV), is applicable to breast cancer patients receiving stereotactic radiosurgery (SRS). We focused particularly on whether this grading system is useful for patients with all molecular types, i.e., positive versus negative for EsR, PgR and HER2. Methods and materials This was an institutional review board-approved, retrospective cohort study using our database, prospectively accumulated at three gamma knife institutes during the 20-year period since 1998. We excluded patients for whom the day of primary cancer diagnosis was not available, who had synchronous presentation, who lacked information regarding molecular type, and/or who had received pre-SRS radiotherapy and/or surgery. We ultimately studied 511 patients categorized into two classes by iBMV score, i.e., < 2.00 and ≥ 2.00. Results The median iBMV score for the entire cohort was 0.97 (IQR 0.39–2.84). Median survival time (MST) in patients with iBMV < 2.00, 15.9 (95% CI 13.0–18.6, IQR 7.5–35.5) months, was significantly longer than that in patients with iBMV ≥ 2.00, 8.2 (95% CI 6.8–9.9, IQR 3.9–19.4) months (HR 1.582, 95% CI 1.308–1.915, p < 0.0001). The same results were obtained in patients with EsR (−), PgR (−), HER2 (+) and HER2 (−) cancers, while MSTs did not differ significantly between iBMV < 2.00 vs ≥ 2.00 in patients with EsR (+) and PgR (+) cancers. Conclusions This system was clearly shown to be applicable to breast cancer patients with SRS-treated BMs. However, it is not applicable to patients with hormone receptor (+) breast cancer.
Updated: 2020-01-27
• J Neurooncol. (IF 3.129) Pub Date : 2020-01-27
Edwin Lok, Pyay San, Olivia Liang, Victoria White, Eric T. Wong
Abstract Introduction Tumor Treating Fields (TTFields) are alternating electric fields at 200 kHz that disrupt tumor cells as they undergo mitosis. Patient survival benefit has been demonstrated in randomized clinical trials, but much of the data are available only for supratentorial glioblastomas. We investigated a series of alternative array configurations for the posterior fossa to determine the electric field coverage of a cerebellar glioblastoma. Methods Semi-automated segmentation of neuro-anatomical structures was performed while the gross tumor volume (GTV) was manually delineated. A three-dimensional finite-element mesh was generated and then solved for field distribution. Results Compared to the supratentorial array configuration, the alternative array configurations consist of posterior displacement of the 2 lateral opposing arrays and inferior displacement of the posteroanterior array, resulting in an average increase of 46.6% in electric field coverage of the GTV as measured by the area under the curve of the electric field-volume histogram (EAUC). Hotspots, or regions of interest with the highest 5% of TTFields intensity (E5%), had an average increase of 95.6%. Of the 6 posterior fossa configurations modeled, the PAHorizontal arrangement provided the greatest field coverage of the GTV, with the posteroanterior array placed centrally along the patient’s posterior neck and horizontally parallel, along its longer axis, to the coronal plane of the patient’s head. Varying the arrays also produced hotspots proportional to TTFields coverage. Conclusions Our finite element modeling showed that the alternative array configurations offer improved TTFields coverage of the cerebellar tumor compared to the conventional supratentorial configuration.
Updated: 2020-01-27
• Mol. Cancer (IF 10.679) Pub Date : 2020-01-27
Martin P. Barr; Steven G. Gray; Kathy Gately; Emily Hams; Padraic G. Fallon; Anthony Mitchell Davies; Derek J. Richard; Graham P. Pidgeon; Kenneth J. O’Byrne
Since the publication of this work [1] and in response to a recent query that was brought to our attention in relation to the Western Blot in Figure 1(C) for NP2, protein lysates prepared around the same time as those presented in the manuscript in question, were run by SDS-PAGE under similar experimental conditions and probed using the same primary antibodies to NP1 and NP2 that were used originally.
Updated: 2020-01-27
• J. Neuroinflammation (IF 5.7) Pub Date : 2020-01-27
Bereketeab Haileselassie; Amit U. Joshi; Paras S. Minhas; Riddhita Mukherjee; Katrin I. Andreasson; Daria Mochly-Rosen
Out of the myriad of complications associated with septic shock, septic-associated encephalopathy (SAE) carries a significant risk of morbidity and mortality. Blood-brain-barrier (BBB) impairment, which subsequently leads to increased vascular permeability, has been associated with neuronal injury in sepsis. Thus, preventing BBB damage is an attractive therapeutic target. Mitochondrial dysfunction is an important contributor to sepsis-induced multi-organ system failure. More recently, mitochondrial dysfunction in endothelial cells has been implicated in mediating BBB failure in stroke, multiple sclerosis and other neuroinflammatory disorders. Here, we focused on Drp1-mediated mitochondrial dysfunction in endothelial cells as a potential target to prevent BBB failure in sepsis. We used lipopolysaccharide (LPS) to induce inflammation and BBB disruption in cell culture as well as in a murine model of sepsis. BBB disruption was assessed by measuring levels of key tight-junction proteins. Brain cytokine levels, oxidative stress markers, and activity of mitochondrial complexes were measured using biochemical assays. Astrocyte and microglial activation were measured using immunoblotting and qPCR. Transwell cultures of brain microvascular endothelial cells co-cultured with astrocytes were used to assess the effect of LPS on expression of tight-junction proteins, mitochondrial function, and permeability to fluorescein isothiocyanate (FITC) dextran. Finally, primary neuronal cultures exposed to LPS were assessed for mitochondrial dysfunction. LPS induced a strong brain inflammatory response and oxidative stress in mice, which was associated with increased Drp1 activation and mitochondrial localization. In particular, Drp1-Fis1 (Fission 1)-mediated oxidative stress also led to an increase in expression of vascular permeability regulators in the septic mice.
Similarly, mitochondrial defects mediated via Drp1-Fis1 interaction in primary microvascular endothelial cells were associated with increased BBB permeability and loss of tight-junctions after acute LPS injury. P110, an inhibitor of Drp1-Fis1 interaction, abrogated these defects, thus indicating a critical role for this interaction in mediating sepsis-induced brain dysfunction. Finally, LPS mediated a direct toxic effect on primary cortical neurons, which was abolished by P110 treatment. LPS-induced impairment of BBB appears to be dependent on Drp1-Fis1-mediated mitochondrial dysfunction. Inhibition of mitochondrial dysfunction with P110 may have potential therapeutic significance in septic encephalopathy.
Updated: 2020-01-27
• J. Med. Case Rep. (IF 0) Pub Date : 2020-01-27
Chen Liu; Jian Zhai; Quan Yuan; Yu Zhang; Hongguang Xu
Oblique lateral interbody fusion surgery has become increasingly popular for lumbar degenerative diseases. The oblique corridor is between the psoas muscle and the retroperitoneal vessels, and its use could result in decreased tissue trauma, minimal blood loss, and short operation times. Patients who undergo oblique lateral interbody fusion surgery are always placed in the right lateral position to avoid damage to the inferior vena cava, which is typically a right-sided vessel. There is a substantial risk of vascular injury during the operation if there are anatomical variations in the vessels. A 77-year-old man, of the Han nationality, with lumbar spinal stenosis underwent stand-alone oblique lateral interbody fusion surgery. Transverse magnetic resonance imaging of the lumbar spine indicated that his inferior vena cava was left-sided. A three-dimensional reconstructed image of abdominal computed tomography angiography showed that the inferior vena cava was located on the left side. Finally, the surgeon decided to change the position of our patient from a right lateral position to a left lateral position before the surgery. To date, this is the first reported case where a patient underwent oblique lateral interbody fusion surgery in a left lateral decubitus position due to a left-sided inferior vena cava. This case demonstrates that carefully reading radiological results is important for operation planning and avoiding anatomical complications.
Updated: 2020-01-27
• Clin. Epigenet. (IF 5.496) Pub Date : 2020-01-27
Maria Desemparats Saenz-de-Juano; Elena Ivanova; Katy Billooye; Anamaria-Cristina Herta; Johan Smitz; Gavin Kelsey; Ellen Anckaert
After publication of the original article [1], we were notified that.
Updated: 2020-01-27
• Cell Commun. Signal. (IF 5.111) Pub Date : 2020-01-27
Paula Lindner; Søren Brøgger Christensen; Poul Nissen; Jesper Vuust Møller; Nikolai Engedal
Cell death triggered by unmitigated endoplasmic reticulum (ER) stress plays an important role in physiology and disease, but the death-inducing signaling mechanisms are incompletely understood. To gain more insight into these mechanisms, the ER stressor thapsigargin (Tg) is an instrumental experimental tool. Additionally, Tg forms the basis for analog prodrugs designed for cell killing in targeted cancer therapy. Tg induces apoptosis via the unfolded protein response (UPR), but how apoptosis is initiated, and how individual effects of the various UPR components are integrated, is unclear. Furthermore, the role of autophagy and autophagy-related (ATG) proteins remains elusive. To systematically address these key questions, we analyzed the effects of Tg and therapeutically relevant Tg analogs in two human cancer cell lines of different origin (LNCaP prostate- and HCT116 colon cancer cells), using RNAi and inhibitory drugs to target death receptors, UPR components and ATG proteins, in combination with measurements of cell death by fluorescence imaging and propidium iodide staining, as well as real-time RT-PCR and western blotting to monitor caspase activity, expression of ATG proteins, UPR components, and downstream ER stress signaling. In both cell lines, Tg-induced cell death depended on death receptor 5 and caspase-8. Optimal cytotoxicity involved a non-autophagic function of MAP1LC3B upstream of procaspase-8 cleavage. PERK, ATF4 and CHOP were required for Tg-induced cell death, but surprisingly acted in parallel rather than as a linear pathway; ATF4 and CHOP were independently required for Tg-mediated upregulation of death receptor 5 and MAP1LC3B proteins, whereas PERK acted via other pathways. Interestingly, IRE1 contributed to Tg-induced cell death in a cell type-specific manner. This was linked to an XBP1-dependent activation of c-Jun N-terminal kinase, which was pro-apoptotic in LNCaP but not HCT116 cells. 
Molecular requirements for cell death induction by therapy-relevant Tg analogs were identical to those observed with Tg. Together, our results provide a new, integrated understanding of UPR signaling mechanisms and downstream mediators that induce cell death upon Tg-triggered, unmitigated ER stress.
Updated: 2020-01-27
• Cell Commun. Signal. (IF 5.111) Pub Date : 2020-01-27
Nicole J. Chew; Elizabeth V. Nguyen; Shih-Ping Su; Karel Novy; Howard C. Chan; Lan K. Nguyen; Jennii Luu; Kaylene J. Simpson; Rachel S. Lee; Roger J. Daly
Triple negative breast cancer (TNBC) accounts for 16% of breast cancers and represents an aggressive subtype that lacks targeted therapeutic options. In this study, mass spectrometry (MS)-based tyrosine phosphorylation profiling identified aberrant FGFR3 activation in a subset of TNBC cell lines. This kinase was therefore evaluated as a potential therapeutic target. MS-based tyrosine phosphorylation profiling was undertaken across a panel of 24 TNBC cell lines. Immunoprecipitation and Western blot were used to further characterize FGFR3 phosphorylation. Indirect immunofluorescence and confocal microscopy were used to determine FGFR3 localization. The selective FGFR1–3 inhibitor, PD173074 and siRNA knockdowns were used to characterize the functional role of FGFR3 in vitro. The TCGA and Metabric breast cancer datasets were interrogated to identify FGFR3 alterations and how they relate to breast cancer subtype and overall patient survival. High FGFR3 expression and phosphorylation were detected in SUM185PE cells, which harbor a FGFR3-TACC3 gene fusion. Low FGFR3 phosphorylation was detected in CAL51, MFM-223 and MDA-MB-231 cells. In SUM185PE cells, the FGFR3-TACC3 fusion protein contributed the majority of phosphorylated FGFR3, and largely localized to the cytoplasm and plasma membrane, with staining at the mitotic spindle in a small subset of cells. Knockdown of the FGFR3-TACC3 fusion and wildtype FGFR3 in SUM185PE cells decreased FRS2, AKT and ERK phosphorylation, and induced cell death. Knockdown of wildtype FGFR3 resulted in only a trend for decreased proliferation. PD173074 significantly decreased FRS2, AKT and ERK activation, and reduced SUM185PE cell proliferation. Cyclin A and pRb were also decreased in the presence of PD173074, while cleaved PARP was increased, indicating cell cycle arrest in G1 phase and apoptosis. Knockdown of FGFR3 in CAL51, MFM-223 and MDA-MB-231 cells had no significant effect on cell proliferation. 
Interrogation of public datasets revealed that increased FGFR3 expression in breast cancer was significantly associated with reduced overall survival, and that potentially oncogenic FGFR3 alterations (e.g., mutation and amplification) occur in the TNBC/basal, luminal A and luminal B subtypes, but are rare. These results indicate that targeting FGFR3 may represent a therapeutic option for TNBC, but only for patients with oncogenic FGFR3 alterations, such as the FGFR3-TACC3 fusion.
Updated: 2020-01-27
• Cell Commun. Signal. (IF 5.111) Pub Date : 2020-01-27
Ao-Xiang Guo; Jia-Jia Cui; Lei-Yun Wang; Ji-Ye Yin
CSDE1 (cold shock domain containing E1) plays a key role in translational reprogramming, which determines the fate of a number of RNAs during biological processes. Interestingly, the role of CSDE1 is bidirectional. It not only promotes and represses the translation of RNAs but also increases and decreases the abundance of RNAs. However, the mechanisms underlying this phenomenon are still unknown. In this review, we propose a “protein-RNA connector” model to explain this bidirectional role and depict its three versions: sequential connection, mutual connection and facilitating connection. As described in this molecular model, CSDE1 binds to RNAs and cooperates with other protein regulators. CSDE1 connects with different RNAs and their regulators for different purposes. The triple complex of CSDE1, a regulator and an RNA reprograms translation in different directions for each transcript. Meanwhile, a number of recent studies have found important roles for CSDE1 in human diseases. This model will help us to understand the role of CSDE1 in translational reprogramming and human diseases.
Updated: 2020-01-27
• Cell Commun. Signal. (IF 5.111) Pub Date : 2020-01-27
Marshall Ellison; Mukul Mittal; Minu Chaudhuri; Gautam Chaudhuri; Smita Misra
We have previously shown that the zinc finger transcription repressor SNAI2 (SLUG) represses tumor suppressor BRCA2 expression in non-dividing cells by binding to the E2-box upstream of the transcription start site. However, it is unclear how proliferating breast cancer (BC) cells, which have a higher oxidation state, overcome this repression. In this study, we provide insight into the mechanism of de-silencing of BRCA2 gene expression by PRDX5A, which is the longest member of the peroxiredoxin5 family, in proliferating breast cancer cells. We used cell synchronization and DNA affinity pulldown to analyze PRDX5A binding to the BRCA2 silencer. We used oxidative stress and microRNA (miRNA) treatments to study nuclear localization of PRDX5A and its impact on BRCA2 expression. We validated our findings using mutational, reporter assay, and immunofluorescence analyses. Under oxidative stress, proliferating BC cells express PRDX5 isoform A (PRDX5A). In the nucleus, PRDX5A binds to the BRCA2 silencer near the E2-box, displacing SLUG and enhancing BRCA2 expression. Nuclear PRDX5A is translated from the second AUG codon, in frame with the first AUG codon, in the PRDX5A transcript that retains all exons. Mutation of the first AUG increases nuclear localization of PRDX5A in MDA-MB-231 cells, but mutation of the second AUG decreases it. Increased mitronic hsa-miRNA-6855-3p levels under oxidative stress render translation from the second AUG preferable. Mutational analysis using a reporter assay uncovered a miR-6855-3p binding site between the first and second AUG codons in the PRDX5A transcript. miR-6855-3p mimic increases accumulation of nuclear PRDX5A and inhibits reporter gene translation. Oxidative stress increases miR-6855-3p expression and binding to the inter-AUG sequence of the PRDX5A transcript, promoting translation of nuclear PRDX5A. Nuclear PRDX5A relieves SLUG-mediated BRCA2 silencing, resulting in increased BRCA2 expression.
Updated: 2020-01-27
• BMC Pediatr. (IF 1.983) Pub Date : 2020-01-27
Chenmin Hu; Yanping Yu
Kawasaki disease (KD) is an acute febrile multisystem vasculitis and has been recognized to be the most common cause of acquired heart disease in children. Owing to its propensity to involve vessels throughout the entire body, KD often mimics other disease processes. The diagnosis might be delayed if other prominent symptoms appear before the characteristic clinical features of KD. Although gastrointestinal symptoms including vomiting, diarrhea, and abdominal pain are not uncommon in KD patients, KD with gastrointestinal bleeding is quite rare. A previously healthy 4-year-old boy initially presented with abdominal pain, followed by fever, rash, and gastrointestinal hemorrhage, eventually diagnosed as complete KD. The patient recovered smoothly after appropriate management and no subsequent complications occurred in the following months. The diagnosis of KD should be considered in children presenting with abdominal symptoms and fever without definable cause. Pediatricians should be aware of the risk of gastrointestinal bleeding in patients with KD, especially in those with prominent abdominal symptoms.
Updated: 2020-01-27
• BMC Pediatr. (IF 1.983) Pub Date : 2020-01-27
Alaka Adiso Limaso; Mesay Hailu Dangisso; Desalegn Tsegaw Hibstu
The first 28 days of life are the most critical period for a child's survival. In Ethiopia, despite a significant reduction in under-five mortality during the last 15 years, neonatal mortality remains a public health problem, accounting for 47% of under-five mortality. Understanding neonatal survival and risk factors for neonatal mortality could help in devising tailored interventions. The aim of this study was to determine neonatal survival and risk factors for neonatal mortality in Aroresa district, Southern Ethiopia. A community-based prospective follow-up study was conducted among a cohort of term pregnant mothers and neonates delivered from January 1/2018 to March 30/2018. A total of 586 term pregnant mothers were selected with a multistage sampling technique, and 584 neonates, including 12 twin pairs, were followed up for 28 days. Data were coded, entered, cleaned and analyzed using SPSS version 22. A Kaplan–Meier survival curve was used to show the pattern of neonatal death over the 28 days. Independent and adjusted relationships of different predictors with neonatal survival were assessed with a Cox regression model. The risk of mortality was presented with hazard ratios and 95% confidence intervals, and P-values less than 0.05 were considered significant. The overall neonatal mortality was 41 per 1000 live births. The hazard of neonatal mortality was high for neonates with complications (AHR = 3.643; 95% CI, 1.36–9.77), male neonates (AHR = 2.71; 95% CI, 1.03–7.09), neonates whose mothers perceived them to be small (AHR = 3.46; 95% CI, 1.119–10.704), neonates who initiated exclusive breast feeding (EBF) after 1 h (AHR = 3.572; 95% CI, 1.255–10.165) and neonates whose mothers had no postnatal care (AHR = 3.07; 95% CI, 1.16–8.12). Neonatal mortality in the study area was 4.1%, which is high, and immediate action should be taken towards achieving the Sustainable Development Goals.
To improve neonatal survival, high-impact interventions such as promotion of maternal service utilization, essential newborn care and early initiation of exclusive breast feeding were recommended.
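The Kaplan–Meier estimate used in the study above can be sketched in a few lines. The follow-up times and death indicators below are synthetic, for illustration only; they are not the study's data, and a real analysis (as here, with SPSS) would also produce confidence intervals and handle ties more carefully.

```python
# Minimal Kaplan–Meier estimator over a 28-day neonatal follow-up.
# times: day of death or censoring; events: 1 = death, 0 = censored.

def kaplan_meier(times, events):
    """Return (day, survival) pairs at each day on which a death occurred."""
    n_at_risk = len(times)
    survival = 1.0
    curve = []
    for day in sorted(set(times)):
        deaths = sum(1 for t, e in zip(times, events) if t == day and e == 1)
        if deaths:
            survival *= 1 - deaths / n_at_risk   # KM product-limit step
            curve.append((day, survival))
        n_at_risk -= sum(1 for t in times if t == day)  # remove all who leave
    return curve

# 10 synthetic neonates: deaths on days 3 and 7, the rest censored at day 28.
times = [3, 7, 28, 28, 28, 28, 28, 28, 28, 28]
events = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
for day, s in kaplan_meier(times, events):
    print(f"day {day:2d}: S(t) = {s:.3f}")
```

Each printed pair is the estimated probability of surviving past that day of follow-up.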
Updated: 2020-01-27
• BMC Pediatr. (IF 1.983) Pub Date : 2020-01-27
Wei Zhong; Chao Yang; Lei Zhu; Yu-Qi Huang; Yong-Feng Chen
Acrodermatitis enteropathica (AE) is a rare autosomal recessive hereditary skin disease caused by mutations in the SLC39A4 gene and is characterized by periorificial dermatitis, alopecia and diarrhoea due to insufficient zinc absorption. Only one of the three known sets of twins with AE has genetic information. This case reports the discovery of new mutation sites in rare twin patients and draws some interesting conclusions by analysing the relationship between genetic information and clinical manifestations. Here, we report a pair of 16-month-old twin boys with AE exhibiting periorificial and acral erythema, scales and blisters, while subsequent laboratory examination showed normal plasma zinc and alkaline phosphatase levels. Further Sanger sequencing demonstrated that the patients were compound heterozygous for two unreported SLC39A4 mutations: a missense mutation in exon 5 (c.926G > T), which led to a substitution of the 309th amino acid residue cysteine with phenylalanine, and a splice-site mutation in the consensus donor site of intron 5 (c.976 + 2 T > A). A family study revealed that the boys' parents were heterozygous carriers of these two mutations. We identified a new compound heterozygous mutation in Chinese twins with AE, consisting of two previously unreported variants in exon 5 and intron 5 of SLC39A4. Based on an up-to-date review, we propose that different mutations in SLC39A4 may produce different AE manifestations. In conjunction with future research, our work may shed light on genotype-phenotype correlations in AE patients and provide knowledge for genetic counselling and treatment for AE patients.
Updated: 2020-01-27
• BMC Med. Educ. (IF 1.87) Pub Date : 2020-01-27
Munashe Chigerwe; Karen A. Boudreaux; Jan E. Ilkiw
Following publication of the original article [1], we’ve been notified by an author that they have published their manuscript without seeking permission for the survey that was included in one of their tables (Table 1).
Updated: 2020-01-27
• BMC Fam. Pract. (IF 2.431) Pub Date : 2020-01-27
Geoffrey Hodgetts; Glenn Brown; Olivera Batić-Mujanović; Larisa Gavran; Zaim Jatić; Maja Račić; Gordana Tešanović; Amra Zalihić; Mary Martin; Richard Birtwhistle
Following publication of the original article [1], the authors opted to correct the name of co-author Amra Zalihić from Zahilić to Zalihić. The original article has been corrected.
Updated: 2020-01-27
• J. Nucl. Mater. (IF 2.547) Pub Date : 2020-01-27
M.H.H. Kolb; J.M. Heuser; R. Rolli; H.-C. Schneider; R. Knitter; M. Zmitko
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-13
Eunbi Jo; Hyun-Jin Jang; Kyeong Eun Yang; Min Su Jang; Yang Hoon Huh; Hwa-Seung Yoo; Jun Soo Park; Ik-Soon Jang; Soo Jung Park
Cordyceps militaris (L.) Fr. (C. militaris) exhibits pharmacological activities, including antitumor properties, through the regulation of nuclear factor kappa B (NF-κB) signaling. Tumor necrosis factor-α (TNF-α) modulates cell survival and apoptosis through NF-κB signaling. However, the mechanism underlying its mode of action on the NF-κB pathway is unclear. Here, we analyzed the effect of C. militaris extract (CME) on the proliferation of ovarian cancer cells by assessing viability, morphological changes and migration. Additionally, CME-induced apoptosis was determined by an apoptosis assay and by apoptotic body formation under TEM. The mechanisms of CME were determined through microarray, immunoblotting and immunocytochemistry analyses. CME reduced the viability of cells in a dose-dependent manner and induced morphological changes. We confirmed the decrease in the migration activity of SKOV-3 cells after treatment with CME and the consequent induction of apoptosis. Immunoblotting results showed that the CME-mediated upregulation of tumor necrosis factor receptor 1 (TNFR1) expression induced apoptosis of SKOV-3 cells via the serial activation of caspases. Moreover, CME negatively modulated NF-κB activation via TNFR expression, suggestive of the activation of the extrinsic apoptotic pathway. The binding of TNF-α to TNFR results in the dissociation of IκB from NF-κB and the subsequent translocation of the active NF-κB to the nucleus. CME clearly suppressed the NF-κB translocation induced by interleukin (IL)-1β from the cytosol into the nucleus. The decrease in the expression levels of B cell lymphoma (Bcl)-xL and Bcl-2 led to a marked increase in cell apoptosis. These results suggest that C. militaris inhibited ovarian cancer cell proliferation, survival, and migration, possibly through the coordination between TNF-α/TNFR1 signaling and NF-κB activation. Taken together, our findings provide a new insight into a novel treatment strategy for ovarian cancer using C. militaris.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-13
Serawit Deyno; Abiy Abebe; Mesfin Asefa Tola; Ariya Hymete; Joel Bazira; Eyasu Makonnen; Paul E. Alele
Echinops kebericho is widely used for the treatment of a variety of infectious and non-infectious diseases, and for fumigation during childbirth. Antibacterial, antimalarial, anti-leishmanial, anti-diarrheal and insect-repellent activities have been elucidated. Its toxicity profile had not yet been investigated; thus, this study investigated the acute and sub-acute toxicity of E. kebericho decoctions. The acute toxicity study was performed in female Wistar albino rats with a single oral dose and follow-up of 14 days. The sub-acute oral dose toxicity studies were conducted in rats of both sexes in accordance with the OECD guideline for the repeated-dose 28-day oral toxicity study in rodents. Physical observations were made regularly during the study period, while body weight was measured weekly. Organ weight, histopathology, clinical chemistry and hematology data were collected on the 29th day. Results were presented as mean ± standard deviation. One-way analysis of variance (ANOVA) was performed if assumptions were met; otherwise Kruskal-Wallis analysis was performed. Oral administration of E. kebericho decoction showed no treatment-related mortality in female rats up to the dose of 5000 mg/kg. In the sub-acute toxicity studies, no significant treatment-related abnormalities were observed compared to negative controls. Food consumption, body weight, organ weight, hematology, clinical chemistry, and histopathology did not show significant variation between controls and treatment groups. However, creatinine, relative lung weight, triglycerides, and monocytes were lower in treated groups compared to controls. Significant variations between male and female groups in food consumption, relative organ weight, hematology and clinical chemistry were observed. Histopathology of high-dose treated groups showed fatty liver.
Echinops kebericho showed an LD50 of greater than 5000 mg/kg in the acute toxicity study and was well tolerated up to the dose of 600 mg/kg body weight in the sub-acute toxicity study.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-13
Fanchao Feng; Jingyi Huang; Zhichao Wang; Jiarui Zhang; Di Han; Qi Wu; Hailang He; Xianmei Zhou
Xiao-ai-ping injection (XAPI), a patented Chinese medicine, has shown promising outcomes in non-small-cell lung cancer (NSCLC) patients. This meta-analysis investigated the efficacy and safety of XAPI in combination with platinum-based chemotherapy. A comprehensive literature search was conducted to identify relevant studies in PubMed, EMBASE, the Cochrane Library, the Chinese National Knowledge Infrastructure, the Wanfang Database, the VIP Database, and the Chinese Biology Medical Database from their inception to September 2018. RevMan 5.3 software was applied to calculate risk ratios (RR) and mean differences (MD) with 95% confidence intervals (CI). We included and analyzed 24 randomized controlled trials. The meta-analysis showed that XAPI adjunctive to platinum-based chemotherapy produced better outcomes in objective tumor response rate (ORR) (RR: 1.27, 95% CI, 1.14–1.40); improved Karnofsky performance scores (KPS) (RR: 1.70, 95% CI, 1.48–1.95); reduced the occurrence of grade 3/4 leukopenia (RR: 0.49, 95% CI, 0.38–0.64), anemia (RR: 0.63, 95% CI, 0.46–0.87), thrombocytopenia (RR: 0.53, 95% CI, 0.38–0.73), and nausea and vomiting (RR: 0.57, 95% CI, 0.36–0.90); and enhanced immune function (CD8+ [MD: 4.96, 95% CI, 1.16–8.76] and CD4+/CD8+ [MD: 2.58, 95% CI, 1.69–3.47]). However, it did not increase the incidence of liver or kidney dysfunction, diarrhea, constipation, or fatigue. Subgroup analysis of ORR and KPS revealed that dosage, treatment duration, and methodological quality did not affect the outcomes significantly. Our meta-analysis demonstrated that XAPI in combination with platinum-based chemotherapy produced a better tumor response, improved quality of life, attenuated adverse side effects, and enhanced immune function, which suggests that it might be used for advanced NSCLC. Moreover, a low dosage (< 60 ml/d) and long-term treatment with XAPI might be a choice for advanced NSCLC patients.
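To make the reported effect measures concrete, here is a minimal fixed-effect, inverse-variance pooling of risk ratios from 2×2 tables. The event counts are invented for illustration, and RevMan typically uses Mantel–Haenszel weighting for dichotomous outcomes, so treat this as a simplified sketch of the idea rather than the review's actual computation.

```python
import math

# Fixed-effect, inverse-variance pooling of risk ratios (RR with 95% CI),
# the same effect measure reported above. Counts are illustrative only.

def pooled_risk_ratio(tables):
    """tables: list of (events_trt, n_trt, events_ctl, n_ctl)."""
    weights, log_rrs = [], []
    for a, n1, c, n2 in tables:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log RR
        weights.append(1 / var)
        log_rrs.append(log_rr)
    pooled = sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci

rr, (lo, hi) = pooled_risk_ratio([(30, 50, 22, 50), (45, 60, 33, 60)])
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```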
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-13
Caroline A. Smith; Chloe Parton; Marlee King; Gisselle Gallego
Complementary and alternative medicine and therapies (CAM) are widely used by parents of children with autism spectrum disorder (ASD). However, there is a gap in our understanding of how and why parents of children with ASD make decisions about CAM treatment, and how "evidence" influences their decision-making. The aim of this study was to explore views and perspectives on CAM decision-making among parents of children with ASD in Australia. Semi-structured interviews were conducted with parents of children with ASD (18 years and under) who were living in Australia. The interviews were digitally recorded, transcribed and then analysed using thematic analysis. Twenty-one parents were interviewed (20 women and one man). The mean age of participants was 43 years (SD = 5.12 years); the majority were born in Australia (71%), and almost half (43%) had a bachelor degree or higher. Three main themes were identified in the thematic analysis. The first theme was "Parents' experiences of researching CAM treatments"; the second theme, "Navigating CAM information and practices", comprises the subthemes "Assessing information on CAM treatments - what counts as 'evidence'?" and "Assessing the impact of CAM treatments on the child - what counts as effective?"; and the final theme was "Creating a central and trustworthy source about CAM". Across themes, parents' CAM decision-making was described as pragmatic, influenced by time, cost, and feasibility. Parents also reported that information on CAM was complex and often conflicting, and the creation of a centralised and reliable source of information on CAM was identified as a potential solution to these challenges. The development of evidence-based information resources for parents and support for CAM health literacy may assist with navigating CAM decision-making for children with ASD.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-13
Yamna Khurshid; Basir Syed; Shabana U. Simjee; Obaid Beg; Aftab Ahmed
Nigella sativa (NS), a member of family Ranunculaceae is commonly known as black seed or kalonji. It has been well studied for its therapeutic role in various diseases, particularly cancer. Literature is full of bioactive compounds from NS seed. However, fewer studies have been reported on the pharmacological activity of proteins. The current study was designed to evaluate the anticancer property of NS seed proteins on the MCF-7 cell line. NS seed extract was prepared in phosphate-buffered saline (PBS), and proteins were precipitated using 80% ammonium sulfate. The crude seed proteins were partially purified using gel filtration chromatography, and peaks were resolved by SDS-PAGE. MTT assay was used to screen the crude proteins and peaks for their cytotoxic effects on MCF-7 cell line. Active Peaks (P1 and P4) were further studied for their role in modulating the expression of genes associated with apoptosis by real-time reverse transcription PCR. For protein identification, proteins were digested, separated, and analyzed with LC-MS/MS. Data analysis was performed using online Mascot, ExPASy ProtParam, and UniProt Knowledgebase (UniProtKB) gene ontology (GO) bioinformatics tools. Gel filtration chromatography separated seed proteins into seven peaks, and SDS-PAGE profile revealed the presence of multiple protein bands. Among all test samples, P1 and P4 depicted potent dose-dependent inhibitory effect on MCF-7 cells exhibiting IC50 values of 14.25 ± 0.84 and 8.05 ± 0.22 μg/ml, respectively. Gene expression analysis demonstrated apoptosis as a possible cell killing mechanism. A total of 11 and 24 proteins were identified in P1 and P4, respectively. The majority of the proteins identified are located in the cytosol, associate with biological metabolic processes, and their molecular functions are binding and catalysis. Hydropathicity values were mostly in the hydrophilic range. Our findings suggest NS seed proteins as a potential therapeutic agent for cancer. 
To our knowledge, it is the first study to report the anticancer property of NS seed proteins.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Wei Zhou; Jiarui Wu; Yingli Zhu; Ziqi Meng; Xinkui Liu; Shuyu Liu; Mengwei Ni; Shanshan Jia; Jingyuan Zhang; Siyu Guo
As an effective prescription for gastric cancer (GC), Compound Kushen Injection (CKI) has been widely used even though few molecular mechanism analyses have been carried out. In this study, we identified 16 active ingredients and 60 GC target proteins. Then, we established a compound-predicted target network and a GC target protein-protein interaction (PPI) network by Cytoscape 3.5.1 and systematically analyzed the potential targets of CKI for the treatment of GC. Finally, molecular docking was applied to verify the key targets. In addition, we analyzed the mechanism of action of the predicted targets by Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) analyses. The results showed that the potential targets, including CCND1, PIK3CA, AKT1, MAPK1, ERBB2, and MMP2, are the therapeutic targets of CKI for the treatment of GC. Functional enrichment analysis indicated that CKI has a therapeutic effect on GC by synergistically regulating some biological pathways, such as the cell cycle, pathways in cancer, the PI3K-AKT signaling pathway, the mTOR signaling pathway, and the FoxO signaling pathway. Moreover, molecular docking simulation indicated that the compounds had good binding activity to PIK3CA, AKT1, MAPK1, ERBB2, and MMP2 in vivo. This research partially highlighted the molecular mechanism of CKI for the treatment of GC, which has great potential in the identification of the effective compounds in CKI and biomarkers to treat GC.
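The network-analysis step described above (building a compound-target/PPI network and picking key targets) boils down to ranking nodes by connectivity. Below is a toy sketch with an invented edge list, not the study's Cytoscape network; in real network-pharmacology work, the edges come from interaction databases and the hub cut-off is chosen against the degree distribution.

```python
from collections import Counter

# Toy PPI edge list (illustrative only) among targets named in the abstract.
edges = [
    ("AKT1", "PIK3CA"), ("AKT1", "MAPK1"), ("AKT1", "CCND1"),
    ("PIK3CA", "MAPK1"), ("ERBB2", "PIK3CA"), ("MMP2", "MAPK1"),
]

# Degree = number of interaction partners; high-degree nodes are "hubs".
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

hubs = [t for t, _ in degree.most_common()]
print(hubs[:3])
```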
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Sha-Sha Wang; Shao-Yan Zhou; Xiao-Yan Xie; Ling Zhao; Yao Fu; Guang-Zhi Cai; Ji-Yu Gong
As the dry rhizome of Anemone raddeana Regel, Rhizoma Anemones Raddeanae (RAR), which belongs to the Ranunculaceae, is commonly used in China to treat wind-cold symptoms, hand-foot disease and spasms, joint pain and ulcer pain. It is well known that the efficacy of RAR can be distinctly enhanced by processing with vinegar, owing to reduced toxicity and side effects. However, the entry of vinegar into the liver channels can cause a series of problems. In this paper, the differences in acute toxicity and in anti-inflammatory and analgesic effects between RAR and vinegar-processed RAR were compared in detail. The changes in chemical composition between RAR and vinegar-processed RAR were investigated, and the mechanism of vinegar processing was also explored. Acute toxicity experiments were used to examine the toxicity of vinegar-processed RAR. A series of studies, such as the writhing reaction, the ear swelling experiment, the complete Freund's adjuvant-induced rat foot swelling experiment and the cotton granuloma experiment, was conducted to observe the anti-inflammatory effect of vinegar-processed RAR. The inflammatory cytokines of model rats were determined by enzyme-linked immunosorbent assay (ELISA). Liquid chromatography-quadrupole-time-of-flight mass spectrometry (LC-Q-TOF) was used to analyse the chemical compositions of the RARs before and after vinegar processing. Neither obvious changes nor deaths were observed in mice when the dose of vinegar-processed RAR was set at 2.1 g/kg of crude drug. Vinegar-processed RAR significantly prolonged the latency, reduced the writhing reactions, reduced the severity of ear swelling and foot swelling, and remarkably inhibited the secretion of the proinflammatory cytokines interleukin-1β (IL-1β), interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α). The content of twelve saponins (e.g., eleutheroside K) in RAR was decreased after vinegar processing, but that of six other types (e.g., RDA) was increased.
These results revealed that vinegar processing could not only improve the analgesic and anti-inflammatory effects of RAR but also reduce its toxicity.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Romeol Romain Koagne; Frederick Annang; Bastien Cautain; Jesús Martín; Guiomar Pérez-Moreno; Gabin Thierry M. Bitchagno; Dolores González-Pacanowska; Francisca Vicente; Ingrid Konga Simo; Fernando Reyes; Pierre Tane
The proliferation and drug resistance of microorganisms are a serious threat to humankind, and the search for new therapeutics is needed. The present report describes the antiplasmodial and anticancer activities of samples isolated from the methanol extract of Albizia zygia (Mimosaceae). The plant extract was prepared by maceration in methanol. Standard chromatographic, HPLC and spectroscopic methods were used to isolate and identify six compounds (1–6). The acetylated derivatives (7–10) were prepared by modifying 2-O-β-D-glucopyranosyl-4-hydroxyphenylacetic acid and quercetin 3-O-α-L-rhamnopyranoside, previously isolated from A. zygia (Mimosaceae). A two-fold serial micro-dilution method was used to determine the IC50s against five tumor cell lines and Plasmodium falciparum. In general, the compounds showed moderate activity against the human pancreatic carcinoma cell line MiaPaca-2 (10 < IC50 < 20 μM) and weak activity against other tumor cell lines such as lung (A-549), hepatocarcinoma (HepG2) and human breast adenocarcinoma (MCF-7 and A2058) (IC50 > 20 μM). Additionally, the two semi-synthetic derivatives of quercetin 3-O-α-L-rhamnopyranoside exhibited significant activity against P. falciparum, with IC50s of 7.47 ± 0.25 μM for compound 9 and 6.77 ± 0.25 μM for compound 10, greater activity than that of their natural precursor (IC50 25.1 ± 0.25 μM). The results of this study clearly suggest that the appropriate introduction of acetyl groups into some flavonoids could lead to more useful derivatives for the development of an antiplasmodial agent.
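An IC50 from a two-fold serial micro-dilution is often obtained by interpolating between the two dilutions that bracket 50% inhibition. The sketch below does this on a log2 concentration scale with invented dose-response values; the paper does not specify its curve-fitting method, so this is only one plausible, simplified approach.

```python
import math

def ic50(concs, inhibitions):
    """concs in µM (ascending), inhibitions as fractions of control (0-1).
    Linear interpolation on a log2 scale between the bracketing dilutions."""
    for (c1, i1), (c2, i2) in zip(zip(concs, inhibitions),
                                  zip(concs[1:], inhibitions[1:])):
        if i1 < 0.5 <= i2:  # found the pair straddling 50% inhibition
            frac = (0.5 - i1) / (i2 - i1)
            return 2 ** (math.log2(c1) + frac * (math.log2(c2) - math.log2(c1)))
    raise ValueError("50% inhibition not bracketed by the dilution series")

concs = [1.25, 2.5, 5.0, 10.0, 20.0]   # two-fold dilution series, µM
inhib = [0.10, 0.25, 0.45, 0.70, 0.90]  # invented readings
print(f"IC50 ≈ {ic50(concs, inhib):.2f} µM")
```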
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Kirtan Joshi; Alan Parrish; Elizabeth A. Grunz-Borgmann; Mary Gerkovich; William R. Folk
A variety of medicinal products prepared from secondary tubers of Harpagophytum procumbens subsp. procumbens (Burch.) DC.ex Meisn. (Devil’s Claw) and H. zeyheri are marketed in Africa, Europe, the United States, South America and elsewhere, where they are used for inflammatory and musculoskeletal conditions such as arthritis, lower back pain, rheumatism and neuralgia, etc. While clinical studies conducted over the last twenty years support the general safety of such products, infrequent gastrointestinal disturbances (diarrhea, nausea, vomiting, abdominal pain), headache, vertigo and hypersensitivity (allergic) reactions (rash, hives and face swelling) have been documented. Sex-related differences occur in the health conditions for which Devil’s Claw products are used, so it is likely that usage is similarly sex-related and so might be side effects and potential toxicities. However toxicologic studies of Devil’s Claw products have been conducted primarily with male animals. To address this deficit, we report toxicological studies in female and male rats of several H. procumbens (HP) aqueous-alcohol extracts chemically analyzed by UPLC-MS. Female and male Sprague Dawley rats were studied for one and three months in groups differing by consumption of diets without and with HP extracts at a 7–10-fold human equivalent dose (HED). Sera were analyzed for blood chemistry, and heart, liver, lung, kidney, stomach, and small and large intestine tissues were examined for histopathology. Treatment group differences for blood chemistry were analyzed by ANOVA with Dunnett’s test and significant group differences for endpoints with marginal distributional properties were verified using the Kruskal-Wallis test. Group differences for histopathology were tested using Chi Square analysis. Significant group by sex-related differences in blood chemistry were detected in both studies. Additionally, several sex-related differences occurred between the studies. 
However, significant histopathology effects associated with the consumption of the extracts were not detected. Toxicologic analysis thus showed that Devil's Claw extracts cause significant sex-related effects in blood chemistry. However, in our judgement, none of the observed effects suggest serious toxicity at these doses and durations. Subsequent toxicologic and clinical studies of H. procumbens and other medicines with similar properties should explore in greater detail the basis and consequences of potential sex-related effects.
Updated: 2020-01-27
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
Xiaotong Lu; Hongjie Luo; Shijie Yang; You Wei; Jianrong Xu; Zhi Yao
Updated: 2020-01-27
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
Wang Xin; Li Baokui
An improved Jominy curve hardness model was built for the carburizing-quenching process. In contrast to other hardness models, it avoids the computational heterogeneity of phase transformation and is highly practicable. Finally, the model was applied to carburized Jominy and gear specimens of 17CrNiMo6 steel, and the corresponding experimental results were used to verify the simulation results. The hardness distributions of the measured and simulated results show good agreement. In particular, the simulation accuracy at low Jominy distances and in the hardened layer was better than that at other positions.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Jihong Lee; Sun Haeng Lee; Gyu Tae Chang
Although a variety of patient-reported outcome measures (PROMs) for children have been developed, there is no pediatric PROM specific to Korean medicine (KM) that has been validated by experts in the field. The aim of this study was to collate the opinions of specialists in KM pediatrics on the development of a generic PROM that can be used by Korean medical doctors to assess the health status of children. A three-round Delphi survey was conducted to determine the level of consensus on the development of a new PROM. Delphi questionnaires were sent by e-mail to 91 KM pediatricians on January 24, 2018. The Delphi questionnaire was composed of four sections: conceptualization, construction, items, and sources of content for a PROM. A nine-point Likert scale was used, and if more than two-thirds of the panellists agreed or disagreed with a given statement, they were considered to have reached a consensus. A draft of a PROM for the pediatric field of KM was developed in accordance with the preliminary conceptual framework. Out of 91 experts, 18 finished all three rounds of the Delphi survey. The experts reached a consensus on the necessity of a KM pediatric PROM measuring various areas of child health, using Likert scales with a recall period of 3 months. They also agreed on specific items and sources of content. A new draft of a health questionnaire for KM pediatrics was developed based on the Delphi consensus. It contains 44 items covering 7 domains: i) functions of the digestive system, ii) functions of the respiratory system, iii) mental functions, iv) skin functions, v) pain, vi) functions of the metabolic and endocrine systems, and vii) demographic details. This research represents the first step in developing a health questionnaire for the pediatric field of KM. The questionnaire can be used in clinical and research settings after verifying several types of validity and reliability.
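The stated consensus rule (nine-point Likert scale, more than two-thirds of the panel agreeing or disagreeing) can be sketched as a small check. The agree/disagree cut-offs (scores 7-9 and 1-3) and the sample ratings are illustrative assumptions, not taken from the study.

```python
# Delphi-style consensus check on a nine-point Likert scale.
# Assumed cut-offs: "agree" = score >= 7, "disagree" = score <= 3.

def consensus(ratings, lo=3, hi=7, threshold=2 / 3):
    n = len(ratings)
    agree = sum(1 for r in ratings if r >= hi) / n
    disagree = sum(1 for r in ratings if r <= lo) / n
    if agree > threshold:
        return "consensus: agree"
    if disagree > threshold:
        return "consensus: disagree"
    return "no consensus"

# 18 illustrative panellist ratings for one questionnaire item.
print(consensus([9, 8, 7, 7, 8, 9, 5, 7, 8, 7, 9, 8, 7, 6, 8, 9, 7, 8]))
```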
Updated: 2020-01-27
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
A. Lobo-Guerrero
The Rietveld method has been used to refine the polyvinyl alcohol (PVA) structure. PVA is a polymeric material exhibiting a semicrystalline "head-to-tail" arrangement of repeating units. The experimental X-ray pattern of PVA was fitted with models based on monoclinic symmetry, using the P21/m and P21/c space groups. The lattice parameters and atomic positions were adjusted in each case. The P21/c-based model gave a better Rietveld fit than the P21/m one. The P21/c model was then used to analyze the structural behavior of PVA subjected to different pH environments. According to the refinement results, PVA subjected to an acid environment remained unchanged, whereas it underwent a rearrangement of its crystalline structure when subjected to a basic pH, causing a loss of its permanence and crystallinity. The results provide a powerful model to evaluate the degree of crystallinity of PVA and its structural variations using the Rietveld refinement method.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Shuai Lu; Yubo Zhang; Huajun Li; Jing Zhang; Yingqian Ci; Mei Han
Cancer cachexia is a severe condition that leads to the death of advanced cancer patients, and approximately 50~80% of cancer patients have cancer cachexia. Ginseng extract has been reported to have substantial anticancer and immune-enhancing effects; however, no study has reported the use of ginseng alone to treat cancer cachexia. Our study’s purpose was to investigate the therapeutic effects of ginseng-related monomers or mixtures on a cancer cachexia mouse model. We selected BALB/c mice and injected the mice subcutaneously with C26 colon cancer cells to construct a cancer cachexia experimental animal model. The water extract of ginseng (WEG), two types of ginseng extracts (ginsenosides at doses of 5 mg/kg (GE5) and 50 mg/kg (GE50)) and ginsenoside Rb1 (Rb1) were used to treat cancer cachexia mice. Enzyme-linked immunosorbent assays (ELISAs) were used to analyze the inhibitory effects on two key inflammatory cytokines, tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6). Our experimental results show that GE5, GE50 and Rb1 significantly reduced the levels of TNF-α (P < 0.01) and IL-6 (P < 0.01), which are closely related to cancer cachexia; however, WEG, GE5, GE50 and Rb1 did not significantly improve the gastrocnemius muscle weight or the epididymal fat weight of mice with cancer cachexia. These results indicate that GE5, GE50 and Rb1 may be useful for reducing symptoms due to inflammation by reducing the TNF-α and IL-6 cytokine levels in cancer cachexia mice, thereby ameliorating the symptoms of cancer cachexia. Our results may be beneficial for future studies on the use of Chinese herbal medicines to treat cancer cachexia.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Nadia Montero-Oleas; Ingrid Arevalo-Rodriguez; Solange Nuñez-González; Andrés Viteri-García; Daniel Simancas-Racines
Although cannabis and cannabinoids are widely used with therapeutic purposes, their claimed efficacy is highly controversial. For this reason, medical cannabis use is a broad field of research that is rapidly expanding. Our objectives are to identify, characterize, appraise, and organize the current available evidence surrounding therapeutic use of cannabis and cannabinoids, using evidence maps. We searched PubMed, EMBASE, The Cochrane Library and CINAHL, to identify systematic reviews (SRs) published from their inception up to December 2017. Two authors assessed eligibility and extracted data independently. We assessed methodological quality of the included SRs using the AMSTAR tool. To illustrate the extent of use of medical cannabis, we organized the results according to identified PICO questions using bubble plots corresponding to different clinical scenarios. A total of 44 SRs published between 2001 and 2017 were included in this evidence mapping with data from 158 individual studies. We extracted 96 PICO questions in the following medical conditions: multiple sclerosis, movement disorders (e.g. Tourette Syndrome, Parkinson Disease), psychiatry conditions, Alzheimer disease, epilepsy, acute and chronic pain, cancer, neuropathic pain, symptoms related to cancer (e.g. emesis and anorexia related with chemotherapy), rheumatic disorders, HIV-related symptoms, glaucoma, and COPD. The evidence about these conditions is heterogeneous regarding the conclusions and the quality of the individual primary studies. The quality of the SRs was moderate to high according to AMSTAR scores. Evidence on medical uses of cannabis is broad. However, due to methodological limitations, conclusions were weak in most of the assessed comparisons. Evidence mapping methodology is useful to perform an overview of available research, since it is possible to systematically describe the extent and distribution of evidence, and to organize scattered data.
Updated: 2020-01-27
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
Geng Yongjuan; Li Shaochun; Hou Dongshuai; Zhang Weifeng; Jin Zuquan; Li Qiuyi; Luo Jianlin
Foamed concrete is a lightweight building material, but its high water absorption is one of its main disadvantages. The objective of this study was to fabricate, characterize and evaluate GO/silane superhydrophobic coatings on foamed concrete surfaces. The results showed that the water contact angle of the coated foamed concrete was 165.5° and that water could easily roll off the surface. The superhydrophobic modification markedly improves the waterproofing of the foamed concrete; water sorptivity was reduced by about 97.2%. SEM results showed that the superhydrophobic surface was mainly due to the rough structure introduced by GO and the low surface energy imparted by the silane.
Updated: 2020-01-27
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
Biao Li; Siyuan Dong; Yingqi Jia; Kaiqiang Shi; Yanjun Lin; Jingbin Han
A series of polyvinylidene fluoride (PVDF)/layered double hydroxide (LDH) composite membranes were prepared via a phase-inversion method, in which the presence of LDH promotes the formation of β-phase PVDF. This work reveals the key role of hydrogen bonds in the nucleation mechanism and crystallization behavior of PVDF. The composite membranes exhibit improved porosity and hydrophilicity, leading to enhanced antifouling properties against proteins.
Updated: 2020-01-27
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-15
Zi-fei Yin; Yang-lin Wei; Xuan Wang; Li-na Wang; Xia Li
Pulmonary fibrosis (PF) is a chronic and progressive interstitial lung disease. Buyang Huanwu Tang (BYHWT), a classical traditional Chinese medicine formula, has been widely used for the treatment of PF in China. The present study aimed to explore the mechanism of BYHWT in the treatment of PF in vitro. TGF-β1-stimulated human alveolar epithelial A549 cells were used as an in vitro model of PF. After BYHWT treatment, cell viability was measured by MTT assay, and cell morphology was observed under a microscope. The epithelial-to-mesenchymal transition (EMT) markers (E-cadherin, vimentin) and collagen I (Col I) were detected by western blot, immunofluorescence staining and real-time quantitative polymerase chain reaction. With the co-administration of activators (IGF-1, SC79) and inhibitors (LY294002, MK2206), the effect of BYHWT on the PI3K/Akt pathway was analyzed by western blot. BYHWT inhibited cell growth and prevented the cell morphology from changing from epithelial to fibroblast-like in TGF-β1-induced A549 cells. BYHWT decreased vimentin and Col I while increasing E-cadherin at both the protein and mRNA levels. Moreover, phosphorylation of PI3K (p-PI3K) and of Akt (p-Akt) was significantly down-regulated by BYHWT in TGF-β1-stimulated A549 cells. These results indicate that BYHWT suppressed TGF-β1-induced collagen accumulation and EMT in A549 cells by inhibiting the PI3K/Akt signaling pathway. These findings suggest that BYHWT may have potential for the treatment of PF.
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
I. Krutikova; M. Ivanov; A. Murzakaev; K. Nefedova
Ce3+:Y2O3, Pr3+:Y2O3, Ce3+:(LaxY1-x)2O3, Pr3+:(LaxY1-x)2O3 nanoparticles were fabricated by laser ablation. The nanopowders consisted of spherical particles with an average size of 14-17 nm. The ytterbium fiber laser operated in pulse mode at a 5 kHz repetition rate and an average radiation power of 255 W. The intensity of the laser radiation in the focal spot was about 10^6 W/cm^2, with a close-to-Gaussian profile. The structural and morphological properties of the nanoparticles were investigated employing TEM, BET, FT-IR and XRD analysis.
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
Chenguang Li; Mingwei Zhang; Mianmian Ruan; Jun Wang; Jiamiao Liang; Deliang Zhang
Metal powders with hierarchical nanostructures are generally designed and fabricated by dealloying with or without assistance of other processes. However, they are mainly nanoporous metal powders and their derivatives which have limited applications, so metal powders with novel nanostructures should be explored further for various applications. Herein, high energy mechanical milling and dealloying were combined for fabricating metal powders with controllable nanostructures. As an example, a nanograins-attached and ultrathin Cu flake powder was fabricated by partial mechanical alloying of a Cu-42wt.%Al powder mixture and subsequent dealloying. The dealloyed Cu powder particles had ultrathin flaky shapes with numerous Cu nanograins being attached to their surfaces, and the microstructure of the as-milled Cu-42wt.%Al powder particles and the dealloyed Cu particles were studied to elucidate the formation mechanism of the unique morphology of the dealloyed Cu powder.
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-16
Euphorbia hirta (Linn) family Euphorbiaceae has been used in indigenous system of medicine for the treatment of gastrointestinal disorders. This study was designed to determine the pharmacological basis for the medicinal use of E. hirta in diarrhea and constipation. The aqueous-methanol extract of whole herb of E. hirta (EH.Cr) and its petroleum ether (Pet.EH), chloroform (CHCl3.EH), ethyl acetate (Et.Ac.EH) and aqueous (Aq.EH) fractions were tested in the in-vivo experiments using Balb/c mice, while the in-vitro studies were performed on isolated jejunum and ileum preparations of locally bred rabbit and Sprague Dawley rats, respectively, using PowerLab data system. Qualitative phytochemical analysis showed the presence of alkaloids, saponins, flavonoids, tannins, phenols, cardiac glycosides, while HPLC of EH.Cr showed quercetin in high proportion. In mice, EH.Cr at the dose of 500 and 1000 mg/kg showed 41 and 70% protection from castor oil-induced diarrhea, respectively, similar to the effect of quercetin and loperamide, while at lower doses (50 and 100 mg/kg), it caused an increase in the fecal output. In loperamide-induced constipated mice, EH.Cr also displayed laxative effect with respective values of 28.6 and 35.3% at 50 and 100 mg/kg. In rabbit jejunum, EH.Cr showed atropine-sensitive inhibitory effect in a concentration-dependent manner, while quercetin and nifedipine exhibited atropine-insensitive effects. Fractions of E. hirta also produced atropine-sensitive inhibitory effects except Pet.EH and CHCl3.EH. On high (80 mM) and low (20 mM) K+ − induced contractions, the crude extract and fractions exhibited a concentration-dependent non-specific inhibition of both spasmogens and displaced concentration-response curves of Ca++ to the right with suppression of the maximum effect similar to the effect quercetin and nifedipine. Fractions showed wide distribution of spasmolytic and Ca++ antagonist like effects. 
In rat ileum, EH.Cr and its fractions exhibited atropine-sensitive gut stimulant effects except Pet.EH. The crude extract of E. hirta possesses antidiarrheal effect possibly mediated through Ca++ antagonist like gut inhibitory constituents, while its laxative effect was mediated primarily through muscarinic receptor agonist like gut stimulant constituents. Thus, these findings provide an evidence to the folkloric use of E. hirta in diarrhea and constipation.
• Mater. Lett. (IF 3.019) Pub Date : 2020-01-27
N. Volkov; G. Abrosimova; A. Aronin
• BMC Complement. Altern. Med. (IF 2.479) Pub Date : 2020-01-16
Jun Xie; Tingli Zhu; Qun Lu; Xiaomin Xu; Yinghua Cai; Zhenghong Xu
Gastrointestinal cancer is one of the most common malignancies and imposes heavy burdens on both individual health and social economy. We sought to survey the effect of a self-care education program on quality of life and fatigue in gastrointestinal cancer patients who received chemotherapy. Ninety-one eligible gastrointestinal cancer patients were enrolled in this study and 86 valid samples were analyzed. Data were acquired with a demographics questionnaire, endpoint multidimensional questionnaire and the European Organization for Research and Treatment of Cancer (EORTC) quality of life questionnaire QLQ-C30. The collected data were analyzed using SPSS software. The self-care education intervention significantly improved the quality of life with respect to emotional function (p = 0.018), role function (p = 0.041), cognitive function (p = 0.038) and alleviated side effects such as nausea/vomiting (p = 0.028) and fatigue (p = 0.029). Further analysis demonstrated that the self-care education benefited total fatigue, affective fatigue and cognitive fatigue in gastrointestinal cancer patients regardless of baseline depression. Our results suggested the beneficial effects of the self-care education in both quality of life and anti-fatigue in gastrointestinal cancer patients under chemotherapy. The self-care education could be considered as a complementary approach during combination chemotherapy in gastrointestinal cancer patients.
Contents have been reproduced by permission of the publishers.
https://mathoverflow.net/questions/314421/number-of-permutations-that-are-products-of-disjoint-cycles-of-distinct-length/314428 | # Number of permutations that are products of disjoint cycles of distinct length
What is the number of permutations $$\pi\in S_n$$ that are products of disjoint cycles of distinct length? What is the number of permutations that are products of disjoint cycles such that no more than $$k$$ cycles are of any given length?
I am really interested in the asymptotics of the proportion of permutations in $$S_n$$ with these properties as $$n\to \infty$$. What is a good reference for this sort of statistics?
• Dumb question (from me): you are ignoring cycles of length one (fixed points), right? In which case, a good start on estimating the size of the complement is counting all permutations with two cycles of length two, and over counting by adding those with two cycles of length three. For back of the envelope estimates I would start with that. Gerhard "Higher Order Terms Can Wait" Paseman, 2018.11.02. – Gerhard Paseman Nov 2 '18 at 17:41
• Actually, both versions of the question interest me - ignoring fixed points and considering them as cycles of length 1. – H A Helfgott Nov 2 '18 at 18:52
If we denote by $$c_i(\sigma)$$ the number of cycles of length $$i$$ in $$\sigma$$, we can write the exponential generating function of permutations with cycle statistics as $$\sum_{n\geq 1}\sum_{\sigma\in S_n}\left(\frac{x^n}{n!}\prod_{i\geq 1}t_i^{c_i(\sigma)}\right)=\exp\left(\sum_{i\geq 1} \frac{t_ix^i}{i}\right) =\prod_{i\geq 1} \left(1+\frac{t_ix^i}{i}+\frac{t_i^2x^{2i}}{2i^2}+\cdots\right)$$ From here we see that the exponential generating function of permutations with distinct cycle sizes can be obtained by removing all terms where any $$t_i$$ has exponent $$\geq 2$$, and then setting all $$t_i=1$$. So we get $$\prod_{i\geq 1}\left(1+\frac{x^i}{i}\right)$$ From here the methods of A Hybrid of Darboux's Method and Singularity Analysis in Combinatorial Asymptotics by P. Flajolet, E. Fusy, X. Gourdon, D. Panario, N. Pouyanne show that the coefficient of $$x^n$$ is asymptotically equal to $$e^{-\gamma}+\frac{e^{-\gamma}}{n}+O\left(\frac{\log n}{n^2}\right)$$ which means that our desired number of permutations is asymptotically given by $$n!\left(e^{-\gamma}+\frac{e^{-\gamma}}{n}+O\left(\frac{\log n}{n^2}\right)\right).$$ Notice that in section 3 they actually provide much more refined asymptotics, in case you wanted more terms. Moreover, I believe their method should let you compute the asymptotics for permutations where no more than $$k$$ cycles are of any given length. In this case the generating function is given by $$\prod_{i\geq 1}\left(1+\frac{x^i}{i}+\frac{x^{2i}}{2i^2}+\cdots +\frac{x^{ki}}{k!i^k}\right)$$ from the same considerations as above. 
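As a sanity check (an editorial sketch, not part of the answer), the coefficients of $$\prod_{i\geq 1}(1+x^i/i)$$ can be computed exactly with standard-library Python and compared against the asymptotic $$n!\,e^{-\gamma}$$:

```python
from fractions import Fraction
import math

def distinct_cycle_counts(nmax):
    # Coefficients of prod_{i=1..nmax} (1 + x^i / i), computed exactly.
    # Iterating n downward uses each cycle length i at most once
    # (each factor contributes either 1 or x^i/i).
    coeffs = [Fraction(0)] * (nmax + 1)
    coeffs[0] = Fraction(1)
    for i in range(1, nmax + 1):
        for n in range(nmax, i - 1, -1):
            coeffs[n] += coeffs[n - i] / i
    # Multiply by n! to turn EGF coefficients into counts (always integers).
    return [int(math.factorial(n) * c) for n, c in enumerate(coeffs)]

counts = distinct_cycle_counts(10)
print(counts[1:6])                      # → [1, 1, 5, 14, 74]
print(counts[10] / math.factorial(10))  # ≈ 0.615, vs e^(-γ)(1 + 1/10) ≈ 0.618
```

Already at n = 10 the proportion is within a percent of the two-term asymptotic above.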
https://www.physicsforums.com/threads/legrangian-eqn-of-motion.182587/ | # Homework Help: Legrangian Eqn. of Motion
1. Sep 1, 2007
### Mindscrape
Two blocks, each of mass M, are connected by an extensionless, uniform string of length l. One block is placed on a smooth horizontal surface, and the other block hangs over the side, the string passing over a frictionless pulley. Describe the motion of the system when the string has a mass m.
By Hamilton's principle
$$L = T - U$$
the kinetic energies will be
$$T = 1/2 M \dot{x}^2 + 1/2 M \dot{y}^2$$
and if the potential is defined to be zero at the horizontal, the potential will be
$$U = -Mgy + U_{string}$$
This is the part I need a quick help on. The x block has a zero potential because it stays along the horizontal where the zero potential is defined, and the hanging block will have a potential of -Mgy. I know that the mass of the string contributing to the potential will increase as the string moves down, until finally the whole string hangs over the side. So I was thinking that
$$U_{string} = -\frac{m}{t}*g*y$$
That gives the mass per unit time for a given length y, which would also be
$$U_{string} = -m g \dot{y}$$
But units don't work out correctly unless I divide U_string by t, which would create a discontinuity and not make any sense. I don't know why I am having so much trouble with such a simple prospect.
2. Sep 2, 2007
### Irid
Why did you use (m/t)? You should find the center of mass of the part of the string which hangs below the table. If the total mass is m, then the hanging part is
$$m'=m\frac{y}{l}$$
and mass center is in the middle (y/2). This information should give you the potential energy of the string. | 2018-10-19 02:12:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.508479118347168, "perplexity": 342.6577618445855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512268.20/warc/CC-MAIN-20181019020142-20181019041642-00092.warc.gz"} |
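For completeness (an editorial sketch, not part of the original thread), carrying Irid's hint through: the hanging portion has mass $$m' = m\frac{y}{l}$$ with its center of mass at depth y/2, so
$$U_{string} = -\left(m\frac{y}{l}\right) g \frac{y}{2} = -\frac{mgy^2}{2l}$$
and, if the string's kinetic energy is also included, the total kinetic energy is $$T = \frac{1}{2}(2M+m)\dot{y}^2$$, since the inextensible string forces $$\dot{x} = \dot{y}$$. The Euler-Lagrange equation in y then gives
$$(2M+m)\ddot{y} = Mg + \frac{mg}{l}y$$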
https://bitworking.org/news/2003/09/Please_do_not_use_D_as_an_ACCESSKEY_in_your_HTML | Congrats, your new site is redesigned, all shiny and new, and in the redesign you decided to cover accessibility and in the process start using accesskey. Now accesskey is neat, in that it which gives you the power to re-map keys to move the keyboard focus to a control or to follow a link, but with that power comes responsibility. You can add an accesskey attribute to a, area, button, input, label, legend, and textarea.
I use the keyboard shortcuts for all my applications as much as possible. One of those keyboard shortcuts I use all day is Alt-D, which from within Mozilla or IE will jump the keyboard focus to the address bar of the browser. Unless, of course, I'm reading your webpage, where you have defined the ACCESSKEY D to jump to some particular link or web form element. Bugzilla is one such example. That is annoying. Don't do that. You are actually reducing your site's accessibility. While we're at it, never define ACCESSKEY bindings for:
• F - File
• B - Bookmarks
• E - Edit
• V - View
• G - Go
• T - Tools
• H - Help
• A - Favorites
Your safest bet, if you want access keys, is to bind them to numeric keys only, which has the smallest chance of conflicting with browser based keyboard shortcuts.
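For example (illustrative markup, not from the original post), numeric access keys might look like this:

```html
<!-- Numeric access keys are unlikely to collide with browser menu shortcuts -->
<a href="#content" accesskey="1">Skip to main content</a>
<form action="/search">
  <input type="text" name="q" accesskey="4" title="Search (access key 4)">
</form>
```

Browsers of that era generally bound these to Alt plus the key, so digits steer clear of the Alt-letter menu mnemonics listed above.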
Update 1: Damian Cugley points out that the exact keys listed above can't be counted on, because the shortcut keys for the browser depend on the I18N of the browser. All the more reason to stick to numeric access keys.
Update 2: Just stumbled across another site: National Film Board of Canada. Do I need to start a running list?
The other trick is that the access-keys that cannot be used will vary according to the locale of your reader, because the words they are mnemonic for will of course be spelled differently...
The best approach I can think of to that little problem is to make sure that when I internationalize a web page, the access keys are included in the i18n effort. That will not save someone who is reading your page in English in a non-English browser, but that can't be helped.
Posted by Damian Cugley on 2003-09-04
FYI, Ctrl-L puts focus on the address bar in Mozilla, and mozilla derivatives (Firebird, etc).
Posted by Peter Kovacs on 2003-09-05
Actually, the problem in my mind isn't the websites that do this; it is the user agent.
A system key binding should not be able to be overridden by HTML.
One could go so far as to assert that it is a security issue (I wouldn't), in that the standard behavior the user expects is now provided with a different mechanism. For example, if you caught the Alt-D event and then called a trigger to redirect you to another site...
Ideally the user agent would allow you to prevent this and even give the user a warning that a site is attempting to provide a new implementation of a standard key binding.
Posted by Kevin Burton on 2003-09-05
Kevin,
I agree; the fact that the access keys and user agents use the same meta key seems like a bug and not a feature.
Posted by Joe on 2003-09-06
If the user agent we talk about is Mozilla, then we talk about bug 128452:
http://bugzilla.mozilla.org/show_bug.cgi?id=128452
Posted by Martijn on 2003-09-07
Better tell that to the people on the site I link! Although they recommend the government numerical standard (which has been widely adopted internationally), they do suggest alt+D...
Posted by Richard on 2003-12-03
we're forgetting that Ctrl-Tab focuses the Address bar in IE
[not that I use IE under Linux : )]
Posted by Chris Neale on 2004-01-01
http://www.physicsforums.com/showthread.php?t=64536 | # Volumes in the 4th spatial dimension
by Aki
Tags: dimension, spatial, volumes
How would you calculate the volume of a 4-dimensional object? Like a hypercube, hypersphere, etc...
Hypercube with a side n: n^4 I guess.
Here, check this out... http://mathworld.wolfram.com/Four-Di...lGeometry.html Remember to bookmark this website, for it is very handy.
## Volumes in the 4th spatial dimension
You can find the volume of an N-dimensional sphere of radius R by the following integral:
$$V_N(R)=\int\theta(R^2-x^2)d^Nx$$
where $x^2=\sum x_n^2$ and $\theta$ is the unit step function.
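That integral can be checked numerically. A quick sketch (mine, not from the thread) in plain Python: sample the cube $[-R, R]^N$ uniformly, count the points inside the ball, and compare with the closed form $\pi^{N/2}R^N/\Gamma(N/2+1)$:

```python
import math
import random

def ball_volume_exact(dim, radius=1.0):
    # Closed form for the N-ball: pi^(N/2) * R^N / Gamma(N/2 + 1)
    return math.pi ** (dim / 2) * radius ** dim / math.gamma(dim / 2 + 1)

def ball_volume_mc(dim, radius=1.0, samples=200_000, seed=42):
    # Monte Carlo estimate of V_N(R) = integral of theta(R^2 - sum x_i^2)
    # over the bounding cube [-R, R]^dim.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        if sum(rng.uniform(-radius, radius) ** 2 for _ in range(dim)) < radius ** 2:
            hits += 1
    return (2 * radius) ** dim * hits / samples

for d in (2, 3, 4):
    print(d, round(ball_volume_mc(d), 3), round(ball_volume_exact(d), 3))
# The dim = 4 estimate should come out near pi^2 / 2 ≈ 4.935.
```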
How about hyperspherical packing, like sphere packing? How would you do that?
Quote by Galileo You can find the volume of an N-dimensional sphere of radius R by the following integral: $$V_N(R)=\int\theta(R^2-x^2)d^Nx$$ where $x^2=\sum x_n^2$ and $\theta$ is the unit step function.
The volume of any sphere (any # of dimensions) is ZERO...You were probably referring to a ball... An N-1 dimensional ball... (Assuming it is open,the equation would be $\sum_{i=1}^{n} x_{i}^{2} < R^{2}$ )
Daniel.
P.S. Of course, it's natural to choose the coordinate system with the origin at the center of the ball.
Quote by dextercioby The volume of any sphere (any # of dimensions) is ZERO...
Here we go again...
It's not mathematics, the "thing" you're trying to do by ignoring the WIDELY ACCEPTED definitions of current mathematics... I don't know what it is; I'm assuming it is bulls***. Daniel.
Quote by Galileo Here we go again...
My sentiments exactly.
Quote by damoclark Now how could you calculate the surface area of a sphere? If you get a basket ball or something you can see that the surface area of a sphere is the infinite sum of circles which starting from one pole of the surface of the sphere, get bigger, until one reaches the equator then shrink back to zero radius at the other pole. Assuming your sphere has radius 1, you'll find the circumference of your circle r units away from a pole is 2*Pi*sin(r). Integrate that between 0 and Pi and you'll get 4*Pi, which is the surface area of your sphere. Since the surface area of a sphere of radius R has units R^2, then the Surface area of a general sphere of radius R is 4*Pi*R^2.
I'm already lost here, lol. How did you get 4*pi when you integrate 2*pi*sin(r)? Shouldn't it be 2*pi*(-cos r) if you take the antiderivative?
And what do you mean by "integrate between 0 and pi"?
Thanks
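For reference (an editorial note, not part of the thread): the 4*Pi comes from evaluating the antiderivative at the limits of integration,
$$\int_0^{\pi} 2\pi \sin(r)\, dr = 2\pi\left[-\cos(r)\right]_0^{\pi} = 2\pi\left(-\cos\pi + \cos 0\right) = 2\pi(1 + 1) = 4\pi$$
"Integrate between 0 and Pi" just means taking this definite integral from r = 0 to r = π, i.e. sweeping the circles from one pole to the other.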
https://www.physicsforums.com/threads/generating-correlated-normals.915220/ | # Generating correlated normals
1. May 20, 2017
### fignewtons
1. The problem statement, all variables and given/known data
Given correlation matrix
$$M = \begin{bmatrix} 1 & .3 & .5 \\ .3 & 1 & .2 \\ .5 & .2 & 1 \\ \end{bmatrix}$$
And 3 independent standard normals $$N_1, N_2, N_3$$
using cholesky decomposition
A) get the correlated standard normals
B) and if you want to transform them such that A ~ N(0,2), B~N(2,8), C~N(4,9) what is it?
2. Relevant equations
Cholesky decomposition: $$M = Z*Z^T$$ where Z is a lower triangular matrix.
3. The attempt at a solution
A) the correlated standard normals I get are
$$A = N_1 \\ B = 0.3 N_1 + \sqrt{.91}N_2 \\ C = 0.2 N_1 + 0.05241 N_2 + 0.86444N_3$$
Is this correct?
B) do I simply add the mean and scale the variance? Ie. for C, I get $$C = 4 + \sqrt{\frac{9}{.79}}C_0$$ where $$C_0$$ is the untransformed variable ~N(0, 0.79). Please check if my reasoning is correct.
2. May 20, 2017
### andrewkirk
For (A) I get a different answer to you, although I entered the mtx in a rush so may have mistyped. What did you get for the Z matrix?
For (B) the variance scaling is simply by the given variances, ie 2, 8 and 9, since the random variable being scaled is standard normal. Why do you think the untransformed variable is N(0,0.79)? From part (A) the new variables created by the Cholesky multiplication were supposed to be standard normals.
3. May 21, 2017
### fignewtons
EDIT: I noticed that I copied down the equation for C incorrectly because I looked up the incorrect Z_3,1. After the revision, I did get C~N(0,1).
Thanks for the help!
-------------------------
The Z matrix I got was
$$Z = \begin{bmatrix} 1 & 0 & 0 \\ .3 & \sqrt{.91} & 0 \\ .5 & .05241 & .86444 \\ \end{bmatrix}$$
Using the relation Y = Z*N where Z is as above and N is the column vector of standard normals, I get for Y is a column vector of correlated normals with row 1 being A, row 2 being B, and row 3 being C.
When I write out Y, it is the above part A answer.
I try to figure out what is E[C] and Var[C]. For E[C] I use linearity of expectation to get $$E[C] = 0.5 E[N_1] + 0.05241E[N_2] + 0.86444E[N_3]$$ and since E[N_i] = 0, E[C] = 0
To get Var[C] I use the variance of a sum of independent variables (the N_i's), so $$Var[C] = 0.5^2 Var[N_1] + 0.05241^2Var[N_2] + 0.86444^2Var[N_3]$$ and since Var[N_i] = 1, the variance is basically the sum of the squared constants, which is 1.
Thus I get that C is ~N(0,1), what I call untransformed
When we transform this to be ~N(4,9), what I thought to do is to add 4 to it and apply a factor to scale the variance to 9. Since we already have variance 1, and a multiplicative factor is squared when it comes out of the variance, we need a factor of $$\sqrt{9}$$
Not sure if this is correct but it's what makes sense to me right now.
Last edited: May 21, 2017
4. May 21, 2017
### andrewkirk
@fignewtons Yes that all looks correct. Make sure the multiplication by $\sqrt 9$ is done before adding 4.
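The whole exercise is easy to verify numerically. A sketch (editorial, plain standard-library Python rather than anything from the thread): generate samples with the Z found above and check that the empirical correlations reproduce M.

```python
import math
import random

# Lower-triangular Cholesky factor Z of M, entries as derived in the thread
z21, z22 = 0.3, math.sqrt(0.91)
z31 = 0.5
z32 = (0.2 - 0.3 * 0.5) / math.sqrt(0.91)      # ≈ 0.05241
z33 = math.sqrt(1.0 - z31 ** 2 - z32 ** 2)     # ≈ 0.86444

rng = random.Random(1)
n = 200_000
samples = []
for _ in range(n):
    n1, n2, n3 = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
    a = n1                                 # A, standard normal
    b = z21 * n1 + z22 * n2                # B, correlated with A
    c = z31 * n1 + z32 * n2 + z33 * n3     # C
    # part (B) of the problem: e.g. C' = 4 + math.sqrt(9) * c is N(4, 9)
    samples.append((a, b, c))

def corr(i, j):
    # empirical Pearson correlation between components i and j
    xi = [s[i] for s in samples]
    xj = [s[j] for s in samples]
    mi, mj = sum(xi) / n, sum(xj) / n
    cov = sum((x - mi) * (y - mj) for x, y in zip(xi, xj)) / n
    var_i = sum((x - mi) ** 2 for x in xi) / n
    var_j = sum((y - mj) ** 2 for y in xj) / n
    return cov / math.sqrt(var_i * var_j)

print(round(corr(0, 1), 2), round(corr(0, 2), 2), round(corr(1, 2), 2))
# the empirical correlations should land near 0.3, 0.5, 0.2
```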
http://blogs.harvard.edu/pamphlet/2015/09/28/whence-function-notation/ | ## Whence function notation?
### September 28th, 2015
I begin — in continental style, unmotivated and, frankly, gratuitously — by defining Ackermann's function $$A$$ over two integers:
$A(m, n) = \begin{cases} n + 1 & \text{if } m = 0 \\ A(m-1,\, 1) & \text{if } m > 0 \text{ and } n = 0 \\ A(m-1,\, A(m, n-1)) & \text{if } m > 0 \text{ and } n > 0 \end{cases}$
…drawing their equations evanescently in dust and sand… Image of “Death of Archimedes” from Charles F. Horne, editor, Great Men and Famous Women, Volume 3, 1894. Reproduced by Project Gutenberg. Used by permission.
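Read operationally, the definition is a recursive program. A small illustrative sketch (mine, not Euler's, and not part of the original post) in Python, memoized so repeated subcalls are cheap:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Ackermann's function for non-negative integers, per the definition above."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3), ackermann(3, 3))  # → 9 61
```

With the closed forms A(1, n) = n + 2, A(2, n) = 2n + 3 and A(3, n) = 2^(n+3) − 3, the printed values can be checked by hand.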
You’ll have appreciated (unconsciously no doubt) that this definition makes repeated use of a notation in which a symbol precedes a parenthesized list of expressions, as for example $$f(a, b, c)$$. This configuration represents the application of a function to its arguments. But you knew that. And why? Because everyone who has ever gotten through eighth grade math has been taught this notation. It is inescapable in high school algebra textbooks. It is a standard notation in the most widely used programming languages. It is the very archetype of common mathematical knowledge. It is, for God’s sake, in the Common Core. It is to mathematicians as water is to fish — so encompassing as to be invisible.
Something so widespread, so familiar — it’s hard to imagine how it could be otherwise. It’s difficult to un-see it as anything but function application. But it was not always thus. Someone must have invented this notation, some time in the deep past. Perhaps it came into being when mathematicians were still drawing their equations evanescently in dust and sand. Perhaps all record has been lost of that ur-application that engendered all later function application expressions. Nonetheless, someone must have come up with the idea.
…that ur-application… Photo from the author.
Surprisingly, the origins of the notation are not shrouded in mystery. The careful and exhaustive scholarship of mathematical historian Florian Cajori (1929, page 267) argues for a particular instance as originating the use of this now ubiquitous notation. Leonhard Euler, the legendary mathematician and perhaps the greatest innovator in successful mathematical notations, proposed the notation first in 1734, in Section 7 of his paper “Additamentum ad Dissertationem de Infinitis Curvis Eiusdem Generis” [“An Addition to the Dissertation Concerning an Infinite Number of Curves of the Same Kind”].
The paper was published in 1740 in Commentarii Academiae Scientarium Imperialis Petropolitanae [Memoirs of the Imperial Academy of Sciences in St. Petersburg], Volume VII, covering the years 1734-35. A visit to the Widener Library stacks produced a copy of the volume, letterpress printed on crisp rag paper, from which I took the image shown above of the notational innovation.
Here is the pertinent sentence (with translation by Ian Bruce.):
Quocirca, si $$f\left(\frac{x}{a} +c\right)$$ denotet functionem quamcunque ipsius $$\frac{x}{a} +c$$ fiet quoque $$dx − \frac{x\, da}{a}$$ integrabile, si multiplicetur per $$\frac{1}{a} f\left(\frac{x}{a} + c\right)$$.
[On account of which, if $$f\left(\frac{x}{a} +c\right)$$ denotes some function of $$\frac{x}{a} +c$$, it also makes $$dx − \frac{x\, da}{a}$$ integrable, if it is multiplied by $$\frac{1}{a} f\left(\frac{x}{a} + c\right)$$.]
There is the function symbol — the archetypal $$f$$, even then, to evoke the concept of function — followed by its argument corralled within simple curves to make clear its extent.
It’s seductive to think that there is an inevitability to the notation, but this is an illusion, following from habit. There are alternatives. Leibniz for instance used a boxy square-root-like diacritic over the arguments, with numbers to pick out the function being applied: $$\overline{a; b; c\,} \! | \! \lower .25ex {\underline{\,{}^1\,}} \! |$$, and even Euler, in other later work, experimented with interposing a colon between the function and its arguments: $$f : (a, b, c)$$. In the computing world, “reverse Polish” notation, found on HP calculators and the programming languages Forth and Postscript, has the function symbol following its arguments: $$a\,b\,c\,f$$, whereas the quintessential functional programming language Lisp parenthesizes the function and its arguments: $$(f\ a\ b\ c)$$.
Finally, ML and its dialects follow Church’s lambda calculus in merely concatenating the function and its (single) argument — $$f \, a$$ — using parentheses only to disambiguate structure. But even here, Euler’s notation stands its ground, for the single argument of a function might itself have components, a ‘tuple’ of items $$a$$, $$b$$, and $$c$$ perhaps. The tuples might be indicated using an infix comma operator, thus $$a,b,c$$. Application of a function to a single tuple argument can then mimic functions of multiple arguments, for instance, $$f (a, b, c)$$ — the parentheses required by the low precedence of the tuple forming operator — and we are back once again to Euler’s notation. Clever, no? Do you see the lengths to which people will go to adhere to Euler’s invention? As much as we might try new notational ideas, this one has staying power.
#### References
Florian Cajori. 1929. A History of Mathematical Notations, Volume II. Chicago: Open Court Publishing Company.
Leonhard Euler. 1734. Additamentum ad Dissertationem de Infinitis Curvis Eiusdem Generis. In Commentarii Academiae Scientarium Imperialis Petropolitanae, Volume VII (1734–35), pages 184–202, 1740.
### 3 Responses to “Whence function notation?”
1. Ingo Blechschmidt Says:
There are in fact arguments that one should use the notation “(x)f” instead of “f(x)”. This proposal harmonizes the reading direction (from left to right) with the data flow: The input “x” is given to “f” which produces an output.
This is especially useful when composing functions. A complex function “h” might be the composite of simpler functions “f” and “g”: “h(x) = g(f(x))”. This means that “x” is first given to “f”, which produces an intermediate value, which is then given to “g”. Notice that the flow of data doesn’t match the reading direction. In the alternate proposal, the equation would read “(x)h = ((x)f)g”.
In several branches of mathematics, so called “commutative diagrams” are an important visualization aid. Reversing the traditional notation would help with translating those diagrams to formulas and back.
2. asd Says:
>Because everyone who has ever gotten through eighth grade math has been taught this notation.
“everyone in the US”, you mean.
3. Stuart Shieber Says:
I’m pretty sure this notation is used (and taught) outside of the US too. | 2019-12-14 11:36:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7606750130653381, "perplexity": 2624.9094895323014}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540586560.45/warc/CC-MAIN-20191214094407-20191214122407-00450.warc.gz"} |
https://proofwiki.org/wiki/Definite_Integral_to_Infinity_of_Power_of_x_over_Power_of_x_plus_Power_of_a | # Definite Integral to Infinity of Power of x over Power of x plus Power of a
## Theorem
$\displaystyle \int_0^\infty \dfrac {x^m \rd x} {x^n + a^n} = \frac {\pi a^{m + 1 - n} } {n \sin \left({\left({m + 1}\right) \frac \pi n}\right)}$
for $0 < m + 1 < n$.
## Proof
$\displaystyle \begin{aligned} \int_0^\infty \dfrac {x^m \rd x} {x^n + a^n} &= \int_0^\infty \dfrac {x^m \rd x} {\left({x^{m + 1}}\right)^{\frac n {m + 1}} + \left({a^{m + 1}}\right)^{\frac n {m + 1}}} \\ &= \frac 1 {m + 1} \int_0^\infty \dfrac {\rd u} {u^{\frac n {m + 1}} + \left({a^{m + 1}}\right)^{\frac n {m + 1}}} && \text{substituting } u = x^{m + 1} \\ &= \frac \pi {\left({m + 1}\right) \left({\frac n {m + 1}}\right) \left({a^{m + 1}}\right)^{\frac n {m + 1} - 1}} \csc \left({\frac {\left({m + 1}\right) \pi} n}\right) && \text{Definite Integral to Infinity of } \dfrac 1 {1 + x^n} \text{: Corollary} \\ &= \frac {\pi \, a^{m + 1}} {n \, a^{\left({m + 1}\right) \frac n {m + 1}}} \csc \left({\frac {\left({m + 1}\right) \pi} n}\right) \\ &= \frac {\pi \, a^{m + 1 - n}} {n \sin \left({\left({m + 1}\right) \frac \pi n}\right)} \end{aligned}$
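As an aside (not part of the ProofWiki proof), the closed form is easy to sanity-check numerically, for example with SciPy's adaptive quadrature:

```python
import numpy as np
from scipy.integrate import quad

def lhs(m, n, a):
    """Numerically evaluate the integral of x^m / (x^n + a^n) over [0, inf)."""
    value, _err = quad(lambda x: x**m / (x**n + a**n), 0, np.inf)
    return value

def rhs(m, n, a):
    """The claimed closed form, valid for 0 < m + 1 < n."""
    return np.pi * a**(m + 1 - n) / (n * np.sin((m + 1) * np.pi / n))

# A few parameter triples satisfying 0 < m + 1 < n:
for m, n, a in [(0, 2, 1.0), (1, 4, 2.0), (2, 5, 1.5)]:
    assert abs(lhs(m, n, a) - rhs(m, n, a)) < 1e-6
```

For instance, with $m = 0$, $n = 2$, $a = 1$ both sides reduce to the familiar $\int_0^\infty \frac {\rd x} {1 + x^2} = \frac \pi 2$.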
$\blacksquare$ | 2020-04-04 17:36:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955112338066101, "perplexity": 109.72039555364869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524604.46/warc/CC-MAIN-20200404165658-20200404195658-00112.warc.gz"} |
https://www.bedworks.com.au/231-latex-mattresses?q=Mattress+Thickness-35cm | Filter By
Price
Price
• \$879 - \$1,099
Select Size
Select Size
Mattress Thickness
Mattress Thickness
Active filters
• Mattress Thickness: 35cm
• -20%
• On sale!
• -20%
## Latex Mattresses
With countless mattresses available online in Sydney, finding the best one for you can be difficult. There are many factors to consider when buying the right mattress. You may ask questions like 'is the mattress good for your back?', 'does the mattress get hot?' and so on. With all the questions to ask when buying a mattress, the first thing to consider is the material.

A latex mattress is one of the best mattresses to buy. You may ask, 'are latex mattresses good?' The answer is yes! Latex mattresses are good mattresses to buy because they are extremely comfortable. Latex is naturally springy and therefore provides great back support while simultaneously promoting pressure relief. When you lie down on a quality latex mattress, your hips, shoulders and back are enveloped in a 'gentle embrace' that relieves pressure without letting you sink in. It relaxes your body, giving off a hammock-like feeling. This kind of comfort is what makes a latex mattress good, and it is even the recommended mattress for those with back or joint pain.

Another thing that makes latex mattresses good is their natural resilience. This means that latex mattresses are capable of absorbing and isolating movement. Latex mattresses allow for minimal partner disturbance, as movement is not easily transmitted to the other side of the mattress. You get to sleep peacefully and undisturbed throughout the night.

You may ask, 'are latex mattresses hot?' The answer is no. Latex has a naturally open-cell structure that promotes air circulation and ventilation. Most latex mattresses utilise a series of pinholes in the latex, further increasing air flow. As air passes through your latex mattress, it dispels heat and keeps you cool and comfortable all through the night, making a latex mattress good even on humid nights.

With a latex mattress promoting air flow throughout, moisture is dissipated, letting you stay dry through the night. As you stay dry, mould, bacteria and allergen build-up is kept at bay. Latex mattresses are resistant to mould and dust mites, making them one of the best hypoallergenic mattresses around.

Mattresses can be expensive. That is why you need a good mattress that lasts. Latex mattresses are among the most durable types of mattress on the market. They last longer, letting you enjoy their comfort for many years.
BEDWORKS is the right place to search for latex mattress as we carry a wide range of best latex mattress online. We have latex mattress at every mattress size at every price point. If you’re looking for a queen size latex mattress, a good king single size latex mattress or even as big as a super king size latex mattress, we have the best selection of latex mattress in Australia. Whether you need a firm latex mattress or a plush latex mattress, we have every latex mattress firmness available, so you get the best latex mattress that’s right for you. Visit our latex mattress online store and see our entire range! | 2021-12-09 10:00:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8090483546257019, "perplexity": 5958.589895943038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00563.warc.gz"} |
https://datascience.stackexchange.com/questions/41255/guidance-needed-with-dimension-reduction-for-clustering-some-numerical-lots-o | # Guidance needed with dimension reduction for clustering - some numerical, lots of categorical data
I have my data in a Pandas df with 25,000 rows and 1,500 columns, without any NaNs. About 30 of the columns contain numerical data, which I standardized with StandardScaler(). The rest are columns with binary values that originated from columns with categorical data (I used pd.get_dummies() for this).
Now I'd like to reduce the dimensions. I have already been running
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(df)
for three hours now, and I have been asking myself whether my approach is correct. I also saw two variants of PCA, one of them for sparse data. Does that mean it doesn't make sense to run plain PCA in such a mixed scenario?
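In case it helps frame the question: here is a sketch of the sparse-friendly route I have read about (toy stand-in data with made-up column names; TruncatedSVD is often suggested as the PCA analogue for sparse input):

```python
import numpy as np
import pandas as pd
from scipy import sparse
from sklearn.decomposition import TruncatedSVD

# Toy stand-in for my real frame: a few numeric columns plus many 0/1 dummies.
rng = np.random.default_rng(0)
num = pd.DataFrame(rng.normal(size=(1000, 5)), columns=[f"n{i}" for i in range(5)])
dums = pd.DataFrame(rng.integers(0, 2, size=(1000, 200)),
                    columns=[f"d{i}" for i in range(200)])
df = pd.concat([num, dums], axis=1)

X = sparse.csr_matrix(df.to_numpy())   # dummies make the matrix mostly zeros
svd = TruncatedSVD(n_components=2, random_state=0)
coords = svd.fit_transform(X)          # typically far faster than dense PCA here
print(coords.shape)                    # (1000, 2)
```

Would something along these lines be the recommended replacement, or is there a better strategy altogether?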
Since I have so far been busy cleaning and transforming my data, I'd like to understand what a good strategy would be for eliminating irrelevant columns.
I'd appreciate some hints to move forward. | 2021-04-17 20:46:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25370854139328003, "perplexity": 1527.6641740301782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038464045.54/warc/CC-MAIN-20210417192821-20210417222821-00197.warc.gz"} |
https://dwwiki.mooo.com/w/index.php?title=Totem&oldid=67849 | # Totem
Totem
Ritual information

Deities: Pishe, Gufnork, Gapp, Sandelfon, Fish, Hat, Sek
GP cost: 40 + 20 per maintained minion
Components: holy symbol, god-specific item
Required powers: Speech
Learned at: 25-30 faith.rituals.misc.target
Skills: faith.rituals.defensive.area, faith.rituals.defensive.target
Resisted by: no
Angers? no
Type: Defensive
Steps: 2
Targets: objects
Description: Transforms an item into a being.

Priestwiki: Discworld ritual help
Totem is a priest ritual that summons a creature to protect you. You can only have one totem at a time.
## Performing
This ritual costs 40 GP, and is performed on an item specific to your deity.
It requires the power of speech.
### Skills
This ritual uses faith.rituals.defensive.area and faith.rituals.defensive.target.
### Components
The target for the ritual varies according to deity.
### Performing messages
Performing
You chant the psalm of transformation.
You call upon Sek to transform a cured human right eye.
Success
Your cured human right eye flashes brightly and gives rise to a large salamander.
Failure
Sek refuses to grant you a totem.
What others see
Priest chants a psalm.
Priest calls upon Sek.
Priest's cured human right eye flashes brightly and gives rise to a large salamander.
## Totems
Totems are a type of minion that will automatically assist the priest in a fight and protect them. They can be ordered to protect npcs, but not other players.
Their skills vary[2], but their offensive abilities are fairly low, meaning that their main use is in defending the priest (they're capable of absorbing a hundred percent of all hits until they die) and in sapping the enemy's action points.
Each deity grants a different form of totem.
Using a totem while killing something will cause you to get less burial xp, but--depending on your guild level and what you're killing--it can also cause you to get more kill xp. For their summoner, totems seem to share kill and burial xp just like a groupmate (one with a low guild level). However, they will "drain" the kill and burial xp of the summoner's actual groupmates, since a totem doesn't share as much xp with them.
### Duration
A totem will leave of its own accord after sixteen minutes, give or take a few seconds, if it is not killed or dismissed before that time. Skill does not appear to be a factor.
You can extend this duration by feeding them enough gp at once:
Forms of syntax available for the command 'feed':
feed <positive number> {gp|gps|guild points} [to] <minion>
feed <minion> fully
### Leaving
For most totems, leaving looks like this:
The <totem> crumbles into dust.
For Pishe, however, it looks like this:
The misty woman dissipates into the air.
## Wards
When a ward is triggered, a totem is summoned. The totem follows and will obey the victim.
## Understanding Minion GP Cost
It costs 40 gp to summon a totem if you are not already controlling any minions, and an additional 20 gp for every minion you're controlling. So, summoning a totem when you have one dust devil will cost 60 gp.
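The cost rule is just a linear formula; as an unofficial illustration (not game code):

```python
def totem_gp_cost(minions_controlled):
    """GP cost to summon a totem: 40 base plus 20 per minion already controlled."""
    return 40 + 20 * minions_controlled

print(totem_gp_cost(0))  # 40: no minions yet
print(totem_gp_cost(1))  # 60: e.g. one dust devil already up
```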
## Notes
If your totem leaves the room you're in--for example, because Fear or Agoraphobia was performed on it, or because it was dragged underwater due to a lack of swimming skills--you will be dragged along with it, because you automatically follow your totems (you can, however, unfollow them to avoid this). If you leave the room through a non-standard exit, such as a climbing exit, the totem will appear in your room after a second or two instead of following you normally.
Sometimes a totem will stop protecting you for no very obvious reason (this sometimes correlates with them just appearing in your room instead of following you as they normally do, even when there's nothing odd about the exits). If this happens, it's best to just order it to leave and summon a new one. | 2023-02-08 20:13:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33386287093162537, "perplexity": 10009.119157396712}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00653.warc.gz"} |
https://support.bioconductor.org/p/9144883/ | Error in shrinkage for DESeq2
Kent • 0
@5f50f52d
Last seen 7 days ago
United Kingdom
Hi, I have done a DESeq2 DGE analysis and it showed an error when I was using ashr shrinkage:
using 'ashr' for LFC shrinkage. If used in published research, please cite:
Stephens, M. (2016) False discovery rates: a new deal. Biostatistics, 18:2.
https://doi.org/10.1093/biostatistics/kxw041
warning: solve(): system is singular (rcond: 4.52254e-18); attempting approx solution
warning: solve(): system is singular (rcond: 7.84057e-18); attempting approx solution
warning: solve(): system is singular (rcond: 3.78635e-19); attempting approx solution
warning: solve(): system is singular (rcond: 6.26207e-19); attempting approx solution
warning: solve(): system is singular (rcond: 7.78489e-19); attempting approx solution
warning: solve(): system is singular (rcond: 1.14601e-18); attempting approx solution
I assume that stems from collinearity of some of the factors? Is there any fix? May I ask whether the result is trustworthy? And if I am going to use it, is there anything I should be careful of?
The code I used:
st$sv <- svobj$sv  # st is the sample table; svobj$sv holds the sva variables estimated for batch effect correction
ddsTxi.1 <- DESeqDataSetFromTximport(txi,  # txi is the tximport from kallisto of a cancer patient cohort
                                     colData = st,
                                     design = ~ 0 + Final.Label + sv)  # Final.Label is the cancer subtype of each patient
# Pre-filtering: keep genes with TPM above 2 in at least 4% of the samples (there are some rare subtypes)
library(genefilter)
abundance = txi$abundance
abs_crit = pOverA(0.04, 2)
abs_filter <- filterfun(abs_crit)
keep <- genefilter(abundance, abs_filter)
dds.1 <- ddsTxi.1[keep,]
dds.1 <- DESeq(dds.1)
res.1 <- lfcShrink(dds.1, contrast = c(-1/12,-1/12,-1/12,-1/12,1,-1/12,-1/12,-1/12,-1/12,-1/12,-1/12,-1/12,-1/12,0), type = "ashr")
Edit: Thank you Michael for pointing towards the solution here: Warnings when using "ashr" in lfcshrinkage for DESeq2
sva DESeq2 • 276 views
@mikelove
Last seen 2 days ago
United States
There were some recent support site posts with the same error, and a fix emerged as I remember.
Hi Michael. Thanks for the heads-up. I actually read some of the posts here and on Biostars regarding the same error, most of which you replied to. It seems like most people tried to fix it by filtering out more genes with low counts, but the problem persists, if I interpreted the replies correctly. Do you recall what kind of fix it was? Maybe it is just a few Google searches away, but I have the wrong keywords.
I searched ashr and did a time sort:
https://support.bioconductor.org/post/search/?query=Ashr&order=date
It’s the most recent after this one.
Powered by the version 2.3.6 | 2022-06-28 15:10:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33260610699653625, "perplexity": 10197.25287390192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00606.warc.gz"} |
https://planetmath.org/Surjective | # surjective
A function $f\colon X\to Y$ is called surjective or onto if, for every $y\in Y$, there is an $x\in X$ such that $f(x)=y$.
Equivalently, $f\colon X\to Y$ is onto when its image is all the codomain:
$\mathrm{Im}f=Y.$
## Properties
1. If $f\colon X\to Y$ is any function, then $f\colon X\to f(X)$ is a surjection. That is, by restricting the codomain, any function induces a surjection.

2. The composition of surjective functions (when defined) is again a surjective function.

3. If $f\colon X\to Y$ is a surjection and $B\subseteq Y$, then (see this page (http://planetmath.org/InverseImage))
$ff^{-1}(B)=B.$
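For finite sets, surjectivity can be checked by direct enumeration. The following small Python illustration is an addition to this entry (the function names are made up):

```python
def is_surjective(f, domain, codomain):
    """Check onto-ness of f restricted to finite domain/codomain sets."""
    return {f(x) for x in domain} == set(codomain)

f = lambda x: x % 3
print(is_surjective(f, range(10), {0, 1, 2}))      # True: every residue is hit
print(is_surjective(f, range(10), {0, 1, 2, 3}))   # False: 3 is never attained

# Property 1: restricting the codomain to the image always yields a surjection.
image = {f(x) for x in range(10)}
print(is_surjective(f, range(10), image))          # True
```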
Title surjective Canonical name Surjective Date of creation 2013-03-22 12:32:48 Last modified on 2013-03-22 12:32:48 Owner drini (3) Last modified by drini (3) Numerical id 7 Author drini (3) Entry type Definition Classification msc 03-00 Synonym onto Related topic TypesOfHomomorphisms Related topic InjectiveFunction Related topic Bijection Related topic Function Related topic OneToOneFunctionFromOntoFunction Defines surjection | 2020-10-21 16:59:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 11, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9249840378761292, "perplexity": 2394.0185016931887}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876768.45/warc/CC-MAIN-20201021151342-20201021181342-00053.warc.gz"} |
http://www.ck12.org/book/CK-12-Middle-School-Math-Concepts-Grade-7/section/2.12/ | <meta http-equiv="refresh" content="1; url=/nojavascript/">
# 2.12: Commutative Property of Multiplication with Decimals
Difficulty Level: At Grade Created by: CK-12
Credit: Mark Mrwizard
Source: https://www.flickr.com/photos/mark_mrwizard/5990295938/
Marigold has a lot of tomato plants in her vegetable garden. Marigold is planning to pick the ripe tomatoes and make salsa with them. She looks up a basic recipe. The recipe says that for every cup of tomatoes, she will need 0.5 onion. For every onion, she needs 4 cloves of garlic. How can Marigold determine how many cloves of garlic she needs in terms of the number of cups of tomatoes she picks?
In this concept, you will learn to identify and use the commutative and associative properties of multiplication with decimals.
### Guidance
The Commutative Property of Multiplication states that when finding a product, changing the order of the factors will not change their product. In symbols, the Commutative Property of Multiplication says that for numbers $a$ and $b$:

$ab = ba$
Here is an example using simple whole numbers.
Show that $2 \cdot 4 = 4 \cdot 2$.

First, find $2 \cdot 4$.

$2 \cdot 4 = 8$

Next, find $4 \cdot 2$.

$4 \cdot 2 = 8$

Notice that both products are 8.

The answer is that because both $2 \cdot 4$ and $4 \cdot 2$ are equal to 8, they are equal to each other.

$2 \cdot 4 = 4 \cdot 2$
The Associative Property of Multiplication states that when finding a product, changing the way factors are grouped will not change their product. In symbols, the Associative Property of Multiplication says that for numbers $a$, $b$ and $c$:

$(ab)c = a(bc)$
Here is an example using simple whole numbers.
Show that $(2 \cdot 5) \cdot 6 = 2 \cdot (5 \cdot 6)$.

First, find $(2 \cdot 5) \cdot 6$. Start by multiplying the numbers in parentheses. Then multiply the result by 6.

$(2 \cdot 5) \cdot 6 = 10 \cdot 6 = 60$

Next, find $2 \cdot (5 \cdot 6)$. Again, start by multiplying the numbers in parentheses. Then multiply 2 by the result.

$2 \cdot (5 \cdot 6) = 2 \cdot 30 = 60$

Notice that both products are 60.

The answer is that because both $(2 \cdot 5) \cdot 6$ and $2 \cdot (5 \cdot 6)$ are equal to 60, they are equal to each other.

$(2 \cdot 5) \cdot 6 = 2 \cdot (5 \cdot 6)$
Both the Commutative Property of Multiplication and the Associative Property of Multiplication can be useful in simplifying expressions. The Commutative Property of Multiplication allows you to reorder factors while the Associative Property of Multiplication allows you to regroup factors.
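Both properties can also be checked mechanically. Here is a quick illustration using Python's exact decimal arithmetic (an aside added here, not part of the original CK-12 lesson):

```python
from decimal import Decimal

a, b, c = Decimal("29.3"), Decimal("12.4"), Decimal("0.5")

# Commutative Property: reordering factors leaves the product unchanged.
assert a * b == b * a

# Associative Property: regrouping factors leaves the product unchanged.
assert (a * b) * c == a * (b * c)

print(a * b)  # 363.32, the same product either way
```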
Here is an example.
Simplify $29.3(12.4x)$.

First, use the Associative Property of Multiplication to regroup the factors.

$29.3(12.4x)$ is equivalent to $(29.3 \cdot 12.4)x$.

Now, simplify $(29.3 \cdot 12.4)x$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$29.3 \times 12.4 = 363.32$

$(29.3 \cdot 12.4)x$ simplifies to $363.32x$.

The answer is that $29.3(12.4x)$ simplifies to $363.32x$.
Here is another example.
Simplify $(0.3x) \cdot 0.4$.

First, use the Commutative Property of Multiplication to reorder the factors.

$(0.3x) \cdot 0.4$ is equivalent to $0.4 \cdot (0.3x)$.

Next, use the Associative Property of Multiplication to regroup the factors.

$0.4 \cdot (0.3x)$ is equivalent to $(0.4 \cdot 0.3)x$.

Now, simplify $(0.4 \cdot 0.3)x$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$0.4 \times 0.3 = 0.12$

$(0.4 \cdot 0.3)x$ simplifies to $0.12x$.

The answer is that $(0.3x) \cdot 0.4$ simplifies to $0.12x$.
### Guided Practice
Simplify the following expression.
$4.5(9.2y)$

First, use the Associative Property of Multiplication to regroup the factors.

$4.5(9.2y)$ is equivalent to $(4.5 \cdot 9.2)y$.

Now, simplify $(4.5 \cdot 9.2)y$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$4.5 \times 9.2 = 41.4$

$(4.5 \cdot 9.2)y$ simplifies to $41.4y$.

The answer is that $4.5(9.2y)$ simplifies to $41.4y$.
### Examples
#### Example 1
Simplify $4.8(3.1k)$.

First, use the Associative Property of Multiplication to regroup the factors.

$4.8(3.1k)$ is equivalent to $(4.8 \cdot 3.1)k$.

Now, simplify $(4.8 \cdot 3.1)k$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$4.8 \times 3.1 = 14.88$

$(4.8 \cdot 3.1)k$ simplifies to $14.88k$.

The answer is that $4.8(3.1k)$ simplifies to $14.88k$.
#### Example 2
Simplify $(3.45p) \cdot 2.3$.

First, use the Commutative Property of Multiplication to reorder the factors.

$(3.45p) \cdot 2.3$ is equivalent to $2.3 \cdot (3.45p)$.

Next, use the Associative Property of Multiplication to regroup the factors.

$2.3 \cdot (3.45p)$ is equivalent to $(2.3 \cdot 3.45)p$.

Now, simplify $(2.3 \cdot 3.45)p$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$2.3 \times 3.45 = 7.935$

$(2.3 \cdot 3.45)p$ simplifies to $7.935p$.

The answer is that $(3.45p) \cdot 2.3$ simplifies to $7.935p$.
#### Example 3
Simplify $1.98 \cdot (a \cdot 6.4)$.

First, use the Commutative Property of Multiplication to reorder the factors within the parentheses.

$1.98 \cdot (a \cdot 6.4)$ is equivalent to $1.98 \cdot (6.4 \cdot a)$.

Next, use the Associative Property of Multiplication to regroup the factors.

$1.98 \cdot (6.4 \cdot a)$ is equivalent to $(1.98 \cdot 6.4) \cdot a$.

Now, simplify $(1.98 \cdot 6.4) \cdot a$. Multiply the numbers in parentheses. Use what you have learned about decimal number multiplication.

$1.98 \times 6.4 = 12.672$

$(1.98 \cdot 6.4) \cdot a$ simplifies to $12.672a$.

The answer is that $1.98 \cdot (a \cdot 6.4)$ simplifies to $12.672a$.
Credit: Richard Smith
Source: https://www.flickr.com/photos/smith/191453691/
Remember Marigold who is planning to make salsa? Her recipe says that for every cup of tomatoes she will need 0.5 onion, and for every onion she will need 4 cloves of garlic. Marigold wants to figure out how many cloves of garlic she will need in terms of the number of cups of tomatoes she picks.
First, Marigold should write an expression for this situation. She should start by defining her variable. She doesn’t know how many cups of tomatoes she will have, so that unknown quantity will be her variable.
Let \begin{align*}x\end{align*} equal the number of cups of tomatoes Marigold picks.
Now, the problem says that she will need 0.5 onion for every tomato. So the number of onions she needs is \begin{align*}0.5x\end{align*}.
Next, the problem says that she will need 4 cloves of garlic for every onion. Since she will have \begin{align*}0.5x\end{align*} onions, she will need \begin{align*}(0.5x) \cdot 4\end{align*} cloves of garlic.
Now, Marigold can simplify the expression.
First, she can use the Commutative Property of Multiplication to reorder the factors.
\begin{align*}(0.5x) \cdot 4\end{align*} is equivalent to \begin{align*}4 \cdot (0.5x)\end{align*}.
Next, she can use the Associative Property of Multiplication to regroup the factors.
\begin{align*}4 \cdot (0.5x)\end{align*} is equivalent to \begin{align*}(4 \cdot 0.5)x\end{align*}.
Now, she can simplify \begin{align*}(4 \cdot 0.5)x\end{align*}. She can use what she learned about decimal number multiplication to multiply the numbers in parentheses.
\begin{align*}(4 \cdot 0.5)x\end{align*} simplifies to \begin{align*}2x\end{align*}.
The answer is that Marigold will need 2 cloves of garlic for every cup of tomatoes she picks.
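Marigold's chain of conversions can be checked with a short computation; a sample tomato quantity stands in for x (the value is an arbitrary example):

```python
cups_of_tomatoes = 3.0           # example value for x
onions = 0.5 * cups_of_tomatoes  # 0.5 onion per cup of tomatoes
garlic = 4 * onions              # 4 cloves of garlic per onion

# garlic = 4 * (0.5 * x) = 2 * x, i.e. 2 cloves per cup of tomatoes
print(garlic)                          # 6.0
print(garlic == 2 * cups_of_tomatoes)  # True
```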
### Explore More
Simplify the following expressions.
1. \begin{align*}(4.21 \times 8.8) \times p\end{align*}
2. \begin{align*}16.14 \times q \times 6.2\end{align*}
3. \begin{align*}3.6(91.7x)\end{align*}
4. \begin{align*}5.3r(2.8)\end{align*}
5. \begin{align*}5.6x(3.8)\end{align*}
6. \begin{align*}2.4y(2.8)\end{align*}
7. \begin{align*}6.7x(3.1)\end{align*}
8. \begin{align*}8.91r(2.3)\end{align*}
9. \begin{align*}5.67y(2.8)\end{align*}
10. \begin{align*}4.53x(2.2)\end{align*}
11. \begin{align*}5.6(2.8x)\end{align*}
12. \begin{align*}9.2y(3.2)\end{align*}
13. \begin{align*}4.5x(2.3)\end{align*}
14. \begin{align*}15.4x(12.8)\end{align*}
15. \begin{align*}18.3y(14.2)\end{align*}
### Vocabulary
Associative Property
The associative property states that you can change the groupings of numbers being added or multiplied without changing the sum or product. For example: (2+3) + 4 = 2 + (3+4), and (2 X 3) X 4 = 2 X (3 X 4).
Commutative Property
The commutative property states that the order in which two numbers are added or multiplied does not affect the sum or product. For example, $a+b=b+a$ and $(a)(b)=(b)(a)$.
Estimation
Estimation is the process of finding an approximate answer to a problem.
Product
The product is the result after two amounts have been multiplied.
## Date Created:
Nov 30, 2012
Sep 23, 2015
If you would like to associate files with this Modality, please make a copy first. | 2015-10-07 09:47:59 | {"extraction_info": {"found_math": true, "script_math_tex": 95, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0, "math_score": 0.9999735355377197, "perplexity": 3151.4044773589}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736682947.6/warc/CC-MAIN-20151001215802-00229-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/496596/how-hbadness-and-hfuzz-influence-overfull-hbox-warning | How \hbadness and \hfuzz influence “Overfull \hbox” warning?
TeXbook (p.302) says:
A nonempty hbox is considered “overfull” if its glue cannot shrink to achieve the specified size, provided that \hbadness is less than 100 or that the excess width (after shrinking by the maximum amount) is more than \hfuzz. It is “tight” if its glue shrinks and the badness exceeds \hbadness; it is “loose” if its glue stretches and the badness exceeds \hbadness but is not greater than 100; it is “underfull” if its glue stretches and the badness is greater than \hbadness and greater than 100.
I have a question about case 1. Let's consider it, using the following template:
\hbadness=-1 % to report the badness
\spaceskip.3333em \rightskip0pt plus20pt % allow only 20pt of stretchability
\def\text{The badness of this line is 1000.}
\setbox0=\hbox{\text}
\end
1)
A nonempty hbox is considered “overfull” if its glue cannot shrink to achieve the specified size ⟨quote omitted⟩ ... TeX prints a warning message and displays the offending box, whenever such anomalies are discovered.
In the template substitute Xpt with -0.1pt and we confirm statement (1):
Overfull \hbox (0.1pt too wide) in paragraph at lines 6--7
\tenrm The bad-ness of this line is 1000.
But I do not understand what the following phrase (which is marked as "⟨quote omitted⟩" above) means:
provided that \hbadness is less than 100 or that the excess width (after shrinking by the maximum amount) is more than \hfuzz
because "Overfull \hbox" message is always printed.
This is not part of the question - it is just a note: the following cases are considered using the same template.
2)
It is “tight” if its glue shrinks and the badness exceeds \hbadness ... TeX prints a warning message and displays the offending box, whenever such anomalies are discovered.
In the template change plus to minus and Xpt to -10pt and we confirm statement (2):
Tight \hbox (badness 12) in paragraph at lines 6--7
\tenrm The bad-ness of this line is 1000.
3)
it is “loose” if its glue stretches and the badness exceeds \hbadness but is not greater than 100... TeX prints a warning message and displays the offending box, whenever such anomalies are discovered.
We confirm this by substituting 0.1pt and 20pt instead of Xpt in the template:
Loose \hbox (badness 0) in paragraph at lines 6--7
\tenrm The bad-ness of this line is 1000.
Loose \hbox (badness 100) in paragraph at lines 6--7
\tenrm The bad-ness of this line is 1000.
4)
it is “underfull” if its glue stretches and the badness is greater than \hbadness and greater than 100... TeX prints a warning message and displays the offending box, whenever such anomalies are discovered.
This is confirmed using 20.1pt instead of Xpt:
Underfull \hbox (badness 101) in paragraph at lines 6--7
\tenrm The bad-ness of this line is 1000.
• If I run your first example (with Xpt replaced by -.1pt) and set \hbadness to at least 100, I get exactly the described behaviour: The "Overfull \hbox" warning disappears. – Marcel Krüger Jun 20 '19 at 8:03
• @MarcelKrüger yes but just set \hfuzz to 2pt, why doesn't that suppress the overfull message, that's the question I think (semantics of "or" in English is confusing at the best of times:-) – David Carlisle Jun 20 '19 at 8:07
provided that \hbadness is less than 100 or that the excess width (after shrinking by the maximum amount) is more than \hfuzz.
Means that if (as here) you set \hbadness to be less than 100, then any overfull box will be reported, and \hfuzz has no effect.
Given how often I have read that paragraph in the TeXBook I can't say I ever read it that way until you provided this example. If you had asked, I'd have said that setting \hfuzz to a non zero value inhibits warnings about boxes that are overfull by a smaller amount without considering this extra condition on \hbadness, so nice example:-)
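Read this way, the reporting condition can be paraphrased as a small predicate (a sketch in Python of the logic only, not of TeX's actual code; all dimensions in points):

```python
def reports_overfull(excess_after_max_shrink, hbadness, hfuzz):
    """Sketch of when TeX prints an 'Overfull \\hbox' warning for a box
    whose glue cannot shrink to the specified size: report it when
    \\hbadness < 100, or when the leftover excess exceeds \\hfuzz."""
    if excess_after_max_shrink <= 0:
        return False  # the box is not overfull after maximal shrinking
    return hbadness < 100 or excess_after_max_shrink > hfuzz

# With \hbadness=-1 (as in the template) the warning always appears,
# no matter how large \hfuzz is:
print(reports_overfull(0.1, -1, 2.0))    # True
# With \hbadness >= 100, \hfuzz can suppress a slightly overfull box:
print(reports_overfull(0.1, 1000, 2.0))  # False
```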
• @Skillmon that's what tex.web is for.... – David Carlisle Jun 20 '19 at 8:19 | 2020-10-28 05:04:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9010361433029175, "perplexity": 5513.8026164656685}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00560.warc.gz"} |
https://www.physicsforums.com/threads/average-speed-of-a-and-b-where-d-and-t-is-not-given.597442/ | # Average Speed Of a and b where d and t is not given
1. Apr 17, 2012
### rajatbbsr
A body covers half its journey with a speed of a m/s and the other half with a speed of b m/s Calculate the average speed of the body during the whole journey
2. Apr 17, 2012
### Curious3141
What have you tried so far?
3. Apr 17, 2012
### rajatbbsr
(d1/t1+d2/t2)/2
4. Apr 17, 2012
### Steely Dan
The definition of average speed is
$$v_{avg} = \frac{\Delta x}{\Delta t} = \frac{\Delta x}{t_1+t_2},$$
if we let $t_1$ and $t_2$ denote the times for the two parts of the trip. You'll lead yourself astray if you try to use shortcuts on calculating average speed.
5. Apr 17, 2012
### rajatbbsr
Can you please explain it to me couldn't get you
6. Apr 17, 2012
### Steely Dan
All I'm saying is that the formula I posted is the definition of average speed, the way it's commonly understood. Sometimes you can also calculate average speeds in physics I by appealing to the notion of "average" that you might already have in your head, like calculating the mean of a set of numbers. But you might get the wrong answer if you do it that way unless you're very careful. So use the physics definition that I posted instead of the algebraic mean definition. And that definition is just the total distance divided by the total amount of time.
7. Apr 17, 2012
### rajatbbsr
Hmmm, got you. Isn't the answer d/(t1+t2)? Can it be simplified more?
8. Apr 17, 2012
### Steely Dan
Yes, it has to be simplified. The goal here is to write the answer only in terms of a and b, since that's the only information you have, in the sense of actual numbers.
9. Apr 17, 2012
### rajatbbsr
10. Apr 17, 2012
### Steely Dan
That part is up to you :-)
But as a hint, start by assigning $d_1,t_1$ to the first part of the journey and $d_2,t_2$ to the second part of the journey, and $d,t$ to the full journey. And use the one piece of information you have regarding the connection between the two parts of the trip.
11. Apr 17, 2012
### Curious3141
The definition of average speed = total distance travelled/total time taken.
It's NOT simply the average of the speeds in different legs of the journey.
You've denoted the distance travelled in each leg by d1 and d2. Since you're given that the body covers half its journey in each leg, why not just denote the distance of a single leg by d?
OK, so the total distance is 2d.
Can you now find an expression for the time taken in each half of the journey in terms of its speed and the distance travelled? | 2018-02-19 05:50:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7901906371116638, "perplexity": 625.0277378353919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812405.3/warc/CC-MAIN-20180219052241-20180219072241-00765.warc.gz"} |
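Following the thread's definition through to the end: each half covers a distance d, so the times are d/a and d/b, and the average speed is 2d/(d/a + d/b) = 2ab/(a + b), the harmonic mean of the two speeds. A numerical check (the speeds and distance are arbitrary sample values):

```python
a, b, d = 60.0, 40.0, 120.0  # sample speeds (m/s) and half-journey distance (m)

t1 = d / a                 # time for the first half
t2 = d / b                 # time for the second half
v_avg = 2 * d / (t1 + t2)  # definition: total distance / total time

print(v_avg)                # 48.0
print(2 * a * b / (a + b))  # 48.0, the closed form 2ab/(a+b)
```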
https://gateoverflow.in/86561/test-by-bikram-computer-networks-test-1-question-3 | 349 views
Let the source has sent four TCP segments to Destination with the sequence numbers $50, \ 74, \ 97,\ 120$ ($1$st, $2$nd, $3$rd and $4$th segment) if the first and fourth segments arrive at destination successfully then the negative acknowledgement that destination sends to source is _______
Whenever the receiver finds that some packet is missing, a NACK packet is sent from the receiver to the sender to tell it which packet is missing.
Here also, from the packet with SN 50 the receiver will calculate the next packet's SN, which has to be 74. And when the receiver detects that there is no packet with SN = 74, it sends a NACK with ack = 74 to the sender.
What if the packet with SN 97 is lost? Then the NACK should be sent for 97, shouldn't it? In the question it is not mentioned which packet got lost, so I think it could be 74 or 97. Please correct me if I'm wrong here.
@bhuv
It has lost both of the packets.
TCP only acknowledges bytes up to the first missing byte in the stream (cumulative acknowledgements). Here, the segments with sequence numbers $74$ and $97$ are lost, so the NAK sent to the source is $74$ (the next expected sequence number).
http://www2.ic.uff.br/~michael/kr1999/3-transport/3_05-segment.html
How does the destination know that the next missing byte is 74 only? It could be 51 also, as there is no mention of the amount of data bytes sent. Please answer.
if the first and fourth segments arrive at destination successfully
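The cumulative-acknowledgement bookkeeping can be sketched as follows (the segment lengths 24, 23, 23 are inferred from the gaps between the given sequence numbers; the variable names are illustrative):

```python
# Segment lengths inferred from consecutive sequence numbers 50, 74, 97, 120.
length = {50: 24, 74: 23, 97: 23}
received = {50, 120}  # only the 1st and 4th segments arrive

# The receiver acknowledges the next byte after the last in-order byte:
next_expected = 50
while next_expected in received:
    next_expected += length[next_expected]

print(next_expected)  # 74 -> sent back as the NAK / next expected sequence number
```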
1 vote | 2023-02-01 22:03:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31466448307037354, "perplexity": 1941.5542824292106}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00635.warc.gz"} |
https://people.maths.bris.ac.uk/~matyd/GroupNames/192i1/D4xDic6.html
## G = D4×Dic6, order 192 = 2⁶·3
### Direct product of D4 and Dic6
Derived series: C1 — C2×C6 — D4×Dic6
Chief series: C1 — C3 — C6 — C2×C6 — C2×Dic3 — C22×Dic3 — D4×Dic3 — D4×Dic6
Lower central series: C3 — C2×C6 — D4×Dic6
Upper central series: C1 — C22 — C4×D4
Generators and relations for D4×Dic6
G = < a,b,c,d | a4=b2=c12=1, d2=c6, bab=a-1, ac=ca, ad=da, bc=cb, bd=db, dcd-1=c-1 >
Subgroups: 632 in 280 conjugacy classes, 123 normal (29 characteristic)
C1, C2, C2, C3, C4, C4, C22, C22, C22, C6, C6, C2×C4, C2×C4, C2×C4, D4, Q8, C23, Dic3, Dic3, C12, C12, C2×C6, C2×C6, C2×C6, C42, C42, C22⋊C4, C22⋊C4, C4⋊C4, C4⋊C4, C22×C4, C22×C4, C2×D4, C2×Q8, Dic6, Dic6, C2×Dic3, C2×Dic3, C2×C12, C2×C12, C2×C12, C3×D4, C22×C6, C4×D4, C4×D4, C4×Q8, C22⋊Q8, C4⋊Q8, C22×Q8, C4×Dic3, Dic3⋊C4, C4⋊Dic3, C4⋊Dic3, C6.D4, C4×C12, C3×C22⋊C4, C3×C4⋊C4, C2×Dic6, C2×Dic6, C2×Dic6, C22×Dic3, C22×C12, C6×D4, D4×Q8, C4×Dic6, C12⋊2Q8, Dic3.D4, C12⋊Q8, C12.48D4, D4×Dic3, D4×C12, C22×Dic6, D4×Dic6
Quotients: C1, C2, C22, S3, D4, Q8, C23, D6, C2×D4, C2×Q8, C24, Dic6, C22×S3, C22×D4, C22×Q8, 2- 1+4, C2×Dic6, S3×D4, S3×C23, D4×Q8, C22×Dic6, C2×S3×D4, Q8○D12, D4×Dic6
Smallest permutation representation of D4×Dic6
On 96 points
Generators in S96
(1 65 89 28)(2 66 90 29)(3 67 91 30)(4 68 92 31)(5 69 93 32)(6 70 94 33)(7 71 95 34)(8 72 96 35)(9 61 85 36)(10 62 86 25)(11 63 87 26)(12 64 88 27)(13 60 77 37)(14 49 78 38)(15 50 79 39)(16 51 80 40)(17 52 81 41)(18 53 82 42)(19 54 83 43)(20 55 84 44)(21 56 73 45)(22 57 74 46)(23 58 75 47)(24 59 76 48)
(1 34)(2 35)(3 36)(4 25)(5 26)(6 27)(7 28)(8 29)(9 30)(10 31)(11 32)(12 33)(13 43)(14 44)(15 45)(16 46)(17 47)(18 48)(19 37)(20 38)(21 39)(22 40)(23 41)(24 42)(49 84)(50 73)(51 74)(52 75)(53 76)(54 77)(55 78)(56 79)(57 80)(58 81)(59 82)(60 83)(61 91)(62 92)(63 93)(64 94)(65 95)(66 96)(67 85)(68 86)(69 87)(70 88)(71 89)(72 90)
(1 2 3 4 5 6 7 8 9 10 11 12)(13 14 15 16 17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32 33 34 35 36)(37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80 81 82 83 84)(85 86 87 88 89 90 91 92 93 94 95 96)
(1 22 7 16)(2 21 8 15)(3 20 9 14)(4 19 10 13)(5 18 11 24)(6 17 12 23)(25 37 31 43)(26 48 32 42)(27 47 33 41)(28 46 34 40)(29 45 35 39)(30 44 36 38)(49 67 55 61)(50 66 56 72)(51 65 57 71)(52 64 58 70)(53 63 59 69)(54 62 60 68)(73 96 79 90)(74 95 80 89)(75 94 81 88)(76 93 82 87)(77 92 83 86)(78 91 84 85)
G:=sub<Sym(96)| (1,65,89,28)(2,66,90,29)(3,67,91,30)(4,68,92,31)(5,69,93,32)(6,70,94,33)(7,71,95,34)(8,72,96,35)(9,61,85,36)(10,62,86,25)(11,63,87,26)(12,64,88,27)(13,60,77,37)(14,49,78,38)(15,50,79,39)(16,51,80,40)(17,52,81,41)(18,53,82,42)(19,54,83,43)(20,55,84,44)(21,56,73,45)(22,57,74,46)(23,58,75,47)(24,59,76,48), (1,34)(2,35)(3,36)(4,25)(5,26)(6,27)(7,28)(8,29)(9,30)(10,31)(11,32)(12,33)(13,43)(14,44)(15,45)(16,46)(17,47)(18,48)(19,37)(20,38)(21,39)(22,40)(23,41)(24,42)(49,84)(50,73)(51,74)(52,75)(53,76)(54,77)(55,78)(56,79)(57,80)(58,81)(59,82)(60,83)(61,91)(62,92)(63,93)(64,94)(65,95)(66,96)(67,85)(68,86)(69,87)(70,88)(71,89)(72,90), (1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96), (1,22,7,16)(2,21,8,15)(3,20,9,14)(4,19,10,13)(5,18,11,24)(6,17,12,23)(25,37,31,43)(26,48,32,42)(27,47,33,41)(28,46,34,40)(29,45,35,39)(30,44,36,38)(49,67,55,61)(50,66,56,72)(51,65,57,71)(52,64,58,70)(53,63,59,69)(54,62,60,68)(73,96,79,90)(74,95,80,89)(75,94,81,88)(76,93,82,87)(77,92,83,86)(78,91,84,85)>;
G:=Group( (1,65,89,28)(2,66,90,29)(3,67,91,30)(4,68,92,31)(5,69,93,32)(6,70,94,33)(7,71,95,34)(8,72,96,35)(9,61,85,36)(10,62,86,25)(11,63,87,26)(12,64,88,27)(13,60,77,37)(14,49,78,38)(15,50,79,39)(16,51,80,40)(17,52,81,41)(18,53,82,42)(19,54,83,43)(20,55,84,44)(21,56,73,45)(22,57,74,46)(23,58,75,47)(24,59,76,48), (1,34)(2,35)(3,36)(4,25)(5,26)(6,27)(7,28)(8,29)(9,30)(10,31)(11,32)(12,33)(13,43)(14,44)(15,45)(16,46)(17,47)(18,48)(19,37)(20,38)(21,39)(22,40)(23,41)(24,42)(49,84)(50,73)(51,74)(52,75)(53,76)(54,77)(55,78)(56,79)(57,80)(58,81)(59,82)(60,83)(61,91)(62,92)(63,93)(64,94)(65,95)(66,96)(67,85)(68,86)(69,87)(70,88)(71,89)(72,90), (1,2,3,4,5,6,7,8,9,10,11,12)(13,14,15,16,17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32,33,34,35,36)(37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96), (1,22,7,16)(2,21,8,15)(3,20,9,14)(4,19,10,13)(5,18,11,24)(6,17,12,23)(25,37,31,43)(26,48,32,42)(27,47,33,41)(28,46,34,40)(29,45,35,39)(30,44,36,38)(49,67,55,61)(50,66,56,72)(51,65,57,71)(52,64,58,70)(53,63,59,69)(54,62,60,68)(73,96,79,90)(74,95,80,89)(75,94,81,88)(76,93,82,87)(77,92,83,86)(78,91,84,85) );
G=PermutationGroup([[(1,65,89,28),(2,66,90,29),(3,67,91,30),(4,68,92,31),(5,69,93,32),(6,70,94,33),(7,71,95,34),(8,72,96,35),(9,61,85,36),(10,62,86,25),(11,63,87,26),(12,64,88,27),(13,60,77,37),(14,49,78,38),(15,50,79,39),(16,51,80,40),(17,52,81,41),(18,53,82,42),(19,54,83,43),(20,55,84,44),(21,56,73,45),(22,57,74,46),(23,58,75,47),(24,59,76,48)], [(1,34),(2,35),(3,36),(4,25),(5,26),(6,27),(7,28),(8,29),(9,30),(10,31),(11,32),(12,33),(13,43),(14,44),(15,45),(16,46),(17,47),(18,48),(19,37),(20,38),(21,39),(22,40),(23,41),(24,42),(49,84),(50,73),(51,74),(52,75),(53,76),(54,77),(55,78),(56,79),(57,80),(58,81),(59,82),(60,83),(61,91),(62,92),(63,93),(64,94),(65,95),(66,96),(67,85),(68,86),(69,87),(70,88),(71,89),(72,90)], [(1,2,3,4,5,6,7,8,9,10,11,12),(13,14,15,16,17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32,33,34,35,36),(37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80,81,82,83,84),(85,86,87,88,89,90,91,92,93,94,95,96)], [(1,22,7,16),(2,21,8,15),(3,20,9,14),(4,19,10,13),(5,18,11,24),(6,17,12,23),(25,37,31,43),(26,48,32,42),(27,47,33,41),(28,46,34,40),(29,45,35,39),(30,44,36,38),(49,67,55,61),(50,66,56,72),(51,65,57,71),(52,64,58,70),(53,63,59,69),(54,62,60,68),(73,96,79,90),(74,95,80,89),(75,94,81,88),(76,93,82,87),(77,92,83,86),(78,91,84,85)]])
45 conjugacy classes
class: 1 2A 2B 2C 2D 2E 2F 2G 3 4A 4B 4C 4D 4E 4F 4G 4H 4I 4J 4K 4L ··· 4Q 6A 6B 6C 6D 6E 6F 6G 12A 12B 12C 12D 12E ··· 12L
order: 1 2 2 2 2 2 2 2 3 4 4 4 4 4 4 4 4 4 4 4 4 ··· 4 6 6 6 6 6 6 6 12 12 12 12 12 ··· 12
size: 1 1 1 1 2 2 2 2 2 2 2 2 2 4 4 4 6 6 6 6 12 ··· 12 2 2 2 4 4 4 4 2 2 2 2 4 ··· 4
45 irreducible representations
dim: 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 4 4 4
type: + + + + + + + + + + + - + + + + + - - + -
image: C1 C2 C2 C2 C2 C2 C2 C2 C2 S3 D4 Q8 D6 D6 D6 D6 D6 Dic6 2- 1+4 S3×D4 Q8○D12
kernel: D4×Dic6 C4×Dic6 C12⋊2Q8 Dic3.D4 C12⋊Q8 C12.48D4 D4×Dic3 D4×C12 C22×Dic6 C4×D4 Dic6 C3×D4 C42 C22⋊C4 C4⋊C4 C22×C4 C2×D4 D4 C6 C4 C2
# reps: 1 1 1 4 2 2 2 1 2 1 4 4 1 2 1 2 1 8 1 2 2
Matrix representation of D4×Dic6 in GL6(𝔽13)
1 3 0 0 0 0
8 12 0 0 0 0
0 0 12 0 0 0
0 0 0 12 0 0
0 0 0 0 12 0
0 0 0 0 0 12
,
1 3 0 0 0 0
0 12 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 12 0
0 0 0 0 0 12
,
1 0 0 0 0 0
0 1 0 0 0 0
0 0 12 11 0 0
0 0 1 1 0 0
0 0 0 0 1 12
0 0 0 0 1 0
,
1 0 0 0 0 0
0 1 0 0 0 0
0 0 12 6 0 0
0 0 4 1 0 0
0 0 0 0 6 10
0 0 0 0 3 7
G:=sub<GL(6,GF(13))| [1,8,0,0,0,0,3,12,0,0,0,0,0,0,12,0,0,0,0,0,0,12,0,0,0,0,0,0,12,0,0,0,0,0,0,12],[1,0,0,0,0,0,3,12,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,12,0,0,0,0,0,0,12],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,12,1,0,0,0,0,11,1,0,0,0,0,0,0,1,1,0,0,0,0,12,0],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,12,4,0,0,0,0,6,1,0,0,0,0,0,0,6,3,0,0,0,0,10,7] >;
D4×Dic6 in GAP, Magma, Sage, TeX
D_4\times {\rm Dic}_6
% in TeX
G:=Group("D4xDic6");
// GroupNames label
G:=SmallGroup(192,1096);
// by ID
G=gap.SmallGroup(192,1096);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-3,112,387,675,80,6278]);
// Polycyclic
G:=Group<a,b,c,d|a^4=b^2=c^12=1,d^2=c^6,b*a*b=a^-1,a*c=c*a,a*d=d*a,b*c=c*b,b*d=d*b,d*c*d^-1=c^-1>;
// generators/relations
𝔽 | 2021-06-14 02:14:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975346922874451, "perplexity": 1749.9996621365874}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611320.18/warc/CC-MAIN-20210614013350-20210614043350-00553.warc.gz"} |
https://hal-cea.archives-ouvertes.fr/cea-01490533 | A model for the neutron resonance in HgBa$_{2}$CuO$_{4+\delta}$
Preprint / working paper, 2017.
## A model for the neutron resonance in HgBa$_{2}$CuO$_{4+\delta}$
X. Montiel
C. Pépin
#### Abstract
We study the spin dynamics of the Resonant Excitonic State (RES) proposed, within the theory of an emergent SU(2) symmetry, to explain some properties of the pseudo-gap phase of cuprate superconductors. The RES can be described as a proliferation of particle-hole patches with an internal modulated structure. We model the RES modes as a charge order with multiple $2{\bf {p}}_{\text{F}}$ ordering vectors, where $2{\bf {p}}_{\text{F}}$ connects two opposite sides of the Fermi surface. This simple modeling enables us to propose a comprehensive study of the collective mode observed at the antiferromagnetic (AF) wave vector $\mathbf{Q}=(\pi,\pi)$ by Inelastic Neutron Scattering (INS), both in the superconducting (SC) state and in the pseudogap regime. In this regime, we show that the dynamic spin susceptibility exhibits a loss of coherence terms except at special wave vectors commensurate with the lattice. We argue that this phenomenon could explain the change of the spin response shape around $\mathbf{Q}$. We demonstrate that the hole-doping dependence of the RES spin dynamics is in agreement with the experimental data in HgBa$_{2}$CuO$_{4+\delta}$.
### Dates and versions
cea-01490533 , version 1 (15-03-2017)
### Identifiers
• HAL Id : cea-01490533 , version 1
### Cite
X. Montiel, C. Pépin. A model for the neutron resonance in HgBa$_{2}$CuO$_{4+\delta}$. 2017. ⟨cea-01490533⟩
79 View | 2023-03-20 16:06:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3473537564277649, "perplexity": 3256.373904790129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00726.warc.gz"} |
https://www.askiitians.com/forums/Magical-Mathematics%5BInteresting-Approach%5D/q-5-boys-and-3-girls-are-to-be-arranged-around-a_235044.htm | # Q. 5 boys and 3 girls are to be arranged around a circular table such that B1 and G1 do not sit together. What I did was Total ways to arrange - Total ways in which B1 and G1 are together. i.e. 7!-6!(2!) But the answer is 5!*6!.What am I doing wrong?
Arun
25757 Points
4 years ago
Rule for circular permutations: n distinct objects can be arranged around a circular table in (n - 1)! ways, counting clockwise and anticlockwise arrangements as distinct. Example: seating arrangement of persons round a table.
Number of ways = Total - when B1 and G1 sit together
Total ways to seat 8 people on table = 7!
When B1 and G1 sit together $=6!\times 2!$
Number of ways $=7!-2\times 6!=6!(7-2)=5\times6!$ | 2023-01-28 23:14:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2349749356508255, "perplexity": 2467.4328255122205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00216.warc.gz"} |
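The result 5 × 6! = 3600 (which is also what the asker's own 7! - 2 × 6! gives) can be confirmed by brute force: fix B1 in one seat to absorb the circular symmetry, then enumerate the remaining seven people:

```python
from itertools import permutations

others = ["G1", "B2", "B3", "B4", "B5", "G2", "G3"]  # everyone except B1

count = 0
for seating in permutations(others):
    # B1 is fixed at seat 0 of 8, so B1's neighbours are the first
    # and last entries of the remaining arrangement.
    if seating[0] != "G1" and seating[-1] != "G1":
        count += 1

print(count)             # 3600
print(count == 5 * 720)  # True: 3600 = 5 * 6!
```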
https://www.physicsforums.com/threads/the-relativistic-lagrangian.358183/ | # The relativistic Lagrangian
1. Nov 26, 2009
### AxiomOfChoice
In J.D. Jackson's Classical Electrodynamics, an argument is made in support of the assertion that the relativistic Lagrangian $\mathcal L$ for a free particle has to be proportional to $1/\gamma$. The argument goes something like this:
1. $\mathcal L$ must be independent of position and can therefore only be a function of velocity and mass.
2. $\gamma \mathcal L$ must be a Lorentz scalar.
3. The only available Lorentz invariant function of the 4-velocity is $c^2 = v_\mu v^\mu$.
From this, it is (according to Jackson) "obvious" that the relativistic Lagrangian for the free particle has to be
$$\mathcal L = -mc^2 / \gamma.$$
I guess I can see why this should be the case, given that the Euler-Lagrange equations need to be satisfied and that the Lagrangian needs to have the appropriate units. What I *don't* get is where (2) and (3) come from. Can someone please explain?
Last edited: Nov 26, 2009
2. Nov 26, 2009
### AxiomOfChoice
(When I initially made this post, my question was incomplete. I've updated it. Sorry!)
3. Nov 26, 2009
### bcrowell
Staff Emeritus
Can you give a page number? I can't seem to find what you're referring to.
#3 seems obvious to me, but not #2. Re #3, what other scalar could you make out of the 4-velocity besides c? (Of course, you could make c5, etc.) Re #2, I don't know, but I'm guessing this is because the action has to be a scalar...?
4. Nov 26, 2009
### AxiomOfChoice
Sure. It's pgs. 580-81.
A question about (3)...am I to infer that the only Lorentz invariant function of any 4-vector is its scalar product?
5. Nov 26, 2009
### bcrowell
Staff Emeritus
Hmm...on pp. 580-581, I have the tail end of section 12.2, "On the question of obtaining the magnetic field..." This is in the 2nd ed. of Jackson. I don't see the material you're referring to. Do you have a later edition?
I think so, except that you can obviously take the norm of a 4-vector and push it through any function you like that accepts a scalar input and is itself Lorentz-invariant. E.g., you can multiply the norm of the v 4-vector by the mass and get the norm of the momentum, or take the norm of the v 4-vector to the 5th power.
6. Nov 26, 2009
### AxiomOfChoice
Yes, I think we've got different editions; I'm pretty sure I'm using the 3rd Ed. The discussion I'm talking about shows up at the start of Chapter 12, in any event.
7. Nov 26, 2009
### bcrowell
Staff Emeritus
Okay, on p. 573, the second page of section 12.1 in my 2nd edition of the book, I have an argument that $A=\int_{\tau_1}^{\tau_2} \gamma L d \tau$, and since A and $\tau$ are supposed to be invariant, $\gamma L$ has to be invariant as well. This seems reasonably sensible to me, although I wouldn't have had the confidence to state the same argument in the same somewhat breezy form without having thought long and hard about all the details that are not explicitly given. E.g., it seems plausible to me that we need to require Lorentz invariance for A, but it's not obvious that this is really true. If someone told me that A could be non-invariant, but all the predictions of the theory about experimental observables would still be 100% invariant, I wouldn't have had a snappy comeback to prove they were wrong.
8. Nov 27, 2009
### AxiomOfChoice
Ok. But I can understand an attempt to make the Lagrangian Lorentz invariant. (Is "covariant" another word for "Lorentz invariant"?) What I can't quite understand is this statement: "since A and $\tau$ are supposed to be invariant, $\gamma L$ has to be invariant as well." WHY is this true? I have a pretty rigorous training in higher mathematics, and it's trained me to need (and demand) a REASON for this! Maybe the integral can do something strange to $\mathcal L$ that makes it unnecessary for $\gamma \mathcal L$ to be Lorentz invariant...I certainly can't think of any reason why this can't happen!
9. Nov 27, 2009
### bcrowell
Staff Emeritus
More or less. The term is used kind of loosely by physicists, with several different, but related, definitions. "General covariance" refers to the property of GR that its predictions are invariant under any smooth change of coordinates. "Covariant" can also be used as the opposite of "contravariant," to describe tensors with upper versus lower indices. In the present context, however, it basically means the same thing as "Lorentz invariant."
Yeah, I agree that Jackson is leaving a lot to the reader's imagination. I think it might be more transparent if you rewrite the equation for the action in differential form, as $dA/d\tau=\gamma L$. The left-hand side is the quotient of two infinitesimally small numbers. (I hope you're not allergic to infinitesimals. They can be just as rigorous as limits, as shown by Robinson in the 60's.) Now $d\tau$ is Lorentz-invariant, and say we want dA to be Lorentz-invariant as well. Then dividing these two Lorentz-invariant quantities gives another Lorentz scalar. Therefore $\gamma L$ has to be a scalar as well. This is a very common mode of reasoning in relativity. You start with things that you know behave as well-defined tensors, and combine them to get new tensors. E.g., $m dv/d\tau$ is constructed out of a scalar, a 4-vector, and a scalar, so it produces a valid 4-vector (the momentum 4-vector).
The main thing that sticks out to me as maybe needing more justification in my own reasoning above is that it's not necessarily obvious that dA has to be Lorentz-invariant, or even that A does. The only experimental observables are essentially incidence relations between world-lines, i.e., do they cross or not. It's conceivable that A and/or dA could be non-Lorentz-invariant, and yet you'd get observables that would be Lorentz-invariant. But I think the general philosophy is that all the machinery of tensors and least-action are designed so that you never, ever write anything down on the paper that isn't *manifestly* a valid relativistic equation. | 2017-08-23 16:16:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625813722610474, "perplexity": 474.89899096609355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.75/warc/CC-MAIN-20170823152006-20170823172006-00701.warc.gz"} |
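A concrete sanity check of the invariance claim, using the standard free relativistic particle (a textbook example, not from the thread above): the free-particle Lagrangian is $L = -mc^2/\gamma$, so

```latex
L = -\frac{mc^2}{\gamma}
\qquad\Longrightarrow\qquad
\gamma L = -mc^2
\qquad\Longrightarrow\qquad
A = \int_{\tau_1}^{\tau_2} \gamma L \, d\tau = -mc^2 \, (\tau_2 - \tau_1)
```

Here $\gamma L$ is a constant scalar, and the action is $-mc^2$ times the proper-time interval, so both are manifestly Lorentz-invariant, consistent with the argument quoted from Jackson.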
https://scipost.org/submissions/2110.14417v1/ | # The Giant Radio Array for Neutrino Detection (GRAND) Project
### Submission summary
As Contributors: Bruno Lazarotto Lago
Arxiv Link: https://arxiv.org/abs/2110.14417v1 (pdf)
Date submitted: 2021-10-28 15:05
Submitted by: Lazarotto Lago, Bruno
Submitted to: SciPost Physics Proceedings
Proceedings issue: 50th International Symposium on Multiparticle Dynamics (ISMD2021)
Academic field: Physics
Specialties: High-Energy Physics - Experiment
### Abstract
GRAND is designed to detect ultra-high-energy cosmic particles -- especially neutrinos, cosmic rays and gamma rays -- using radio antennas. With $\sim$20 mountainous sites around the world it will cover a total area of 200,000 km$^{2}$. The planned sensitivity of 10$^{-10}$ GeV cm$^{-2}$ s$^{-1}$ sr$^{-1}$ above $5\times10^{17}$ eV will likely ensure the detection of cosmogenic neutrinos predicted by most common scenarios, enabling neutrino astronomy. Furthermore, PeV--EeV neutrinos can test particle interactions at energies above those achieved in accelerators. The pathfinder stage GRANDProto300 is planned to start taking data in 2021. We present the current overall status of the project with emphasis on the neutrino physics.
###### Current status:
Has been resubmitted
### Submission & Refereeing History
Resubmission 2110.14417v2 on 4 February 2022
Submission 2110.14417v1 on 28 October 2021
## Reports on this Submission
### Report
The proceeding is a very nice read and well-organised, I only have minor comments.
- Section 1, paragraph 2, Ref [2]: Another review of the field of cosmic-ray research is https://arxiv.org/abs/1903.07713, maybe you can also cite it here.
- Section 2, paragraph 1: You mention the exposure will increase by 20-80 times. I understand that this refers to the later stages compared to the early stage of GRAND, but perhaps it is more helpful here to give the increase in exposure compared to an existing neutrino observatory like IceCube?
- Section 2, paragraph 3: "involved on the" -> "involved in the"
- Section 2, paragraph 4: "The predicted sensitivity..." Perhaps you can say explicitly here that this is the sensitivity for gamma rays, to avoid any ambiguity.
- Section 3, paragraph 3: "in the atmosphere and also" remove "and"
- Section 4.1, paragraph 1: Remove "(EBL)" since the acronym is not used again in the proceeding.
- Section 4.1, last paragraphs: This sections ends with three very short paragraphs. While this is a correct application of the rule that each paragraph should cover one single thought/topic, I would still merge them since having several subsequent short paragraphs does not look good.
- Section 4.2, last bullet: Remove extra space at "(CRB) ." and remove "to" in "(up to to 10Mpc)".
- Section 4.3: You mention the muon discrepancy in section 2, end of first paragraph as a motivation for GRAND. Perhaps you could add a sentence here regarding the discrepancy?
- Section 4.4, paragraph 2: I think it would be helpful to put a comma between "timing making".
- Section 4.4, paragraph 3: "detected by Laser ... Observatories - LIGO..." I suggest to change to "detected by the Laser ... Observatories LIGO..."
https://hackage.haskell.org/package/word-wrap-0.4.1/docs/Text-Wrap.html | word-wrap-0.4.1: A library for word-wrapping
Text.Wrap
Synopsis
# Documentation
Settings to control how wrapping is performed.
Constructors
WrapSettings

  preserveIndentation :: Bool
    Whether to indent new lines created by wrapping when their original line was indented.

  breakLongWords :: Bool
    Whether to break in the middle of the first word on a line when that word exceeds the wrapping width.
Instances
Eq WrapSettings
Show WrapSettings
  showList :: [WrapSettings] -> ShowS
wrapTextToLines :: WrapSettings -> Int -> Text -> [Text]

Wrap text at the specified width. Newlines and whitespace in the input text are preserved. Returns the lines of text in wrapped form. New lines introduced due to wrapping will have leading whitespace stripped.
wrapText :: WrapSettings -> Int -> Text -> Text

Like wrapTextToLines, but returns the wrapped text reconstructed with newlines inserted at wrap points.
https://www.coin-or.org/CppAD/Doc/cholesky_theory.htm | Prev Next Index-> contents reference index search external Up-> CppAD AD ADValued atomic atomic_base atomic_eigen_cholesky.cpp cholesky_theory atomic-> checkpoint atomic_base atomic_base-> atomic_ctor atomic_option atomic_afun atomic_forward atomic_reverse atomic_for_sparse_jac atomic_rev_sparse_jac atomic_for_sparse_hes atomic_rev_sparse_hes atomic_base_clear atomic_get_started.cpp atomic_norm_sq.cpp atomic_reciprocal.cpp atomic_set_sparsity.cpp atomic_tangent.cpp atomic_eigen_mat_mul.cpp atomic_eigen_mat_inv.cpp atomic_eigen_cholesky.cpp atomic_mat_mul.cpp atomic_eigen_cholesky.cpp-> cholesky_theory atomic_eigen_cholesky.hpp cholesky_theory Headings-> Reference Notation ---..Cholesky Factor ---..Taylor Coefficient ---..Lower Triangular Part Forward Mode Lemma 1 ---..Proof Lemma 2 Reverse Mode ---..Case k = 0 ---..Case k > 0
AD Theory for Cholesky Factorization
Reference
See section 3.6 of Sebastian F. Walter's Ph.D. thesis, Structured Higher-Order Algorithmic Differentiation in the Forward and Reverse Mode with Application in Optimum Experimental Design , Humboldt-Universitat zu Berlin, 2011.
Notation
Cholesky Factor
We are given a positive definite symmetric matrix $A \in \B{R}^{n \times n}$ and a Cholesky factorization $$A = L L^\R{T}$$ where $L \in \B{R}^{n \times n}$ is lower triangular.
Taylor Coefficient
The matrix $A$ is a function of a scalar argument $t$. For $k = 0 , \ldots , K$, we use $A_k$ for the corresponding Taylor coefficients; i.e., $$A(t) = o( t^K ) + \sum_{k = 0}^K A_k t^k$$ where $o( t^K ) / t^K \rightarrow 0$ as $t \rightarrow 0$. We use a similar notation for $L(t)$.
Lower Triangular Part
For a square matrix $C$, $\R{lower} (C)$ is the lower triangular part of $C$, $\R{diag} (C)$ is the diagonal matrix with the same diagonal as $C$ and $$\R{low} ( C ) = \R{lower} (C) - \frac{1}{2} \R{diag} (C)$$
Forward Mode
For Taylor coefficient order $k = 0 , \ldots , K$ the coefficients $A_k \in \B{R}^{n \times n}$ satisfy the equation $$A_k = \sum_{\ell=0}^k L_\ell L_{k-\ell}^\R{T}$$ In the case where $k=0$, this reduces to $$A_0 = L_0 L_0^\R{T}$$ The value of $L_0$ can be computed using the Cholesky factorization. In the case where $k > 0$, $$A_k = L_k L_0^\R{T} + L_0 L_k^\R{T} + B_k$$ where $$B_k = \sum_{\ell=1}^{k-1} L_\ell L_{k-\ell}^\R{T}$$ Note that $B_k$ is defined in terms of Taylor coefficients of $L(t)$ that have order less than $k$. We also note that $$L_0^{-1} ( A_k - B_k ) L_0^\R{-T} = L_0^{-1} L_k + L_k^\R{T} L_0^\R{-T}$$ The first matrix on the right hand side is lower triangular, the second is upper triangular, and the diagonals are equal. It follows that $$L_0^{-1} L_k = \R{low} [ L_0^{-1} ( A_k - B_k ) L_0^\R{-T} ]$$ $$L_k = L_0 \R{low} [ L_0^{-1} ( A_k - B_k ) L_0^\R{-T} ]$$ This expresses $L_k$ in terms of the Taylor coefficients of $A(t)$ and the lower order coefficients of $L(t)$.
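As a sanity check of the order-$k$ formula (a $1 \times 1$ worked example, not part of the original text): take $n = 1$, $A(t) = a_0 + a_1 t$, so $L(t) = \sqrt{A(t)}$. For a scalar, $\mathrm{low}(c) = c - \tfrac{1}{2} c = \tfrac{1}{2} c$, and with $B_1 = 0$ the formula gives

```latex
L_1 = L_0 \,\mathrm{low}\!\left[ L_0^{-1} A_1 L_0^{-\mathrm{T}} \right]
    = \sqrt{a_0} \cdot \frac{1}{2} \cdot \frac{a_1}{a_0}
    = \frac{a_1}{2 \sqrt{a_0}}
```

which agrees with differentiating $\sqrt{a(t)}$ directly: $\frac{d}{dt}\sqrt{a(t)}\,\big|_{t=0} = a_1 / ( 2 \sqrt{a_0} )$.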
Lemma 1
We use the notation $\dot{C}$ for the derivative of a matrix valued function $C(s)$ with respect to a scalar argument $s$. We use the notation $\bar{S}$ and $\bar{L}$ for the partial derivative of a scalar valued function $\bar{F}( S, L)$ with respect to a symmetric matrix $S$ and a lower triangular matrix $L$. Define the scalar valued function $$\hat{F}( S ) = \bar{F} [ S , \hat{L} (S) ]$$ We use $\hat{S}$ for the total derivative of $\hat{F}$ with respect to $S$. Suppose that $\hat{L} ( S )$ is such that $$\dot{L} = L_0 \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} )$$ for any $S(s)$. It follows that $$\hat{S} = \bar{S} + \frac{1}{2} ( M + M^\R{T} )$$ where $$M = L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} )^\R{T} L_0^{-1}$$
Proof
$$\partial_s \hat{F} [ S(s) , L(s) ] = \R{tr} ( \bar{S}^\R{T} \dot{S} ) + \R{tr} ( \bar{L}^\R{T} \dot{L} )$$$$\R{tr} ( \bar{L}^\R{T} \dot{L} ) = \R{tr} [ \bar{L}^\R{T} L_0 \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} ) ]$$$$= \R{tr} [ \R{low} ( L_0^{-1} \dot{S} L_0^\R{-T} )^\R{T} L_0^\R{T} \bar{L} ]$$$$= \R{tr} [ L_0^{-1} \dot{S} L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) ]$$$$= \R{tr} [ L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) L_0^{-1} \dot{S} ]$$$$\partial_s \hat{F} [ S(s) , L(s) ] = \R{tr} ( \bar{S}^\R{T} \dot{S} ) + \R{tr} [ L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} ) L_0^{-1} \dot{S} ]$$We now consider the $(i, j)$ component function, for a symmetric matrix $S(s)$, defined by $$S_{k, \ell} (s) = \left\{ \begin{array}{ll} 1 & \R{if} \; k = i \; \R{and} \; \ell = j \\ 1 & \R{if} \; k = j \; \R{and} \; \ell = i \\ 0 & \R{otherwise} \end{array} \right\}$$ This shows that the formula in the lemma is correct for $\hat{S}_{i,j}$ and $\hat{S}_{j,i}$. This completes the proof because the component $(i, j)$ was arbitrary.
Lemma 2
We use the same assumptions as in Lemma 1 except that the matrix $S$ is lower triangular (instead of symmetric). It follows that $$\hat{S} = \bar{S} + \R{lower}(M)$$ where $$M = L_0^\R{-T} \R{low}( L_0^\R{T} \bar{L} )^\R{T} L_0^{-1}$$ The proof of this lemma is identical to the proof of Lemma 1 except that the component function is defined by $$S_{k, \ell} (s) = \left\{ \begin{array}{ll} 1 & \R{if} \; k = i \; \R{and} \; \ell = j \\ 0 & \R{otherwise} \end{array} \right\}$$
Reverse Mode
Case k = 0
For the case $k = 0$, $$\dot{A}_0 = \dot{L}_0 L_0^\R{T} + L_0 \dot{L}_0^\R{T}$$ $$L_0^{-1} \dot{A}_0 L_0^\R{-T} = L_0^{-1} \dot{L}_0 + \dot{L}_0^\R{T} L_0^\R{-T}$$ $$\R{low} ( L_0^{-1} \dot{A}_0 L_0^\R{-T} ) = L_0^{-1} \dot{L}_0$$ $$\dot{L}_0 = L_0 \R{low} ( L_0^{-1} \dot{A}_0 L_0^\R{-T} )$$ It follows from Lemma 1 that $$\bar{A}_0 \stackrel{+}{=} \frac{1}{2} ( M + M^\R{T} )$$ where $$M = L_0^\R{-T} \R{low} ( L_0^\R{T} \bar{L}_0 )^\R{T} L_0^{-1}$$ and the $\bar{A}_0$ on the left (right) hand side of $\stackrel{+}{=}$ is the partial after (before) $L_0$ is removed from the scalar function dependency.
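The same $1 \times 1$ scalar check applies here (illustrative, not part of the original text): with $n = 1$ and $L_0 = \sqrt{A_0}$, the update reduces to

```latex
M = L_0^{-1} \cdot \frac{1}{2} L_0 \bar{L}_0 \cdot L_0^{-1}
  = \frac{\bar{L}_0}{2 L_0}
\qquad\Longrightarrow\qquad
\bar{A}_0 \mathrel{+}= \frac{\bar{L}_0}{2 \sqrt{A_0}}
```

which is exactly the chain rule through $L_0 = \sqrt{A_0}$, since $\partial L_0 / \partial A_0 = 1 / ( 2 \sqrt{A_0} )$.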
Case k > 0
In the case where $k > 0$, $$A_k = L_k L_0^\R{T} + L_0 L_k^\R{T} + B_k$$ where $B_k$ is defined in terms of Taylor coefficients of $L(t)$ that have order less than $k$. It follows that $$\dot{L}_k L_0^\R{T} + L_0 \dot{L}_k^\R{T} = \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T}$$ $$L_0^{-1} \dot{L}_k + \dot{L}_k^\R{T} L_0^\R{-T} = L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T}$$ $$L_0^{-1} \dot{L}_k = \R{low} [ L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T} ]$$ $$\dot{L}_k = L_0 \R{low} [ L_0^{-1} ( \dot{A}_k - \dot{B}_k - \dot{L}_0 L_k^\R{T} - L_k \dot{L}_0^\R{T} ) L_0^\R{-T} ]$$ The matrix $A_k$ is symmetric, it follows that $$\bar{A}_k \stackrel{+}{=} \frac{1}{2} ( M_k + M_k^\R{T} )$$ where $$M_k = L_0^\R{-T} \R{low} ( L_0^\R{T} \bar{L}_k )^\R{T} L_0^{-1}$$ The matrix $B_k$ is also symmetric, hence $$\bar{B}_k = - \; \frac{1}{2} ( M_k + M_k^\R{T} )$$ We define the symmetric matrix $C_k (s)$ by $$\dot{C}_k = \dot{L}_0 L_k^\R{T} + L_k \dot{L}_0^\R{T}$$ and remove the dependency on $C_k$ with $$\R{tr}( \bar{C}_k^\R{T} \dot{C}_k ) = \R{tr}( \bar{B}_k^\R{T} \dot{C}_k ) = \R{tr}( \bar{B}_k^\R{T} \dot{L}_0 L_k^\R{T} ) + \R{tr}( \bar{B}_k^\R{T} L_k \dot{L}_0^\R{T} )$$ $$= \R{tr}( L_k^\R{T} \bar{B}_k^\R{T} \dot{L}_0 ) + \R{tr}( L_k^\R{T} \bar{B}_k \dot{L}_0 )$$ $$= \R{tr}[ L_k^\R{T} ( \bar{B}_k + \bar{B}_k^\R{T} ) \dot{L}_0 ]$$ Thus, removing $C_k$ from the dependency results in the following update to $\bar{L}_0$: $$\bar{L}_0 \stackrel{+}{=} \R{lower} [ ( \bar{B}_k + \bar{B}_k^\R{T} ) L_k ]$$ which is the same as $$\bar{L}_0 \stackrel{+}{=} 2 \; \R{lower} [ \bar{B}_k L_k ]$$ We still need to remove $B_k$ from the dependency. 
It follows from its definition that $$\dot{B}_k = \sum_{\ell=1}^{k-1} \dot{L}_\ell L_{k-\ell}^\R{T} + L_\ell \dot{L}_{k-\ell}^\R{T}$$ $$\R{tr}( \bar{B}_k^\R{T} \dot{B}_k ) = \sum_{\ell=1}^{k-1} \R{tr}( \bar{B}_k^\R{T} \dot{L}_\ell L_{k-\ell}^\R{T} ) + \R{tr}( \bar{B}_k^\R{T} L_\ell \dot{L}_{k-\ell}^\R{T} )$$ $$= \sum_{\ell=1}^{k-1} \R{tr}( L_{k-\ell}^\R{T} \bar{B}_k^\R{T} \dot{L}_\ell ) + \sum_{\ell=1}^{k-1} \R{tr}( L_\ell^\R{T} \bar{B}_k \dot{L}_{k-\ell} )$$ We now use the fact that $\bar{B}_k$ is symmetric to conclude $$\R{tr}( \bar{B}_k^\R{T} \dot{B}_k ) = 2 \sum_{\ell=1}^{k-1} \R{tr}( L_{k-\ell}^\R{T} \bar{B}_k^\R{T} \dot{L}_\ell )$$ Each of the $\dot{L}_\ell$ matrices is lower triangular. Thus, removing $B_k$ from the dependency results in the following update for $\ell = 1 , \ldots , k-1$: $$\bar{L}_\ell \stackrel{+}{=} 2 \; \R{lower}( \bar{B}_k L_{k-\ell} )$$
Input File: omh/appendix/theory/cholesky.omh | 2018-01-20 15:16:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9644038081169128, "perplexity": 213.8937512838891}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889660.55/warc/CC-MAIN-20180120142458-20180120162458-00632.warc.gz"} |
https://codereview.stackexchange.com/questions/268120/library-program-in-ruby-with-class | # Library Program in Ruby with Class
I have been learning Ruby for 2 weeks. Today my bootcamp teacher taught Ruby classes and gave me homework. I coded a simple library program, but I think the code smells bad. How can I improve it?
class Book
attr_reader :book_name, :author_name, :page_count, :more, :book_array
def initialize()
@book_name = ''
@author_name = ''
@page_count = 0
@more = 'yes'
@book_array=[]
end
until more_book?
end
book_list
end
private
def ask
puts 'Name of book'
@book_name=gets.chomp
puts 'Author of the book'
@author_name=gets.chomp
puts 'Page count of the book'
@page_count=gets.to_i
puts 'Do you want add more book'
@more=gets.chomp.capitalize
end
def more_book?
@more == 'No'
end
def book_list
@book_array.each_with_index do |item,index|
puts "Book#{index+1} {Name: #{item[:book_name]}, Author: #{item[:author_name]}, Page count: #{item[:page_count]}}"
end
end
def add_arr
book_array << {book_name:@book_name, author_name:@author_name, page_count:@page_count}
end
end
books = Book.new()
# Consistency
Sometimes you use accessor methods, sometimes you access instance variables directly. Sometimes you use whitespace around operators, sometimes you don't. Sometimes you use space after a comma, sometimes you don't. Sometimes you use an empty argument list and sometimes you use no argument list at all when you send a message with no arguments. Sometimes you use an empty parameter list and sometimes you use no parameter list at all when you define a method with no parameters.
You should choose one style and stick with it. If you are editing some existing code, you should adapt your style to be the same as the existing code. If you are part of a team, you should adapt your style to match the rest of the team.
Most communities have developed standardized community style guides. In Ruby, there are multiple such style guides. They all agree on the basics (e.g. indentation is 2 spaces), but they might disagree on more specific points (single quotes or double quotes).
In general, if you use two different ways to write the exact same thing, the reader will think that you want to convey a message with that. So, you should only use two different ways of writing the same thing IFF you actually want to convey some extra information.
For example, some people always use parentheses for defining and calling purely functional side-effect free methods, and never use parentheses for defining and calling impure methods. That is a good reason to use two different styles (parentheses and no parentheses) for doing the same thing (defining methods).
# Indentation
The community standard for indentation is 2 spaces, and there should be no indentation at the beginning of the script. You are starting out with an indentation of 1 space, and the next line is again indented 1 space:
class Book
attr_reader :book_name, :author_name, :page_count, :more, :book_array
Instead, class Book should have no indentation at all, and attr_reader should be indented 2 spaces relative to class Book.
Like this:
class Book
attr_reader :book_name, :author_name, :page_count, :more, :book_array
# Whitespace around operators
There should be 1 space either side of an operator. You sometimes use 1 space and sometimes no space.
For example here you are using two different styles literally in two consecutive lines:
@more = 'yes'
@book_array=[]
This should be
@more = 'yes'
@book_array = []
# Space after comma
There should be 1 space after a comma. You sometimes use 1 space, sometimes no space.
For example, this:
@book_array.each_with_index do |item,index|
should be this:
@book_array.each_with_index do |item, index|
# Space after colon in a hash literal
There should be 1 space after the colon in a hash literal.
For example, this:
book_name:@book_name
should be this:
book_name: @book_name
# Space in a hash literal
There should be 1 space after the opening curly brace and 1 before the closing curly brace in a hash literal, so this:
{book_name: @book_name, author_name: @author_name, page_count: @page_count}
should be this:
{ book_name: @book_name, author_name: @author_name, page_count: @page_count }
# No empty parameter list
In Ruby, if a method or a block takes no parameters, it is standard to not define an empty parameter list but simply leave it out completely.
So, this
def initialize()
should just be
def initialize
# No empty argument list
In Ruby, if a message send has no arguments, it is standard to not write an empty argument list but simply leave it out completely.
So, this
books = Book.new()
should just be
books = Book.new
# Vertical whitespace
There should be a blank line after every "logical" break. In particular, there should be a blank line after the attr_reader:
attr_reader :book_name, :author_name, :page_count, :more, :book_array
def initialize
Also, 3 blank lines after the class body are a bit excessive. 1 blank line is standard according to most style guides. I can understand 2, but not 3. I would prefer 1.
There should be no blank line before the end of a block, whether that is an actual end keyword, a closing parenthesis, etc. For example here:
book_list

end
This should just be
book_list
end
It could also help readability if you break up the ask method:
puts 'Name of book'
@book_name = gets.chomp

puts 'Author of the book'
@author_name = gets.chomp

puts 'Page count of the book'
@page_count = gets.to_i

puts 'Do you want add more book'
@more = gets.chomp.capitalize
# Code Formatting
If possible, you should set your editor or IDE to automatically format your code when you type, when you paste, and when you save, and set up your version control system to automatically format your commit when you push, as well as set up your CI system to reject code that is not correctly formatted. If not possible, you should seriously consider using a different editor or IDE, version control system, or CI system.
Here's the result of your code, when I simply paste it into my editor, without doing anything else (I am literally just copying the code from your question and pasting it into my editor, and my editor auto-formats it):
class Book
attr_reader :book_name, :author_name, :page_count, :more, :book_array
def initialize
@book_name = ''
@author_name = ''
@page_count = 0
@more = 'yes'
@book_array = []
end
until more_book?
end
book_list
end
private
def ask
  puts 'Name of book'
@book_name = gets.chomp
puts 'Author of the book'
@author_name = gets.chomp
puts 'Page count of the book'
@page_count = gets.to_i
puts 'Do you want add more book'
@more = gets.chomp.capitalize
end
def more_book?
@more == 'No'
end
def book_list
@book_array.each_with_index do |item, index|
puts "Book#{index + 1} {Name: #{item[:book_name]}, Author: #{item[:author_name]}, Page count: #{item[:page_count]}}"
end
end
def add_arr
  book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }
end
end
books = Book.new
As you can see, simply copying your code into my editor, the editor corrected every single thing I wrote above except one.
Let me repeat that: I just spent 3 pages pointing out all the style inconsistencies and recommendations, and you could just have fixed all of that in a couple of milliseconds at the push of a button!
# Frozen string literals
Immutable data structures and purely functional code are always preferred, unless mutability and side-effects are required for clarity or performance. In Ruby, strings are always mutable, but there is a magic comment you can add to your files (also available as a command-line option for the Ruby engine), which will automatically make all literal strings immutable:
# frozen_string_literal: true
It is generally preferred to add this comment to all your files.
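A quick self-contained demonstration of what the magic comment does (not part of the reviewed program; assumes Ruby 2.5+ for FrozenError):

```ruby
# frozen_string_literal: true

# With the magic comment, every string literal in this file is frozen.
s = 'immutable'
puts s.frozen?

# Attempting to mutate a frozen string raises FrozenError.
begin
  s << '!'
rescue FrozenError => e
  puts "mutation blocked: #{e.class}"
end

# When mutation is genuinely needed, take an explicit mutable copy:
# unary + on a frozen string returns an unfrozen duplicate.
t = +s
t << '!'
puts t
```

This prints `true`, then `mutation blocked: FrozenError`, then `immutable!`.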
# Linting
You should run some sort of linter or static analyzer on your code. Rubocop is a popular one, but there are others.
Rubocop was able to detect all of the style violations I pointed out above (plus some more), and also was able to autocorrect all of them except one.
Let me repeat that: I have just spent two pages pointing out how to correct tons of stuff that you can actually correct within milliseconds at the push of a button. I have set up my editor such that it automatically runs Rubocop with auto-fix as soon as I hit "save".
In particular, running Rubocop on your code, it detects 32 offenses, of which it can automatically correct 31.
Here's what the result of the auto-fix looks like:
# frozen_string_literal: true
class Book
attr_reader :book_name, :author_name, :page_count, :more, :book_array
def initialize
@book_name = ''
@author_name = ''
@page_count = 0
@more = 'yes'
@book_array = []
end
until more_book?
end
book_list
end
private
def ask
  puts 'Name of book'
@book_name = gets.chomp
puts 'Author of the book'
@author_name = gets.chomp
puts 'Page count of the book'
@page_count = gets.to_i
puts 'Do you want add more book'
@more = gets.chomp.capitalize
end
def more_book?
@more == 'No'
end
def book_list
@book_array.each_with_index do |item, index|
puts "Book#{index + 1} {Name: #{item[:book_name]}, Author: #{item[:author_name]}, Page count: #{item[:page_count]}}"
end
end
def add_arr
  book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }
end
end
books = Book.new
And here are the offenses that Rubocop could not automatically correct:
Inspecting 1 file
C
Offenses:
book.rb:3:1: C: Style/Documentation: Missing top-level documentation comment for class Book.
class Book
^^^^^^^^^^
book.rb:42:121: C: Layout/LineLength: Line is too long. [122/120]
puts "Book#{index + 1} {Name: #{item[:book_name]}, Author: #{item[:author_name]}, Page count: #{item[:page_count]}}"
^^
1 file inspected, 2 offenses detected
By the way, you might have noticed that I wrote above that Rubocop detected only 1 uncorrectable offense, yet after running it on the code, we are left with 2. That is because adding the space around the operator in index + 1 actually pushed the line over the maximum length.
Similar to Code Formatting, it is a good idea to set up your tools such that the linter is automatically run when you paste code, edit code, save code, commit code, or build your project, and that passing the linter is a criterion for your CI pipeline.
In my editor, I actually have multiple linters and static analyzers integrated so that they automatically always analyze my code, and also as much as possible automatically fix it while I am typing. This can sometimes be annoying (e.g. I get 75 notices for your original code, lots of which are duplicates because several different tools report the same problem), but it is in general tremendously helpful. It can be overwhelming when you open a large piece of code for the first time and you get dozens or hundreds of notices, but if you start a new project, then you can write your code in a way that you never get a notice, and your code will usually be better for it.
# Inconsistent use of attribute methods and instance variables
You are almost always accessing instance variables directly, e.g. here
@more == 'No'
which should be
more == 'No'
In fact, you only use the attribute once and only for one of your attributes, namely here:
book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }
Here you are using the attr_reader for book_array while on the very same line not using the attr_readers for book_name, author_name, and page_count. In fact, you are never using the attr_readers for book_name, author_name, page_count, or more.
This is inconsistent. Choose one or the other.
I personally prefer to always use the attribute methods, because methods are more flexible: they can be overridden in subclasses or their implementation can be changed, without having to change any of the client code.
So, you should either change the second example to
@book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }
or (my preference) to
book_array << { book_name: book_name, author_name: author_name, page_count: page_count }
and the same in a couple of other places.
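The flexibility argument for preferring attribute methods can be seen in a small sketch (hypothetical Person/Shouty classes, not from the reviewed code):

```ruby
# greet goes through the reader method, not the instance variable,
# so a subclass can change behavior without touching greet or @name.
class Person
  attr_reader :name

  def initialize(name)
    @name = name
  end

  def greet
    "Hello, #{name}"   # calls the name method, not @name directly
  end
end

class Shouty < Person
  def name
    super.upcase       # override the reader; greet picks it up for free
  end
end

puts Person.new('ada').greet  # => Hello, ada
puts Shouty.new('ada').greet  # => Hello, ADA
```

If greet had read `@name` directly, the Shouty override would have had no effect.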
# Exposing mutable state
Your book_array is only an attr_reader, which means that another object is not allowed to assign to it. However, that does not actually stop anybody from messing up your state: since book_array exposes an Array, and Arrays are mutable, someone could just change the array itself.
For example, I could do this:
books = Book.new
books.book_array << nil
And then the program will blow up with a NoMethodError exception when book_list tries to execute nil[:book_name].
You should never expose mutable internal state that way. You should at least copy and freeze the object like this, using a custom attribute reader instead of the auto-generated one:
class Book
private
# …
public
def book_array
@book_array.dup.freeze
end
end
However, that is actually still not safe, because the same applies to the hashes inside of that array: Those are also mutable and they also can be changed from the outside, so they should be frozen as soon as they are inserted into the array. And, you may have guessed it already: it applies to the strings inside the hash inside the array, too.
def ask
  puts 'Name of book'
  @book_name = gets.chomp.freeze
  puts 'Author of the book'
  @author_name = gets.chomp.freeze
  puts 'Page count of the book'
  @page_count = gets.to_i
  puts 'Do you want add more book'
  @more = gets.chomp.capitalize.freeze
end

def add_arr
  book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }.freeze
end
Note, however, that when you do this, then your add_arr method actually no longer works as intended: first off, it will raise an exception, because book_array now returns a frozen array that does not allow you to add a book anymore, and also, even if it were not frozen, it returns a new duplicate of the array every time, so even if you could add a book to it, that modification would immediately be lost.
So, there are two ways around this: one would be to directly use the instance variable in add_arr:
def add_arr
  @book_array << { book_name: @book_name, author_name: @author_name, page_count: @page_count }.freeze
end
However, see the next section.
# Access Restrictions
As far as I can see, none of your attr_readers are intended to be used by other objects. In fact, most of them shouldn't be used by other objects! They are private internal state of the book object. For example, there is no reason for anyone except the book itself to access more. Therefore, all of them should not be part of the public API, they should be private:
private
attr_reader :book_name, :author_name, :page_count, :more, :book_array
# Unnecessary assignment
@book_name, @author_name, and @page_count get overwritten by ask as soon as the program starts. There is no reason to assign them in the initializer, if they immediately get overwritten anyway, so the initializer should just be
def initialize
  @more = 'yes'
  @books = []
end
# Hungarian Notation
You are sometimes using something that resembles Hungarian Notation but is not quite it. The original Hungarian notation, invented by Charles Simonyi at Xerox PARC, is about encoding semantic information in an identifier name (one of the examples by Simonyi is the use of the prefix us to mark an "unsafe string", i.e. a string that was supplied as user input and should thus be treated as untrusted). However, you are mostly using it to encode the class name of the object, e.g. in book_array and add_arr. (Sidenote, speaking of consistency: why is one named arr and the other array?)
This is pretty much unnecessary. It is also not exactly true: neither the @book_array instance variable nor the add_arr method actually require an Array. Both of them would also work with a variety of other types, in fact, the only thing they require are the messages << and each_with_index.
If you want to express that something is a collection, this is usually done by simply naming it with a plural, for example, @book_array could simply be named @books.
# Naming
There are some names that are somewhat confusing, misleading, or could be expressed better. I already mentioned @book_array which should just be @books.
Not only does add_arr use Hungarian Notation without a real need to, but it also isn't even correct: add_arr doesn't add an array, it adds a book! So, it should be named add_book, but since this is a library system, it is probably obvious that we are adding books, so it could just be called add. Well, except, actually, it doesn't even add a Book, it adds a Hash, but more on that later …
book_list is another misleading name. I would expect a method named book_list to be an attr_reader that returns a book list, i.e. a list of books. Instead, it is a command that lists the books, so at the very least it should be named with a verb to make it clear that it performs a command not a query. So, at the very least, it should be named list_books.
The two block parameters in the call to @books.each_with_index could use some love, too. @books is supposed to be a list of books, so when you iterate over it, what do you get? You get a book, not an item. So, this parameter should be named book, not item. On the other hand, index could just be named i, which is a well-known name for an index. However, Reek complains about that, so we'll call it idx. (Side note: if you disagree with a default setting in a code formatter or a linter or a static analyzer, don't be afraid to change it!)
more_book? is also confusing. It asks about more books, but when the answer is true, then that actually means that there are no more books. So, the method is actually the wrong way round.
But the most confusing name of all is the class: Book. Because it is actually not a book. In fact, when you instantiate it, you assign it to a variable named books, which clearly indicates that it is not a book.
But actually books is not correct either, because Book is actually both a list of books and an application that asks questions, prints stuff, etc. In fact, Book is pretty much everything except a book! A book is actually a Hash in your design, namely the one constructed in your add_arr method.
# Error handling and input validation
There is zero error handling or input validation in your code. If someone enters "eight hundred" for the page count, gets.to_i silently records it as 0. If someone enters "Stop" for the question about more books, the program continues.
You should validate inputs by the user, and handle wrong inputs appropriately.
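For instance, the page-count prompt could reject bad input and re-ask. This is only a sketch — the loop, the method names, and the error message are my own, not taken from the original code:

```ruby
# Parse a page count; returns nil for anything that is not a whole number.
def parse_page_count(input)
  return nil unless input.match?(/\A\d+\z/)
  Integer(input, 10)
end

# Keep asking until the user enters something parseable.
def ask_page_count
  loop do
    puts 'Page count of the book'
    result = parse_page_count(gets.chomp)
    return result if result
    puts 'Please enter a whole number.'
  end
end
```

Splitting the parsing out of the prompting also makes the validation trivially testable.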
# Mixing I/O and computation
In the list_books method, we are mixing I/O and computation by both building a string representation of a library and printing it out. In general, I/O and computation should be segregated and I/O should be relegated to the outer layers of the system.
So, the list_books method should simply return a string representation (and should probably be called to_s), and then the application can print this string … or do something else with it. That is another problem with this design: you can only print a list of books. You cannot display it on a website, for example.
It is not the responsibility of a book to print something, it is not the responsibility of a book to know about other books and libraries, and it is not the responsibility of a book to ask the user questions.
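A minimal sketch of that separation (the BookList name is mine, and books are still the plain hashes used above):

```ruby
# Computation builds the string; whoever calls puts makes the I/O decision.
class BookList
  def initialize(books)
    @books = books
  end

  def to_s
    @books.each_with_index.map { |book, idx|
      "Book#{idx + 1} {Name: #{book[:book_name]}, Author: #{book[:author_name]}, Page count: #{book[:page_count]}}"
    }.join("\n")
  end
end

# puts BookList.new(books)   # printing happens at the outer layer
```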
# Excessive instance variables
You have a lot of instance variables. But apart from @books, none of them are actually used to hold state of the object. They are only used to pass information back and forth between the various methods. So, we should probably make them parameters, arguments, and return values instead. Something like this:
class Book
  private

  attr_reader :books

  def initialize
    @books = []
  end

  public

  def run
    more = 'Yes'
    while more == 'Yes'
      book, more = ask
      add(book)
    end
    list_books
  end

  private

  def ask
    puts 'Name of book'
    book_name = gets.chomp.freeze
    puts 'Author of the book'
    author_name = gets.chomp.freeze
    puts 'Page count of the book'
    page_count = gets.to_i
    puts 'Do you want add more book'
    more = gets.chomp.capitalize.freeze
    [{ book_name: book_name, author_name: author_name, page_count: page_count }.freeze, more]
  end

  def list_books
    books.each_with_index do |book, idx|
      puts "Book#{idx + 1} {Name: #{book[:book_name]}, Author: #{book[:author_name]}, Page count: #{book[:page_count]}}"
    end
  end

  def add(book)
    books << book
  end
end

books = Book.new
books.run
# Overall design
The overall design is very weird, and really does not make much sense. There is no coherence in the class, it does many different things, it doesn't really have any state, and it is being used as a singleton.
It looks like a bunch of procedural code with a class … end wrapped around it for no reason. In fact, the code would be much better without the class.
In object-orientation, objects collaborate with each other by sending messages, but here, there is really only one object, books, which does everything. (Of course, technically speaking, all the strings and numbers and hashes and arrays are also objects, but they are not Domain Objects.)
Personally, I can see at least three different kinds of objects here: we have books, we have lists of books (libraries), and we have library management applications.
Maybe something like this:
# frozen_string_literal: true

class Book
  include Comparable

  private

  attr_writer :title, :author, :page_count

  def initialize(title, author, page_count)
    self.title = title.dup.freeze
    self.author = author.dup.freeze
    self.page_count = page_count
    freeze
  end

  public

  attr_reader :title, :author, :page_count

  def <=>(other)
    return nil unless other.is_a?(Book)
    [title, author, page_count] <=> [other.title, other.author, other.page_count]
  end

  def to_h
    { title: title, author: author, page_count: page_count }
  end

  def to_s
    "{ Title: #{title}, Author: #{author}, Page count: #{page_count} }"
  end

  freeze
end

class Library
  include Enumerable

  private

  attr_writer :books

  def initialize(*books)
    self.books = books
    freeze
  end

  public

  def <<(...)
    @books.<<(...)
    self
  end

  def concat(books)
    books.each(&method(:<<))
    nil
  end

  def books = @books.dup.freeze
  alias_method :to_a, :books

  def each(...) = books.each(...)

  def to_s =
    books.each.with_index(1).map {|book, idx| "Book#{idx} #{book}" }.join("\n")

  freeze
end

class App
  private

  attr_accessor :library

  def initialize = self.library = Library.new

  public

  def run
    done = false
    until done
      book, done = read_book
      library << book
    end
    print_library
  end

  private

  def read_book
    title = user_input('Please, enter the title of the book:', :string)
    author = user_input('Please, enter the author of the book:', :string)
    page_count = user_input('Please, enter the number of pages of the book:', :integer)
    done = !user_input('Do you want to add another book?', :boolean)
    [Book.new(title, author, page_count), done]
  end

  def user_input(question, type)
    print "#{question} "
    input = gets.chomp
    case type
    when :string
      input
    when :integer
      raise ArgumentError.new("Input should be an integer instead of #{input}") unless input =~ /^\d+$/
      input.to_i
    when :boolean
      input = input.downcase
      raise ArgumentError.new("Input should be a boolean instead of #{input}") unless input =~ /^[yntf]/
      input =~ /[yt]/
    end
  end

  def print_library = puts(library)
end

App.new.run
Of course, there is still a lot of room for improvement here. For example, I can imagine an InputValidator, StringReader, IntegerReader, BooleanReader, and probably a BookReader. There could be much better input validation. There is a lot of potential for the Replace Conditional with Polymorphism Refactoring – in general, it should always be possible to write an OO program without any conditionals at all, except for message dispatch.
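To sketch what Replace Conditional with Polymorphism could look like for user_input — the class names and details below are illustrative, not a finished design:

```ruby
class StringReader
  def read(input)
    input
  end
end

class IntegerReader
  def read(input)
    raise ArgumentError, "Input should be an integer instead of #{input}" unless input.match?(/\A\d+\z/)
    input.to_i
  end
end

class BooleanReader
  def read(input)
    input = input.downcase
    raise ArgumentError, "Input should be a boolean instead of #{input}" unless input.match?(/\A[yntf]/)
    input.start_with?('y', 't')
  end
end

# user_input now just dispatches to the reader object; the case expression is gone.
def user_input(question, reader)
  print "#{question} "
  reader.read(gets.chomp)
end
```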
http://mathhelpforum.com/geometry/195052-how-find-length-diagonal-parallelogram.html

# Math Help - how to find the length of the diagonal in a parallelogram
1. ## how to find the length of the diagonal in a parallelogram
when the lengths of the two adjacent sides of the parallelogram are known, viz. 6 and 10, is it possible to find the length of the diagonal?
Any help.
Thanks
2. ## Re: how to find the length of the diagonal in a parallelogram
Originally Posted by arangu1508
when the lengths of the two adjacent sides of the parallelogram are known, viz. 6 and 10, is it possible to find the length of the diagonal?
Any help.
Thanks
According to the Law of Cosine we know that :
$d^2=a^2+b^2-2ab\cdot \cos \angle (a,b)$
So if you don't know the angle between the two sides of the parallelogram, you cannot calculate the lengths of the diagonals.
3. ## Re: how to find the length of the diagonal in a parallelogram
My original question is
the vector sum of the forces of magnitude 10N and 6N can be
(a) 2N (b) 8N (c) 10N and (d) 18N
I thought the vector sum would be the diagonal of parallelogram.
am I on the right track?
Kindly guide me.
4. ## Re: how to find the length of the diagonal in a parallelogram
Originally Posted by arangu1508
My original question is
the vector sum of the forces of magnitude 10N and 6N can be
(a) 2N (b) 8N (c) 10N and (d) 18N
I thought the vector sum would be the diagonal of parallelogram.
am I on the right track?
Kindly guide me.
Hint: Use Triangle Inequality Theorem to eliminate solutions that are not possible...
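Spelled out for two force vectors of magnitudes 6 N and 10 N, the hint gives bounds on the magnitude $R$ of the resultant:

```latex
|10 - 6| \le R \le 10 + 6
\quad\Longrightarrow\quad
4\,\mathrm{N} \le R \le 16\,\mathrm{N}
```

so both 2 N and 18 N fall outside the attainable range, while 8 N and 10 N are possible for suitable angles between the forces.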
5. ## Re: how to find the length of the diagonal in a parallelogram
I have gone through the link. Thank you. According to that except the option (a) 2N rest of the options are possible. Is it okay?
Thank you Mr. Princeps
6. ## Re: how to find the length of the diagonal in a parallelogram
Originally Posted by arangu1508
I have gone through the link. Thank you. According to that except the option (a) 2N rest of the options are possible. Is it okay?
Thank you Mr. Princeps
Option (d) isn't possible either, because 6 + 10 isn't greater than 18
https://stats.stackexchange.com/questions/459485/optimization-of-pool-size-and-number-of-tests-for-prevalence-estimation-via-grou

# Optimization of pool size and number of tests for prevalence estimation via group testing
I'm trying to devise a protocol for pooling lab tests from a cohort in order to get prevalence estimates using as few reagents as possible.
Assuming perfect sensitivity and specificity (if you want to include them in the answer is a plus), if I group testing material in pools of size $$s$$ and given an underneath (I don't like term "real") mean probability $$p$$ of the disease, the probability of the pool being positive is:
$$p_w = 1 - (1 - p)^s$$
if I run $$w$$ such pools the probability of having $$k$$ positive wells given a certain prevalence is:
$$p(k | w, p) = \binom{w}{k} (1 - (1 - p)^s)^k(1 - p)^{s(w-k)}$$
that is $$k \sim Binom(w, 1 - (1 - p)^s)$$.
To get $$p$$ I just need to maximize the likelihood $$p(k | w, p)$$ or use the formula $$1 - \sqrt[s]{1 - k/w}$$ (not really sure about this second one...).
My question is, how do I optimize $$s$$ (maximize) and $$w$$ (minimize) according to a prior $$p$$ in order have the most precise estimates, below a certain level of error?
• For a start: medicalsciences.stackexchange.com/questions/21558/… Do you have data on sens & spec of the tests? I've so far only concluded limits from the FDA's EUA requirements and the EUA instructions. – cbeleites unhappy with SX Apr 9 '20 at 20:37
• Why do you need wheels (or would that be wells?)? In the foreseeable future, wouldn't you wait until the next wheel (batch/lot) is full? And I'd assume that once sample numbers are so low again that this means too long waiting times, $p$ may be so different from the situation now that you'd anyways want to re-calculate pool size. – cbeleites unhappy with SX Apr 9 '20 at 20:40
• I saw your answer to the other question and is very interesting thanks. How did you compute the two plot you presented, about the pool size and number of tests saved by prevalence? I need exactly that, or even better a way to estimate them based on acceptable error rate. I didn't understand the second comment. In what sense I need to wait until the well is full? the idea is to run periodic prevalence studies and save reagent when possible. Yep the pool size would need to be recomputed according to results. – Bakaburg Apr 10 '20 at 8:46
I may have found a solution:
I can estimate the uncertainty around $$p$$ in two ways, given $$w$$ and $$s$$.
First I get the expected results of a pooled test through:
$$E[p_w] = 1 - (1 - p)^s$$
Then, through maximum likelihood and logit transformation, I get the Confidence Intervals:
$$CI_{p_{\alpha/2}} = 1 - \sqrt[s]{1 - \operatorname{logit}^{-1}\!\left(\operatorname{logit}(E[p_w]) \pm Z_{\alpha/2} \frac{1}{\sqrt{w\, E[p_w]\, (1-E[p_w])}}\right)}$$
In alternative I can exploit the Beta distribution as a conjugate of the binomial to get the posterior Credibility Intervals of $$p$$ for the given quantiles $$q$$:
$$CrI_{p_{\alpha/2}} = 1 - \sqrt[s]{1 - \operatorname{Beta}(q,\, 1 + w E[p_w],\, 1 + w (1 - E[p_w]))}$$
this second solution even allows the specification of priors.
I was afraid that these solution would underestimate variability, since they evaluate the variance at the test level (on $$p_w$$), not at the level of the underneath prevalence $$p$$. But comparing the results with a full MCMC hierarchical estimation of $$p$$ posterior with a model:
$$p \sim Beta(\alpha,\beta)$$ $$p_w \sim 1 - Binom(0, s, p)$$ $$p(k | w, p_w) \sim Binom(k, w, p_w)$$
it can be shown that there is no relevant difference with the intervals of the other two methods (which are of course faster to compute).
Finally, I search numerically the maximal value of $s$ and minimal of $w$ that keep the uncertainty below a specified threshold. I'm postulating that as the uncertainty goes down so will the estimation bias due to the loss of information in the pooling. I still haven't found an analytical way to get this error directly.
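A rough sketch of that numerical search in plain Ruby (the 1.96 quantile, the thresholds, and the search bound are illustrative choices of mine, not part of the derivation above):

```ruby
Z = 1.96 # ~95% normal quantile

def logit(p)
  Math.log(p / (1 - p))
end

def inv_logit(x)
  1.0 / (1 + Math.exp(-x))
end

# Width of the approximate CI for the prevalence p, given pool size s and w pools.
def ci_width(p, s, w)
  e = 1 - (1 - p)**s                   # expected pool positivity E[p_w]
  se = 1 / Math.sqrt(w * e * (1 - e))  # standard error on the logit scale
  bounds = [-1, 1].map do |sign|
    1 - (1 - inv_logit(logit(e) + sign * Z * se))**(1.0 / s)
  end
  bounds.max - bounds.min
end

# Smallest w that keeps the CI width under max_width (ci_width shrinks as w grows).
def min_pools(p, s, max_width)
  (1..10_000).find { |w| ci_width(p, s, w) < max_width }
end
```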
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=6369

## WeBWorK Main Forum
### Section numbers dynamically deleted?
by Tim Alderson -
Number of replies: 4
We have an instructor reporting that after manually entering section numbers into the class list, the numbers were dynamically deleted. The only section numbers remaining are those of the professors (logged in both directly and through LTI).
The section numbers seemed to disappear piecemeal rather than all at once, so perhaps upon authenticating through the LTI, the particular student's section information is stripped.
No others seem to be using section numbers here, so I have no comparison group. I have not found references to this type of issue in the forums.
We are using WW 2.16, with LTI through D2L.
Any thoughts on what might be happening here would be most welcome.
### Re: Section numbers dynamically deleted?
by Glenn Rice -
If you have $LMSManageUserData = 1; in authen_LTI.conf, then any data that is manually set will be overridden when a student logs in. So if you set the section manually in webwork, and then a student logs in via LTI authentication and the data from the LMS does not contain that section number, it will be removed. So you will probably need to change the setting in authen_LTI.conf to $LMSManageUserData = 0;.
I had to do this for another faculty member that wanted to do this as well. You can set that setting in the course's course.conf file if you only want it for that course.
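For that per-course override, the line in the course's course.conf would be the same assignment (assumed here to mirror the authen_LTI.conf syntax):

```perl
# course.conf — overrides the global setting for this course only
$LMSManageUserData = 0;
```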
### Re: Section numbers dynamically deleted?
by Tim Alderson -
Thank you Glenn. Your suggestion worked!
I did not see such a setting in course.conf, so I edited authen_LTI.conf.
https://socratic.org/questions/you-are-given-500-grams-of-a-substance-with-a-half-life-of-1-5-years-how-much-wi

# You are given 500 grams of a substance with a half-life of 1.5 years. How much will remain after 15.0 years?
$0.488$ grams
$15.0$ years is $10$ times the half life, so the amount of the substance remaining will be:
$\frac{500\,\text{g}}{2^{10}} = \frac{500\,\text{g}}{1024} \approx 0.488\,\text{g}$
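The general relation being used, for initial amount $N_0$ and half-life $T_{1/2}$:

```latex
N(t) = N_0 \left(\frac{1}{2}\right)^{t/T_{1/2}}
= 500\,\mathrm{g}\times\left(\frac{1}{2}\right)^{15.0/1.5}
= \frac{500\,\mathrm{g}}{2^{10}} \approx 0.488\,\mathrm{g}
```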
https://cstheory.stackexchange.com/questions/7865/how-many-words-of-length-k-on-l-letters-avoid-a-partial-word

# How many words of length $k$ on $l$ letters avoid a partial word?
ORIGINAL QUESTION
This is a hopefully smarter and better-informed version of a question I asked on MathOverflow. When I asked that question, I did not even know the name of the area of mathematics my problem was in. Now I am pretty sure it lies in Algorithmic Combinatorics on Partial Words. (Recent book on the subject here.)
I want to make a list of words on $l$ letters. Each word has length exactly $k$. The deal is, if $a \lozenge ^j b$ is in the list, where $\lozenge$ is a wildcard/don't-care symbol, then $a \lozenge ^j b$ can never appear again in the list. (The same holds true if $a=b$, or if $j=0$ and hence the prohibited subword is $ab$.)
Example where $k=4$ and $l=5$:
$abcd$
$bdce$
$dcba$ <-- prohibited because $dc$ appeared in the line above
$aeed$ <-- prohibited because $a \lozenge \lozenge d$ appeared on the first line
The literature on "avoidable partial words" that I have found has all been infinitary -- eventually some word pattern is unavoidable if the word size is large enough. I would like to find finitary versions of such theorems. So, question:
Given a partial word of form $a \lozenge^j b$ in an alphabet of $l$ letters, how many words of length $k$ avoid it, and can they be explicitly produced in polynomial time?
I don't expect the above question to be difficult, and, unless there is a subtlety I am missing, I could calculate it myself. The real reason I am posting on this site is because I need to know a lot more about the properties of such word lists for my application, so I am hoping someone can answer the followup question:
Has this been studied in generality? What are some papers that consider, not just whether a partial word is eventually unavoidable, but "how long it takes" before it becomes unavoidable?
Thanks.
• (1) I cannot understand the correspondence between your first question and the example stated before it. What is the input in your example? (2) In your first question, are you using k for two different purposes? Aug 19 '11 at 15:44
• Regarding (2), yes I made a mistake, now edited, thank you. Aug 19 '11 at 15:51
• Regarding (1), I would like to know "how much room I have left" once a partial word appears. But yes, the real question is how to produce lists like the one that appears in the example (without the prohibited partial words). So the input would be the values of $k$ and $l$, and a desired number of words to produce in a list, all of which had the "avoidance of previously appearing partial words property." Aug 19 '11 at 15:53
• @Aaron, I don't know what your ultimate application is, but Davenport-Schinzel sequences (and generalizations) ask about the maximum length of a string that does not contain a particular repeating pattern. It's a related notion. Aug 19 '11 at 16:44
• Seth Pettie has been studying some very nifty generalizations to forbidden submatrices as well. Aug 19 '11 at 20:12
Here's a special case: the number of binary words of length $k$ such that no two ones appear consecutively is $F(k+2)$, where $F(n)$ is the $n^{th}$ Fibonacci number (starting with $F(1)=1, F(2)=1$). Proof is via the Zeckendorf representation.
EDIT: We can extend this initial special case into the slightly larger special case of $a\lozenge^0a$. Consider strings of length $k$ over an alphabet of size $l+1$ such that the letter $a$ does not appear twice consecutively. Let $f(k)$ be the number of such strings (which we will call "valid"). We claim that: $$f(k) = l*f(k-1) + l*f(k-2)$$ $$f(0) = 1, f(1) = l+1$$ The intuition is that we can construct a valid string of length $k$ by either: a) adjoining any of the $l$ letters that are not $a$ to a valid string of length $k-1$, or b) adjoining the letter $a$ and then any other letter but $a$ to a valid string of length $k-2$.
You can verify that the following is a closed form for the above recurrence: $$f(k) = \sum_{i=0}^{k} {{k+1-i}\choose{i}} l^{k-i}$$ where we understand ${{n}\choose{i}} = 0$ when $i>n$.
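A quick brute-force sanity check of the recurrence (an illustrative script of my own; the letter 0 plays the role of $a$):

```ruby
# f(k): number of valid strings according to the recurrence above.
def f(k, l)
  return 1 if k == 0
  return l + 1 if k == 1
  l * f(k - 1, l) + l * f(k - 2, l)
end

# Count strings of length k over an alphabet of size l + 1 with no "aa" factor,
# by exhaustive enumeration (small k only).
def brute_force(k, l)
  (0..l).to_a.repeated_permutation(k).count do |word|
    word.each_cons(2).none? { |x, y| x == 0 && y == 0 }
  end
end

(0..6).each do |k|
  raise "mismatch at k=#{k}" unless f(k, 2) == brute_force(k, 2)
end
```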
EDIT #2: Let's knock out one more case -- a $\lozenge^0 b, a \neq b$. We'll call strings over an $l$-element alphabet that do not contain the substring $ab$, "valid" and let $S_k$ denote the set of valid strings of length $k$. Further, let's define $T_k$ to be the subset of $S_k$ consisting of strings starting with $b$ and $U_k$ to be those not starting with $b$. Finally, let $f(k) = |S_k|$, $g(k) = |T_k|$, $h(k) = |U_k|$.
We observe that $g(0)=0, h(0)=1, f(0)=1$ and $g(1)=1, h(1)=l-1, f(1)=l$. Next, we infer the following recurrences: \begin{eqnarray} g(k+1) &=& f(k) \\ h(k+1) &=&(l-1)*h(k) + (l-2)*g(k) \end{eqnarray} The first comes from the fact that adding a $b$ to the start of any element of $S_k$ produces an element of $T_{k+1}$. The second comes from observing that we can construct an element of $U_{k+1}$ by adding any character but $b$ to the front of any element of $U_{k}$ or by adding any character but $a$ or $b$ to the front of any element in $T_k$.
Next, we rearrange the recurrence equations to obtain: \begin{eqnarray} f(k+1) &=& g(k+1) + h(k+1) \\ &=& f(k) + (l-1)*h(k) + (l-2)*g(k) \\ &=& f(k) + (l-1)*f(k) - g(k) \\ &=& l*f(k) - f(k-1) \end{eqnarray}
We can get a rather opaque closed-form solution to this recurrence by mucking around a bit with generating function stuff or, if we're lazy, heading straight to Wolfram Alpha. However, with a little bit of googling and poking around in OEIS, we find that we actually have: $$f(k) = U_k(l/2)$$ where $U_k$ is the $k^{th}$ Chebyshev polynomial of the second kind (!).
• That's very interesting, thank you. Aug 21 '11 at 16:49
A completely different approach for the first question reuses the answers to the recent question on generating words in a regular language: it suffices to apply these algorithms for length $k$ on the regular language $\Sigma^\ast a\Sigma^j b\Sigma^\ast$ where $\Sigma$ is the alphabet.
• Thanks. I was wondering if there might be a connection, and your answer here gave me the push I needed to look at the papers referenced there, and one of them definitely solves a piece of one of the problems I am considering. Aug 22 '11 at 17:28
assuming $j$ is fixed, we can count the number of ways a pattern $a\lozenge^j b$ can be matched: the first $a$ symbol can be matched at some position $1\leq i\leq k-j-1$, and we have $l^{i-1}$ possibilities before that point, $l^j$ between $a$ and $b$, and $l^{k-j-i-1}$ for the remainder of the string, thus a total of $$\sum_{i=1}^{k-j-1}l^{i-1}\cdot l^{j}\cdot l^{k-j-i-1}=(k-j-1)l^{k-2}$$ cases. As noted by Tsuyoshi Ito in the comments, this count is not the number of different words matching $a\lozenge^j b$ since a single word could match the same pattern in different ways. For instance $aa$ is matched three times in $aaaa$, $ab$ two times in $abab$, and $a\lozenge b$ two times in $aabb$. We can try to count the number of ways of matching patterns several times and exhibit an "inclusion-exclusion" expression, but the ways pattern might overlap makes this too long.
For the first question, under the understanding that $j$ is not fixed, i.e. that we want to avoid embedding the word $ab$:
• either $a$ the first symbol never appears, which accounts for $(l-1)^k$ possible words,
• or $a$ appears first in some position $1\leq i\leq k$, then we cannot use $b$ in the remainder of the word: there are $(l-1)^{i-1}$ choices for the factor up to $a$, and $(l-1)^{k-i}$ choices for the remainder, giving in total $\sum_{i=1}^k(l-1)^{i-1}\cdot(l-1)^{k-i}=k(l-1)^{k-1}$ possible words. Whether $a=b$ is irrelevant.
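This count is easy to confirm by brute force for small parameters (illustrative script of my own, with $a = 0$, $b = 1$):

```ruby
# Count length-k words over l letters with no occurrence of 0 followed,
# at any distance, by 1.
def avoid_embedding(k, l)
  (0...l).to_a.repeated_permutation(k).count do |word|
    i = word.index(0)
    i.nil? || !word[(i + 1)..-1].include?(1)
  end
end

[[2, 2], [3, 3], [4, 3]].each do |k, l|
  formula = (l - 1)**k + k * (l - 1)**(k - 1)
  raise "mismatch at k=#{k}, l=#{l}" unless avoid_embedding(k, l) == formula
end
```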
For the second question, I don't have much to suggest; there is a relation with word embeddings, but the results I know about bad sequences for Higman's Lemma do not immediately apply.
• Thanks very much, Sylvain, though I don't think that's quite right. We can use $b$ later in the word if $a$ appears. We just can't use $b$ if there are exactly $j$ letters in between $a$ and $b$, if $a \lozenge ^j b$ appeared earlier. Perhaps I am misunderstanding your argument though. Aug 19 '11 at 20:57
• Sorry, I wasn't sure whether $j$ was fixed or not. I've edited the answer with fixed $j$ as well. Aug 19 '11 at 21:04
• I do not think that the fixed-j case is correct. For example, if k=4 and j=1, the word aabb is subtracted twice. I haven’t read the non-fixed-j case. Aug 20 '11 at 13:09
• @Tsuyoshi Ito: you're right, there is no unique match in that case. Aug 20 '11 at 15:39
• Please mark an incorrect answer as such. Aug 20 '11 at 17:42 | 2021-11-29 09:03:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8491812944412231, "perplexity": 273.72401496787376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00052.warc.gz"} |
https://math.stackexchange.com/questions/3725800/how-to-integrate-int-frac-sin4x-cos4x-sin-x-cos-x-dx | # How to integrate :$\int \frac{\sin^4x+\cos^4x}{\sin x \cos x}\:dx$
How to integrate:
$$\int \frac{\sin^4x+\cos^4x}{\sin x \cos x}\:dx$$
$$=\int \:\sin^2x \tan x \: dx+\int \:\cos^2x \cot x \:dx$$
Any suggestion?
• If you're going to submit an edit, please make sure that it is accurate and your MathJax works. This way, we can avoid the edit war that happened on this post. – Michael Morrow Jun 19 '20 at 3:48
• I have to be nitpicky and point out that you integrate a function or evaluate an integral (unless you’re computing a double integral). – gen-ℤ ready to perish Jun 19 '20 at 4:49
Add and subtract $$2 \sin^2x \cos^2x$$ in the numerator. Can you continue?
$$\int \frac{\sin^4x+\cos^4x}{\sin x \cos x}\:dx$$ $$=\int \frac{(\sin^2x+\cos^2x)^2-2\sin^2x\cos^2x}{\sin x \cos x}\:dx$$ $$=\int \frac{1-2\sin^2x\cos^2x}{\sin x \cos x}\:dx$$ $$=\int (2\csc 2x-\sin 2x)\:dx$$
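Integrating the last line gives $\ln|\tan x| + \tfrac12\cos 2x + C$ on each interval where $\sin x\cos x \neq 0$. A quick stdlib-only numerical check (my own sketch, not part of this answer) that this antiderivative differentiates back to the integrand:

```python
import math

def integrand(x):
    return (math.sin(x)**4 + math.cos(x)**4) / (math.sin(x) * math.cos(x))

def antideriv(x):
    # ln|tan x| + (1/2) cos 2x
    return math.log(abs(math.tan(x))) + math.cos(2 * x) / 2

# Central-difference derivative of the antiderivative vs. the integrand.
h = 1e-6
for x in (0.3, 0.7, 1.2):
    num_deriv = (antideriv(x + h) - antideriv(x - h)) / (2 * h)
    assert abs(num_deriv - integrand(x)) < 1e-5
```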
An alternative approach is to write the integral as $$\int\frac{\sin^3xdx}{\cos x}+\int\frac{\cos^3xdx}{\sin x}$$. In the first part, use $$u=\cos x$$ to get $$\int\frac{(u^2-1)du}{u}=\frac12u^2-\ln|u|+C$$, where $$C$$ is a locally constant function that can change whenever $$u=0$$, i.e. at $$x\in\pi\Bbb Z\setminus\tfrac{\pi}{2}\Bbb Z$$. In the second part, use $$v=\sin x$$ to get $$\int\frac{(1-v^2)dv}{v}=\ln|v|-\frac12v^2+C^\prime$$, with $$C^\prime$$ locally constant but able to change at $$x\in\pi\Bbb Z$$. So$$\int\frac{\sin^4x+\cos^4x}{\sin x\cos x}dx=\ln|\tan x|+\frac12(\cos^2x-\sin^2x)+K,$$where $$K$$ is locally constant but can change at $$x\in\tfrac{\pi}{2}\Bbb Z$$. | 2021-03-01 10:55:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8877331614494324, "perplexity": 276.042046668101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362481.49/warc/CC-MAIN-20210301090526-20210301120526-00419.warc.gz"} |
https://mathematica.stackexchange.com/questions/67116/decomposition-of-a-semialgebraic-set-into-connected-components | # Decomposition of a semialgebraic set into connected components
Is there any built-in function for doing decomposition of a semialgebraic set into connected components? The only way I now can think of is to use
CylindricalAlgebraicDecomposition
and to build connected components from its output: all terms connected by disjunction at the first level are treated as vertices of a graph, and two vertices are connected if Length[FindInstance[v1 && v2, {vars}]] != 0. On the resulting graph, the usual depth-first-search-based algorithm is used. But intuition says that such things are usually already implemented, hence the question.
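The grouping step described above — cells as vertices, an edge whenever two cells share a point, then connected components — can be sketched generically as follows (a Python sketch of mine with the overlap test abstracted into a predicate; in Mathematica, FindInstance plays that role):

```python
def connected_cell_groups(cells, overlaps):
    """Group decomposition cells into connected components.

    cells    -- list of hashable cell descriptions
    overlaps -- predicate: overlaps(c1, c2) is True when the two cells
                touch (e.g. FindInstance[c1 && c2, vars] is nonempty)
    """
    # Build adjacency lists for the cell graph.
    adj = {c: [] for c in cells}
    for i, c1 in enumerate(cells):
        for c2 in cells[i + 1:]:
            if overlaps(c1, c2):
                adj[c1].append(c2)
                adj[c2].append(c1)

    # Depth-first search to collect the components.
    seen, components = set(), []
    for start in cells:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            comp.append(c)
            stack.extend(adj[c])
        components.append(comp)
    return components
```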
• Are you looking for SemialgebraicComponentInstances? – Artes Dec 2 '14 at 12:52
• @Artes No, as I understand it, SemialgebraicComponentInstances will give me at least one point from each connected component, but I need the components themselves. – Artem Malykh Dec 2 '14 at 13:00
EDIT: CylindricalDecomposition has been improved since I wrote this answer, probably in v11.2! Now it takes an optional topological operation argument. As a result, one can achieve the results described as connected below simply by adding such an argument to CylindricalDecomposition:
decomp = List @@ BooleanMinimize@CylindricalDecomposition[eqns, {x, y},
"Components"];
The code below is a bit of a cheat: it modifies sets acquired through cylindrical decomposition by converting < to <= and > to >=. This prevents some infinitesimally small gaps from being recognised as such, but gains the possibility of finding overlaps between cylindrical cells produced by CAD. It may still serve as a starting point for more "real-world" solutions.
This code constructs a pairwise graph from those DNF components of the decomposition for which their closed region overlaps with another. From this connected graph components are computed, and this gives more or less directly connected components you seek:
Module[{eqns, decomp, connected, regdim},
eqns = x^2 + y^2 <= 1 && x^2 + (y - 1/2)^2 >= 1/2 &&
! (0 <= y - x/2 <= 1/4) && ! (0 <= y/2 + x <= 1/4) &&
x^2 + (y + 3/4)^2 >= 1/32;
regdim =
RegionDimension@ImplicitRegion[Reduce[#, {x, y}, Reals], {x, y}] &;
decomp =
List @@ BooleanMinimize@CylindricalDecomposition[eqns, {x, y}];
connected =
Or @@@ ConnectedComponents@
Graph[decomp, UndirectedEdge @@@
Select[Subsets[decomp, {2}],
regdim[And @@ # //. {Less -> LessEqual, Greater -> GreaterEqual}] >= 0 &]];
(Quiet@RegionPlot[#, {x, -1, 1}, {y, -1, 1}, PlotPoints -> 100] & /@
{decomp, connected})~Join~
{FullSimplify[connected, (x | y) \[Element] Reals]}]
The result shows CAD result, "unified" connected components and each component:
{(Sqrt[1 - x^2] + y >= 0 && ((x > 2 y && 2/Sqrt[5] + x > 0 && Sqrt[6] + 5 x <= 1) || (Sqrt[6] + 5 x > 1 && Sqrt[2] + 8 x <= 0 && Sqrt[2 - 4 x^2] + 2 y <= 1) || (x < 1/Sqrt[5] && 2 x + y < 0 && 8 x >= Sqrt[2]) || (Sqrt[2] + 8 x > 0 && 8 x < Sqrt[2] && 6 + Sqrt[2 - 64 x^2] + 8 y <= 0))) || (Sqrt[2 - 64 x^2] <= 6 + 8 y && ((8 x < Sqrt[2] && 2 x + y < 0 && 10 x >= 1) || (Sqrt[2] + 8 x > 0 && 10 x < 1 && Sqrt[2 - 4 x^2] + 2 y <= 1))), (1 + x == 0 && y == 0) || (Sqrt[7] + 4 x == 0 && 4 y == 3) || (Sqrt[1 - x^2] >= y && ((1 + 2 Sqrt[19] + 10 x == 0 && Sqrt[1 - x^2] + y > 0) || (Sqrt[1 - x^2] + y >= 0 && 1 + x > 0 && 1 + 2 Sqrt[19] + 10 x < 0) || (1/Sqrt[2] + x > 0 && Sqrt[7] + 4 x < 0 && 1 + Sqrt[2 - 4 x^2] <= 2 y) || (1 + 2 Sqrt[19] + 10 x > 0 && 1 + 2 x < 4 y && 1/Sqrt[2] + x <= 0))) || (1/Sqrt[2] + x > 0 && 1 + 2 x < 4 y && Sqrt[2 - 4 x^2] + 2 y <= 1), (x == 1 && y == 0) || (Sqrt[1 - x^2] + y >= 0 && ((Sqrt[1 - x^2] >= y && x > 2/Sqrt[5] && x < 1) || (10 x > 2 + Sqrt[19] && 5 x < 1 + Sqrt[6] && Sqrt[2 - 4 x^2] + 2 y <= 1) || (x > 2 y && 5 x >= 1 + Sqrt[6] && x <= 2/Sqrt[5]))) || (10 x <= 2 + Sqrt[19] && 4 x + 2 y > 1 && Sqrt[2 - 4 x^2] + 2 y <= 1), (4 x == Sqrt[7] && 4 y == 3) || (4 x > Sqrt[7] && 10 x < 7 && 1 + Sqrt[2 - 4 x^2] <= 2 y && y <= Sqrt[1 - x^2]) || (1 + 2 x < 4 y && Sqrt[1 - x^2] >= y && 10 x >= 7)}
EDIT:
Here's an improvement to the case of infitesimal gaps. Instead of just rewriting CAD cells to closures, we search for intersection of one cell with RegionBoundary of another. RegionPlot visualisation is not particularly pretty in this case (there's a single point connecting upper and lower left side now), but that's not a problem caused by the connected components code. This version has a drawback of being considerably slower than the original answer.
Module[{eqns, decomp, connected, regconn},
eqns = x^2 + y^2 <= 1 && x^2 + (y - 1/2)^2 >= 1/2 &&
! (0 == y - x/2 && x != -3/4) && ! (0 == y/2 + x) &&
x^2 + (y + 3/4)^2 >= 1/32;
regconn =
Resolve@Exists[{x, y}, (x | y) \[Element] Reals,
RegionMember[
RegionIntersection[ImplicitRegion[#1, {x, y}],
RegionBoundary@ImplicitRegion[#2, {x, y}]], {x, y}]] &;
decomp =
List @@ BooleanMinimize@CylindricalDecomposition[eqns, {x, y}];
connected =
Or @@@ ConnectedComponents@
Graph[decomp, UndirectedEdge @@@
Select[Subsets[decomp, {2}],
regconn @@ # || regconn @@ Reverse@# &]];
(Quiet@RegionPlot[#, {x, -1, 1}, {y, -1, 1},
PlotPoints -> 100] & /@ {decomp, connected})~Join~
{FullSimplify[connected, (x | y) \[Element] Reals]}]
...
• If you really want to stress it out, try a transcendental component! :) (+1) -- I've often wanted to do this. – Michael E2 May 11 '15 at 17:11
• @MichaelE2 Do you mean transcendental functions in definitions of the set? That would be obviously outside the domain of semialgebraic sets which are at least somewhat easier to operate computationally... – kirma May 11 '15 at 17:16
• In mathematics, we call your "expanded" components their closures. – Michael E2 May 11 '15 at 17:16
• Yes, that's what I meant, and yes I know it goes beyond the literal scope of the question. It also sometimes will go beyond the capabilities of Reduce, which can be used instead of CylindricalDecomposition. – Michael E2 May 11 '15 at 17:17
• It could be why I gave up on this sort of thing in the past. :) – Michael E2 May 12 '15 at 3:48 | 2019-02-21 12:21:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25666749477386475, "perplexity": 1708.1997996613009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504594.59/warc/CC-MAIN-20190221111943-20190221133943-00172.warc.gz"} |
https://math.libretexts.org/Courses/Saint_Mary's_College_Notre_Dame_IN/SMC%3A_MATH_339_-_Discrete_Mathematics_(Rohatgi)/Text/5%3A_Graph_Theory |
# 5: Graph Theory
Graph Theory is a relatively new area of mathematics, first studied by the super famous mathematician Leonhard Euler in 1735. Since then it has blossomed into a powerful tool used in nearly every branch of science and is currently an active area of mathematics research.
• 5.1: Prelude to Graph Theory
Pictures like the dot and line drawing are called graphs. Graphs are made up of a collection of dots called vertices and lines connecting those dots called edges. When two vertices are connected by an edge, we say they are adjacent.
• 5.2: Definitions
The way we avoid ambiguities in mathematics is to provide concrete and rigorous definitions. Crafting good definitions is not easy, but it is incredibly important. The definition is the agreed upon starting point from which all truths in mathematics proceed. Is there a graph with no edges? We have to look at the definition to see if this is possible. We want our definition to be precise and unambiguous, but it also must agree with our intuition for the objects we are studying.
• 5.3: Planar Graphs
When is it possible to draw a graph so that none of the edges cross? If this is possible, we say the graph is planar (since you can draw it on the plane). Notice that the definition of planar includes the phrase “it is possible to.” This means that even if a graph does not look like it is planar, it still might be.
• 5.4: Coloring
Given any map of countries, states, counties, etc., how many colors are needed to color each region on the map so that neighboring regions are colored differently? How is this related to graph theory? Well, if we place a vertex in the center of each region (say in the capital of each state) and then connect two vertices if their states share a border, we get a graph.
• 5.5: Euler Paths and Circuits
An Euler path, in a graph or multigraph, is a walk through the graph which uses every edge exactly once. An Euler circuit is an Euler path which starts and stops at the same vertex. Our goal is to find a quick way to check whether a graph (or multigraph) has an Euler path or circuit.
• 5.6: Matching in Bipartite Graphs
Given a bipartite graph, a matching is a subset of the edges for which every vertex belongs to exactly one of the edges. Our goal in this activity is to discover some criterion for when a bipartite graph has a matching.
• 5.7: Weighted Graphs and Dijkstra's Algorithm
• 5.8: Trees
• 5.9.1: Tree Traversal
• 5.9.2: Spanning Tree Algorithms
• 5.9.3: Transportation Networks and Flows
• 5.E: Graph Theory (Exercises)
• 5.S: Graph Theory (Summary)
Hopefully this chapter has given you some sense for the wide variety of graph theory topics as well as why these studies are interesting. There are many more interesting areas to consider and the list is increasing all the time; graph theory is an active area of mathematical research. | 2021-06-24 05:02:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7762609124183655, "perplexity": 313.1753157067799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488551052.94/warc/CC-MAIN-20210624045834-20210624075834-00441.warc.gz"} |
https://stackabuse.com/what-is-arduino | What is Arduino? - Stack Abuse
# What is Arduino?
### Arduino Explained
One of the most common questions I see from people that are just entering electronics and programming is: what is Arduino? Well, Arduino is a platform for microcontroller devices that makes embedded programming much easier than traditional methods. Thanks to Arduino's simplicity and ease-of-use, embedded systems and programming now have a much lower barrier of entry than before. For only about $25 you can get started in electronics, as opposed to a few hundred dollars to buy evaluation boards and hardware programmers.

The Arduino platform is essentially composed of the following (all of which are open source):

• C/C++ framework for AVR, ARM, and more (based on Wiring)
• Device Bootloader
• Integrated Development Environment (IDE) for Windows, Mac, and Linux

The software framework used to program Arduinos isn't quite strict C/C++ (although it can be if you want), but instead it is a simplified version that removes most of the boilerplate code to keep development as simple as possible. This is in contrast to traditional style embedded programming where quite a bit of initialization logic was needed just to get the device ready for operation. In the past many people just starting out would get frustrated and quit before they could get the device to do anything at all.

The device bootloader is a program that comes pre-programmed on the Arduino microcontrollers and assists with loading your code from memory on startup. When the device is powered on, the first code to run is the bootloader, which fetches your application code from memory and starts its execution. In the case of Arduinos, the bootloader also allows you to load code onto the device via a USB cable instead of a more expensive hardware programmer (or in-system programmer).

The IDE is a desktop application that you use to write, compile, and load code for Arduinos. You can think of it as a glorified text editor (with syntax highlighting) that also compiles and uploads the code for you.
Here you can find plenty of example code, configurations, and help documentation to help set up all of the Arduinos you buy. The IDE is not required as you can also write, compile, and load code using the Mac/Linux command line, but this is usually reserved for more advanced users.

### Why are Arduinos Useful?

#### They're Easy to Use

As we've already stated, Arduinos are useful in that they greatly lower the barrier of entry into programming embedded electronics. Thanks to the open source tools available, you can write meaningful applications within minutes instead of hours or days. The learning curve is so much lower now than it used to be, which allows more people to get involved, and in turn expands the industry for everyone.

The Arduino (and Wiring, the programming framework it's based off of) was created with designers, artists, and electronics novices in mind to help encourage a community of all skill levels and to allow them to develop and share their ideas. This opened up a whole new world of interactive art and hobbyist projects that couldn't have been developed otherwise.

#### They're Open Source

Since the Arduino platform is open source and it has allowed millions of people to get involved in embedded electronics, we've seen a huge number of open source projects/code flood sites like Github, which is great for the community. This means if you're trying to interface with the LSM9DS0 9-DOF sensor chip you can just pop on over to Adafruit's LSM9DS0 library on Github and download the code, reducing development time by hours, or even days depending on your skill level.

Not only is the code open source, but the hardware is as well. In my opinion, the only thing harder than writing code for microcontrollers is designing the hardware electronics for one. Things are getting better, but there never used to be a whole lot of documentation teaching you how to design a printed circuit board (PCB) with a microcontroller and peripheral components.
Now, there are hundreds of boards, shields, and peripheral components available to use as reference thanks to the open PCB designs.

#### They're Cheap

You can easily find some of the Arduino boards on the internet for around $15, which is far below the hundreds of dollars you used to have to pay for microprocessor/microcontroller evaluation boards. Although hobby electronics did exist, they weren't cheap and their tools were usually pretty poorly made. To get anything higher quality you had to pay top dollar.
Even worse, if you made a mistake and fried your board then you were pretty much SOL. And believe me, when you're just starting out you'll burn up a board or two.
#### Some Examples
I could write all day long about how great Arduino is and why you should use it, but that's not going to really tell you what they're capable of doing. So here are a few projects powered by the Arduino platform.
##### MultiWii Drones
Although the name may be confusing, this is actually a custom-made Arduino board used to control a drone. It's capable of powering RC planes, cars, and anything from tri-copters (3 propellers) to hexa-copters (6 propellers). The Arduino-powered microcontroller interfaces with accelerometers, gyros, barometers, GPS, and more. It's capable of receiving data from all of these sensors and from the transmitter up to 250 times per second to make adjustments mid-flight. I'd say that's pretty capable.
##### ArduSat Satellite
Believe it or not, there is an Arduino-based satellite orbiting Earth right now containing a bunch of sensors for different experiments. Apparently the team built and launched this nanosat with the intention of allowing the general public to design and run their own space-based applications and experiments. Not bad for a $25 hobbyist device.
##### Arduino Laser Harp
This, in my mind, is a good example of what the Arduino/Wiring creators had in mind when they say they created Arduino for artists and designers. It's a nice combination of visual and audio effects that would be difficult to create without the microcontroller.
### How do you use an Arduino?
Okay, enough talk about how easy Arduinos are to use, lets get in to the details. I'll walk you through the steps to write a small "Hello World" sketch for the Arduino Uno.
After installing the IDE, open it and click the 'New' button to start a new project. This should bring up a new text window with only the setup() and loop() functions in it. This is the only boilerplate code you need for the sketch.
Now, I won't be going through all of the details here (which I'll save for another post), but the gist of our sketch is that it'll turn on an LED for a half second, turn the LED off for a half second, and continuously repeat. The code should be simple enough to infer what each command is doing. This is about as simple as you can get.
Write the following code in the text window:
void setup() {
pinMode(13, OUTPUT);
}
void loop() {
digitalWrite(13, HIGH);
delay(500);
digitalWrite(13, LOW);
delay(500);
}
If you just want to verify that your code compiles, but you don't want to upload it to the board, you'll want to click the 'Verify' button. But before you do, make sure you've told the IDE which Arduino board you're using. In my case, I'm using an Uno, so I would tell the IDE this by clicking Tools->Board->Arduino Uno. Now the IDE knows the configuration of my board and how to compile the code.
If you haven't done so already, click 'Verify'. After a second or two you should see text appear in the lower console telling you the sketch "uses 1,108 bytes (3%) of program storage space" or something similar. Since no errors appeared we know this was accepted by the compiler as valid code.
To upload it to your board, you must first connect the board to your computer via the USB cable. Once connected, you may need to tell the IDE which port the Arduino is on (although most of the time it can find it automatically). You can do this by going to Tools->Port and selecting the port that ends in (Arduino Uno).
Finally, click the 'Upload' button. You'll know the upload worked if you see "Done uploading" just below the text editor window. You should also see the small LED on the board blinking on and off every second.
And that's it, you just wrote code to power a microcontroller!
### Conclusion
I hope this cleared some things up on what exactly an Arduino is, and why they're so popular. The platform isn't going away soon, and different variations are popping up all the time, so if you do a bit of searching you'll likely be able to find one that suits your needs. For help, check out the Arduino forums, which have a ton of people willing to help any and all skill levels.
I'll be writing up some tutorials on different projects you can make over the next couple of weeks, so be sure to subscribe to the newsletter!
What Arduino projects do you want to see made? Let us know in the comments!
Last Updated: January 1st, 2016
Get tutorials, guides, and dev jobs in your inbox. | 2021-06-14 11:41:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1861741989850998, "perplexity": 1874.4538942993486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612154.24/warc/CC-MAIN-20210614105241-20210614135241-00320.warc.gz"} |
https://uk.answers.yahoo.com/question/index?qid=20200216200810AAGmt6Y | Anonymous
Anonymous asked in Science & Mathematics · Chemistry · 2 months ago
# How do I find the radius of this?
Chromium forms a body centered cubic crystal. If the length of an edge is 2.884 angstroms, and the density is 7.20 g/cm^3 , what is the radius of a chromium atom in angstroms?
Relevance
• Dr W
Lv 7
2 months ago
I cover all the variations of these crystal cell problems in my answer here
spend some time and sort through it.
***********
this problem. see the image I've attached
in image #1.. note the edges of the cell start in the CENTER of each corner atom
in image #2.. note how the corner atom exists in 8 adjacent cells
in image #3.. note how 1/8th of each corner atom is in any given cell and how the central atom is entirely in the cell
in image #4..
.. (1) note that for simple cubic cells, the atoms touch on an edge
.. .. ... so that edge length = 2*r.
.. (2) For face centered cubic cells, the atoms touch on the diagonal
.. .. .. of the face. so that face diagonal length = 1r + 2r + 1r = 4r
.. .. . .then using the Pythagorean theorem
.. .. .. ... . (edge length)² + (edge length)² = (face diagonal length)²
.. .. .. ... . 2*(edge length)² = (face diagonal length)²
.. .. .. ... . √ 2*(edge length) = (face diagonal length)
.. .. .. ... .√ 2*(edge length) = (4r).
.. .. .. .. ... edge length = 4r/√ 2
.. (3) for body centered cubic cells, the atoms touch on the diagonal
.. .. . .of the CELL. which has length = r+2r+r = 4r
.. .. ...with legs of 1 edge and 1 face diagonal so that
.. . . ...... . (edge length)² + (face diagonal length)² = (cell diagonal length)²
... .. .. ... . 3*(edge length)² = (cell diagonal length)²
.. .. .. ... . √3*(edge length) = (cell diagonal length)
.. .. .. ... .√ 3*(edge length) = (4r).
.. .. .. .. ... edge length = 4r/√ 3
see the pics
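Plugging the numbers into the body-centered relation above (a Python sketch of my own, not part of the original answer; the 52.00 g/mol molar mass of Cr and 2 atoms per BCC cell are standard reference values):

```python
import math

edge = 2.884                      # edge length in angstroms
r = math.sqrt(3) / 4 * edge       # BCC: 4r = sqrt(3) * (edge length)
print(round(r, 3))                # 1.249 angstroms

# The quoted density is consistent with the same cell, which is why it
# is redundant here: 2 atoms per BCC cell, Cr ~ 52.00 g/mol.
mass = 2 * 52.00 / 6.022e23       # grams per unit cell
volume = (edge * 1e-8) ** 3       # cm^3 per unit cell (1 angstrom = 1e-8 cm)
print(round(mass / volume, 2))    # 7.2 g/cm^3
```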
*************
all that behind us, you don't need density to finish this
.. r = √ 3 / 4 * (edge length) = √ 3 / 4 * (2.884Å ) = 1.249 Å | 2020-04-05 17:45:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8364532589912415, "perplexity": 6881.102280550594}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371606067.71/warc/CC-MAIN-20200405150416-20200405180916-00255.warc.gz"} |
https://math.stackexchange.com/questions/2838126/error-of-the-intersection-of-two-linear-functions | # Error of the intersection of two linear functions
I have the following linear fits of two data sets
$L_1: y=(a_{1}\pm e_{1})x + (b_{1} \pm f_{1})$
$L_2: y=(a_{2}\pm e_{2})x + (b_{2} \pm f_{2})$
How do I calculate the intersection of $L_1$ and $L_2$ (coordinates + error)?
• This looks like a homework question, so it's better if you first present your ideas about solving it. – tst Jul 2 '18 at 14:56 | 2019-05-23 19:22:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6358281373977661, "perplexity": 359.0621919167862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257361.12/warc/CC-MAIN-20190523184048-20190523210048-00373.warc.gz"} |
https://www.albany.edu/~hammond/presentations/tug2014/diffic.html | # Math Examples
#### CSS3 styling of math
$$C^{\infty}\text{-fns},\qquad e=mc^2,\qquad e^{\pi i}=-1,\qquad w=x^{(y^z)},\qquad x=\frac{-b\pm\sqrt{b^2-4ac}}{2a},\qquad \frac{x^2}{2x^3-4x+1}$$
$$x_{m_k}={x_{n_j}}^{1/2},\qquad \sqrt[3]{\frac{\alpha\xi+\beta}{\gamma\xi+\delta}},\qquad n^2\equiv 1\pmod{4}\ \text{if}\ n\equiv\pm 1\pmod{2},\qquad \triangle ABC\cong\triangle DEF$$
$$r=\lVert x\rVert=\sqrt{x_1^2+x_2^2+\dots+x_n^2},\qquad \frac{1+\sqrt{5}}{2}=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{1+\dots}}}=\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\sqrt{1+\dots}}}}}$$
$$x^2y^2,\qquad M\mapsto{}^{t}\!M^{-1}\ \text{has order }2\text{ in }\mathrm{GL}_n(\mathbf{R}),\qquad {}_2F_3,\qquad z=x+y^{(\frac{2}{k+1})},\qquad \frac{a}{b/2},\qquad \binom{n}{k/2}$$
$$\left(\frac{a}{b}\right)^{1/2},\qquad \sqrt{\frac{a}{b}},\qquad \sqrt{\frac{a/b}{c/d}},\qquad e^t=\sum_{k=0}^{\infty}\frac{t^k}{k!},\qquad \sin ax\,\cos bx,\qquad \iint_S(\operatorname{\mathbf{curl}}\mathbf{F}\cdot\mathbf{N})\,d\sigma=\int_{\partial S}(\mathbf{F}\cdot\mathbf{T})\,ds$$
$$(1+t)^r=\sum_{k=0}^{\infty}\frac{r(r-1)(r-2)\dots(r-k+1)}{k!}\,t^k,\qquad D^2y-3x(Dy)^2=x\cos x$$
$$\left(\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\right)\lvert\phi(x+iy)\rvert^2=0,\qquad {}_b^a\mathrm{Hom}_c^d(X,Y),\qquad X\xrightarrow{f}Y,\qquad X\xrightarrow[f]{}Y,\qquad \overset{A}{X},\qquad \underset{A}{X},\qquad \mathrm{Gal}(\overline{\mathbf{Q}}/\mathbf{Q})$$
$$\bar{X}\quad\breve{X}\quad\check{X}\quad\dot{X}\quad\ddot{X}\quad\hat{X}\quad\tilde{X}\quad\vec{X},\qquad T_{j_1 j_2\dots j_q}^{i_1 i_2\dots i_p},\qquad E_2^{pq}=H^p(B,H^q(F))\Rightarrow H^{*}(X)$$
$$\cfrac{1}{1+\cfrac{e^{-2\pi\sqrt{5}}}{1+\cfrac{e^{-4\pi\sqrt{5}}}{1+\cfrac{e^{-6\pi\sqrt{5}}}{\ddots}}}}=\left(\frac{\sqrt{5}}{1+\sqrt[5]{5^{3/4}\left(\frac{\sqrt{5}-1}{2}\right)^{5/2}-1}}-\frac{\sqrt{5}+1}{2}\right)e^{2\pi/\sqrt{5}}\quad\text{(Ramanujan)}$$ | 2018-02-25 10:01:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.411887526512146, "perplexity": 2262.167400455755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816351.97/warc/CC-MAIN-20180225090753-20180225110753-00550.warc.gz"}
https://hmbc.cz/tr6gac/9eef83-oxidation-state-of-h3po2 | Oxidation number (also called oxidation state) is a measure of the degree of oxidation of an atom in a substance. Phosphorus can take oxidation states from -3 to +5, and its oxoacids span +1 to +5: in hypophosphorous acid (H3PO2) phosphorus is +1, in phosphorous acid (H3PO3) it is +3, and in orthophosphoric acid (H3PO4) and metaphosphoric acid (HPO3) it is +5.

To find the oxidation state of phosphorus in H3PO2, note that the molecule is neutral (overall charge zero), hydrogen is +1, and oxygen is -2. (Oxygen shows the -2 state in nearly all of its compounds, OF2 being the exception; owing to its high electronegativity it forms the O2- ion in most metal oxides.) Letting a be the oxidation number of phosphorus: 3(+1) + a + 2(-2) = 0, so a = +1. To confirm, substitute the value back into the sum; it must equal the charge on the molecule, which is zero.

Since +1 is the lowest oxidation state among the phosphorus oxoacids, the phosphorus in H3PO2 can be oxidized further to a higher state, which makes hypophosphorous acid a strong reducing agent. Structurally, this strong reducing behaviour is attributed to the presence of one -OH group and two P-H bonds. By contrast, in H3PO4 phosphorus is already at its highest oxidation state, +5, so H3PO4 cannot serve as a reducing agent.

Only the hydrogens bound to oxygen are acidic; a hydrogen bound directly to phosphorus is not. This is why H3PO2 has only one pKa value. Acid strength in this series is also explained by the oxidation number of the central atom: H3PO2 (+1) < H3PO3 (+3) < H3PO4 (+5). To have a stronger acid, you need a stable conjugate base that delocalizes the negative charge.

For redox stoichiometry, the n-factor is the total change in oxidation number per molecule, and equivalent mass = molar mass / n-factor. When H3PO2 is reduced to PH3, phosphorus goes from +1 to -3, so the n-factor is 4; when H3PO2 is oxidized to H3PO4, phosphorus goes from +1 to +5, so the n-factor is again 4. For the disproportionation of H3PO2, one approach adds the two half-reaction values, E = E_oxidation + E_reduction = M/4 + M/4 = M/2, though it is debatable whether the concept of equivalent mass is meaningful at all in a disproportionation.

Related trends mentioned alongside: down the nitrogen group, the stability of the +5 state decreases while that of +3 increases (the inert pair effect); fluorine does not exhibit any positive oxidation state, whereas the other halogens exhibit +1, +3, +5 and +7; and sulfur in SO2 is +4, so it can either lose two more electrons to reach +6 or gain electrons to reach -2, letting SO2 act as both an oxidising and a reducing agent.
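The neutrality bookkeeping above (H contributes +1, O contributes -2, and a neutral molecule sums to zero) can be checked with a few lines of code; the function name here is illustrative, not from the source:

```python
# Oxidation state of P in the phosphorus oxoacids, using the rules above:
# H contributes +1, O contributes -2, and a neutral molecule sums to 0.

def phosphorus_oxidation_state(n_h: int, n_o: int) -> int:
    """Solve n_h*(+1) + x + n_o*(-2) = 0 for x, the oxidation number of P."""
    return -(n_h * (+1) + n_o * (-2))

print(phosphorus_oxidation_state(3, 2))  # H3PO2 -> 1
print(phosphorus_oxidation_state(3, 3))  # H3PO3 -> 3
print(phosphorus_oxidation_state(3, 4))  # H3PO4 -> 5
```

The same helper reproduces the n-factor quoted above: oxidation of H3PO2 to H3PO4 changes P by 5 - 1 = 4.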
http://math.stackexchange.com/questions/750446/what-test-could-i-use-to-test-mu-1-and-mu-2-instead-of-barx-and-mu/750480 | What test could I use to test $\mu_1$ and $\mu_2$ instead of $\bar{x}$ and $\mu$?
A single-sample $t$ test compares only a sample against a population, correct? Can it compare a population against itself, or two populations, i.e. $\mu_1$ vs. $\mu_2$? If not, what test would you use for $\mu_1$ and $\mu_2$? It seems like a dependent $t$ test could do that, but I could be wrong about everything. Thanks in advance.
A single sample $t$ test is not "against a sample and a population". It is a test looking for evidence against some hypothesis regarding some parameter.
For example, you might have the hypothesis that a population mean is $28$. That is, the hypothesis that $\mu=28$. Your sample value of $\bar{x}$ might provide evidence against this. Or your sample might be consistent with this hypothesis, in which case you have no evidence of anything.
Or you might have the hypothesis that a population proportion is $0.45$. That is, the hypothesis that $p=0.45$. Your sample value of $\hat{p}$ might provide evidence against this. Or your sample might be consistent with this hypothesis, in which case you have no evidence of anything.
Or you might have the hypothesis that one population's mean is the same as another population's mean. That is, the hypothesis that $\mu_1-\mu_2=0$. Your sample value of $\bar{x}_1-\bar{x}_2$ might provide evidence against this. Or your sample might be consistent with this hypothesis, in which case you have no evidence of anything.
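For the two-population hypothesis $\mu_1-\mu_2=0$ mentioned above, the usual statistic is a two-sample $t$. Here is a minimal, stdlib-only sketch of the Welch (unequal variances) version; it computes just the statistic, with no degrees-of-freedom or p-value step:

```python
import math

def welch_t(sample1, sample2):
    """t statistic for H0: mu1 - mu2 = 0, not assuming equal variances."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Identical samples are perfectly consistent with mu1 = mu2:
print(welch_t([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
```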
https://resources.quizalize.com/view/quiz/gc2-3rd-monthly-exam-20222023-231d22eb-07fe-44a1-aa95-f69f207c74c0 | GC2 3rd Monthly Exam 2022-2023
Quiz by Karen Lagaña
General Chemistry 2
Philippines Curriculum: SHS Specialized Subjects (MELC)
23 questions
• Q1
A dispersion force exists in both polar and nonpolar molecules.
true
false
True or False
• Q2
A gas with a high density or molar mass diffuses or effuses faster than a gas with lower density or molar mass.
false
true
True or False
• Q3
A real gas is a gas that does not behave according to the assumptions of the kinetic-molecular theory.
true
false
True or False
• Q4
Diamagnetic substances are those that contain net unpaired spins and are attracted by a magnet.
false
true
True or False
• Q5
Electronegativity is a property of an atom to attract a pair of shared electrons.
true
false
True or False
• Q6
NaCl is a covalent compound.
false
true
True or False
• Q7
The arrangement of atoms in a molecule is called geometry.
true
false
True or False
• Q8
The average kinetic energy of the molecules is proportional to the absolute temperature.
true
false
True or False
• Q9
The energy needed to remove an electron from an atom in the gaseous state is called ionization energy.
true
false
True or False
• Q10
The molecules' average kinetic energy decreases when the temperature increases.
false
true
True or False
• Q11
Chemical bonds are forces of attraction that hold atoms together to form compounds. What type of bond is formed when nonmetals share electrons?
ionic
covalent
metallic
electrovalent
• Q12
In which sublevel do the electrons have the lowest energy?
4d
4f
4p
4s
• Q13
Magnesium has 2 electrons in its outer shell. How does magnesium attain a stable configuration?
Mg atom accepts one electron.
Mg atom accepts six electrons.
Mg atom loses one electron.
Mg atom loses two electrons.
• Q14
Which elements are called alkaline earth metals and are located in the s-block?
phosphorus, silicon, selenium
neon, argon, krypton
manganese, gold, iron
calcium, magnesium, barium
• Q15
Which of the following is a diamagnetic?
silicon
sulfur
magnesium
aluminum
• Q16
Which of the following are not correctly paired?
Ar, noble gas
Br, halogen
Na, alkali metal
Sn, lanthanide
• Q17
Which of the following compounds has the strongest chemical bond?
ethanol
water
sugar
salt
• Q18
Which of the following will diffuse the fastest?
$H_2$
$NH_3$
$CO_2$
SO
• Q19
Which of the following properties of gases indicate that the molecules are at constant random motion?
It leaks out of the container.
It is difficult to compress.
It has low density.
It is usually visible.
• Q20
Which of the following statements about the Kinetic Molecular Theory is correct?
Molecules with lower molar mass will have slower velocity.
The average kinetic energy of a gas is dependent on temperature.
As the temperature of gas increases, the average kinetic energy decreases.
At a given temperature, molecules with greater molar mass will have higher average kinetic energy.
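Two of the items above (Q2 and Q18) turn on Graham's law: the rate of diffusion or effusion is proportional to $1/\sqrt{M}$, so the gas with the lowest molar mass wins. A quick sketch, in which the truncated option "SO" in Q18 is assumed to mean SO2:

```python
import math

# Graham's law: effusion/diffusion rate is proportional to 1/sqrt(molar mass).
MOLAR_MASS = {"H2": 2.016, "NH3": 17.031, "CO2": 44.01, "SO2": 64.066}  # g/mol

def relative_rate(gas, reference="H2"):
    """Effusion rate of `gas` relative to `reference`."""
    return math.sqrt(MOLAR_MASS[reference] / MOLAR_MASS[gas])

fastest = max(MOLAR_MASS, key=relative_rate)
print(fastest)  # H2: the lowest molar mass diffuses fastest
```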
https://space.stackexchange.com/questions/35366/properly-calculating-g-factor | # Properly calculating G-factor
This may sound rather simple, but I was told that technically, a G-force readout should actually indicate zero g on the Earth's surface as well as in a stable orbit. This is supposedly because G-force is measured using accelerometers (which measure a change in velocity), and since we down here on the Earth's surface, when standing still, don't change our velocity (neglecting the effects of Earth's rotation), our G-factor should be 0. However, I'm pretty sure we here on Earth experience 1 g. Now, I understand the physics of it: our "fall" towards the center of the Earth is blocked by the ground, and we feel that as the "normal" force, or "weight". But how do you quantify and calculate this G-value with a unifying formula that works for any point in the universe?
And - let's keep things simple:
• Earth being a perfect, non-rotating sphere with uniform mass
• No influences of other solar bodies
All I want to know is how the G-factor is calculated using a force vector summation formula that can be applied to on-ground, in LEO and BEO scenarios.
• +1 A good answer to your excellent and challenging question will go into some depth, and point out both that you can't neglect Earth's rotation, gravity changes depending on altitude, latitude, what's underneath the ground locally, proximity to mountains, direction towards the Moon, and the Sun, etc, and that real-world accelerometers usually measure much more than "a change in velocity". – uhoh Apr 8 '19 at 6:30
• See for example @DavidHammen's table of related effects and his other answer and possibly this answer for example. To explore three ways down (or up) can be defined, see this answer. – uhoh Apr 8 '19 at 6:30
This may sound rather simple, but I was told that technically, G-force readout should actually indicate zero-G on the earth's surface, as well as in a stable orbit.
Whoever told you that was wrong. The G force reading should indicate zero g for a non-thrusting spacecraft in orbit well above the Earth's atmosphere, and also zero g the moment after a bungee jumper steps off a bridge. The G-force reading for a person standing still on the surface of the Earth watching the bungee jumper will be about 1 g, directed upward.
Suppose the spacecraft mentioned above starts thrusting in order to, for example, transfer to a higher orbit, or to go beyond Earth orbit. Its "G force" sensor will now register a non-zero value. The "force" in "G force" is a bit of a misnomer. G force has units of acceleration, not force, with 1 g being 9.80665 m/s2. What's common between the non-thrusting spacecraft and the bungee jumper is that the only force acting on them is gravitation. What's common between the thrusting spacecraft and the person standing still on the surface of the Earth is that some force in addition to gravitation acts on them. This suggests a better name for "G force": Net non-gravitational acceleration. That's a mouthful. An even better name is proper acceleration.
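As a rough numerical sketch of that distinction (standard values for Earth's gravitational parameter and radius; the two scenarios mirror the person on the ground and the free-falling bungee jumper above, and are not part of the original answer):

```python
MU_EARTH = 3.986004418e14  # Earth's GM, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def gravitational_accel(r):
    """Newtonian gravitational acceleration magnitude at radius r, in m/s^2."""
    return MU_EARTH / r ** 2

# Standing on the ground: gravity is balanced by the upward normal force,
# and an ideal accelerometer senses only that normal force, about 1 g upward.
surface_reading = gravitational_accel(R_EARTH)

# Free fall (orbiting spacecraft, bungee jumper just after stepping off):
# gravity is the only force acting, so the accelerometer reads zero.
freefall_reading = 0.0

print(surface_reading)   # about 9.82 m/s^2
print(freefall_reading)  # 0.0
```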
This is due to the fact that G-force is measured using accelerometers (which measure a change in velocity) ...
Accelerometers do not measure change in velocity. An accelerometer in a non-rotating, non-thrusting spacecraft above the Earth's atmosphere will register 0 g, even though the spacecraft's velocity vector is changing all the time. An accelerometer at rest on a table on the surface of the Earth will register 1 g directed upward. Smart phones use an accelerometer to determine which direction is up (or down), and this in turn is used to determine whether the cell phone should operate in landscape or portrait mode. Accelerometers instead sense (imperfectly) proper weight per unit mass -- i.e., proper acceleration, or "G force".
A perfect accelerometer would be the ideal device for measuring "G force". Note that I qualified what accelerometers measure with "imperfectly". Real accelerometers, as opposed to perfect ones, have a number of imperfections. A real accelerometer might register 1 g while a perfect one would register 1.005 g, or 2 g in a situation where a perfect accelerometer would register 2.01 g. This is called a scale factor error. Every reading is incorrect by a common factor. Another error is bias. For example A real accelerometer with a bias might register 0.005 g while a perfect one would register 0.0 g, or 1.005 g in a situation where a perfect accelerometer would register 1 g. Every reading is off by a constant amount. Yet another kind of error is noise. The readings from cheap accelerometers (e.g., the ones used in cell phones) are rather noisy. Even the very best cryogenically cooled, superconducting accelerometers exhibit some amount of noise.
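Those error sources can be written as a simple measurement model, reading = (1 + scale_error) * true + bias + noise. A sketch using the same illustrative numbers as above (a 0.5% scale factor error and a 0.005 g bias):

```python
import random

def accel_reading(true_g, scale_error=0.0, bias=0.0, noise_sd=0.0):
    """Imperfect accelerometer model; all quantities in units of g."""
    noise = random.gauss(0.0, noise_sd) if noise_sd > 0 else 0.0
    return (1.0 + scale_error) * true_g + bias + noise

# Scale factor error alone: every reading is off by a common factor.
print(accel_reading(2.0, scale_error=0.005))  # 2.01 instead of 2.0

# Bias alone: every reading is off by a constant amount.
print(accel_reading(0.0, bias=0.005))  # 0.005 instead of 0.0
print(accel_reading(1.0, bias=0.005))  # 1.005 instead of 1.0
```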
All I want to know is how the G-factor is calculated using a force vector summation formula that can be applied to on-ground, in LEO and BEO scenarios.
That accelerometers do not measure acceleration due to gravity presents a challenge for spacecraft that self-navigate their position/velocity state. Such spacecraft need an onboard model of gravitation so that the acceleration due to gravity can be added to the accelerometer reading. This calculated value will be somewhat erroneous: the model of gravitation is never perfect (it's a model), and if the spacecraft's estimate of where it is is somewhat incorrect, the calculated gravitational acceleration vector will be incorrect as well. That accelerometers imperfectly measure the acceleration due to non-gravitational forces presents a further challenge for spacecraft that self-navigate. The combination of the errors from computing the gravitational acceleration and the errors from the accelerometer means that the integrated acceleration (i.e., velocity) will drift from truth, and the doubly integrated acceleration (i.e., position) will do worse than drift from truth.
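To see how quickly those errors compound, doubly integrating even a small constant accelerometer bias gives a position error that grows roughly as (1/2)bt². A crude Euler-integration sketch with made-up numbers:

```python
def position_drift(bias, dt, steps):
    """Doubly integrate a constant acceleration error with simple Euler steps."""
    vel = 0.0
    pos = 0.0
    for _ in range(steps):
        vel += bias * dt   # velocity error grows linearly in time
        pos += vel * dt    # position error grows quadratically in time
    return pos

# A tiny 0.001 m/s^2 bias, integrated over 10 minutes (600 s):
drift = position_drift(bias=0.001, dt=1.0, steps=600)
print(drift)  # about 180 m of position error
```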
This means self-navigating spacecraft (and also self-navigating cars) need outside help to keep their estimated position/velocity state close to reality. GPS works great for vehicles in low Earth orbit and up to perhaps a bit beyond geostationary orbit. Beyond that, self-navigating the position/velocity state gets much more challenging. A lot of spacecraft do not self-navigate their position/velocity state. For example, the New Horizons spacecraft that flew past Pluto and more recently Ultima Thule had no clue where it was in space. Like many spacecraft, its onboard navigation computed only the vehicle's attitude/attitude rate state. New Horizons took pictures of Pluto and Ultima Thule via timed commands that told the vehicle times at which to change its attitude/attitude rate and times at which to operate various instruments.
That accelerometers do not measure gravitational acceleration and that they imperfectly measure non-gravitational acceleration similarly presents challenges for self-driving cars on the surface of the Earth. The errors that inherently result from accelerometer-based dead reckoning would quickly make the deduced position and velocity worthless. Self-driving cars need outside help such as GPS (but GPS is rather lousy in cities) and maps (but maps are always out of date), and also nonlocal sensors (e.g., cameras).
• "The G-force reading for a person standing still on the surface of the Earth watching the bungee jumper will be about 1 g, directed upward", ->why upward? – Hobbes Apr 9 '19 at 14:20
• @Hobbes - Because "G-force sensors" (aka accelerometers) cannot sense gravity. (No local experiment can per Einstein's equivalence principle.) The Newtonian forces acting on a person standing still on the ground are the downward gravitational force and the upward normal force that keeps the person from sinking into the Earth. Accelerometers can only detect the latter, so 1 g upward. – David Hammen Apr 9 '19 at 14:46
• Another way to look at it: Accelerometers measure acceleration relative to a local stream of free-falling apples. To a person standing still on the surface of the Earth, that stream of free-falling apples is accelerating downward. Relative to one of the free-falling apples, the person is accelerating upward. – David Hammen Apr 9 '19 at 14:46
This may sound rather simple, but I was told that technically, G-force readout should actually indicate zero-G on the earth's surface, as well as in a stable orbit. This is due to the fact that G-force is measured using accelerometers
This is incorrect. G is not zero on Earth's surface. You're conflating two things: G-force, which is a force as a result of gravity, and acceleration, which produces a force as a result of movement.
This is due to the fact that G-force is measured using accelerometers
Accelerometers are a cheap way to measure acceleration. They cannot measure force, so they cannot be used to measure G due to gravity directly. You need e.g. a weighing scale to measure G.
Now, you can combine gravity and acceleration into a net force that acts on a body. This is just the addition of two vectors: the gravity vector, pointing at the center of the body that produces the gravitational force, and the acceleration vector produced by the vehicle you're in. | 2020-08-04 12:04:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.608913242816925, "perplexity": 1063.4238285092617}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.93/warc/CC-MAIN-20200804102630-20200804132630-00126.warc.gz"} |
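The vector addition described here can be written out directly. The numbers are illustrative only: a vehicle on the ground accelerating horizontally at one g.

```python
import numpy as np

g_vec = np.array([0.0, 0.0, -9.81])       # gravity vector, pointing down (m/s^2)
a_vehicle = np.array([9.81, 0.0, 0.0])    # vehicle accelerating horizontally at 1 g

# The net load felt on board ("g-force") is the vehicle's coordinate
# acceleration minus the gravity vector -- equivalently, the vector sum of
# the applied acceleration and the upward support reaction.
net = a_vehicle - g_vec
g_load = np.linalg.norm(net) / 9.81
print(g_load)   # sqrt(2) ~= 1.414 g: 1 g forward combined with 1 g of support
```

The same bookkeeping works on the ground, in LEO, or beyond: in free fall the support/thrust term is zero, so the felt load is zero even though gravity is not.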
https://asmedigitalcollection.asme.org/memagazineselect/article/124/03/48/379348/Feature-Focus-Rapid-Prototyping-Rapid-EvolutionNew | This article highlights the striking facts about rapid prototyping; a process that fabricates physical objects directly from computer-aided design sources. The use of rapid prototyping as a replacement for injection molding is still the overwhelming exception and may always be limited to a very narrow niche. Three-dimensional printing has also seen the introduction of materials that improve the durability and appearance of conceptual prototype parts. Z Corp. of Burlington, Mass., is incorporating new pigments into its binders for its starch- and plaster-based materials. The pigments result in truer and brighter colors and replace the dyes that were previously incorporated into the liquid binders. The company has recently introduced a urethane infiltrant that increases part strength significantly and allows parts with delicate geometries to be handled.
## Article
One of the most striking facts about rapid prototyping, a process that fabricates physical objects directly from computer-aided design sources, is the broadening reach in how it is used.
Since the late 1980s, when stereolithography first opened the possibility of creating solid objects directly from computer models, rapid prototyping has evolved quickly in terms of process technology and new applications. In the roughly 15 years since it first hit the market, stereolithography has been joined by a host of other prototyping techniques, each carving out a niche with varying degrees of success. The evolution of these techniques as a means of producing tooling, concept model parts, and functional prototypes has been impressive.
Lately, rapid prototyping has been bridging the gap to rapid manufacturing. “This is one of the more interesting trends,” noted Terry Wohlers, president of Wohlers Associates, an industry consultant based in Fort Collins, Colo., who said that a handful of companies, for example, have been using rapid prototyping techniques to produce parts that would normally be injection molded. The use of rapid prototyping as a replacement for injection molding is still the overwhelming exception and may always be limited to a very narrow niche. Yet it speaks volumes about a range of advances in rapid prototyping materials and processes to make products with improved mechanical properties, accuracy, and aesthetics.
While material advancements are most evident in plastics, research is also taking place in metals and ceramics. One new process currently under development is based on photoreactive pastes to produce various composites for rapid prototyping.
## Emulating Thermoplastics
David Rosen, an associate professor of mechanical engineering at Georgia Institute of Technology in Atlanta and chair of the Computers in Information and Engineering Division of ASME, has seen a big push for plastics and metals that better mimic the properties of production materials. That’s a significant challenge for all of the processes.
One process that has seen developments in this area is stereolithography. In stereolithography, parts are built from a photosensitive polymer fluid that cures under exposure to a laser beam.
The resins used in stereolithography are photosensitive thermosets that crosslink during the curing process, and are fundamentally different from the thermoplastics used in injection molding that they are designed to emulate.
“Usually, these materials are good at matching a couple of mechanical properties, such as elastic modulus and yield strength,” said Chuck Hull, chief technology officer of 3D Systems in Valencia, Calif., a supplier of stereolithography machines, selective laser sintering systems, and three-dimensional printers. He said that the industry has had some success in making resins that mimic polypropylene, a widely used thermoplastic. He also expects stereolithography resin suppliers to continue to make progress in creating materials that have selected thermoplastic properties, which will drive specific applications.
Mahesh Kotnis, technical marketing manager of Vantico Inc. in East Lansing, Mich., a major stereolithography resin supplier, said that, over the last two years, better stereolithography materials have yielded functional prototype parts that mimic the thermoplastic properties of final parts. Vantico is currently marketing polypropylene-like stereolithography resins, and plans to follow that with the introduction of a resin that mimics acrylonitrile-butadiene-styrene, or ABS, later this year.
Kotnis acknowledges the challenges of approximating the properties of thermoplastics, particularly impact strength and tensile elongation, which measure the ability of a material to resist shock. Improvements in toughness and rigidity usually reduce a material's heat resistance, and vice versa. The polypropylene-like grade of stereolithography resin has a flexural modulus of 180,000 psi, notched Izod impact strength of 0.8 ft.-lbs./in., and heat deflection temperature of about 180°F. According to Kotnis, these properties still fall short of matching the properties of polypropylene, but stereolithography resins have come a long way and development continues.
Kotnis said that stereolithography resins are, in a few cases, being used as end-use products. One example of where this is happening is in the medical device industry, where stereolithography resins are being used to produce hearing aid shells, he said. Widex, a hearing aid manufacturer based in Vaerloese, Denmark, developed a process to digitize the ear canal and create the stereolithography part directly from the CAD data. This eliminates the laborious process of creating a wax pattern from an impression of the ear canal, which is used to make the silicone mold to shoot the part.
Kotnis sees medical instrument applications as a big growth area for stereolithography. The company markets a line of Stereocol medical-grade resins, which pass USP Class 6 tests—a standard to measure the biological response to plastic materials. The resins stand up to standard sterilization techniques.
## See-Through Resin
Other developments in stereolithography resins are adding to the fit and function capabilities of prototype parts, said Jim Reitz, business director of DSM Somos, a unit of DSM Desotech. DSM Somos, based in New Castle, Del., recently introduced a line of WaterClear resins for building transparent prototype parts.
According to Reitz, potential applications include fluid flow analysis, in which researchers can see how gases mix in a manifold prototype, or pump housings that allow viewing of how internal assemblies work together.
Rosen of Georgia Tech said that clear resins open up new opportunities for prototyping. For example, they may allow soft drink suppliers to design prototype bottles without investing in molds. If the materials can be made truly clear, they might even be suitable for lenses, he said.
Reitz added that the WaterClear resins have a fast photo speed, allowing parts to be formed quickly, and low viscosity for easy cleanup. The stiffness and toughness of the material allow parts to be tapped and drilled. The clarity of a prototype is limited to the flat surfaces; sidewalls must be finished to allow for internal viewing, Reitz said.
The company's newest stereolithography resin is Raven, introduced last December. It is not a transparent resin, but a general-purpose grade, marketed for a range of applications, from conceptual models to functional prototypes to patterns for molds, Reitz said. Although it is not a "super fast" curing material, it is set at a lower price—around $180 per kilogram versus $225 to $235/kg for the company's other general-purpose products. The material, which is clear as a liquid in the vat, cures to a dark color as the build takes place. This allows the customer to view the prototype as it is being formed, he said.
Last month, DSM Somos introduced a photosensitive polymer called Somos 11120 Watershed, which resists humidity. High humidity can degrade the mechanical properties of stereolithography resins. The company is targeting markets in humid climates such as the Asia/Pacific region.
The company is also developing an elevated-temperature resin, which is expected to retain its useful mechanical strength at temperatures to 250°F without growing brittle, Reitz said.
## Not Just Parts
According to Kotnis, the use of stereolithography resins to create master patterns for tooling for a secondary process, such as plastic injection molding or rubber molding, is the original and still dominant market application of the process. He said that stereolithography resins were originally used to make master patterns for silicone tooling, which was then used to mold polyurethane parts, and that this is still an important application.
Hull said that rapid prototyping techniques can be used to create forms that are used in casting. “We have three different approaches that can help investment casters, and this has become a significant focus of what we do,” said Hull. The company said that stereolithography casting patterns have been used successfully in shell investment casting, sand casting, die casting, and other techniques. Three-dimensional printing has also been used to build models in a material similar to casting wax, although with less accuracy than stereolithography.
Also, a significant part of the selective laser sintering business, which 3D Systems acquired last year, was used to create patterns from a polystyrene material, Hull said. Selective laser sintering spreads a thermoplastic powder layer. The part of it exposed to a laser beam melts and bonds to form the structure. The process has also been applied to ceramics and metals.
3D Systems' laser sintering process can also be used to form metal parts, Hull said. The system is being used to form green metal tools, which are partly sintered, and then infiltrated with bronze to get full density, Hull said. He said the process has been used to create injection-molding tools.
Hull said that advanced stereolithography materials and improvements in the laser sintering process are leading to some crossover in applications between the two processes.
## Rapid Composites
One new process now under development may bring rapid prototyping into the realm of composites. In December 2001, 3D Systems formed a joint venture with DSM Desotech called OptoForm LLC to develop a rapid prototyping process, called direct composite manufacturing, which uses photosensitive paste. The technology was originally developed by a French company, OptoForm SARL, which was acquired by 3D Systems last year. The joint venture is now refining the process and materials in evaluation testing with a few customers.
Chuck Hull of 3D Systems, said that direct composite manufacturing brings rapid prototyping and rapid manufacturing into the composites arena. “You get to work with higher-viscosity toughening agents and other things to get better physical properties than you might get with a liquid material,” he said. Although Hull said it is too early in research to predict the market for the technology, he sees potential in prototyping and in manufacturing applications.
Although it uses a stereolithography-like technique, direct composite manufacturing differs in some key aspects from conventional stereolithography systems. For one thing, the equipment is vatless; because it uses a viscous paste, there is no liquid resin in which to form the part. Instead, the paste is pushed up through a cylinder, where a special coating system smooths out the paste to a solid layer.
Mirrors, driven by a computer, direct a laser beam to build the pattern, explained Reitz of DSM Desotech. Because there is no liquid resin or waiting for the liquid resin in the vat to settle before the build is dipped in it to form the next layer, direct composite manufacturing is a very quick process, he said.
## Thermoplastic Progress
Stratasys of Eden Prairie, Minn., a supplier of fused deposition modeling machines, is extending the range of thermoplastics used in its systems. A widely used rapid prototyping technology, fused deposition modeling, is based on a thermoplastic filament that is extruded from a nozzle that moves over a platform to build the part by depositing the plastic in the required geometry.
## Stereolithography Cuts Its Teeth
A stereolithography application that has made the transition to rapid manufacturing is the Invisalign process, developed by Align Technology of Santa Clara, Calif., to manufacture teeth aligners—a clear plastic replacement for wire braces.
The process is an example of stereolithography used for mass customization. The company worked with 3D Systems of Valencia, Calif., which supplied the high-end SLA-7000 solid imaging machines to create the thermoforming tools on which the plastic aligners are formed.
Because each patient’s teeth are unique, the process starts with a set of dental impressions, explained Len Hedge, vice president of manufacturing at Align Technology. Plastic is poured into the impressions to create a representation of the patient’s teeth. That physical model is scanned and converted into a digital file. Then a suite of software tools, developed in-house, calculates the orthodontic treatment, which consists of the tooth movements that a series of aligners will produce over time.
Once the digital representation of the treatment is done, stereolithography takes it back to the physical world. Hedge said that the process required a rapid prototyping technique that was capable of high throughput and high accuracy, and selected the SLA-7000 machine, which had just been introduced. The SLA-7000 has dual beam capability: a 10-mil-diameter laser beam for detailed components and, to speed the process, a 30-mil-diameter beam for cross sections that do not require as much accuracy, Hedge said.
The machine has a large platform that can hold 90 aligner patterns—about two and a half patients’ worth. The standard for accuracy of a build is within 1.5 thousandths of an inch.
After the teeth reproductions are formed, they are brought to a thermoforming machine and used as tools to form the plastic aligners. The aligners are pressure-formed in the thermoforming machine, which uses air pressure to slide the heated plastic over the mold. The aligners are clear, made from a blend of polycarbonate and polyurethane to impart the desired mechanical properties and tooth movements. The thermoforming mold of stereolithography resin has to withstand the temperatures and pressures of the thermoforming process. The plastic used for the aligners, which are 30 to 40 thousandths of an inch thick, has a melting point of 425°F. The stereolithography resin also has low shrinkage and a fast build time, Hedge said.
Depending on the length of an individual’s treatment, the patient is supplied with a series of 12 to 48 aligners. Each aligner is worn for about six weeks, correcting the teeth in progressive stages.
Align Technology has ordered 39 SLA-7000 systems from 3D Systems, and currently operates 16 at its Santa Clara location. Last year, the company produced 1.1 million molds, and expects to manufacture 4 million molds this year, said Hedge.
Jon Cobb, vice president of marketing and customer service, said the company supplies two main types of materials: ABS and polycarbonate. Because the process builds prototypes from thermoplastics, the prototypes closely replicate the actual injection-molded parts. Typically, ABS parts are 80 to 90 percent of the strength of the injection-molded part, he said.
Stratasys plans to introduce a polyphenyl sulfone resin for its machines this summer. PPS is a high-performance thermoplastic that can be autoclaved, has high chemical resistance, and high heat deflection temperature.
The company also plans to introduce a fine feature detail capability on its FDM Maxum, a high-speed, large-envelope machine, which will be capable of producing high-detail parts, Cobb said. The company is also working on a project to use the FDM process to produce small, finely detailed components in disposable cameras. Cobb added that Stratasys is also working on using the process to produce hearing aid housings.
## Colorful Concepts
Three-dimensional printing has also seen the introduction of materials that improve the durability and appearance of conceptual prototype parts.
Z Corp. of Burlington, Mass., is incorporating new pigments into its binders for its starch- and plaster-based materials, according to the company's CEO, Marina Hatsopoulos. The pigments result in truer and brighter colors, and replace the dyes that were previously incorporated into the liquid binders, said Hatsopoulos, an ASME member. She believes that color is an important aspect of concept modeling, to give a clearer idea of what the final product will look like. But it also has other uses. It can reproduce an FEA pattern on an actual model of a soft drink container to locate stresses, for example.
Z Corp. is also developing materials to produce stronger parts. The company recently introduced a large format machine, producing parts as large as 16 x 20 x 24 inches. Often, larger parts have more complex geometries and higher strength-to-weight requirements. The company is working with Vantico on infiltrants—liquids that can be absorbed into the porous material to increase strength. Z Corp. recently introduced a urethane infiltrant that increases part strength significantly, and allows parts with delicate geometries to be handled, Hatsopoulos said. | 2019-10-18 21:25:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2165313959121704, "perplexity": 3770.8051945541433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684854.67/warc/CC-MAIN-20191018204336-20191018231836-00288.warc.gz"} |
https://www.biostars.org/p/427173/ | DissTOM and Cytoscape - WGCNA
20 months ago
Hi,
In WGCNA, why do we want to calculate the dissimilarity TOM? The other question that I have is: once I have my network visualization with the igraph package, how can I export it to Cytoscape? Which steps should I follow?
Thanks,
Silvia
wgcna dissTOM cytoscape • 859 views
20 months ago
scooter ▴ 470
Hi Silvia, WGCNA uses the dissimilarity matrix from the topological overlap matrix to reduce the effects of noise and spurious associations (from the manual), before doing the hierarchical clustering.
As to your second question, probably the easiest way to get from igraph in R to Cytoscape is to use the RCy3 package, which includes specific functions that will talk to the running Cytoscape instance and push the igraph network to it. You can read more about RCy3 at https://github.com/cytoscape/cytoscape-automation/wiki
-- scooter
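To make the first point concrete: the topological overlap matrix rewards gene pairs that share strong neighbors, and dissTOM = 1 - TOM turns that similarity into the distance that hierarchical clustering needs. WGCNA itself is R; the NumPy sketch below is an illustration of the unsigned formula only (cf. WGCNA's `TOMdist`), not a drop-in replacement.

```python
import numpy as np

def tom_dissimilarity(adj):
    """Unsigned topological overlap dissimilarity (cf. WGCNA's TOMdist).

    adj: symmetric adjacency matrix, entries in [0, 1], zero diagonal.
    """
    A = adj - np.diag(np.diag(adj))      # force a zero diagonal
    L = A @ A                            # shared-neighbor strength l_ij
    k = A.sum(axis=0)                    # node connectivities
    tom = (L + A) / (np.minimum.outer(k, k) + 1.0 - A)
    np.fill_diagonal(tom, 1.0)
    return 1.0 - tom                     # distance fed to hierarchical clustering

# Tiny 4-gene toy network: genes 0-2 form a tight cluster, gene 3 is loose.
adj = np.array([[0.0, 0.9, 0.8, 0.1],
                [0.9, 0.0, 0.7, 0.1],
                [0.8, 0.7, 0.0, 0.1],
                [0.1, 0.1, 0.1, 0.0]])
d = tom_dissimilarity(adj)
print(d[0, 1] < d[0, 3])   # True: shared neighbors pull genes 0 and 1 together
```

Because TOM pools evidence over shared neighbors, a single noisy pairwise correlation matters less, which is the noise suppression the answer refers to.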
Ok, thank you! And is it the same reason when we compute the dissimilarity of eigengene coexpression [MEDiss = 1-cor(MEs)]?
Silvia | 2021-12-01 12:37:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36886537075042725, "perplexity": 4378.204070620107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360803.0/warc/CC-MAIN-20211201113241-20211201143241-00163.warc.gz"} |
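Regarding the MEDiss follow-up: `1 - cor(MEs)` likewise converts eigengene correlation into a distance so that similar modules cluster together and can be merged. A NumPy illustration of that one-liner, using fake eigengenes for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake module eigengenes (samples x modules); modules 0 and 1 nearly identical.
me0 = rng.normal(size=50)
me1 = me0 + rng.normal(scale=0.1, size=50)
me2 = rng.normal(size=50)
MEs = np.column_stack([me0, me1, me2])

me_diss = 1.0 - np.corrcoef(MEs, rowvar=False)   # NumPy analogue of R's 1 - cor(MEs)
print(me_diss[0, 1] < me_diss[0, 2])             # True: correlated modules are close
```

Highly correlated eigengenes get a dissimilarity near zero, which is why cutting the resulting dendrogram at a small height merges near-duplicate modules.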
https://www.dsprelated.com/freebooks/mdft/Changing_Coordinates.html | ### Changing Coordinates
What's more interesting is when we project a signal $x$ onto a set of vectors other than the coordinate set. This can be viewed as a change of coordinates in $\mathbb{R}^N$. In the case of the DFT, the new vectors will be chosen to be sampled complex sinusoids.
#### An Example of Changing Coordinates in 2D
As a simple example, let's pick the following pair of new coordinate vectors in 2D:

$$\varphi_0 = (1, 1), \qquad \varphi_1 = (1, -1)$$

These happen to be the DFT sinusoids for $N=2$ having frequencies $f_0 = 0$ ("dc") and $f_1 = f_s/2$ (half the sampling rate). (The sampled complex sinusoids of the DFT reduce to real numbers only for $N=1$ and $N=2$.) We already showed in an earlier example that these vectors are orthogonal. However, they are not orthonormal, since the norm is $\sqrt{2}$ in each case. Let's try projecting $x = (x_0, x_1)$ onto these vectors and seeing if we can reconstruct $x$ by summing the projections.
The projection of $x$ onto $\varphi_0$ is, by definition,

$$\mathcal{P}_{\varphi_0}(x) = \frac{\langle x, \varphi_0\rangle}{\|\varphi_0\|^2}\,\varphi_0 = \frac{x_0 + x_1}{2}\,(1, 1) = \left(\frac{x_0 + x_1}{2},\ \frac{x_0 + x_1}{2}\right)$$

Similarly, the projection of $x$ onto $\varphi_1$ is

$$\mathcal{P}_{\varphi_1}(x) = \frac{\langle x, \varphi_1\rangle}{\|\varphi_1\|^2}\,\varphi_1 = \frac{x_0 - x_1}{2}\,(1, -1) = \left(\frac{x_0 - x_1}{2},\ -\frac{x_0 - x_1}{2}\right)$$

The sum of these projections is then

$$\mathcal{P}_{\varphi_0}(x) + \mathcal{P}_{\varphi_1}(x) = \left(\frac{x_0 + x_1}{2} + \frac{x_0 - x_1}{2},\ \frac{x_0 + x_1}{2} - \frac{x_0 - x_1}{2}\right) = (x_0, x_1) = x$$

It worked!
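The result is easy to check numerically for an arbitrary signal; this NumPy snippet (an illustration, not part of the original text) mirrors the algebra:

```python
import numpy as np

phi0 = np.array([1.0, 1.0])     # dc sinusoid for N = 2
phi1 = np.array([1.0, -1.0])    # half-sampling-rate sinusoid

def project(x, v):
    """Projection of x onto v: (<x, v> / ||v||^2) * v."""
    return (np.dot(x, v) / np.dot(v, v)) * v

x = np.array([3.0, -7.0])       # an arbitrary test signal
recon = project(x, phi0) + project(x, phi1)
print(recon)                    # [ 3. -7.]: the projections sum back to x
```

Note the division by the squared norm: because the vectors are orthogonal but not orthonormal, the factor $1/\|\varphi_k\|^2 = 1/2$ is what makes the reconstruction exact.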
http://mathoverflow.net/questions/109062/index-formula-for-pseudors | # Index formula for Pseudors
For an elliptic differential operator $P$ on a compact manifold $M$, we have the formula

$$\mathrm{ind}(P) = \mathrm{tr}(e^{-tP^*P}) - \mathrm{tr}(e^{-tPP^*})$$

I would think that this holds for pseudodifferential operators of positive order as well, but no textbook states that. Is it not true? If not, what goes wrong?
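One way to see why the right-hand side can be independent of $t$ is the finite-dimensional analogue of the McKean–Singer cancellation: for any matrix $P$, the nonzero eigenvalues of $P^*P$ and $PP^*$ coincide, so the difference of heat traces equals $\dim\ker P - \dim\ker P^*$ for every $t$. A NumPy check of this toy model (illustration only, not the pseudodifferential statement being asked about):

```python
import numpy as np

def heat_trace(M, t):
    """tr(exp(-t*M)) for a symmetric matrix M, via its eigenvalues."""
    return np.exp(-t * np.linalg.eigvalsh(M)).sum()

# A 2x4 "operator" of rank 2: index = dim ker P - dim ker P^T = 2 - 0 = 2.
P = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 3.0, 0.0]])

for t in (0.1, 1.0, 10.0):
    ind = heat_trace(P.T @ P, t) - heat_trace(P @ P.T, t)
    print(round(ind, 8))   # 2.0 for every t
```

In the infinite-dimensional setting the traces are no longer sums over finitely many eigenvalues, so the analytic content of the question is whether the heat semigroups of $P^*P$ and $PP^*$ remain trace class for a pseudodifferential $P$.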
You only need $P$ to be an elliptic ps.d.o. on a compact manifold. – Liviu Nicolaescu Oct 7 '12 at 15:14
Ok, two things that I keep forgetting to mention. Thank you! – Kofi Oct 7 '12 at 23:31 | 2014-04-16 22:04:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8234807848930359, "perplexity": 424.2341018310326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |