http://math.stackexchange.com/questions/109712/closed-form-equation-to-figure-out-sudoku-square-from-given-index

# closed form equation to figure out sudoku square from given index
Let's say I have a 1D array of 81 elements, which could be thought of as representing a Sudoku board... a 9x9 matrix (every 9 elements create a new row). The question is, given an index into the 1D array, is there a closed form equation to return the square in which that element exists on the Sudoku board and can you prove the equation is correct?
For example, given index 10, the result would be 0, since the element at index 10 is in the top-left 3x3 square. Given index 75, the result would be 7, since the element at index 75 is in the bottom-middle 3x3 square.
array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80]
Sudoku board:
```
[  0,  1,  2 |  3,  4,  5 |  6,  7,  8 ]
[  9, 10, 11 | 12, 13, 14 | 15, 16, 17 ]
[ 18, 19, 20 | 21, 22, 23 | 24, 25, 26 ]
  -----0-----   -----1-----   -----2-----
[ 27, 28, 29 | 30, 31, 32 | 33, 34, 35 ]
[ 36, 37, 38 | 39, 40, 41 | 42, 43, 44 ]
[ 45, 46, 47 | 48, 49, 50 | 51, 52, 53 ]
  -----3-----   -----4-----   -----5-----
[ 54, 55, 56 | 57, 58, 59 | 60, 61, 62 ]
[ 63, 64, 65 | 66, 67, 68 | 69, 70, 71 ]
[ 72, 73, 74 | 75, 76, 77 | 78, 79, 80 ]
  -----6-----   -----7-----   -----8-----
```
The solution I came up with is floor((index % 9) / 3) + 3 * floor(index / (9 * 3)). My reasoning behind this solution is the following...
So a sudoku "board" is a grid, right? You can look at it as a 3 by 3 matrix of 3 by 3 matrices and a 9 by 9 matrix of squares, where you can number each element from left to right, top to bottom.
Given any index in [0, 80] in the 1D array, taking it mod 9 gives which column it is in within the 9x9 matrix; dividing that by 3 (and taking the floor of the whole thing to avoid fractional values) gives which column it is in within the 3x3 matrix of squares.
So at this point you essentially have an x coordinate. Now you have to find the y.
Taking that index and dividing it by 9 tells you which row it is in within the 9x9 matrix, since each row starts at every 9th element of the 1D array. Dividing that by 3 gives which row the index is in within the 3x3 matrix of squares. You could also think of this as dividing by 27, since there are 27 elements in each row of the 3x3 matrix of squares.
Both calculations give a value between 0 and 2 inclusive, and these are your x and y coordinates. To get the correct square, multiply the y coordinate by 3 and add the x coordinate. The result is between 0 and 8 inclusive, numbering the squares from left to right, top to bottom.
I'm pretty sure this works... but I don't know how to prove it.
First, think about the relation between the index $i$ in your array and the row and column numbers, $r$ and $c$: $r=\lfloor \frac i9 \rfloor$, $c=i \bmod 9$. Then the row block $rb$ (which horizontal band the index is in) and the column block $cb$ (which vertical band the index is in) are $rb=\lfloor \frac i{27} \rfloor$, $cb=\lfloor \frac c3 \rfloor$. Finally, the square is $s=3\cdot rb + cb$.
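A quick sanity check (short of a proof) is to compare the closed-form formula against an explicit row-and-column computation for all 81 indices; a minimal Python sketch:

```python
def square(index):
    # floor((index % 9) / 3) is the block column (x); floor(index / 27) is the block row (y)
    return (index % 9) // 3 + 3 * (index // 27)

def square_by_coordinates(index):
    # Compute via explicit row/column coordinates, as in the answer above
    row, col = index // 9, index % 9
    return 3 * (row // 3) + (col // 3)

assert all(square(i) == square_by_coordinates(i) for i in range(81))
assert square(10) == 0 and square(75) == 7  # the examples from the question
```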
http://byu.apmonitor.com/wiki/index.php/Main/OptionApmCtrlmode?action=print

# Main: APM.CTRLMODE - APMonitor Option
Type: Integer, Output
Default Value: 1
Description: Control mode
0=terminate
1=simulate
2=predict
3=control
The CTRLMODE is the actual controller mode implemented by the application and is an output after the application has completed each cycle. The requested control mode (REQCTRLMODE) is set as an input to the desired level of control, but sometimes the CTRLMODE is not able to match the request because of a failed solution, a critical MV that is OFF, or other checks within the application. A CTRLMODE level of 0 indicates that the program did not run due to a request to terminate. A CTRLMODE level of 1 (cold mode) indicates that the program was run as a simulator with all STATUS values turned off on FVs, MVs, and CVs. A CTRLMODE level of 2 (warm mode) indicates that the application calculates control actions, but only after the second cycle. This mode is commonly used to observe anticipated control actions before the controller is activated to level 3. A CTRLMODE level of 3 means that the controller is ON and implementing changes to the process.
https://mathoverflow.net/questions/393594/self-homeomorphism-of-mathbb-cp1-holomorphic-a-e

# Self homeomorphism of $\mathbb CP^1$ holomorphic a.e.
Suppose $\varphi:\mathbb CP^1\to \mathbb CP^1$ is a homeomorphism that is holomorphic on a connected open subset $U\subset \mathbb CP^1$ such that $\mathbb CP^1\setminus U$ has measure zero.
Is it true that $\varphi$ is holomorphic on the whole of $\mathbb CP^1$ (so that it is a projective transformation)?
If not, what kind of assumptions on $U$ would suffice? (For example, that $\mathbb CP^1\setminus U$ has Hausdorff dimension $\le 1$?)
• Answers here should be relevant. May 24 at 22:09
Following the links given by Wojowu, the answer to this question is negative already for self-homeomorphisms of $\mathbb C^1$.
So by extending the self-homeomorphism to $\mathbb CP^1$, the answer is negative for $\mathbb CP^1$ as well. To have a positive answer, one has to require that the Hausdorff dimension of $\mathbb CP^1\setminus U$ is less than $1$ (the case $\dim = 1$ seems to be still open).
https://worldbuilding.stackexchange.com/questions/57623/can-one-nuke-reliably-shoot-another-out-of-the-sky/57718

# Can one nuke reliably shoot another out of the sky?
A quick glance at real world anti-missile systems seems to show that they do work, but imperfectly. The problem naturally gets harder when the incoming missile is extremely high-velocity.
If your strategic calculation was "if one nuke hits us, we will be forced to retaliate in kind, and that ends badly for everyone, so let's do everything that we can to prevent the incoming strike, that way we can retaliate with sub-nuclear options" -- would nuking the nuke work?
The idea here is that a near miss wouldn't matter since the fireball would be so huge.
Assume the blast happens over uninhabited land, that nukes are not scarce, and that the politics etc. isn't important here...
...is it practical?
• Isn't the general idea to hit the enemy nuke without it going off? For example, you hit the fuel tank or the motor, forcing the missile to tumble out of the sky. Nukes normally have a pretty sophisticated detonation sequence so your scenario wouldn't work (unless intended in the story line of course) – user10945 Oct 5 '16 at 15:07
• You could just prevent the bomb from exploding, indeed, instead of blowing it up. There are a couple of ways to slow down a incoming projectile. – Yassine Badache Oct 5 '16 at 15:15
• Depends on what is the firing mechanism and the explosion range, cost and how it is delivered. Most new strategies deploy hundreds of mini nukes which spread over the entire continent for maximum casualties all originated from a single long range missile... try to nuke all these!😱 – user6760 Oct 5 '16 at 16:48
• Are we also ignoring the effects of EMPs and radioactive fallout from our anti-nuke nukes? (Or making use of it for the story somehow?) Those seem like a couple of big effects to consider. – HopelessN00b Oct 5 '16 at 17:21
• To quote a sci-fi writer "'close' only counts when dealing with horse-shoe throwing and tactical nukes", but then it does count. Some reasons why being "close" may be sufficient, can be found here: en.wikipedia.org/wiki/Nuclear_fratricide – Baard Kopperud Oct 5 '16 at 20:46
You have described 1950s-era ABMs, so the short answer is "Yes, of course."
The pulse of hard radiation from the nuclear explosion could potentially fry the electronics of the incoming warhead, so the detonator does not work. The sheet of neutrons from the explosion could actually affect the nuclear material inside the incoming warhead, and of course the thermal pulse will ablate part of the incoming warhead and act like a rocket motor throwing it off course. If the explosion is close enough, the enemy warhead is simply consumed inside the fireball.
The US "Sprint" ABM deployed briefly in the 1970's, and was armed with an enhanced radiation thermonuclear warhead. Older systems like Nike or air to air missiles like Genie also used nuclear warheads (although the primary purpose was to destroy the bombers carrying the nuclear warheads, the effects of the explosion on the Russian bombs outside the immediate blast radius would be quite similar).
The downside is you are using nuclear weapons in the atmosphere in the airspace over or near your own homeland, and the enemy warheads are either disintegrating in the atmosphere (showering you with Plutonium dust), or are plunging randomly into the ground, leaving you with the task of recovering "hot" items full of nuclear warhead fuel. While much preferable than dealing with the aftereffects of a nuclear explosion vaporizing a city, it is still not an ideal solution, hence President Ronald Reagan's Strategic Defense Initiative; meant to shoot down ICBM's in the boost and mid flight stages rather than stopping warheads in the final seconds before impact.
• Because of clustering of your own MIRVs and because of enemy EMP, many warheads were designed with non-electronic fallback detonators. I mean, still using electricity for the trigger detonators, but no small circuits. – Zan Lynx Oct 5 '16 at 16:38
• One of the most toxic substances on Earth is the toxin from the bacterium Clostridium botulinum. Gram for gram / ounce for ounce, that is more than a thousand times as toxic as Plutonium. What do people do with it? Answer: they have it injected into their face; it is the active substance in a Botox treatment. Ask them to inject Plutonium and they would balk at the idea. Ask them to inject something more than a thousand times as toxic, and they go "Okey!". Odd that... – MichaelK Oct 6 '16 at 14:09
• @MichaelKarnerfors Chemically, Plutonium may not be especially toxic, but the fact that it is rendered into a fine radioactive dust makes it extremely dangerous because it can easily be distributed throughout the body by inhalation or ingestion. This is pretty much how Alexander Litvinenko was killed, though by a different element. – Ryan Reich Oct 6 '16 at 16:15
• Finally powdered plutonium is a toxic heavy metal, so should be avoided just for that reason alone, but the fact it is also radioactive provides an extra element of risk. The fate of Alexander Litvinenko, referenced by Ryan Reich in his comment should be example enough. – Thucydides Oct 6 '16 at 20:25
• @RyanReich The isotope used to kill Litvinenko was Polonium, not Plutonium. Plutonium's decay rate is too long to do anything worse than giving you cancer a few years early. Iodine and Strontium radioisotopes are what kill you from fallout because they have shorter half-lives and more radiation damage done per unit time. Also, your body will flush Plutonium (and Polonium, not that it helped Litvinenko) but iodine gets absorbed into your thyroid, and Strontium into your bones. – kingledion Oct 7 '16 at 0:11
Keyword: One.
In practice you will have a big problem when you try this--interceptor #1 engages inbound nuke #1 and destroys it. Fine.
30 seconds later inbound nuke #2 sails through the area of ionization and isn't intercepted because the interception radar can't see through the ionized area.
• Why is this being downvoted? – Délisson Junio Oct 6 '16 at 2:53
• I didn't downvote this but the answer seems pretty enigmatic to me. What does it mean destroys it? Does the interceptor#1 explode in proximity to nuke#1 or maybe it just rams it? Where does the ionization comes from? What is Ionized, air or some other material that got out during collision? Why wouldn't we see both nuke#1 and nuke#2 and intercept them at the same time? In my opinion the answer as it is now needs improvement. – Sok Pomaranczowy Oct 6 '16 at 8:52
• I believe I understand what this answer is suggesting: that: (a) you can successfully prevent the first inbound nuke from getting through, but (b) the air explosion from your own interceptor nuke would create a lot of ionizing radiation and charged particles in the air that would result in a rather large radar blind spot, greatly hindering your targeting efforts to shoot down further inbound nukes. At least, I think that's what Loren is saying. Loren, your answer would probably be better if you fleshed it out a bit more, adding more explanation. – type_outcast Oct 6 '16 at 9:27
• @wingleader This answer does not currently have any downvotes. – a CVn Oct 6 '16 at 11:14
• I think Loren's post is pretty clear. The enemy uses a tactic that renders defensive radars ineffective and drops the 2nd warhead right down the pipe. – Tony Ennis Oct 7 '16 at 0:38
would nuking the nuke work?
Yes. This is basically the same idea as what Mythbusters tried with guns and grenades. Assuming there is nothing you want to keep within the (potentially combined) blast radius, using a nuclear weapon to destroy a nuclear weapon would work. However...
...is it practical?
No. It's kind of the difference between using a bullet to stop a grenade and a grenade to stop a grenade. Why would you spend between 2.00 and 200.00 USD on a grenade when you could spend between 0.21 and 0.32 USD on a bullet that does the same job just as well? You don't need an explosive to destroy a nuke. A kinetic kill vehicle is all you need, which is, coincidentally, exactly how the US handles missile defense.
• Something tells me its not a coincidence – Ryan Oct 5 '16 at 15:47
• @iAdjunct I've read parts of Wikipedia. It's a big place, after all. Which part did you have in mind? – Frostfyre Oct 5 '16 at 19:31
• @iAdjunct Given that there is a 50 year gap between the acquisition programs for these two weapon systems, you just can't compare the prices, even adjusting for inflation. Also, the capability gap between the weapons is vast. The Minuteman cannot make course adjustments during takeoff. How could you ever expect to hit an incoming missile with it? I think you are the one being misleading. – kingledion Oct 6 '16 at 2:13
• Not really worth my time having a conversation here about stuff I studied a lot in college. Carry on. – iAdjunct Oct 6 '16 at 2:43
• I believe current kkvs have a hit probability in the singular digit probability space, especially when the incoming missile is equipped with either MIRVs or ABM countermeasures... hence you will have to deploy many to have any reasonable sense of security. – Doomed Mind Oct 6 '16 at 8:12
It's not practical, because you would fry your satellites.
While nuking a nuke would certainly work, so far as destroying the other nuke is concerned, you would almost definitely destroy some of your satellites in doing so. ICBMs don't travel close to the Earth, instead taking high arcing ballistic paths hundreds of miles above the ground. Your best option for hitting a nuke with another nuke would be to hit it at a high altitude, where the detonation of your nuke wouldn't harm the target of the enemy nuke. Of course, detonating a nuclear bomb at a high enough altitude over your country that it won't damage ground installations puts another important asset at risk: satellites.
High altitude nuclear tests have been done, in fact, back in the 60s before we agreed to ban the detonation of nukes in space. Even at that point in time, the tests that were done inadvertently damaged several US satellites. We now have far more satellites in space, and have become significantly more reliant on them than we were in the 1960s. Knocking out a few of these satellites with an anti-missile would make anti missile nukes, while possible, extremely impractical.
Note: this is not suggesting that nuclear interceptors are worse than getting nuked, just that they're worse than conventional interceptors.
• Although, a few tens of millions of dollars is nothing compared to a few entire states. – wizzwizz4 Oct 5 '16 at 19:26
• It's worth noting that a high-altitude explosion would damage everyone's satellites irrespective of owner, not just your own - even if we discount the Kessler syndrome, which would be the real bummer here. – John Dvorak Oct 5 '16 at 19:37
• @JanDvorak: right. I wonder how problematic nukes would actually be so WRT Kessler syndrome – they'd vaporise anything close enough, and merely disable / melt / deflect further away satellites, so I'd suspect it's in the long run actually less disastrous than conventional explosions (which would shatter satellites to thousands of dangerous particles). – leftaroundabout Oct 5 '16 at 20:34
• I am not clear why disabling satellites makes saving the lives of potentially millions of people "not practical". Downvoted. Aside from the obvious ethical issue it has a notable practical one as well - You can win (or at least survive) a war without hunks of metal in space that emit non-harmful electromagnetic radiation on command; you can't do either without people. – GrinningX Oct 5 '16 at 22:14
• @GrinningX It's possible to do the same thing without using nukes. While nukes might be better than nothing, arming your interceptors with conventional explosives would be easier, cheaper, and have fewer damaging side effects. – ckersch Oct 6 '16 at 13:12
Anti-ICBM missiles already exist: the SM-3 anti-ballistic missile. I knew someone who worked on it. It is a non-explosive missile that uses optical guidance to hit incoming nukes with insane accuracy. The payload is essentially a lump of metal that hits the nuke, disabling it without setting off the explosion.
We used a modified SM-3 to knock that failing satellite down safely a few years back:
https://en.wikipedia.org/wiki/USA-193
I know this doesn't directly answer your question about an anti-nuke-nuke, but it is a viable means to prevent a nuke from making landfall, and thus could be important to your worldbuilding.
Yes, easily. We can actually take out most ICBMs using conventional explosives. I would be willing to bet we have a quick nuke drop warhead as a fallback if the conventional ones fail.
The trick is what kind of nuke to use and what altitude.
The nuke to use is an "enhanced radiation" device, a.k.a. a "neutron bomb." The "enhanced" is a bit of misdirection. All fusion devices emit 99%+ of their energy as neutrons. To convert the neutrons into blast and heat, and in general make a big explosion, you have to wrap them in a dense, neutron-absorbing material like lead, polypropylene, etc.
It's a myth that fusion nukes are highly destructive in space. With no atmosphere or ground to convert neutrons to heat and blast, you just get a rather large quick flashbulb for radiant effect. Space itself, even near earth space has so much volume that you have to be within something like 30km for a 1 megaton device to generate a killing pulse. (Intensity falls with the square of distance, remember.) On the surface, a 30km radius is massive, in space its a blip.
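To put a rough number on that inverse-square falloff: treating a 1-megaton device (about 4.2e15 J) as an isotropic point source, the energy per unit area at radius r is simply yield / (4*pi*r^2). This is back-of-the-envelope illustration of the geometry only (it assumes all yield is radiated uniformly with no absorption), not weapons-effects data:

```python
import math

MEGATON_J = 4.184e15  # 1 megaton TNT-equivalent in joules

def fluence(yield_j, r_m):
    """Energy per unit area (J/m^2) at radius r_m from an isotropic point source."""
    return yield_j / (4 * math.pi * r_m**2)

# At 30 km the fluence is already down to a few hundred kJ per square metre,
# and it drops by 4x with every doubling of distance.
f30 = fluence(MEGATON_J, 30e3)
assert 3e5 < f30 < 4e5
```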
Neutrons don't kill other nukes by primarily heat, blast or frying the electronics. Instead, they transmute the isotopes within the enemy device, altering the critical ratios of those isotopes such that the device can never go critical. (Although, if close enough, the neutrons will cause heating in isotopes and blow it apart right there and some electronics can be fried by neutrons.)
So, the best point of intercept is above the atmosphere, i.e. 90 miles / 140 km or higher. The really important satellites are in geosynchronous orbit at 25,000 miles / 40,000 km, so they're safe from any interception blast.
Intercepting above the atmosphere also prevents the blinding effects of ionized atmosphere noted by others. Even that may not matter as the enemy will be tracked by multiple sensors deployed on the ground, airborne and from high satellites, all of which will be transmitted to the interceptor which can otherwise fly blind. The ecological and other ground effects are minimal. With little blast or heat, there is little plasma and thus little EMP.
The real utility of an interceptor system is that it introduces immense uncertainty in calculating the success for an attack. Nobody really knows how all the factors in a nuclear attack will combine to produce what output. The interceptor system might substantially fail in a real attack or it might wipe out the attack completely. In the latter case, you've done nothing but p*ss off the targeted polity.
That uncertainty was a big part of Reagan's Star Wars mojo back in the 80s that helped bring the Soviets down. The Soviets had long planned on being able to launch a devastating first strike and then absorb a much smaller counter-strike. The maybe-it-will-work, maybe-it-won't Star Wars talk threw that out the window.
Active defense is the new hotness at all levels. The Israelis are knocking individual artillery rounds out of the sky and pre-detonating RPGs. Interceptors in one form or another, at all levels, are here to stay.
• "Active defense is the new hotness at all levels." Not really, it's as old as the hills. Every decade from the 1950s has had its flirtation with interceptor weapons. Nike, Sprint, SDI and its successors. Eisenhower predicted in 1959 the Soviet Union would collapse in thirty years. Experts on the Soviet system in late 1970s and early 1980s thought it would fall soon. SDI made the USSR collapse? No, it's a myth. – a4android Oct 7 '16 at 7:22 | 2020-08-04 00:42:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36827266216278076, "perplexity": 2406.1577218124294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735836.89/warc/CC-MAIN-20200803224907-20200804014907-00199.warc.gz"} |
http://math.stackexchange.com/questions/170800/higher-direct-image-and-local-cohomology?answertab=oldest

# Higher direct image and local cohomology.
Let $X$ be a scheme, $Z \subset X$ a closed subscheme with open complement $j : U = X \setminus Z \hookrightarrow X$, and $\mathcal{F}$ a coherent sheaf. Then
$\mathcal{R}^{i-1}j_{*}(\mathcal{F}|_{U})\cong\mathcal{H}_{Z}^{i}(X,\mathcal{F}).$
I would like to see this isomorphism explicitly, since I don't really understand how to see the elements of $H^i_Z(X,\mathcal{F})$. If it is possible, how can I see them in terms of Čech cohomology?
The isomorphism also holds for relative cohomology of sheaves on arbitrary topological spaces. For a proof, see for example Cor. 1.9 in Hartshorne's Local cohomology (LNM 41). It is quite elementary and self-contained. For some geometric intuition for relative cohomology you may consult texts on algebraic topology (for example Hatcher's textbook), because it coincides with relative singular cohomology in the following sense: If $(X,A)$ is a relative CW-complex and $G$ is a constant sheaf on $X$, there is a canonical isomorphism $H^i_A(X,G) \cong H^i_{\mathrm{sing}}(X,X \setminus A,G)$.
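For intuition on where such isomorphisms come from, recall the standard long exact sequence of local cohomology (also in Hartshorne's Local cohomology, LNM 41), sketched here with $U = X \setminus Z$:

```latex
% Long exact sequence relating local cohomology, X, and the open complement U:
\cdots \to H^i_Z(X,\mathcal F) \to H^i(X,\mathcal F) \to H^i(U,\mathcal F|_U)
       \to H^{i+1}_Z(X,\mathcal F) \to \cdots
% When H^i(X,\mathcal F) = H^{i+1}(X,\mathcal F) = 0 (e.g. X affine,
% \mathcal F quasi-coherent, i \ge 1), the connecting map is an isomorphism:
H^i(U,\mathcal F|_U) \;\xrightarrow{\ \sim\ }\; H^{i+1}_Z(X,\mathcal F).
```

Sheafifying this picture over opens of $X$ is what produces the comparison with the higher direct images $\mathcal{R}^{i-1}j_*$ in the question.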
https://theoremoftheweek.wordpress.com/2010/06/

## Archive for June, 2010
### Theorem 30: Pythagorean triples
June 27, 2010
If I asked you to give a list of mathematical theorems, I suspect that you might well think of Pythagoras’s theorem pretty early on. It has the rare distinction of being a theorem that’s commonly discussed (by name) in maths classes in schools today. It should probably be a theorem of the week in its own right, but for today I’d like to focus on some rather lovely number theory associated with it.
Let’s quickly remind ourselves what Pythagoras’s theorem says. Here’s a right-angled triangle, with shorter sides $a$ and $b$ and hypotenuse $c$.
The theorem says that $a^2 + b^2 = c^2$: the square on the hypotenuse is equal to the sum of the squares on the other two sides. There are a lot of proofs of this, but they’ll have to wait for a future post, I’m afraid!
I think that a lot of people discover, in the course of their studies of Pythagoras’s theorem at school, that there are some particularly nice right-angled triangles. One has side lengths 3, 4, 5 (quick check: $3^2 + 4^2 = 9 + 16 = 25 = 5^2$). Another has side lengths 5, 12, 13 (check: $5^2 + 12^2 = 25 + 144 = 169 = 13^2$). Another has side lengths 6, 8, 10 (check: $6^2 + 8^2 = 36 + 64 = 100 = 10^2$). Of course, these are particularly nice because the side lengths are all whole numbers (integers). I’m going to concentrate on such solutions in this post.
This leads to lots of interesting questions. Are there more solutions (with integer side lengths)? How many more? Are there infinitely many? How can we find more solutions? Can we find them all? Are some more interesting than others? Are there any interesting patterns? How can we possibly hope to solve a single equation in three variables? I encourage you to think about these questions before reading on. (You could, for example, look for more solutions from yourself and see where you can go from there.) You might also have your own questions that you’d like to consider, of course.
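One way to start on these questions is to search for integer solutions directly, and to generate new ones with Euclid's classical parametrisation $(m^2-n^2,\ 2mn,\ m^2+n^2)$, which is always a triple since $(m^2-n^2)^2 + (2mn)^2 = (m^2+n^2)^2$. A short Python sketch:

```python
def triples_up_to(limit):
    """All triples (a, b, c) with a <= b < c <= limit and a^2 + b^2 = c^2."""
    return [(a, b, c)
            for c in range(1, limit + 1)
            for b in range(1, c)
            for a in range(1, b + 1)
            if a * a + b * b == c * c]

def euclid(m, n):
    """Euclid's parametrisation: a Pythagorean triple for any integers m > n > 0."""
    return (m * m - n * n, 2 * m * n, m * m + n * n)

assert (3, 4, 5) in triples_up_to(13) and (5, 12, 13) in triples_up_to(13)
assert euclid(2, 1) == (3, 4, 5)  # m=2, n=1 recovers the smallest triple
a, b, c = euclid(7, 4)
assert a * a + b * b == c * c     # the identity holds for any m, n
```

Since $m$ and $n$ range over infinitely many pairs, this already answers one of the questions: there are infinitely many solutions.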
### Theorem 29: the law of quadratic reciprocity
June 13, 2010
I think this theorem has a wonderful name. I thought that when I first heard the name, even before I’d heard what the theorem says! It just rolls off the tongue. It’s also a very lovely theorem.
When I wrote about sums of squares (in my post about Lagrange’s theorem), I tried to persuade you that thinking about squares and modular arithmetic is a good idea. In particular, we saw that a number (like 7 or 103) that leaves remainder 3 on division by 4 (is 3 (mod 4)) cannot be written as the sum of two squares, because squares are always divisible by 4 or 1 more than a multiple of 4: squares are 0 (mod 4) or 1 (mod 4). We say that 1 is a quadratic residue (good name!) (mod 4), and that 3 is a quadratic non-residue (mod 4) (because 3 is not a square (mod 4)). 0 and 2 are not coprime to the modulus 4, so we don’t call them quadratic residues or quadratic non-residues.
How can we find the quadratic residues (mod 7), say? We simply square each number from 0 to 6 and take the remainder on division by 7. We don’t need to go beyond 6, because we’ll just get the same again: $7 \equiv 0$ (mod 7), so $7^2 \equiv 0^2$ (mod 7) and so on. Let’s list some quadratic residues mod various small numbers. For each n, I’ll list $0^2$, $1^2$, …, $(n-1)^2$, and then we can see whether we see anything interesting. Please do check my numbers!
2: 0, 1
3: 0, 1, 1
4: 0, 1, 0, 1
5: 0, 1, 4, 4, 1
6: 0, 1, 4, 3, 4, 1
7: 0, 1, 4, 2, 2, 4, 1
8: 0, 1, 4, 1, 0, 1, 4, 1
9: 0, 1, 4, 0, 7, 7, 0, 4, 1
10: 0, 1, 4, 9, 6, 5, 6, 9, 4, 1
11: 0, 1, 4, 9, 5, 3, 3, 5, 9, 4, 1
12: 0, 1, 4, 9, 4, 1, 0, 1, 4, 9, 4, 1
13: 0, 1, 4, 9, 3, 12, 10, 10, 12, 3, 9, 4, 1
14: 0, 1, 4, 9, 2, 11, 8, 7, 8, 11, 2, 9, 4, 1
15: 0, 1, 4, 9, 1, 10, 6, 4, 4, 6, 10, 1, 9, 4, 1
16: 0, 1, 4, 9, 0, 9, 4, 1, 0, 1, 4, 9, 0, 9, 4, 1
17: 0, 1, 4, 9, 16, 8, 2, 15, 13, 13, 15, 2, 8, 16, 9, 4, 1
18: 0, 1, 4, 9, 16, 7, 0, 13, 10, 9, 10, 13, 0, 7, 16, 9, 4, 1
19: 0, 1, 4, 9, 16, 6, 17, 11, 7, 5, 5, 7, 11, 17, 6, 16, 9, 4, 1
20: 0, 1, 4, 9, 16, 5, 16, 9, 4, 1, 0, 1, 4, 9, 16, 5, 16, 9, 4, 1
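If you'd rather not square everything by hand, the rows above can be regenerated with a few lines of code (a quick sketch, not part of the original post):

```python
# Print the squares 0^2, 1^2, ..., (n-1)^2 reduced mod n, for n = 2..20,
# reproducing the table of residues above.
for n in range(2, 21):
    residues = [(k * k) % n for k in range(n)]
    print(f"{n}: {', '.join(str(r) for r in residues)}")
```

Running this reproduces the table line for line, which is an easy way to check the numbers.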
There are lots of interesting patterns to explore there; I encourage you to look for them and try to explain them before you read on! (I’m not going to go into most of them here, because I don’t have space in this post; I might return to them later. I’m going to focus on just one in this post.)
As you might have already noticed, there are some particularly nice things that happen with quadratic residues to prime moduli (things that don’t necessarily happen with composite moduli). I’d like to concentrate on prime moduli here. I think it’ll be convenient to have lists of quadratic residues (QRs) and quadratic non-residues (QNRs) to prime moduli, written in numerical order (rather than the order in which they occur, which is what I gave above). So here they are.
2: QRs 1
3: QRs 1; QNRs 2
5: QRs 1, 4; QNRs 2, 3
7: QRs 1, 2, 4; QNRs 3, 5, 6
11: QRs 1, 3, 4, 5, 9; QNRs 2, 6, 7, 8, 10
13: QRs 1, 3, 4, 9, 10, 12; QNRs 2, 5, 6, 7, 8, 11
17: QRs 1, 2, 4, 8, 9, 13, 15, 16; QNRs 3, 5, 6, 7, 10, 11, 12, 14
19: QRs 1, 4, 5, 6, 7, 9, 11, 16, 17; QNRs 2, 3, 8, 10, 12, 13, 14, 15, 18
Let’s think about linking rows. For example, I see that 5 is a quadratic residue (mod 19). Is 19 a quadratic residue (mod 5)? Well, 19 ≡ 4 (mod 5), and 4 is a quadratic residue (mod 5), so we say that 19 is a quadratic residue (mod 5).
OK, what else? 3 is not a quadratic residue (mod 17); is 17 a quadratic residue (mod 3)? Quick check: 17 ≡ 2 (mod 3), and 2 is a quadratic non-residue (mod 3), so 17 is not a quadratic residue (mod 3).
This is all getting a bit clumsy to write out. Fortunately, there’s some notation that can help us: the Legendre symbol. We write it as $\left( \frac{a}{p} \right)$. It is defined to be 1 if a is a quadratic residue (mod p), -1 if a is a quadratic non-residue (mod p), and 0 if p divides a. (This is for prime p. There is a generalisation, called the Jacobi symbol, that is defined for composite p, but I shan’t go into the details now.)
So we’ve seen that $\left( \frac{5}{19} \right) = \left( \frac{19}{5} \right) = 1$, and $\left(\frac{3}{17}\right)=\left(\frac{17}{3}\right)=-1$. I suggest you try computing some more of these pairs, for practice.
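A convenient way to generate more pairs mechanically (not used in the post, but a standard fact) is Euler's criterion: for an odd prime p and a not divisible by p, $a^{(p-1)/2}$ is congruent to 1 (mod p) when a is a quadratic residue and to -1 when it is not. A minimal sketch:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion:
    a^((p-1)/2) mod p is 1 for residues and p-1 (i.e. -1) for non-residues."""
    a %= p
    if a == 0:
        return 0
    r = pow(a, (p - 1) // 2, p)  # fast modular exponentiation
    return 1 if r == 1 else -1

print(legendre(5, 19), legendre(19, 5))   # both 1, matching the example above
print(legendre(7, 11), legendre(11, 7))   # -1 and 1: the pair need not agree
```

This makes it painless to tabulate many pairs $\left(\frac{p}{q}\right)$, $\left(\frac{q}{p}\right)$ and hunt for the pattern.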
Is it always the case that $\left(\frac{p}{q} \right) = \left( \frac{q}{p} \right)$ for primes p and q?
We can quickly see that it isn’t (if you haven’t already): $\left( \frac{7}{11} \right ) = -1$, but $\left( \frac{11}{7} \right) = \left( \frac{4}{7} \right) = 1$.
Hopefully your examples showed that quite often $\left(\frac{p}{q} \right)$ and $\left( \frac{q}{p} \right)$ are the same, but sometimes they are different (which necessarily means that one is the negative of the other). Can we be more precise about this? Can we predict in advance when they’ll be the same and when they’ll be different? You might like to try this for yourself, using your earlier computations and any more that you feel necessary.
### A Brief History of Mathematics
June 10, 2010
I see that Marcus du Sautoy will be presenting some short programmes on a Brief History of Mathematics on BBC Radio 4 next week. Might be worth listening to.
### Theorem 28: there are infinitely many Carmichael numbers
June 3, 2010
This week’s theorem follows from my previous posts on Fermat’s little theorem and (to a lesser extent) Wilson’s theorem.
We saw that Fermat’s little theorem tells us that if $p$ is prime and $a$ is not divisible by $p$ then $a^{p-1} \equiv 1 \mod{p}$. Could we use this as a test for primality? Wilson’s theorem gave us a criterion for a number to be prime ($n$ is prime if and only if $(n-1)! \equiv -1 \mod{n}$), although it doesn’t give us a practical way to test whether a number is prime. Could we get something similar from Fermat’s little theorem?
Well, how might this work? Let’s stick to odd values of $n$ (since it’s pretty easy to check whether an even number is prime!). We might hope that if there’s a number $b$ (not 1) so that $b^{n-1}\equiv 1 \mod{n}$ then $n$ must be prime. If there is such a number $b$, we say that $n$ is a pseudoprime to the base $b$. But does the existence of a base to which $n$ is a pseudoprime mean that $n$ must be prime?
No. Simple example: $4^{14} \equiv (4^2)^7 \equiv 1 \mod{15}$, so 15 is a pseudoprime to the base 4 (but certainly isn’t prime).
OK, so that didn’t work. But if $p$ is prime then we know that $a^{p-1} \equiv 1 \mod{p}$ for all numbers $a$ coprime to $p$. Suppose we know that $n$ is a pseudoprime to all bases $b$ with $b$ coprime to $n$. Does that mean that $n$ is prime? | 2017-05-23 07:02:13 |
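The post's title gives the game away: composite numbers with this property are called Carmichael numbers, and 561 = 3 × 11 × 17 is the smallest. A quick numerical check (my sketch, not from the original post):

```python
from math import gcd

def fermat_pseudoprime_all_bases(n):
    """True if b^(n-1) ≡ 1 (mod n) for every base b coprime to n."""
    return all(pow(b, n - 1, n) == 1
               for b in range(2, n) if gcd(b, n) == 1)

print(fermat_pseudoprime_all_bases(561))  # True, yet 561 = 3 * 11 * 17 is composite
print(fermat_pseudoprime_all_bases(15))   # False: 15 fails for some bases (e.g. 2)
```

So passing Fermat's test to every coprime base still does not prove primality, which is what makes Carmichael numbers interesting.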
https://math.stackexchange.com/questions/2488861/bracket-of-intervals | bracket of intervals [duplicate]
I know $[0,1]$ denotes the interval between 0 and 1 with the boundary. And $(0,1)$ denotes the interval between 0 and 1 without the boundary. Today, I encounter some expressions as $]0,1]$, $]-\infty,+\infty]$. What does $]0,1]$ mean? Thanks.
Using a reversed bracket $]$ is what some people do instead of a round $($. So $]0,1[$ means the same as $(0,1)$, and in your example, $]0,1]$ means the same as $(0,1]$. | 2019-12-07 14:50:36 |
https://formulasearchengine.com/wiki/Grand_Unified_Theory | # Grand Unified Theory
A Grand Unified Theory (GUT) is a model in particle physics in which at high energy, the three gauge interactions of the Standard Model which define the electromagnetic, weak, and strong interactions or forces, are merged into one single force. This unified interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. If Grand Unification is realized in nature, there is the possibility of a grand unification epoch in the early universe in which the fundamental forces are not yet distinct.
Models that do not unify all interactions using one simple Lie group as the gauge symmetry, but do so using semisimple groups, can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
Unifying gravity with the other three interactions would provide a theory of everything (TOE), rather than a GUT. Nevertheless, GUTs are often seen as an intermediate step towards a TOE.
Because the novel particles predicted by GUT models are expected to have masses just a few orders of magnitude below the Planck scale, at the GUT scale, they are well beyond the reach of foreseeable particle collider experiments and cannot be observed directly. Instead, effects of grand unification might be detected through indirect observations such as proton decay, electric dipole moments of elementary particles, or the properties of neutrinos.[1] Some grand unified theories predict the existence of magnetic monopoles.
To date, all GUT models which aim to be completely realistic are quite complicated, even compared to the Standard Model, because they need to introduce additional fields and interactions, or even additional dimensions of space. The main reason for this complexity lies in the difficulty of reproducing the observed fermion masses and mixing angles. Due to this difficulty, and due to the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Are the three forces of the Standard Model unified at high energies? By which symmetry is this unification governed? Can Grand Unification explain the number of Fermion generations and their masses?
## History
Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974.[2] The Georgi–Glashow model was preceded by the semisimple Pati–Salam model of Abdus Salam and Jogesh Pati,[3] who pioneered the idea of unifying gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper[4] they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use[5] the acronym in a paper.[6]
## Neutrino masses
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism.
## Proposed theories
Several such theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Note: these models refer to Lie algebras, not to Lie groups. The Lie group could be [SU(4) × SU(2) × SU(2)]/Z2, just to take a random example.
The most promising candidate is SO(10).[citation needed] (Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from E8 × E8 heterotic string theory.
GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Most GUT models also predict proton decay, although not the Pati–Salam model; current experiments still haven't detected proton decay. This experimental limit on the proton's lifetime pretty much rules out minimal SU(5).
Some GUT theories like SU(5) and SO(10) suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale). In a theory unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations.
## Ingredients
A GUT model basically consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter.
## Current status
To date, there is still no hard evidence that nature is described by a Grand Unified Theory. Moreover, since it is not yet clear which Higgs particle has been observed, the smaller electroweak unification is still pending.[8] The discovery of neutrino oscillations indicates that the Standard Model is incomplete and has led to renewed interest toward certain GUTs such as SO(10). One of the few possible experimental tests of certain GUTs is proton decay and also fermion masses. There are a few more special tests for supersymmetric GUTs.
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common length scale called the GUT scale, equal to approximately 10^16 GeV, which is slightly suggestive. This interesting numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group.
## Notes
1. There are however certain constraints on the choice of particle charges from theoretical consistency, in particular anomaly cancellation.
## References
1. {{#invoke:citation/CS1|citation |CitationClass=book }}
2. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
3. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
4. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
5. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
6. {{#invoke:Citation/CS1|citation |CitationClass=journal }}
7. {{#invoke:citation/CS1|citation |CitationClass=book }}
8. {{#invoke:citation/CS1|citation |CitationClass=book }} | 2020-05-28 00:44:47 |
https://www.pakmath.com/2019/03/19/algebra-mcqs-test-10/ | # Algebra mcqs test 10
Algebra mcqs test 10 consist of 10 most important multiple choice questions. Prepare these questions for better results and also you can prepare definitions of algebra.
1. Let G be a group of order 36 and let a belong to G. The order of a is
2. Let H, K be two subgroups of a group G. Then the set HK = {hk | h ∈ H ∧ k ∈ K} is a subgroup of G if
3. A mapping $\Phi : G \rightarrow G'$ is called a homomorphism if, for a, b belonging to G,
4. The symmetries of a square form a
5. If a group G is abelian, then
6. In $S_3$, $a=\begin{pmatrix} 1 & 2 & 3\\ 2 & 3 & 1 \end{pmatrix}$, then $a^{-1}=$
7. The number of subgroups of a group is
8. Which binary operation is not defined in the set of natural numbers?
9. If aN = {ax | x ∈ N}, then 3N ∩ 5N =
10. Which of the following is the representation of $C_4=\{1,-1,i,-i\}$ | 2021-03-07 07:58:49 |
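A couple of these questions can be sanity-checked with a few lines of code (a sketch for illustration; the permutation below is question 6's $a$, and question 9's sets are truncated to a finite range):

```python
# Question 6: a sends 1->2, 2->3, 3->1 in S_3; the inverse sends each image back.
a = {1: 2, 2: 3, 3: 1}
a_inv = {v: k for k, v in a.items()}
print(a_inv)  # 1->3, 2->1, 3->2

# Question 9: 3N ∩ 5N is the set of common multiples of 3 and 5, i.e. 15N.
N = range(1, 200)
intersection = {3 * x for x in N} & {5 * x for x in N}
print(sorted(intersection)[:3])  # the smallest elements are multiples of 15
```

So for question 6 the inverse is the permutation 1->3, 2->1, 3->2, and for question 9 the intersection is 15N.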
http://mathhelpforum.com/math-challenge-problems/2320-tetris.html | # Math Help - Tetris
1. ## Tetris
Here is a problem which bothered me for a long time.
Is it possible to play tetris forever?
2. Originally Posted by ThePerfectHacker
Here is a problem which bothered me for a long time.
Is it possible to play tetris forever?
Probably not, but if you are in the correct reference frame you can play the relativistic version longer...
-Dan
3. presuming the blocks you get are random, if there is a certain order of blocks that are possible to get that makes it impossible to beat, then eventually you will get those blocks.
for instance if you only get the blocks that are shaped:
Code:
#
##
#
for long enough, i don't think its possible to keep completing rows.
in the sense that if you have an empty playing field and only get those, you cannot make any rows.
now whether there is a scenario that you can set up whereby constantly getting those blocks enables you to keep completing rows, i'm not sure.
presuming the blocks you get are random, if there is a certain order of blocks that are possible to get that makes it impossible to beat, then eventually you will get those blocks.
for instance if you only get the blocks that are shaped:
Code:
#
##
#
for long enough, i don't think its possible to keep completing rows.
in the sense that if you have an empty playing field and only get those, you cannot make any rows.
now whether there is a scenario that you can set up whereby constantly getting those blocks enables you to keep completing rows, i'm not sure.
I was thinking about that too, but you did not prove that the Z blocks make it impossible to win.
5. yeah i know, just trying to throw a few ideas into the pan. however, i have just thought if the tetris board is an even amount of blocks wide and you only get Z blocks you can get, say for a board 4 blocks wide
Code:
#
##
#
then
# #
####
# #
the middle 4 disappear and the top two and bottom two come down to make them all disappear. So maybe it wasn't such a good idea after all. | 2014-11-27 01:55:15 |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=148&t=43635&p=150930 | ## Units
$\frac{d[R]}{dt}=-k[R]; \ln [R]=-kt + \ln [R]_{0}; t_{\frac{1}{2}}=\frac{0.693}{k}$
Theodore_Herring_1A
Posts: 60
Joined: Fri Sep 28, 2018 12:29 am
### Units
How do you figure out the units for k in different ordered reactions?
Neil Hsu 2A
Posts: 61
Joined: Fri Sep 28, 2018 12:16 am
### Re: Units
Looking at the rate law, the units for the rate should end up being M/s, so depending on the order of the reaction, the units of k should be different. If you write out the units of each concentration and the rate, you should be able to figure out the units for k. For example, for a first order reaction, rate = k [A] and since rate is M/s and [A] is M, k should be 1/s. Doing this same thing will give you M^-1s^-1 for second order reactions, M^-2s^-1 for third order reactions, and M/s for zeroth order reactions.
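The pattern in this answer generalizes: since the rate has units of M/s and each concentration factor contributes M, k must carry units of M^(1-n) s^-1 for overall order n. A minimal sketch (my own helper, not from the thread):

```python
def k_units(order):
    """Units of the rate constant k for an overall reaction order n,
    given that rate = k [A]^n must come out in M/s."""
    power = 1 - order  # exponent on M so that M^power * M^order = M^1
    if power == 0:
        return "s^-1"
    if power == 1:
        return "M s^-1"
    return f"M^{power} s^-1"

for n in range(4):
    print(n, k_units(n))
# order 0 -> M s^-1, order 1 -> s^-1, order 2 -> M^-1 s^-1, order 3 -> M^-2 s^-1
```

These match the zeroth- through third-order units listed above.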
Lorena Zhang 4E
Posts: 63
Joined: Fri Sep 28, 2018 12:16 am
### Re: Units
Essentially, the final unit for all reactions is M/s, or mol/(L·s). Therefore, you can start from the end and work back to the units of k based on the order.
Cole Elsner 2J
Posts: 88
Joined: Fri Sep 28, 2018 12:25 am
### Re: Units
Your end goal is to have all units be in M/s. With this, you can figure out what the order of the reaction is and adjust the units of k to reach that end goal. | 2019-12-13 14:36:38 |
https://s4965.gridserver.com/colorado-potato-tuq/ac5111-bct-route-62 | Barium, three places below magnesium in the periodic table, is more reactive with air than magnesium. Barium reacts with oxygen in air, forming barium oxide: 2 Ba (s) + O2 (g) → 2 BaO (s). Unlike strontium, which forms a peroxide only when heated in oxygen under high pressure, barium forms barium peroxide on normal heating in oxygen, Ba + O2 → BaO2, so mixtures of barium oxide and barium peroxide will be produced. Burning in air goes further still: barium forms a mixture of the oxide BaO, the nitride Ba3N2, and the peroxide BaO2, and a freshly exposed surface quickly acquires a thin passivating layer.

Barium oxide is more normally made by heating barium carbonate: BaCO3 → BaO + CO2. It may also be prepared by thermal decomposition of barium nitrate; the decomposition of solid barium nitrate leads to the formation of solid barium oxide, diatomic nitrogen gas, and diatomic oxygen gas: 2 Ba(NO3)2 → 2 BaO + 2 N2 + 5 O2. Heating barium oxide in pure oxygen gives barium peroxide, 2 BaO + O2 → 2 BaO2, and the thermal decomposition of barium peroxide runs the other way, producing barium oxide and oxygen: 2 BaO2 → 2 BaO + O2. Barium oxide reacts with water to produce barium hydroxide: BaO + H2O → Ba(OH)2.

Related equations from the same exercise set:
- Solid barium metal reacts with iron(III) sulfate to produce barium sulfate and solid iron metal: 3 Ba (s) + Fe2(SO4)3 → 3 BaSO4 + 2 Fe (s)
- Potassium nitrate decomposes to form potassium nitrite and oxygen: 2 KNO3 (s) → 2 KNO2 (s) + O2 (g)
- Barium chlorate breaks down to form barium chloride and oxygen: Ba(ClO3)2 → BaCl2 + 3 O2
- Carbonic acid is an unstable acid that breaks down to produce water and carbon dioxide gas: H2CO3 → H2O + CO2

A combustion reaction is a reaction between a fuel and oxygen. Barium is a metal of group 2A, with 2 electrons in its valence shell, so it becomes a +2 ion; oxygen becomes a -2 oxide ion, and the two combine one-to-one to give barium oxide, BaO.
You start with in monopoly revolution reactive with air than magnesium → +. Barium has a charge of -2 and oxygen has a charge of +2 white metal +2... Two ions combine to form strontium or barium peroxide form hydroxides when reacted water! ( SO4 ) 3 → 3 BaSO4 + 2 Fe ( s +. Have been solved by our expert teachers counting atoms in parentheses, multiply all subscripts by number. Sslc Class 10 Science Important questions further react with oxygen gas with iron ( III ) sulfate hydrogen... A balanced equation when barium reacts with oxygen in air, forming mixture... Combustion of the group 2 element ) & chemical Reactivity ( a ) barium! Charge of +2 WWE Champion of all time 3 BaSO4 + 2 Fe ( )... ) sulfate to produce aqueous iron ( III ) sulfate and hydrogen gas 30: the. Decomposes in water to barium oxide of oxygen, hence those two ions combine form. Solved by our expert teachers heating barium oxide, diatomic nitrogen gas, and diatomic oxygen gas to for oxide. 2, reacts with oxygen in air, forming barium oxide and barium oxide is as shown the... Edmund barium oxygen equation get the title sir and how Fewer 5 7 8 9 or more 19 of the for... Shown in the figure below barium carbonate 3 ) 2, reacts with dilute sodium sulfate barium! Has a charge of -2 and oxygen structural depiction of barium oxide hink-pink. Combine with only 1 oxygen each in the figure below 10, Chemistry, CBSE- chemical Reactions equations. About 800°C unstable acid that breaks down to form strontium or barium peroxide just on normal heating oxygen! For these Reactions is M ( OH ) 2 reaction is a between. Bromine O = oxygen it 's molar mass is 393.1314 instructions on balancing chemical equations Enter. Hydroxides when reacted with water to produce solid barium oxide and barium peroxide ( BaO decomposes! How much money do you start with in monopoly revolution strontium or barium peroxide will be produced,.... 
The transportation of dangerous goodstdg regulations or barium peroxide will be produced forming! Pretoria on 14 February 2013 natural gas that the number of atoms barium sulfate iron... Barium peroxide just on normal heating in oxygen gas react to produce barium sulfate formed... Reactions is M ( OH ) 2 ) breaks down to produce aqueous iron ( ). But barium forms barium peroxide barium has a charge of -2 and oxygen has a charge -2. Edmund barton get the title sir and how do you start with in monopoly?... Both oxygen and nitrogen, forming a mixture of BaO, Ba 3 N and... Group 2 element ) combine to form barium oxide balancing chemical equations: Enter an equation of a chemical and... To controlled products that are being transported under the transportation of dangerous goodstdg regulations for each reaction described below in! To calculate the total number of atoms it 's molar mass is 393.1314 hydrogen gas down to produce iron! Periodic table is more reactive with air than magnesium high pressures, but forms. One needs to multiply the oxygen one needs to multiply the oxygen on the LHS by 4 ; so the... Barium salts Ba + O2 ( g ) 10 oxide is as shown in the below. Potassium nitrate decomposes to form strontium or barium peroxide just on normal heating in oxygen under high,. In oxygen gas to for barium oxide in pure oxygen gives barium to..., Important questions Chapter 1 chemical Reactions and equations if it is heated in oxygen PDF of best Solutions. Ii Convert the equation in part i into an ionic equation to:! Removal of oxygen total number of atoms pure oxygen gives barium peroxide just on normal heating oxygen! Thermal decomposition of solid barium oxide is more reactive with air than magnesium free sample papers, Notes Important! Nacl + Ba2SO4 an unstable acid that breaks down to form potassium nitrite and oxygen has charge! Do you start with in monopoly revolution a thin passivating layer of BaO the. 
| 2023-03-24 23:19:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6080472469329834, "perplexity": 5822.233906140389}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00092.warc.gz"} |
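A balanced equation such as 2 Ba + O2 --> 2 BaO can be verified mechanically by counting atoms on each side. A minimal Python sketch of that check (an assumption of ours, not from the page above: the simple parser only handles formulas like `Ba`, `O2`, `BaO2` without parentheses, and the helper names are invented):

```python
import re
from collections import Counter

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula such as 'BaO2' (no parentheses)."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += coefficient * (int(num) if num else 1)
    return counts

def is_balanced(reactants, products):
    """Each side is a list of (coefficient, formula) pairs."""
    left, right = Counter(), Counter()
    for coeff, formula in reactants:
        left += count_atoms(formula, coeff)
    for coeff, formula in products:
        right += count_atoms(formula, coeff)
    return left == right

print(is_balanced([(2, "Ba"), (1, "O2")], [(2, "BaO")]))       # True
print(is_balanced([(1, "BaCO3")], [(1, "BaO"), (1, "CO2")]))   # True
print(is_balanced([(1, "Ba"), (1, "O2")], [(1, "BaO")]))       # False
```

The same check applies to any of the equations above, e.g. the peroxide decomposition 2 BaO2 + 2 H2O --> 2 Ba(OH)2 + O2 once parentheses are expanded by hand.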
https://www.snapxam.com/calculators/sum-rule-of-differentiation-calculator | # Sum rule of differentiation Calculator
## Get detailed solutions to your math problems with our Sum rule of differentiation step-by-step calculator. Practice your math skills and learn step by step with our math solver. Check out all of our online calculators here!
### Difficult Problems
1
Solved example of sum rule of differentiation
$\frac{d}{dx}\left(2x-1\cdot 4\cdot \ln\left(x+2\right)\right)$
2
Multiply $-1$ times $4$
$\frac{d}{dx}\left(2x-4\ln\left(x+2\right)\right)$
3
The derivative of a sum of two functions is the sum of the derivatives of each function
$\frac{d}{dx}\left(2x\right)+\frac{d}{dx}\left(-4\ln\left(x+2\right)\right)$
4
The derivative of the linear function times a constant, is equal to the constant
$2+\frac{d}{dx}\left(-4\ln\left(x+2\right)\right)$
5
The derivative of a function multiplied by a constant is equal to the constant times the derivative of the function
$2-4\frac{d}{dx}\left(\ln\left(x+2\right)\right)$
6
The derivative of the natural logarithm of a function is equal to the derivative of the function divided by that function. If $f(x)=\ln a$ (where $a$ is a function of $x$), then $\displaystyle f'(x)=\frac{a'}{a}$
$2-4\left(\frac{1}{x+2}\right)\frac{d}{dx}\left(x+2\right)$
7
Apply the formula: $a\frac{1}{x}=\frac{a}{x}$, where $a=-4$ and $x=x+2$
$2+\frac{-4}{x+2}\cdot\frac{d}{dx}\left(x+2\right)$
8
The derivative of a sum of two functions is the sum of the derivatives of each function
$2+\frac{-4}{x+2}\left(\frac{d}{dx}\left(x\right)+\frac{d}{dx}\left(2\right)\right)$
$x+0=x$, where $x$ is any expression
$\frac{-4}{x+2}\cdot\frac{d}{dx}\left(x\right)$
9
The derivative of the constant function ($2$) is equal to zero
$2+\frac{-4}{x+2}\cdot\frac{d}{dx}\left(x\right)$
Any expression multiplied by $1$ is equal to itself
$\frac{-4}{x+2}$
10
The derivative of the linear function is equal to $1$
$2+\frac{-4}{x+2}$
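As a quick cross-check of the final result above, the claimed derivative $2+\frac{-4}{x+2}$ can be compared against a central finite difference of the original expression; a small self-contained Python sketch (the helper names are ours):

```python
import math

def f(x):
    # the original expression: 2x - 1*4*ln(x + 2)
    return 2*x - 4*math.log(x + 2)

def fprime(x):
    # the result obtained in the final step: 2 + (-4)/(x + 2)
    return 2 - 4/(x + 2)

def numeric_derivative(g, x, h=1e-6):
    """Central finite difference, an independent numerical estimate of g'(x)."""
    return (g(x + h) - g(x - h)) / (2*h)

for x0 in (0.0, 1.5, 10.0):
    assert abs(numeric_derivative(f, x0) - fprime(x0)) < 1e-5
print("2 - 4/(x+2) matches the numerical derivative")
```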
| 2019-11-19 21:28:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9076746106147766, "perplexity": 401.97083074531383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670255.18/warc/CC-MAIN-20191119195450-20191119223450-00504.warc.gz"} |
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-7-exponents-and-exponential-functions-7-1-zero-and-negative-exponents-practice-and-problem-solving-exercises-page-418/39 | ## Algebra 1
$-225$
We rewrite the given expression as a division problem: $3r\div s^{-2}$ The negative exponent rule states that for every nonzero number $a$ and integer $n$, $a^{-n}=\frac{1}{a^n}$. We use this rule to rewrite the expression: $3r\div\frac{1}{s^2}$ To divide by a fraction, we multiply by the reciprocal: $3r\times s^2$ We plug in the values for $r$ and $s$: $3(-3)\times5^2$ The order of operations states that first we perform operations inside grouping symbols, such as parentheses, brackets, and fraction bars. Then, we simplify powers. Then, we multiply and divide from left to right. We follow the order of operations to simplify: First, we simplify powers: $3(-3)\times25$ Then, we multiply from left to right: $-9\times25=-225$ | 2020-10-23 10:55:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9614777565002441, "perplexity": 235.6297099497191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881369.4/warc/CC-MAIN-20201023102435-20201023132435-00014.warc.gz"} |
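The arithmetic above is easy to reproduce; a short Python sketch of the same evaluation (the variable names are ours):

```python
r, s = -3, 5

# Negative exponent rule: dividing by s**-2 is the same as multiplying by s**2,
# so 3r / s**-2 = 3 * r * s**2 = 3 * (-3) * 25.
value = 3 * r * s**2
print(value)  # -225

# Cross-check against the literal expression 3r / s**-2 (floating point).
assert abs(3 * r / s**-2 - value) < 1e-9
```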
https://physics.hmc.edu/colloquium/?page=8 | ## Colloquium
Junior and senior physics majors attend our biweekly colloquium series, held on Tuesday afternoons at 4:30 pm in Shanahan B460. The talks are open to all students and to the public, and are frequently attended by scientists from the other Claremont Colleges, Cal Poly Pomona, and others. The series features speakers from a broad range of institutions and fields of physics.
- Sept. 21, 2010: Nine HMC Physics Majors, Harvey Mudd College. "Summer 2010 Off-Campus Research." Kali Allison, John Bremseth, Theo DuBose, John Grasel, Robert Hoyt, Cecily Keppel, Kyle Luh, Shaun Pacheco, and Susanna Todaro describe their summer research experiences.
- Sept. 7, 2010: Peter Saeta, Harvey Mudd College. "Physics and Engineering in the Village." Are you tired of having your work appreciated? Does it embarrass you when people celebrate your achievements by cheering, singing, and dancing? Yes? Well, then I don’t recommend working on a water and solar-power project in Africa. Engineering students Rob Best (’10), Isabel Bush, Evann Gonzales, Ozzie Gooen (all ’12) and I spent 6 weeks installing photovoltaic panels, a solar-powered ...
- April 20, 2010: John Armstrong (’69), Jet Propulsion Laboratory. "Doppler Tracking, Pulsar Timing and the Sensitivity of Low-Frequency Gravitational Wave Searches." Gravitational waves (GWs) are predicted across a spectrum ranging from ~kilohertz to femtohertz. Gravity wave detections and subsequent detailed waveform study will give information on astrophysical sources unavailable with any other method. The GW spectrum divides into Fourier bands, depending on detector technology. In the low-frequency (~millihertz) and very-low-frequency (~nanohertz) bands, detectors involve spacecraft Doppler tracking and pulsar timing, respectively. ...
- April 6, 2010: Several HMC Professors, Harvey Mudd College. "Recent Developments in Physics." The Wilkinson Microwave Anisotropy Probe (Ann Esin); Direct Evidence for Dark Matter (Ann Esin); Quantum Teleportation (Theresa Lynn); Negative Index of Refraction Materials (Peter Saeta).
- March 23, 2010: Thomas Helliwell, Harvey Mudd College. "Dark Energy and Einstein’s Biggest Blunder." According to quantum field theory, vacuum is not nothing, but probably contains an enormous amount of energy. A primary effect of this energy should be on gravitation on a cosmological scale. In fact, in 1917 Einstein introduced something very similar into his gravitational field equations, the so-called “cosmological constant”, to overcome what he thought was a flaw in the equations. ...
- March 2, 2010: Kai-Mei Fu, Hewlett Packard Labs. "Optical Spintronics for Quantum Information Processing and Magnetic Sensing." The optical detection and control of solid-state spins has exciting applications in the fields of quantum information processing and magnetic sensing. In the first part of the talk I will show how optical pulses can be used to measure the three fundamental relaxation times of electrons bound to donors in GaAs: population relaxation $$T_1$$, inhomogeneous dephasing \( T_2^* \) ...
- Feb. 16, 2010: Matthew Rakher, National Institute of Standards and Technology. "Quantum Optics with Quantum Dots." The quantum mechanical nature of single atoms or molecules can be very difficult to measure in the laboratory. However, recent progress using atomic-like, solid-state systems has made such measurements more accessible. In particular, the semiconductor quantum dot (QD) has developed into a widely-used platform for conducting experiments at the intersection of quantum optics and condensed matter physics. Combined with nanofabrication ...
- Feb. 9, 2010: Gerardo Dominguez, University of California at San Diego. "Isotope Studies in Natural Systems and Their Applications." The abundance of isotopes of an element can vary as a function of time and space. A thorough understanding of the physical and chemical factors that underlie these variations can be used to reconstruct the natural history of the Earth, the planets, and even the interstellar medium. In this talk, I will discuss factors that lead to small but measurable ...
- Feb. 2, 2010: Alexander Sushkov, Yale University. "Why Does the Universe Have More Matter than Anti-matter? A Search for Violation of Parity and Time-reversal Symmetries." Last year’s Nobel Prize in physics was awarded to Nambu, Kobayashi, and Maskawa for their study of nature’s broken discrete symmetries (charge conjugation C, parity P, and time reversal T). However, what we know about the breaking of these symmetries is not enough to explain the apparent matter-antimatter asymmetry of the universe. One of the ways to study the breaking ...
- Jan. 26, 2010: Igor Teper, Stanford University. "Cavity-Aided Quantum Measurement and Dynamics with Cold Atoms." The exquisite control of internal and external degrees of freedom possible for laser-cooled atoms makes them ideal test particles for the study of a wide variety of physical effects. One of the most promising frontiers of research in atomic clocks and sensors is quantum metrology, the engineering of quantum states to improve sensor performance (i.e. using quantum mechanics to beat ...
- Nov. 10, 2009: David Garofalo, Jet Propulsion Laboratory. "Does the Universe Know About Black Hole Spin?" The cosmic evolution of active galaxies is influenced by their tiny central regions where the most energetic steady outflows of matter and energy in the known universe are produced. The paradigm that has emerged for these powerful engines involves an interaction between magnetized accretion flows and rotating supermassive black holes. I describe recent work according to which the power produced ...
- Oct. 27, 2009: Everett Lipman, University of California at Santa Barbara. "Tracking the Motion of a Biomolecular Machine with a Nanoscale Optical Encoder." Molecular motors are essential components of the machinery of life, enabling processes such as DNA replication, transcription, and repair, protein synthesis, and muscle movement. In order to understand the details of how they function, it is necessary to track one at a time, a task that is complicated by the disparity between their sizes (about 5 nm) and the resolution ...
- Oct. 6, 2009: Janet Scheel, Occidental College. "Go With the Flow: Numerical Simulations of Turbulence in Fluids." Turbulent systems are all around us, from waves crashing on our beaches, to smoke rising from the fires in our mountains, to the air that makes our airline flights bouncy and sometimes downright frightening. Turbulent systems are not well understood. Rayleigh-Benard convection is a more simplified system which captures some of the key features of turbulence, including thermal plumes and ...
- Sept. 22, 2009: Ann Esin, Harvey Mudd College. "The Early Lives of Sun-Like Stars." The study of star formation is currently one of the most active areas of astrophysics, partly due to its connection with the origin of planetary systems. Because young proto-stars tend to be heavily embedded in gas and dust clouds, detailed imaging of these objects during early critical stages of evolution is very difficult, which makes testing theoretical models of stellar ...
- Sept. 8, 2009: Several HMC Physics Majors, Harvey Mudd College. "Summer 2009 Off-Campus Research." We will hear from Nicole Crisosto, Greg Harding, Hong Sio, David Miller, Cecily Keppel, Trystan Koch, Alex Hagen, Bonnie Gordon, Arthur Eigenbrot, and Alyssa Dray | 2018-07-19 20:55:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3558503985404968, "perplexity": 2186.258159899365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true,
"remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591296.46/warc/CC-MAIN-20180719203515-20180719223515-00532.warc.gz"} |
http://mathoverflow.net/feeds/question/83350 Flatness for infinity functors (MathOverflow, http://mathoverflow.net/questions/83350/flatness-for-infinity-functors). Question by David Carchedi (2011-12-13): It is well known that for ordinary categories, if $C$ has finite limits and $D$ is cocomplete, and $A:C \to D$ is left-exact (i.e. preserves finite limits) then the left Kan extension of $A$ along the Yoneda embedding $y:C \hookrightarrow Set^{C^{op}}$ is left-exact. I'm pretty sure this is still true for $\left(\infty,1\right)$-categories, once we replace the role of presheaves with that of $\infty$-presheaves, but is this written up somewhere? Answer by David Carchedi (2012-04-27, http://mathoverflow.net/questions/83350/flatness-for-infinity-functors/95343#95343): For reference, at least when $D$ is an infinity topos, which I believe is probably necessary, this is Proposition 6.1.5.2 in HTT. | 2013-05-18 17:30:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9688164591789246, "perplexity": 1628.9178254916285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true,
"remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00033-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://docs.feelpp.org/cases/0.110/thermoelectric/electromagnet/README.html | # ElectroMagnet
In this example, we will estimate the rise in temperature due to Joule losses in a stranded conductor. An electrical potential $V_D$ is applied to the entry/exit of the conductor, which is also water cooled.
## 1. Running the case
The command line to run the linear version of this case is
``mpirun -np 4 feelpp_toolbox_thermoelectric --case "github:{path:toolboxes/thermoelectric/ElectroMagnets/HL-31_H1}"``
The command line to run the non-linear version of this case is
``mpirun -np 4 feelpp_toolbox_thermoelectric --case "github:{path:toolboxes/thermoelectric/ElectroMagnets/HL-31_H1}" --case.config-file HL-31_H1_nonlinear.cfg``
## 2. Data files
HL-31_H1 Github directory with:
The mesh can be found on Girder
## 3. Geometry
The conductor consists of a solenoid, which is one helix of a magnet.
The mesh can be retrieved from Girder with the following ID: 5af59e88b0e9574027047fc0 (see girder).
## 4. Input parameters
| Name | Description | Value | Unit |
| --- | --- | --- | --- |
| $\sigma_0$ | electric conductivity at the reference temperature | 53e3 | $S/mm$ |
| $V_D$ | electrical potential | 9 | $V$ |
| $\alpha$ | temperature coefficient | 3.6e-3 | $K^{-1}$ |
| L | Lorentz number | 2.47e-8 | $W\cdot\Omega\cdot K^{-2}$ |
| $T_0$ | reference temperature | 290 | $K$ |
| h | heat transfer coefficient | 0.085 | $W\cdot mm^{-2}\cdot K^{-1}$ |
| $T_w$ | water temperature | 290 | $K$ |
``````"Parameters":
{
"sigma0":53e3, //[ S/mm ]
"T0":290, //[ K ]
"alpha":3.6e-3, //[ 1/K ]
"Lorentz":2.47e-8, //[ W*Omega/(K*K) ]
"h": "0.085", //[ W/(mm^2*K) ]
"Tw": "290", //[ K ]
"VD": "9" //[ V ]
},``````
### 4.1. Model & Toolbox
• This problem is fully described by a thermo-electric model, namely a Poisson equation for the electrical potential $V$ and a standard heat equation for the temperature field $T$ with Joule losses as a source term. Due to the dependence of the thermal and electric conductivities on the temperature, the problem is non-linear. We can describe the conductivities with the following laws:
\begin{align*} \sigma(T) &= \frac{\sigma_0}{1+\alpha(T-T_0)}\\ k(T) &= \sigma(T)\cdot L\cdot T \end{align*}
``````"k":"sigma0*Lorentz*heat_T/(1+alpha*(heat_T-T0)):sigma0:alpha:T0:Lorentz:heat_T", //[ W/(mm*K) ]
"sigma":"sigma0/(1+alpha*(heat_T-T0))+0*heat_T:sigma0:alpha:T0:heat_T"// [S/mm ]``````
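As a quick illustration, the two laws can be evaluated directly with the parameter values from the tables above; a minimal Python sketch, not part of the Feel++ toolbox itself (units are mm-based, matching the case files):

```python
# Temperature-dependent conductivity laws from the model above.
sigma0 = 53e3      # S/mm, electric conductivity at the reference temperature
T0 = 290.0         # K, reference temperature
alpha = 3.6e-3     # 1/K, temperature coefficient
lorentz = 2.47e-8  # W*Ohm/K^2, Lorentz number

def sigma(T):
    """Electric conductivity: sigma(T) = sigma0 / (1 + alpha*(T - T0))."""
    return sigma0 / (1 + alpha * (T - T0))

def k(T):
    """Thermal conductivity via the Wiedemann-Franz law: k(T) = sigma(T)*L*T."""
    return sigma(T) * lorentz * T

# At the reference temperature, sigma reduces to sigma0.
assert sigma(T0) == sigma0
print(sigma(320.0), k(320.0))  # conductivity drops as T rises above T0
```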
• toolbox: thermoelectric
### 4.2. Materials
| Name | Description | Marker | Value | Unit |
| --- | --- | --- | --- | --- |
| $\sigma_0$ | electric conductivity | Cu | 53e3 | $S\cdot mm^{-1}$ |
### 4.3. Boundary conditions
The boundary conditions for the electrical problem are introduced as simple Dirichlet boundary conditions for the electric potential on the entry/exit of the conductor. For the remaining faces, as no current is flowing through these faces, we add homogeneous Neumann conditions.
| Marker | Type | Value |
| --- | --- | --- |
| V0 | Dirichlet | 0 |
| V1 | Dirichlet | $V_D$ |
| Rint, Rext, Interface, GR_1_Interface | Neumann | 0 |
``````"electric-potential":
{
"Dirichlet":
{
"V0":
{
"expr":"0" // V_0 [ V ]
},
"V1":
{
"expr":"VD:VD"
}
}
}``````
As for the heat equation, the forced water cooling is modeled by a Robin boundary condition, with $T_w$ the temperature of the coolant and $h$ a heat exchange coefficient.
| Marker | Type | Value |
| --- | --- | --- |
| Rint, Rext | Robin | $h(T-T_w)$ |
| V0, V1, Interface, GR_1_Interface | Neumann | 0 |
``````"temperature":
{
"Robin":
{
"Rint":
{
"expr1":"h:h",
"expr2":"Tw:Tw"
},
"Rext":
{
"expr1":"h:h",
"expr2":"Tw:Tw"
}
},``````
## 5. Outputs
The main fields of concern are the electric potential $V$, the temperature $T$ and the current density $\mathbf{j}$ or the electric field $\mathbf{E}$ presented in the following figure.
``````"PostProcess":
{
"use-model-name":1,
"thermo-electric":
{
"Exports":
{
"fields":["heat.temperature","electric.electric-potential","electric.electric-field","electric.current-density","heat.pid"]
}
}
}`````` | 2023-03-20 15:46:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9451534152030945, "perplexity": 5662.089376755598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00188.warc.gz"} |
https://www.studypug.com/sg/sg-gce-n(a)-level-a-maths/integration-of-rational-functions-by-partial-fractions | # Integration of rational functions by partial fractions
##### Examples
###### Lessons
1. CASE 1: Denominator is a product of linear factors with no repeats.
Evaluate the integral.
1. $\int \frac{10x^2+20x-10}{x(x+2)(2x-1)}dx$
2. $\int \frac{18}{x^3-9x}dx$
2. CASE 2: Denominator is a product of linear factors with repeats.
Evaluate the integral.
1. $\int \frac{x^2-4}{(x-1)^3}dx$
2. $\int \frac{18x}{x^3-9x^2+15x+25}dx$
3. CASE 3: Denominator contains irreducible quadratic factors with no repeats.
*** recall: $\int \frac{dx}{x^2+a^2}=\frac{1}{a}\tan^{-1}(\frac{x}{a})+C$ ***
Evaluate the integral.
1. $\int \frac{3x^2-x+9}{x^3+9x}dx$
4. CASE 4: Denominator contains irreducible quadratic factors with repeats.
Evaluate $\int \frac{-3x^3+3x^2-3x+2}{x(x^2+1)^2}dx$
1. First perform long division, then partial fraction decomposition
Evaluate the integral.
1. $\int \frac{-3x^4+13x^3+22x^2-100x-10}{x^3-3x^2-10x}dx$
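For CASE 1 (distinct linear factors), the decomposition coefficients can be found with the classical cover-up (Heaviside) method; an illustrative Python sketch (the function name is ours), applied to $\frac{18}{x^3-9x}=\frac{18}{x(x+3)(x-3)}$ from the examples above:

```python
def coverup_coefficients(numerator, roots):
    """For N(x) / prod_i (x - r_i) with distinct roots r_i, the coefficient of
    1/(x - r) in the partial fraction decomposition is N(r) / prod_{s != r} (r - s)."""
    coeffs = {}
    for r in roots:
        denom = 1.0
        for s in roots:
            if s != r:
                denom *= (r - s)
        coeffs[r] = numerator(r) / denom
    return coeffs

# 18/(x^3 - 9x) = 18/(x (x+3) (x-3)) = A/x + B/(x+3) + C/(x-3)
coeffs = coverup_coefficients(lambda x: 18.0, [0.0, -3.0, 3.0])
print(coeffs)  # {0.0: -2.0, -3.0: 1.0, 3.0: 1.0}

# Sanity check: the decomposition reproduces the original function.
x = 1.7
assert abs(18.0 / (x**3 - 9*x) - sum(A / (x - r) for r, A in coeffs.items())) < 1e-12

# Integrating term by term then gives -2*ln|x| + ln|x+3| + ln|x-3| + C.
```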
###### Topic Notes
In this lesson, we will focus on integrating rational functions, which requires the use of partial fraction decomposition. Once the fraction has been split into smaller pieces, it will be easier to integrate. Just make sure that the question really requires partial fractions before using this method. First, we will take a look at fractions where the denominator is a product of linear factors with no repeats. Second, we will take a look at the case where the denominator does have linear factors with repeats. Third, we will tackle more advanced questions where we have quadratic factors which cannot be factored into linear factors but have no repeats. Lastly, we will look at the hardest type of partial fractions question, where the denominator has irreducible quadratic factors AND repeats. In the end, we will take a look at questions which involve performing long division before we can use partial fraction decomposition.
NOTE: 4 cases of partial fraction decomposition:
CASE 1: Denominator is a product of linear factors with no repeats.
i.e.
$\frac{7x}{(4x+1)(3x-5)}=\frac{A}{4x+1}+\frac{B}{3x-5}$
$\frac{5}{x(x^2-9)}=\frac{5}{x(x+3)(x-3)}=\frac{A}{x}+\frac{B}{x+3}+\frac{C}{x-3}$
CASE 2: Denominator is a product of linear factors with repeats.
i.e.
$\frac{x-6}{(4x+1)(3x-5)^2}=\frac{A}{4x+1}+\frac{B}{3x-5}+\frac{C}{(3x-5)^2}$
$\frac{1}{x^2(7x-4)(x+5)^3}=\frac{A}{x}+\frac{B}{x^2}+\frac{C}{7x-4}+\frac{D}{x+5}+\frac{E}{(x+5)^2}+\frac{F}{(x+5)^3}$
CASE 3: Denominator contains irreducible quadratic factors with no repeats.
i.e.
$\frac{8x^2}{(x-3)(x^2+x+1)}=\frac{A}{x-3}+\frac{Bx+C}{x^2+x+1}$
$\frac{5}{x(x^2+9)}=\frac{A}{x}+\frac{Bx+C}{x^2+9}$
CASE 4: Denominator contains irreducible quadratic factors with repeats.
i.e.
$\frac{5x^2}{(x-3)(x^2+x+1)^2}=\frac{A}{x-3}+\frac{Bx+C}{x^2+x+1}+\frac{Dx+E}{(x^2+x+1)^2}$
$\frac{1+x^{10}}{(x^3-8)(x^2+25)^3}=\frac{1+x^{10}}{(x-2)(x^2+2x+4)(x^2+25)^3}=\frac{A}{x-2}+\frac{Bx+C}{x^2+2x+4}+\frac{Dx+E}{x^2+25}+\frac{Fx+G}{(x^2+25)^2}+\frac{Hx+I}{(x^2+25)^3}$
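The CASE 1 example $\int \frac{18}{x^3-9x}dx$ above can be checked mechanically. A minimal Python sketch, again assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = 18 / (x**3 - 9*x)   # denominator factors as x(x - 3)(x + 3): CASE 1

# Partial fraction decomposition: A/x + B/(x-3) + C/(x+3)
pf = sp.apart(f, x)
assert sp.simplify(pf - f) == 0                                # equals the original
assert sp.simplify(pf - (-2/x + 1/(x - 3) + 1/(x + 3))) == 0   # A=-2, B=1, C=1

# Each piece integrates to a logarithm; differentiating recovers f
F = sp.integrate(pf, x)
assert sp.simplify(sp.diff(F, x) - f) == 0
```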
http://microbialinformatics.github.io/slides/Lecture28 | # Microbial Informatics
## Lecture 28
Patrick D. Schloss, PhD (microbialinformatics.github.io)
Department of Microbiology & Immunology
## Announcements
• Final project (due 12/16/2014)
• Should be a program that others can use to do something useful (I have ideas if you need one, but really...)
• Would be smart to include a test file
• Create a public repository with documentation in README file and license
• Will have class on Friday, but not next Tuesday
## Review
• We've talked a lot about the R programming language and how we can use it to do useful things and help with our analyses
• The tools you have now will enable you to do many many things
• TDD is a software development process that results in a rapid development cycle
## Learning objectives
• Continue development of TDD skills
• Variable scoping
## TDD is a software development process where you...
1. Create a failing test that describes a realistic problem you might face
2. Make sure the test fails / see which tests fail
3. Write just enough code to make the test pass
4. Run all tests to make sure you haven't broken other tests
5. Simplify the code
6. Repeat
## Introducing: testthat
• Problems with what we've been doing
• This process can get tedious
• It's not automated
• testthat is an R package for doing testing
• Put test code into a separate file (test-????.R)
• Code as normal in your R script file (????.R)
## test-pschloss.R
words <- readPaper("../../assignment04/mothur.txt")
expect_that(words, is_a("list"))
expect_that(length(words[[1]]), equals(2056))
expect_that(sum(grepl("\\W", words[[1]])), equals(0))
expect_that(wordCount("mothur", words), equals(25))
expect_that(wordCount("the", words), equals(133))
expect_that(wordCount(c("mothur", "the"), words), equals(c(25, 133)))
## pschloss.R
readPaper <- function(file){
words <- scan(file, what="")
words <- gsub("\\W", "", words)
words <- tolower(words)
return(list(words))
}
wordCount <- function(word, wordList){
word <- tolower(word)
word.count <- numeric()
for(w in word){
word.count[w] <- sum(wordList[[1]]==w)
}
names(word.count) <- NULL
return(word.count)
}
## How to run...
library(testthat)
source("pschloss.R")
test_dir("./")
## ......
## expect_that command options...
• is_true: truth
• is_false: falsehood
• is_a: inheritance
• equals: equality with numerical tolerance
• is_identical_to: exact identity
• is_equivalent_to: equality ignoring attributes
• matches: string matching
• prints_text: output matching
• throws_error: error matching
• gives_warning: warning matching
• shows_message: message matching
• takes_less_than: performance
## Other elements of testthat
• Test file must have a test- prefix
• Can get fancy by defining your own expectation functions
• Can establish specific contexts with environmental settings, etc.
• Can automate testing so that it runs the tests whenever you update the source code
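The same red/green loop is language-independent. For comparison, here is a rough Python analogue of the readPaper/wordCount pair, with plain assertions standing in for expect_that (the sample text and counts are made up for illustration, not the mothur.txt data):

```python
import re

def read_paper(text):
    # Mirror readPaper: split on whitespace, strip non-word characters, lowercase
    words = [re.sub(r'\W', '', w).lower() for w in text.split()]
    return [w for w in words if w]

def word_count(words_to_find, word_list):
    # Mirror wordCount: count occurrences of each query word
    return [sum(w == q.lower() for w in word_list) for q in words_to_find]

# Tests first (the "test-" file), then just enough code to make them pass
words = read_paper("Mothur is great. mothur, the tool; the THE end.")
assert isinstance(words, list)
assert word_count(["mothur"], words) == [2]
assert word_count(["the"], words) == [3]
assert word_count(["mothur", "the"], words) == [2, 3]
```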
## Exercise
• Here is a toy DNA sequence: CTACATGATCCTACCGCTCAACTACCAATCGTAACC
• Create a function that will return a vector containing the start and end positions of the start and stop codons
• Do this in a TDD approach
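One possible solution sketch, shown in Python for brevity (the course uses R, so translate accordingly); it assumes the exercise wants the first ATG and the first in-frame stop codon after it, reported as 1-based positions:

```python
def codon_positions(seq):
    """Return (start_begin, start_end, stop_begin, stop_end), 1-based inclusive."""
    start = seq.find("ATG")               # first start codon (0-based index)
    if start == -1:
        return None
    stops = {"TAA", "TAG", "TGA"}
    # Walk the in-frame codons after the start codon
    for i in range(start + 3, len(seq) - 2, 3):
        if seq[i:i+3] in stops:
            return (start + 1, start + 3, i + 1, i + 3)
    return None

dna = "CTACATGATCCTACCGCTCAACTACCAATCGTAACC"
print(codon_positions(dna))   # (5, 7, 32, 34)
```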
## Variable scoping
• To this point we've largely ignored the issue of where our variables live and where they're "allowed to go"
• This has to do with a concept of variable scoping and the various environments that are used within R
## Consider this example...
dna <- "ATGCCTGACCTTTGCATACAA"
getRevComp <- function(sequence){
rev.sequence <- paste(rev(unlist(strsplit(sequence, ""))), collapse="")
comp.rev.sequence <- chartr("ATGC", "TACG", rev.sequence)
return(comp.rev.sequence)
}
• Where can dna be used?
• Where can getRevComp be used?
• Where can rev.sequence be used?
## What happens if...
getRevComp <- function(sequence){
rev.sequence <- paste(rev(unlist(strsplit(sequence, ""))), collapse="")
comp.rev.sequence <- chartr("ATGC", "TACG", rev.sequence)
print(dna)   # <----
return(comp.rev.sequence)
}
getRevComp(dna)
## What happens if...
rev.sequence
## Error in eval(expr, envir, enclos): object 'rev.sequence' not found
## What happens if...
getRevComp <- function(sequence){
rev.sequence <- paste(rev(unlist(strsplit(sequence, ""))), collapse="")
comp.rev.sequence <- chartr("ATGC", "TACG", rev.sequence)
dna <- comp.rev.sequence
return(comp.rev.sequence)
}
dna
getRevComp(dna)
dna
## [1] "ATGCCTGACCTTTGCATACAA"
## [1] "TTGTATGCAAAGGTCAGGCAT"
## [1] "ATGCCTGACCTTTGCATACAA"
## What's happening locally?
ls()
## [1] "dna" "encoding" "getRevComp" "inputFile" "readPaper"
## [6] "wordCount"
## Quick summary
• At the time getRevComp is called, there are the objects rev.sequence and comp.rev.sequence created within getRevComp, plus those objects from the environment getRevComp is sitting in, namely dna
• But it is important to note that the reverse is not true. The outermost environment is not affected by what goes on inside getRevComp (e.g. dna was not changed). This means that functions have no side effects
• So you can have name conflicts between the objects within and outside your functions, but this is generally not a good idea. Sometimes people will use l_ as a prefix on all variables within a function.
• The upshot is that objects exist within a hierarchy
## How do we write up the hierarchy?
• As we've seen we can only read variables from up the hierarchy. We can't write variables up the hierarchy
• Unless we use the superassignment (<<-) operator
getRevComp <- function(sequence){
rev.sequence <- paste(rev(unlist(strsplit(sequence, ""))), collapse="")
comp.rev.sequence <- chartr("ATGC", "TACG", rev.sequence)
dna <<- comp.rev.sequence
return(comp.rev.sequence)
}
dna
getRevComp(dna)
dna
## [1] "ATGCCTGACCTTTGCATACAA"
## [1] "TTGTATGCAAAGGTCAGGCAT"
## [1] "TTGTATGCAAAGGTCAGGCAT"
## Should you use the superassignment operator?
• This creates global variables, which are controversial
• Problems caused by potential side effects and difficulty debugging code
• Benefits are that they can make the code easier to read/write
• Be careful
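For comparison, Python draws the same line between reading and writing up the hierarchy: plain assignment inside a function is local, and the global keyword plays the role of <<-, with the same caveats (a sketch, not part of the lecture code):

```python
dna = "ATGCCTGACCTTTGCATACAA"

def rev_comp_local(sequence):
    # Plain assignment creates a *local* dna; the outer one is untouched
    dna = sequence[::-1].translate(str.maketrans("ATGC", "TACG"))
    return dna

def rev_comp_global(sequence):
    # 'global' writes up the hierarchy, like R's <<- superassignment
    global dna
    dna = sequence[::-1].translate(str.maketrans("ATGC", "TACG"))
    return dna

print(rev_comp_local(dna))    # TTGTATGCAAAGGTCAGGCAT
print(dna)                    # ATGCCTGACCTTTGCATACAA  (unchanged)
print(rev_comp_global(dna))   # TTGTATGCAAAGGTCAGGCAT
print(dna)                    # TTGTATGCAAAGGTCAGGCAT  (overwritten)
```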
https://www.physicsforums.com/threads/rocket-problem-solve-using-the-rocket-equation.991470/ | # Rocket Problem -- Solve using the Rocket Equation
Homework Statement:
Halliday Principles of Physics 10th edition
The mass of the rocket is 50 kg, the mass of the fuel is 450 kg
the rocket's maximum v rel=2 km/s
if R=10kg/s, what velocity does the rocket moves when it consumes all its fuel?
solve when the acceleration of the rocket is 20 m/s^2
Relevant Equations:
Ma=-Rv (R is the rate at which the rocket loses mass and v is the speed of the exhaust relative to the rocket)
I tried the second rocket equation
vf = vi + v rel * ln(Mi/Mf)
but it gives out approximately 4900 m/s for the answer
but the answer is 4160 m/s
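For reference, plugging the stated numbers into the ideal rocket equation with v_i = 0 gives roughly 4605 m/s, matching neither value quoted above; a quick numeric check:

```python
import math

v_rel = 2000.0        # exhaust speed relative to rocket, m/s
M_i = 50.0 + 450.0    # initial mass: rocket + fuel, kg
M_f = 50.0            # final mass after all fuel is burned, kg

# Tsiolkovsky rocket equation, starting from rest (v_i = 0)
v_f = v_rel * math.log(M_i / M_f)
print(round(v_f, 1))  # 4605.2
```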
https://www.groundai.com/project/experimental-violation-of-multipartite-bell-inequalities-with-trapped-ions/ | Experimental violation of multipartite Bell inequalities with trapped ions
# Experimental violation of multipartite Bell inequalities with trapped ions
B. P. Lanyon, M. Zwerger, P. Jurcevic, C. Hempel, W. Dür, H. J. Briegel, R. Blatt, and C. F. Roos Institut für Quantenoptik und Quanteninformation der Österreichischen Akademie der Wissenschaften, A-6020 Innsbruck, Austria
Institut für Experimentalphysik, Universität Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria
Institut für Theoretische Physik, Universität Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria
July 11, 2019
###### Abstract
We report on the experimental violation of multipartite Bell inequalities by entangled states of trapped ions. First we consider resource states for measurement-based quantum computation of between 3 and 7 ions and show that all strongly violate a Bell-type inequality for graph states, where the criterion for violation is a sufficiently high fidelity. Second we analyze GHZ states of up to 14 ions generated in a previous experiment using stronger Mermin-Klyshko inequalities, and show that in this case the violation of local realism increases exponentially with system size. These experiments represent a violation of multipartite Bell-type inequalities of deterministically prepared entangled states. In addition, the detection loophole is closed.
###### pacs:
03.65.Ud, 03.67.Lx, 03.67.Bg, 37.10.Ty
Introduction — How strong can physical correlations be? Bell inequalities set a bound on the possible strength of non-local correlations that could be explained by a theory based on some fundamental assumptions known as "local realism". Quantum mechanics predicts the existence of states which violate Bell’s inequality, rendering a description of these states by a local hidden variable (LHV) model impossible. While first discovered for bipartite systems in a two-measurement setting Be64 (), Bell inequalities have been extended to multi-measurement settings and multipartite systems, leading to a more profound violation for larger systems of different kinds We01 (); Mermin (); Sc05 (); Gu05 (); Ca08 ().
In particular, it was shown that all graph states violate local realism, where the possible violation increases exponentially with the number of qubits for certain types of states Gu05 (); Sc05 (); Ca08 (). Graph states He04 (); He06 () are a large class of multiqubit states that include a number of interesting, highly entangled states, such as the 2D cluster states Ra01b () or the GHZ states. They serve as resources for various tasks in quantum information processing, including measurement-based quantum computation (MBQC) Ra01 (); Br09 () or quantum error correction CSS (). The results of Gu05 (); Sc05 () (see also Di97 ()) provide an interesting connection between the usability of states for quantum information processing and the possibility to describe them by a LHV model.
Here we experimentally demonstrate the violation of multi-partite Bell-type inequalities for graph states generated with trapped ions. First we consider a range of graph states that find application in MBQC and observe strong violations in all cases. Second, for a different class of graph states, we investigate the scaling of the multi-partite Bell violation with system size and confirm an exponential increase: that is the quantum correlations in these systems become exponentially stronger than allowed by any LHV model.
To be more precise, in the first part of our work we consider graph states that allow one to perform single-qubit and two-qubit gates in MBQC, as well as resource states for measurement-based quantum error correction La13 (). That is, we demonstrate that not only the codewords of quantum error correction codes violate local realism Di97 (), but also the resource states for encoding and decoding and other computational tasks. In this part we make use of general Bell-type inequalities derived for all graph states in Ref. Gu05 (). We show that the Bell observable simply corresponds to the fidelity of the state, i.e. a violation is guaranteed by a sufficiently high fidelity. This allows the many previous experiments that quote fidelities to be reanalyzed to see if a Bell violation has been achieved.
For the purpose of investigating the scaling of Bell violations we consider a sub-class of graph states, for which stronger inequalities are available Mermin (); Sc05 (); Ca08 (), e.g. the Mermin-Klyshko inequalities for -qubit GHZ states Mermin (). We show that these Mermin-Klyshko inequalities Mermin () are violated by GHZ states from 2 to 14 qubits generated in previous experiments Mo11 (). In fact, we confirm an (exponentially) increasing violation with system size.
Multi-partite Bell violations for smaller system sizes were previously obtained with photons ExpPhotons (). Here specific 4-photon states encoding up to 6 qubits were considered. For trapped ions only two-qubit systems have previously been shown to violate a Bell inequality ExpIons (). Here we deal with larger systems and states with a clear operational meaning in measurement-based quantum information processing, where each qubit corresponds to a separate particle. Finally, our detection efficiency is such that we close the detection loophole.
Background — Graph states are defined via the underlying graph , which is a set of vertices and edges , that is . One defines an operator for every vertex , where and denote Pauli spin-1/2 operators. denotes the neighborhood of vertex and is given by all vertices connected to vertex by an edge. The graph state is the unique quantum state which fulfills for all , i.e. it is the common eigenstate of all operators . An equivalent definition starts with associating a qubit in state with every vertex and applying a controlled phase (CZ) gate between every pair of vertices connected by an edge, with . Graph states have important applications in the context of measurement-based quantum computation as resource states Ra01 (); Br09 () and quantum error correction CSS ().
In Gu05 () it was shown that all graph states give rise to a Bell inequality and that the graph state saturates it. Thus neither the correlations nor the quantum information processing that exploits these correlations can be accounted for by a LHV model. The inequality is constructed in the following way. One aims at writing down an operator (specifying certain correlations in the system) such that the expectation value for all LHV models is bounded by some value , while certain quantum states yield an expectation value larger than . Let denote the stabilizer Go96 () of a graph state . It is the group of the products of the operators and is given by with where denotes a subset of the vertices of . For the state corresponding to the empty graph, the generators of the stabilizer group are given by , and the stabilizer group is given by all possible combinations of and on the different qubits. For we have . Notice that for any non-trivial graph states (i.e. graph states with a non-empty edge set ), these operators are simply transformed via since , where , i.e. the stabilizing operators of the graph state specified above.
The normalized Bell operator is defined as , and we have (where, in quantum mechanics, for density matrix ). Let where the maximum is taken over all LHV models. For any non-trivial graph state Gu05 (). The maximization is generally hard to perform, but has been explicitly carried out for graph states with small in Gu05 (). The basic idea is to assign a fixed value ("hidden variable") or to each Pauli operator , and determine (numerically) the setting that yields a maximum value of . This then also provides an upper bound on all LHV models. The corresponding Bell inequality reads
$\langle B_n(G)\rangle \le D(G) \qquad (1)$
which is non-trivial whenever . For the states one finds Gu05 (), while we show in Sup () that and (see figure 1 for the different states). For fully connected graphs corresponding (up to a local basis change) to -qubit GHZ states , we obtain for (see Sup ()).
Any graph state fulfills , since the state is a eigenstate of all operators appearing in the sum that specifies . Hence it follows that the graph state maximally violates the graph Bell inequality (1), .
A straightforward calculation shows that the normalized Bell operator equals the projector onto the graph state: . This can be seen directly for the empty graph by noting that , and writing out the product for which yields all combinations of and . The result for a general graph state follows by transforming each operator via , together with . Thus, the expectation value equals the fidelity , where denotes the density matrix of the experimentally obtained graph state. As it is common practice to report on the fidelity this provides a simple way of reinvestigating earlier experiments.
In addition, this provides a possibility for measuring the fidelity of a graph state by measuring the stabilizers, which add up to . Although this method has the same exponential scaling behavior as full state tomography, it requires significantly fewer measurement settings.
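The statement that the normalized Bell operator equals the projector onto the graph state, so that $\langle B_n(G)\rangle$ is the fidelity, can be confirmed numerically for a small example. A sketch assuming numpy, using a 3-qubit linear graph as a hypothetical small case:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3
edges = [(0, 1), (1, 2)]                      # linear graph 0-1-2

# Build |G> by applying CZ (sign flip on |11>) to |+>^n
G = np.ones(2**n) / np.sqrt(2**n)
for (a, b) in edges:
    for idx in range(2**n):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[a] == 1 and bits[b] == 1:
            G[idx] *= -1

# Stabilizer generators g_a = X_a * prod_{b in N(a)} Z_b
def generator(a):
    ops = [I2] * n
    ops[a] = X
    for (u, v) in edges:
        if u == a: ops[v] = Z
        if v == a: ops[u] = Z
    return kron(ops)

gens = [generator(a) for a in range(n)]

# Average over the full stabilizer group (all subsets of generators)
B = np.zeros((2**n, 2**n))
for mask in range(2**n):
    term = np.eye(2**n)
    for a in range(n):
        if (mask >> a) & 1:
            term = term @ gens[a]
    B += term / 2**n

# B equals the projector |G><G|, so <B> for any state is its fidelity with |G>
assert np.allclose(B, np.outer(G, G))
```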
Results: Graph states for MBQC — The first group of graph states that we consider are resources for MBQC and are shown in figure 1. The four-qubit box cluster represents the smallest element of the 2D cluster (family) required to implement arbitrary quantum algorithms Ra01b (); Ra01 (); Br09 (). The four-qubit linear cluster state can be used to demonstrate a universal quantum logic gate set for MBQC La13 (); Wa05 (). The graph states allow for the demonstration of an -qubit measurement-based quantum error correction code La13 ().
Except for , all of these states were generated in a system of trapped ions and their application to MBQC was demonstrated in our recent paper La13 (). In that work, and in particular its accompanying supplementary material, one can find information on the experimental techniques used to prepare the states. In summary, qubits are encoded into the electronic state of Ca+ ions held in a radio-frequency linear Paul trap: each ion represents one qubit. After preparing each qubit into the electronic and motional ground state, graph states are generated deterministically and on demand using laser pulses which apply qubit-state dependent forces to the ion string. Additional details relevant to Bell inequality measurements are now described. The ions are typically 6 μm apart and it takes approximately 500 μs to generate the states. Individual qubits can be measured in any basis with near unit fidelity in 5 μs. The state belongs to the same family as the error correction graphs, i.e. , and was thus generated using exactly the method described in La13 ().
For each -qubit graph state shown in figure 1 we experimentally estimate each of the expectation values that are required to estimate . If this final number is larger than allowed by LHV models then the multi-partite Bell inequality is violated. The experimental uncertainty in each is the standard quantum projection noise that arises from using a finite number of repeated measurements to estimate an expectation value.
We note that the full density matrices for a subset of the graph states shown in figure 1 were presented in La13 (). We do not extract the data from these matrices but directly measure the observables in each case. No previous characterization of the states and has been done.
The results are summarized in table 1 and clearly show that all experimentally generated states violate their graph state inequalities by many tens of standard deviations. Recall that is equal to the state fidelity. For comparison, table 1 also presents the state fidelity measured in another way — by reconstructing the density matrix via full quantum state tomography and using . This approach is much more measurement-intensive, requiring the estimation of expectation values and was therefore not carried out for the 7-qubit state . The fidelities derived in these different ways are seen to overlap to within 1 standard deviation. In the supplementary material we give an explicit example of how the experimental value of for one graph state () was derived.
Results: scaling of violation with system size — In the second part of our work we are interested in investigating the scaling of the violation of multi-partite Bell inequalities with the system size. Table 1 presents the relative violation observed for the graph state inequalities, defined as the ratio of the quantum mechanical expectation value of the Bell observables and the maximal reachable value in a LHV model (). From this it is clear that while all the generated MBQC graph states violate their inequalities, the size of the violation does not change significantly with the size of the graph state. However, there is another class of Bell inequalities, the Mermin-Klyshko (MK) inequalities Mermin (), for which the quantum mechanical violation is predicted to increase exponentially with qubit number. The MK inequalities apply to the GHZ states , which are (up to local unitary operations) equivalent to graph states corresponding to a fully connected graph (see figure 2).
The MK Bell operator Mermin () can be defined recursively by
$B_k = \frac{1}{2\sqrt{2}}\,B_{k-1}\otimes\left(\sigma_{a_k}+\sigma_{a'_k}\right)+\frac{1}{2\sqrt{2}}\,B'_{k-1}\otimes\left(\sigma_{a_k}-\sigma_{a'_k}\right) \qquad (2)$
and starts with footnoteNorm (). The are given by scalar products of three dimensional unit vectors and the vector consisting of the three Pauli operators, i.e. . The operator is obtained from by exchanging all the and . Within a LHV model one can only reach Mermin (). This can be seen intuitively by assigning specific values or to each of the operators , which implies that the recursive relation reduces to or where for all possible choices. It follows that in this case, and similarly for all LHV models.
Quantum mechanics allows a violation of the MK inequality by ; by comparison to the maximum allowed LHV value , one sees that the violation scales exponentially with the system size. Note that the MK inequality achieves the highest violation for any inequality with two observables per qubit We01 (). The observables can be significantly simplified by choosing the same measurement directions for all qubits, e.g. and for all . It can then be shown that Mermin ()
$B_n = \left(e^{i\beta_n}\,|1\rangle^{\otimes n}\langle 0|^{\otimes n}+e^{-i\beta_n}\,|0\rangle^{\otimes n}\langle 1|^{\otimes n}\right), \qquad (3)$
with . The determination of then reduces to determining two specific off-diagonal elements in the density matrix . The states which violate the MK inequality maximally are then given by , leading to . Notice that the local observables can be adjusted in such a way that GHZ states with arbitrary phase maximally violate the corresponding MK-inequality, i.e. the relevant quantity for a violation is given by the absolute value of the coherences .
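The exponential growth of the quantum value is easy to verify numerically. The sketch below builds the MK operator recursively with $\sigma_x$ and $\sigma_y$ as the two settings on every qubit, using the common normalization in which every LHV model obeys $|\langle B_n\rangle| \le 1$ (conventions differ from Eq. (2) by overall factors); since every term flips all qubits, the optimal GHZ expectation equals the magnitude of the single anti-diagonal matrix element, which comes out as $2^{(n-1)/2}$ (assumes numpy is available):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def mk_operator(n):
    """Mermin-Klyshko operator with settings (X, Y) on every qubit,
    normalized so that every LHV model obeys |<B_n>| <= 1."""
    B, Bp = X, Y                      # B_1 = sigma_a, B_1' = sigma_a'
    for _ in range(n - 1):
        B_new  = 0.5 * (np.kron(B,  X + Y) + np.kron(Bp, X - Y))
        Bp_new = 0.5 * (np.kron(Bp, X + Y) - np.kron(B,  X - Y))
        B, Bp = B_new, Bp_new
    return B

for n in range(2, 7):
    B = mk_operator(n)
    # Every term flips all n qubits, so only the GHZ coherence contributes:
    # the maximum over the GHZ phase of <GHZ|B_n|GHZ> is |<1..1|B_n|0..0>|
    c = B[2**n - 1, 0]
    assert abs(abs(c) - 2**((n - 1) / 2)) < 1e-9
```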
GHZ states of the form for up to qubits have previously been prepared using trapped ions Mo11 () (again 1 qubit is encoded per ion). In that work the state fidelities were estimated via measurements of the logical populations and , and the coherences . From this information both the graph state Bell observable and the MK Bell observable can now be calculated.
The relative violations , defined as for the MK inequalities and for the graph inequality, are presented graphically in figure 3. An exponential scaling is apparent for the relative violation of the MK inequalities, i.e. by using larger systems a stronger violation of non-locality can be observed. We now show that the violation of the MK inequalities with larger systems can be more robust to noise than for smaller systems. This can be illustrated as follows. Assume the preparation of a noisy -qubit GHZ state, where imperfections and decoherence is modeled in such a way that each qubit is effected by single qubit depolarizing noise , i.e. . Even though the state can be shown straightforwardly to have an exponentially small fidelity, one nevertheless encounters a violation of the MK inequality even for a large amount of local depolarizing noise. To be specific, one finds that (the off-diagonal elements are simply suppressed by this factor), leading to . That is, as long as , one encounters a violation of the MK inequality for large enough . This means that MK inequalities can tolerate almost noise per qubit. The graph inequalities for GHZ states demand a fidelity larger than 0.5 Sup (), requiring the noise per qubit to reduce exponentially with system size.
Conclusion and outlook — We have demonstrated the violation of multi-partite Bell inequalities for graph states which are resources in MBQC, thereby confirming a connection between applicability of states as resources for quantum information processing and violation of LHV models. In addition, we show that the data in a previous experiment is sufficient to identify an exponentially increasing Bell violation with system size Mo11 (). Given the fact that our set-up can readily be scaled up to a larger number of ions, this opens the possibility to demonstrate LHV violations for large-scale systems.
Acknowledgements — This work was supported by the Austrian Science Fund (FWF): P25354-N20, P24273-N16 and SFB F40-FoQus F4012-N16.
## References
• (1) J. S. Bell, Physics 1, 195 (1964).
• (2) R.F. Werner and M.M. Wolf, Phys. Rev. A 64, 032112 (2001).
• (3) N. D. Mermin, Phys. Rev. Lett. 65, 1838 (1990); A. V. Belinskii and D. N. Klyshko, Phys. Usp. 36, 653 (1993); V. Scarani and N. Gisin, J.Phys. A 34, 6043 (2001).
• (4) O. Gühne, G. Tóth, P. Hyllus and H. J. Briegel, Phys. Rev. Lett. 95, 120405 (2005).
• (5) V. Scarani, A. Acín, E. Schenck and M. Aspelmeyer, Phys. Rev. A 71, 042325 (2005).
• (6) A. Cabello, O. Gühne and D. Rodríguez, Phys. Rev. A 77, 062106 (2008).
• (7) M. Hein, J. Eisert and H.J. Briegel, Phys. Rev. A 69, 062311 (2004).
• (8) M. Hein, W. Dür, J. Eisert, R. Raussendorf, M. Van den Nest and H.J. Briegel, Proceedings of the International School of Physics "Enrico Fermi" on "Quantum Computers, Algorithms and Chaos", Varenna, Italy (2005).
• (9) H.J. Briegel and R. Raussendorf, Phys. Rev. Lett. 86, 910 (2001). R. Raussendorf, D.E. Browne and H.J. Briegel, Phys. Rev. A 68, 022312 (2003).
• (10) R. Raussendorf and H.J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
• (11) H.J. Briegel, D.E. Browne, W. Dür, R. Raussendorf and M. Van den Nest, Nature Physics 5, 19 (2009).
• (12) A. M. Steane, Phys. Rev. Lett. 77, 793 (1996); A. R. Calderbank and P. W. Shor, Phys. Rev. A 54, 1098 (1996); D. Gottesman, Stabilizer codes and quantum error correction, PhD thesis, Caltech (1997). E-print: arXiv: quant-ph/9705052.
• (13) D.P. DiVincenzo and A. Peres, Phys. Rev. A 55, 4089 (1997).
• (14) B.P. Lanyon, P. Jurcevic, M. Zwerger, C. Hempel, E.A. Martinez, W. Dür, H.J. Briegel, R. Blatt and C.F. Roos, Phys. Rev. Lett. 111, 210501 (2013).
• (15) T. Monz, P. Schindler, J.T. Barreiro, M. Chwalla, D. Nigg, W.A. Coish, M. Harlander, W. Hänsel, M. Hennrich and R. Blatt, Phys. Rev. Lett. 106, 130506 (2011).
• (16) P. Walther, M. Aspelmeyer, K. Resch and A. Zeilinger, Phys. Rev. Lett. 95, 020403 (2005); W.-B. Gao, X.-C. Yao, P. Xu, O. Gühne, A. Cabello, C.-Y. Lu, T. Yang, Z.-B. Chen, J.-W. Pan, Phys. Rev. A 82, 042334 (2010).
• (17) M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe and D. J. Wineland, Nature 409, 791 (2001); C.F. Roos, G.P.T. Lancaster, M. Riebe, H. Häffner, W. Hänsel, S. Gulde, C. Becher, J. Eschner, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. Lett. 92, 220402 (2004); D. N. Matsukevich, P. Maunz, D. L. Moehring, S. Olmschenk, C. Monroe, Phys. Rev. Lett. 100, 150404 (2008).
• (18) D. Gottesman, Phys. Rev. A 54, 1862 (1996).
• (19) P. Walther, K.J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature 434, 169 (2005).
• (20) For details see supplemental material.
• (21) Notice that we have included a normalization factor such that .
• (22) A. Sørensen and K. Mølmer, Phys. Rev. Lett., 82, 1971 (1999).
## Appendix B Examples of Bell operators (theory)
For illustration, we explicitly provide some of the Bell operators, both for the graph state inequalities and the MK inequalities. As an example, for the graph $LC_4$, i.e. the linear cluster state of four qubits, the normalized Bell operator is given by
$$\mathcal{B}_4(LC_4)=\tfrac{1}{16}\big(IIII+XZII+ZXZI+IZXZ+IIZX+YYZI+XIXZ+XZZX+ZYYZ+ZXIX+IZYY-ZYXY+XIYY+YYIX-YXYZ+YXXY\big).$$
For the graph $BC_4$, i.e. the box cluster state of four qubits, the normalized Bell operator is given by
$$\mathcal{B}_4(BC_4)=\tfrac{1}{16}\big(IIII+XZZI+ZXIZ+ZIXZ+IZZX+YYZZ+YZYZ+XIIX+IXXI+ZYZY+ZZYY-IYYX-YIXY-YXIY-XYYI+XXXX\big).$$
For the four-qubit GHZ state and the corresponding MK inequalities the Bell operator is given by
$$\mathcal{B}_4=-\tfrac{1}{8}\big(YXXY+YXYX+YYXX-YYYY+XYYX+XYXY+XXYY-XXXX\big). \tag{6}$$
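Equation (6) can be checked numerically. The sketch below is an illustrative check (our own code, not from the paper), assuming the standard Pauli-matrix conventions and the GHZ state $(|0000\rangle+|1111\rangle)/\sqrt{2}$; it builds the normalized MK Bell operator and verifies that the four-qubit GHZ state attains the algebraic maximum of 1:

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
P = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli_string(s):
    """Tensor product of single-qubit Paulis, e.g. 'YXXY'."""
    return reduce(np.kron, [P[c] for c in s])

# Terms and signs of the normalized MK Bell operator, Eq. (6)
terms = dict(YXXY=1, YXYX=1, YYXX=1, YYYY=-1,
             XYYX=1, XYXY=1, XXYY=1, XXXX=-1)
B4 = -sum(sign * pauli_string(s) for s, sign in terms.items()) / 8

# 4-qubit GHZ state (|0000> + |1111>)/sqrt(2)
ghz = np.zeros(16, dtype=complex)
ghz[0] = ghz[15] = 1 / np.sqrt(2)

expectation = np.real(ghz.conj() @ B4 @ ghz)
print(expectation)  # the GHZ state attains the algebraic maximum, 1
```

Each term containing exactly two $Y$ operators has expectation $-1$ on the GHZ state, while $XXXX$ and $YYYY$ have expectation $+1$, so the sum evaluates to $-8$ and the prefactor $-1/8$ yields 1.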
## Appendix C A complete experimental example
In this section we provide details on how the multipartite Bell inequality given in the table in the main text was measured for one of the graph states. Specifically, we choose the 4-qubit box cluster shown in figure 4. As described in the main text, the best-known method to prepare cluster states is to initialize each physical qubit in the state $|+\rangle$ and then to apply CP gates between every pair of qubits with a connecting edge: in this case qubit pairs 1&2, 1&3, 2&4 and 3&4. Note that
$$CP=e^{-iH_{cp}\pi/4}, \tag{7}$$
where $H_{cp}$ is the entangling Hamiltonian that generates the controlled-phase gate.
In our experiments we prepare all our graph states in a different way, which is equivalent to the method using CP gates up to single-qubit rotations: i.e. the states are equivalent up to a local change of basis. In summary, we begin by initializing all qubits and applying pairwise entangling operations generated by Mølmer-Sørensen-type Hamiltonians, on which our qubit interactions are based (MS99). For more experimental details on the state generation see the supplementary material of (La13), where laser pulse sequences can be found. Note that the 4-qubit box cluster is not presented in (La13); however, the laser pulse sequence is identical to that for all the error-correction states. In fact, rotating the box cluster diagram in figure 4 by 45 degrees in either direction (so that it becomes a diamond) makes it clear that it belongs to the same family of states.
As stated, experimentally we do not prepare $|BC_4\rangle$ itself, but ideally a locally rotated state, for which we will use the label $|\widehat{BC}_4\rangle$. This state is given by
$$\left|\widehat{BC}_4\right\rangle=\frac{|0000\rangle-|0110\rangle-|1001\rangle-|1111\rangle}{2}, \tag{8}$$
which is equivalent to the state made with CP gates once it is corrected by the following single-qubit correction rotations (qubit 1: HXZ, qubit 2: HX, qubit 3: HX, qubit 4: HXZ), where H is the Hadamard and X and Z are the standard Pauli operators.
For the experimentally generated 4-qubit box cluster $|\widehat{BC}_4\rangle$, the normalized Bell operator is given by
$$\mathcal{B}_4(\widehat{BC}_4)=\tfrac{1}{16}\big(IIII+IYYZ-IXXZ+IZZI+YIZY+YYXX+YXYX+YZIY-XIZX+XYXY+XXYY+XZIX+ZIIZ+ZYYI-ZXXI+ZZZZ\big).$$
The experimentally observed expectation values for all 16 observables are presented in table 2. The average of the values in the last column gives the normalized Bell operator we observe for this state. All uncertainties are one standard deviation and derive from the intrinsic uncertainty of using a finite number of measurements to estimate expectation values.
## Appendix D Values of $D(G)$ for $|EC_n\rangle$
A bound for $D(EC_3)$, where $EC_3$ is the graph underlying the five-qubit state which we used to demonstrate quantum error correction, can be found in the following way. First one notes that $EC_3$ is equivalent to the graph in figure 5b) up to local Clifford (LC) operations. The two graph states have the same rank indices and are thus equivalent up to local unitary operations (He04). The fact that they are both graph states then implies the LC equivalence. The local Clifford operations do not change the value of $D$. The latter graph state is built from a four-qubit GHZ state and a single-qubit graph $G_1$, connected by an edge. Application of Lemma 3 in (Gu05) then gives a bound on $D(EC_3)$:
$$D(EC_3)\leq D(G_1)\,D(GHZ_4)=\tfrac{3}{4}. \tag{10}$$
In a similar way one can bound the value $D(EC_5)$,
$$D(EC_5)\leq D(G_1)\,D(GHZ_6)=\tfrac{5}{8}. \tag{11}$$
## Appendix E Values of $D(G)$ for GHZ states
The values of $D(G)$ for GHZ states with up to ten qubits have been derived numerically in (Gu05). Here we illustrate how one can simplify the numerical procedure, and provide the values for GHZ states with twelve and fourteen qubits. In addition, we show that a fidelity larger than one half is required for all GHZ states in order to violate the graph state inequality.
The generators for GHZ states are given (up to irrelevant local Clifford operations) by $g_1, g_2, \ldots, g_n$. The Bell operator contains all products of the generators, as described in the main text. In (Gu05) it is shown that one can restrict to deterministic LHV models, which assign $\pm 1$ to all measurements. For GHZ states one can then show, by simply multiplying the generators, that one only has to check the following operators: for stabilizers with an odd number $j_{\mathrm{odd}}$ of generators,
$$O_{\mathrm{odd}}=(-1)^{(j_{\mathrm{odd}}-1)/2}\,X^{\otimes j_{\mathrm{odd}}}\otimes I^{\otimes (n-j_{\mathrm{odd}})}, \tag{12}$$
and for stabilizers with an even number of generators:
$$O_{\mathrm{even}}=Y^{\otimes j_{\mathrm{even}}}\otimes I^{\otimes (n-j_{\mathrm{even}})}, \tag{13}$$
and all the permutations of the qubits in both cases.
Since each of the operators $O_{\mathrm{odd}}$ and $O_{\mathrm{even}}$ contains only $X$ or only $Y$ operators, the two families can be optimized independently. For the $O_{\mathrm{even}}$ operators it is easy to see that they contribute maximally when $+1$ is assigned to all measurement outcomes; their total contribution is then fixed by the normalization factor in the definition of the Bell operator and by the total number of $O_{\mathrm{even}}$ operators. The optimization over the $O_{\mathrm{odd}}$ operators is done numerically, which yields the values for GHZ states with twelve and fourteen qubits. For the even-qubit cases computed one can confirm a closed-form expression; we leave it as a conjecture that this expression holds for arbitrary even $n$.
The contribution from the $O_{\mathrm{even}}$ operators puts a lower bound on the attainable LHV value and thus, via its relation to the fidelity, on the fidelity required for a violation. Consequently, a necessary requirement for any GHZ state to violate the Bell-type inequality derived in (Gu05) is that the fidelity is greater than one half.
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-2-real-numbers-chapters-1-2-cumulative-review-problem-set-page-91/28

## Elementary Algebra
First, we express each number as a product of its prime factors: $48=2\times2\times2\times2\times3$, $66=2\times3\times11$, $78=2\times3\times13$. Since a single 2 and a single 3 are common to all of the numbers, the Greatest Common Factor $=2\times3=6$.
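The same procedure can be sketched in a few lines of Python (an illustrative helper of our own, not part of the textbook solution): factor each number, then take every shared prime to its lowest power.

```python
def prime_factors(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def gcf(*nums):
    """Greatest common factor: take each shared prime to its lowest power."""
    common = prime_factors(nums[0])
    for n in nums[1:]:
        f = prime_factors(n)
        common = {p: min(e, f[p]) for p, e in common.items() if p in f}
    result = 1
    for p, e in common.items():
        result *= p ** e
    return result

print(prime_factors(48))  # {2: 4, 3: 1}
print(gcf(48, 66, 78))    # 6
```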
https://socratic.org/questions/how-do-you-use-the-factor-theorem-to-determine-whether-x-1-is-a-factor-of-x-3-x--1

# How do you use the factor theorem to determine whether x+1 is a factor of x^3 + x^2 + x + 1?
##### 1 Answer
Jan 3, 2016
The factor theorem states that a polynomial $f \left(x\right)$ has a factor $\left(x + k\right)$ if and only if $f \left(- k\right) = 0$.
Here ${x}^{3} + {x}^{2} + x + 1$ is a polynomial.
Let $f \left(x\right) = {x}^{3} + {x}^{2} + x + 1$
Now we want to know whether $x + 1$ is a factor of $f \left(x\right)$.

For this purpose we evaluate $f \left(x\right)$ at $x = - 1$: if the result is $0$ then $x + 1$ is a factor of $f \left(x\right)$, and if it is not $0$ then $x + 1$ is not a factor.
Put $x = - 1$ in $f \left(x\right)$
$\implies f \left(- 1\right) = {\left(- 1\right)}^{3} + {\left(- 1\right)}^{2} + \left(- 1\right) + 1 = - 1 + 1 - 1 + 1 = 0$
Since the result is $0$, $x + 1$ is a factor of the given polynomial.
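The factor-theorem test is easy to automate. The small sketch below (our own helper functions, not part of the original answer) evaluates $f(-k)$ with Horner's rule and checks whether $(x+k)$ is a factor:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients [a_n, ..., a_0] via Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def has_factor(coeffs, k):
    """Factor theorem: (x + k) divides f(x) iff f(-k) == 0."""
    return poly_eval(coeffs, -k) == 0

f = [1, 1, 1, 1]  # x^3 + x^2 + x + 1
print(poly_eval(f, -1))  # 0, so (x + 1) is a factor
print(has_factor(f, 1))  # True
print(has_factor(f, 2))  # False: f(-2) = -5
```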
https://scicomp.stackexchange.com/questions/32578/convexity-of-sum-of-k-smallest-eigenvalue/32582

# Convexity of Sum of $k$-smallest Eigenvalue
Suppose I have a real positive definite matrix $$A\in\mathbb{R}^{n\times n}$$ with eigenvalues $$\lambda_1\leq \lambda_2 \leq \ldots \leq \lambda_n$$.

Define the function $$f(A)=\sum_{i=1}^{k} \lambda_i$$ for a constant $$k < n$$. What do we know about the convexity of $$f(A)$$? Is it convex or concave?
Given $$A \in {\bf S}^n$$ (a positive definite matrix) with eigenvalues $$\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$$, then:
1. $$\displaystyle f_k(A)=\sum_{i=1}^{k} \lambda_i$$ is concave. Why?
$$f_k(A) = \inf \left\{ {\bf tr}(V^T A V) | V \in {\bf R}^{n \times k}, V^T V = I \right\}$$
This follows from the Poincaré separation theorem (see e.g. Horn and Johnson's Matrix Analysis, 2nd ed., corollaries 4.3.37 and 4.3.39). $$f_k$$ is the pointwise infimum of a family of linear functions $${\bf tr}(V^T A V)$$, hence it is concave (Boyd and Vandenberghe, section 3.2.3).
2. $$\displaystyle g_k(A)=\sum_{i=n-k+1}^{n} \lambda_i$$ is convex. Again, we can show that
$$g_k(A)=\sum_{i=n-k+1}^{n} \lambda_i(A) = \sup \left\{ {\bf tr}(V^T A V) | V \in {\bf R}^{n \times k}, V^T V = I \right\}$$
$$g_k$$ is the pointwise supremum of a family of linear functions $${\bf tr}(V^T A V)$$, hence it is convex (Boyd and Vandenberghe, section 3.2.3).
cvxpy treats that function as concave.
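As a quick numerical sanity check (not a proof), one can spot-check midpoint concavity of $f_k$ on random symmetric matrices with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def sum_k_smallest_eigs(A, k):
    """f_k(A): sum of the k smallest eigenvalues of a symmetric matrix."""
    return np.sort(np.linalg.eigvalsh(A))[:k].sum()

# Midpoint concavity: f_k((A+B)/2) >= (f_k(A) + f_k(B)) / 2
n, k = 6, 3
ok = True
for _ in range(200):
    A = rng.standard_normal((n, n)); A = A + A.T
    B = rng.standard_normal((n, n)); B = B + B.T
    lhs = sum_k_smallest_eigs((A + B) / 2, k)
    rhs = 0.5 * (sum_k_smallest_eigs(A, k) + sum_k_smallest_eigs(B, k))
    ok = ok and bool(lhs >= rhs - 1e-9)
print(ok)  # True: no counterexample to concavity found
```

Note that concavity of $f_k$ holds on all symmetric matrices, so the check does not need to restrict to positive definite inputs.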
https://www.nature.com/articles/s41598-019-52259-6

# Relative sorption coefficient: Key to tracing petroleum migration and other subsurface fluids
## Abstract
The accumulation and spatial distribution of economically important petroleum in sedimentary basins are primarily controlled by its migration from source rocks through permeable carrier beds to reservoirs. Tracing petroleum migration entails the use of molecular indices established according to sorption capacities of polar molecules in migrating petroleum. However, little is known about molecular sorption capacities in natural migration systems, rendering these indices unreliable. Here, we present a new approach based on a novel concept of relative sorption coefficient for quantitatively assessing sorption capacities of polar molecules during natural petroleum migration. Using this approach, we discovered previously unrecognized “stripping” and “impeding” effects that significantly reduce the sorption capacities of polar compounds. These discoveries provide new insights into the behaviors of polar compounds and can easily explain why traditional molecular indices yield incorrect information about petroleum migration. In light of these new findings, we established new molecular indices for tracing petroleum migration. We demonstrate via case studies that the newly established indices, unlike traditional molecular indices, are reliable and effective in tracing petroleum migration. Our approach can be applied to diverse basins around the world to reveal distribution patterns of petroleum, which would decrease environmental risks of exploration by reducing unsuccessful wells.
## Introduction
Petroleum is produced in source rocks through thermal alteration of organic matter buried in sedimentary basins. Its accumulation and distribution are mainly controlled by secondary petroleum migration (SPM) through permeable carrier beds following petroleum expulsion (primary migration) out of the source rocks. Tracing petroleum migration can reveal distribution patterns of petroleum reservoirs and thus increase the exploration success rate. Meanwhile, environmental risks can be decreased by reducing unsuccessful wells. Biomarker hydrocarbon geochemistry can be applied to trace SPM1,2,3,4,5,6. The underlying principle of this approach is that polar molecules are preferentially removed from petroleum during secondary migration due to sorption onto immobile mineral surfaces, which causes their concentrations to decrease with increasing migration distance1,3,5,7. Thus, tracing SPM entails the use of molecular indices established on the basis of sorption capacities of polar compounds in migrating petroleum (Supplementary Text S-1.1). However, there have been no reports of quantitative research on the sorption capacities of polar molecules in naturally migrating petroleum, although it has been suggested that the sorption capacities of alkylcarbazoles are determined by the (partial) shielding effect related to alkylation at positions 1 and/or 8 (refs 3, 6) (Fig. 1). Sorption of trace polar compounds during lateral petroleum migration typically reaches equilibrium2,4,5,8 and thus their sorption capacities essentially represent equilibrium sorption capacities determined by both sorption and desorption. However, the shielding and partial shielding effects consider only the sorption, not the desorption, of alkylcarbazoles during petroleum migration.
Therefore, the previously-proposed theory on molecular sorption capacities, which was based solely on the shielding and partial shielding effects, needs to be re-evaluated to avoid erroneous results when applying molecular indices to trace SPM, as demonstrated by the case studies herein.
In this paper, we propose a new parameter, the Relative Sorption Coefficient (RSC), which quantitatively describes equilibrium sorption capacities of polar compounds in migrating petroleum. We establish its computation method (see Methods), and then apply and test the validity of the new approach, using natural petroleum samples collected from the Xifeng Oilfield in the Ordos Basin and the Rimbey-Meadowbrook reef trend in the Western Canada Sedimentary Basin (WCSB). Because the petroleum in the Xifeng Oilfield contains very few benzocarbazoles5, we only analyzed alkylcarbazoles in the samples from this oilfield, although both alkylcarbazoles and benzocarbazoles can serve as important tracers1,5,9,10. Using the RSC approach, we determined the sorption capacities of an important group of polar molecules, the alkylcarbazoles, and discovered two previously unrecognized effects (i.e., stripping and impeding effects) that strongly influence the equilibrium sorption capacities of polar compounds. In light of these new findings, we reclassify alkylcarbazoles into six subgroups, according to their equilibrium sorption capacities, and propose new ratios whose numerators have stronger sorption capacities than their denominators. All the new ratios show a significantly decreasing trend with increasing migration distance, demonstrating that the new ratios are reliable and effective indices for migration and that the RSC provides the key to tracing SPM. Our new method is further validated through the analyses of benzocarbazoles in petroleum samples from the Rimbey-Meadowbrook reef trend in the WCSB. Because the RSC is established on the basis of a physicochemical sorption model, it should be widely applicable to assessing equilibrium sorption capacities of solutes in a diverse range of geofluids.
## Results
### Modification of secondary migration fractionation index
We have previously investigated SPM in the Xifeng Oilfield in the southwest part of the Ordos Basin in China using secondary migration fractionation indices (SMFIs). The geological setting, samples and geochemical data are documented in Zhang et al.5. Besides sorption, this earlier study also examined other factors11,12,13,14,15,16,17,18,19,20 that may influence the concentrations of polar compounds in migrating petroleum. It illustrated that the effect of thermal maturation of source rocks on polar molecules can be eliminated in the derivation of SMFIs (see Zhang et al.5 for details). Other influences (i.e., organic facies of source rocks, biodegradation of petroleum and dissolution in water) can be neglected for alkylcarbazoles in the petroleum of this field5. In this study, we re-calculated the SMFIs, using the migration-sorption fractionation equation with a quadratic polynomial (Supplementary Equation (S3)), to improve the accuracy of the results (Supplementary Text, S-1.2-3). The results clearly show an exponential decrease of SMFIs with increasing relative migration distance (Supplementary Fig. S1A–C,G–I), suggesting that the Xifeng Oilfield was likely formed by SPM in the SW direction along the sand body from the source kitchen located to the NE of the reservoir (refer to Zhang et al.5).
However, the information about petroleum migration derived from the SMFIs needs to be verified by using the ratios of the SMFIs. Because the SMFI is affected by the relative rates of concentration variations of polar molecules at the starting point or at a reference point of SPM (Supplementary Text, S-1.2-3), it cannot be used to construct reliable ratios. To overcome this problem, we revised the SMFI (Supplementary Equation (S19)) and its related ratios (Supplementary Text, S-1-3). The amended indices are denoted by the subscript λ (e.g. SMFIλ). The values of SMFIλ (Supplementary Table S4) display similar distribution trends to the values of SMFI (Supplementary Fig. S1), even though the powers of their regression equations are different, as shown by Supplementary Equation (S16). The ratios of the SMFIλs of alkylcarbazoles with stronger sorption capacities to those with weaker sorption capacities should decrease with increasing migration distance if the underlying assumptions about source facies, biodegradation and thermal maturity effects are valid. The ratios were initially established based on the previously proposed theory about sorption capacities that considered only the shielding and partial shielding effects3,6,21,22. Based on this theory, alkylcarbazole isomers can be divided into three groups3,6: N-H shielded (Group I), N-H partially shielded (Group II) and N-H exposed (Group III). Their sorption capacities are expected to decrease in the order of Group III > Group II > Group I, and the ratios of the SMFIλs of dimethylcarbazoles (DMCAs) in Group III to those in Group II, III to I and II to I would be predicted to decrease with increasing migration distance. However, many of these ratios (Fig. 2) do not display a decreasing trend with increasing migration distance; instead they exhibit a clear increasing trend (Supplementary Text, S-1.5), which is completely opposite to the decreasing trend of SMFIs and SMFIλs (Supplementary Fig. S1), and inconsistent with the geological conditions (Fig. 1 in Zhang et al.5). From this, we can see that if the ratios of carbazoles, constructed on the basis of the existing theory on sorption capacities, are used to trace SPM, they would yield erroneous or misleading information about petroleum migration. Similarly, the ratios based on the current sorption capacity theory cannot be used to verify the information about SPM that is inferred from the SMFIs and SMFIλs. Therefore, their use should be discontinued.
### Relative sorption coefficient (RSC)
We re-examined the sorption capacities of polar compounds in petroleum samples from the Xifeng Oilfield using our new approach described in the Methods and Supplementary Text S-1. We calculated the relative sorption coefficients - the Kr values of alkylcarbazoles in these petroleum samples (Supplementary Table S2). The Kr values vary widely. Some N-H partially shielded DMCAs (Group II) have higher Kr values than some of the N-H exposed DMCAs (Group III), which cannot be explained by the existing theory on sorption capacities that considered only the shielding and partial shielding effects. This suggests that there are other factors controlling equilibrium sorption capacities of alkylcarbazoles.
Through comparison of desorption of the adsorbed polar compounds under both static and dynamic conditions (Supplementary Text, S-1.6), a stripping effect was observed arising from petroleum migration that causes “tall” alkylcarbazoles to desorb more easily than the “short” ones (Supplementary Figs S4, S5). This stripping effect greatly reduces the equilibrium sorption capacities of alkylcarbazoles with the alkyl substituents at positions 4 and/or 5, as is demonstrated by the Kr values.
In Group II alkylcarbazoles, the molecular height of 1,4-DMCA is greater than for 1,5- and 1,3-DMCA (Supplementary Text, S-1.6), and the latter two are taller than the other DMCAs in this group. Consequently, 1,4-DMCA has a lower Kr value than 1,5- and 1,3-DMCA, which have lower Kr values than the rest of the DMCAs in this group (Supplementary Fig. S3). In light of the stripping effect, the Group II alkylcarbazoles are further divided into three subgroups (Fig. 3) with decreasing stripping effect and increasing sorption capacity in the following order: N-H partially shielded alkylcarbazole with the alkyl at position 4 (Subgroup II-1), N-H partially shielded alkylcarbazoles with the alkyl at positions 3 or 5 (Subgroup II-2), and N-H partially shielded without the alkyl at position 3, 4 or 5 (Subgroup II-3) (Figs 3 and S3).
In Group III alkylcarbazoles, 3,4-DMCA has two methyls sticking out, and is subject to a stronger stripping effect (two-methyl stripping; see Supplementary Text, S-1.6 for details) and thus has a lower Kr value than 2,4- or 2,5-DMCA (Supplementary Fig. S5A–C). The Kr value of 3,4-DMCA is even lower than that of 1,8-DMCA, which experiences the shielding effect (Fig. 3).
In addition, we discovered an impeding effect related to the alkyls at positions 2 and 7 (Supplementary Text, S-1.7). The impeding effect causes the equilibrium sorption capacity of 2,7-DMCA in the Group III alkylcarbazoles to become lower than those of 2,3- and 2,6-DMCA (Supplementary Figs. S5D–F). Due to the stripping and impeding effects, alkylcarbazoles in Group III show large variations in Kr (Supplementary Fig. S4) and are also divided into three subgroups (Fig. 3): N-H exposed alkylcarbazole with the alkyls at positions 3 and 4 (Subgroup III-1), N-H exposed alkylcarbazoles with one alkyl at position 2 and the other alkyl at positions 4, 5 or 7 (Subgroup III-2), and N-H exposed alkylcarbazoles without the alkyls at positions 4, 5 or 7 (Subgroup III-3).
The stripping and impeding effects, which control the sorption capacities of polar molecules in migrating petroleum, are also related to the molecular structures of the organic compounds, just like the shielding effect. Based on the three effects noted above and the Kr values (Supplementary Table S2), these subgroups and the Group I alkylcarbazole can be arranged in the following sequence with decreasing equilibrium sorption capacity (Fig. 3): Subgroup III-3 (Kr = 78.5–100%)> III-2 (Kr = 0.40–5.6%), II-2 and II-3 [II-3 (Kr = 0.97–12.0%)> II-2 (Kr = 0.38–0.46%)]> II-1 (Kr = 0.11%) and Group I (Kr = 0.10%)> III-1 (Kr = 0.0%).
From the above analyses of molecular structures and their relationships with Kr, we established the following sequence of various effects on reducing the equilibrium sorption capacities: two-methyl stripping (represented by Subgroup III-1) > shielding (Group I) > partial shielding plus one-methyl stripping (II-1 and -2) > partial shielding (II-3), one-methyl stripping and impeding (III-2) > partial impeding. The interplay of these three effects results in complex variations in equilibrium sorption capacities of the DMCAs within and among subgroups. The seemingly unreasonable relationships of SMFI ratios with relative migration distance (Fig. 2) can all be explained by these effects and their combination (Supplementary Text S-1.8).
The relative sorption coefficient is derived from the linear isotherm model that is the simplification of the Langmuir isotherm model of equilibrium sorption at low concentrations of adsorbents such as carbazoles (refer to the Methods Section, Supplementary Information and Zhang et al.5). Recent studies on sorption of asphaltenes onto minerals show that the Langmuir isotherm model can be used to describe the equilibrium adsorption of asphaltenes when interactions between the solute and the solvent as well as interactions that can occur at a non-ideal lattice of a mineral are negligible and that the sorption of asphaltenes is highly dependent on the heteroatoms (i.e. N, O, S) in their molecular structure23,24,25. These results confirm the validity of using the linear isotherm model to investigate the equilibrium sorption of polar heteroatom compounds such as carbazoles onto solid surfaces.
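The low-concentration simplification invoked above can be illustrated with a toy calculation. The parameter values below are arbitrary placeholders, not data from the paper; the sketch only shows that the Langmuir isotherm is well approximated by a linear (Henry-type) isotherm when the product of the sorption constant and the concentration is small:

```python
import numpy as np

def langmuir(C, q_max, K):
    """Langmuir isotherm: adsorbed amount q as a function of concentration C."""
    return q_max * K * C / (1 + K * C)

def linear(C, q_max, K):
    """Linear (Henry) isotherm: the low-concentration limit q ~ q_max*K*C."""
    return q_max * K * C

q_max, K = 1.0, 0.5                   # illustrative values, arbitrary units
C_low = np.array([1e-4, 1e-3, 1e-2])  # trace concentrations, so K*C << 1

rel_err = np.abs(langmuir(C_low, q_max, K) - linear(C_low, q_max, K)) \
          / langmuir(C_low, q_max, K)
print(rel_err)  # relative error equals K*C, i.e. well below 1% at trace levels
```

The relative error of the linear approximation is exactly $KC$, which is why the linear model is adequate for trace polar compounds such as carbazoles.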
### New ratios and their application
Given the sorption capacity sequence of the subgroups and Group I, eighteen SMFIλ ratios are established as indices for petroleum migration: alkylcarbazoles in Subgroup III-3 to those in III-2, III-3 to III-1, III-3 to II-3, III-3 to II-2, III-3 to II-1, III-3 to Group I, III-2 to III-1, III-2 to II-1, III-2 to Group I, II-3 to II-2, II-3 to II-1, II-3 to Group I, II-3 to III-1, II-2 to II-1, II-2 to Group I, II-2 to III-1, II-1 to III-1, and Group I to III-1. Since the equilibrium sorption capacities of the numerators are significantly higher than those of the denominators, these ratios decrease with increasing migration distance and thus can serve as odometers for SPM (Supplementary Equation (S23)). Similarly, the corresponding ratios of the geometric means of SMFIλs decrease with increasing migration distance and can also be used as indices for petroleum migration (Supplementary Equations (S21 and S22)). It is worth noting that the ratios of alkylcarbazoles within each group (except Group I with only one compound), which were not considered previously, can also be useful in the establishment of new indices (Supplementary Text S-1.9).
The new SMFIλ ratios for the Xifeng Oilfield fit the known data well, clearly showing exponential decreases with increasing migration distance with high correlation coefficients (Fig. 4). These are consistent with the migration fractionations inferred from SMFIλs and SMFIs, and geological conditions26,27,28,29 (Fig. 1 in Zhang et al.5). Thus, the new ratios confirm the validity of the influence elimination and migration information revealed by the SMFIs5 and SMFIλs, and demonstrate that the petroleum migrated along the sand body from the source kitchen into the Xifeng Oilfield in a SW direction (refer to Zhang et al.5).
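The exponential decrease of a ratio with migration distance can be quantified by a log-linear least-squares fit, which is the standard way to obtain the regression equations and correlation coefficients reported above. The data below are synthetic (we do not reproduce the paper's measurements); the sketch only illustrates the fitting procedure under the assumed form R = R0·exp(-b·d):

```python
import numpy as np

# Hypothetical ratio values at increasing relative migration distances,
# generated from an assumed exponential decay plus small multiplicative noise.
d = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # relative migration distance
noise = 1 + 0.02 * np.array([0.3, -0.5, 0.4, -0.2, 0.1, -0.3])
R = 0.9 * np.exp(-1.7 * d) * noise

# Linearize: ln R = ln R0 - b*d, then fit by least squares
b, ln_R0 = np.polyfit(d, np.log(R), 1)   # slope (negative) and intercept
R0_fit, decay = np.exp(ln_R0), -b
r = np.corrcoef(d, np.log(R))[0, 1]
print(R0_fit, decay, r**2)  # fitted pre-factor, decay constant, correlation
```

A decay constant recovered close to the generating value, with a correlation coefficient near one, is the signature of migration-controlled fractionation that the new SMFIλ ratios display.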
Molecular indices that are correlated with migration directions, pathways and distances have been sought based on sorption capacities of polar organic compounds in migrating petroleum for decades1,5, but with limited success5 because the sorption capacities of these polar compounds have been unclear. As a result, reliable indices have not been established and secondary petroleum migration still remains the least understood of the processes involved in petroleum accumulation5. The results of the application of the new ratios of alkylcarbazoles in the petroleum of the Xifeng Oilfield, however, demonstrate that the relative sorption coefficients (Kr) can be used to assess the sorption capacities and that the new ratios established on the basis of the new understanding of the sorption capacities can serve as effective indices for petroleum migration. These new indices provide a powerful tool for revealing migration directions, pathways and distances that control petroleum distribution patterns in reservoirs in basins, which would greatly facilitate future petroleum exploration and increase the success rate of wells.
Furthermore, the new observation of the stripping effect on equilibrium sorption capacities is supported by the analyses of benzocarbazoles in the petroleum samples from the Rimbey-Meadowbrook reef trend of central Alberta, Canada. In these petroleum samples, the “taller” benzo[c]carbazole has a lower Kr value than the “shorter” benzo[a]carbazole, consistent with predictions from the stripping effect (Supplementary Table S5).
## Discussion
The new concept of RSC overcomes the dependency of the sorption coefficient assessment on the migration velocity and rock characteristics of the carrier beds. As demonstrated in our case studies, the RSC provides a powerful tool with a sound scientific basis to quantitatively evaluate equilibrium sorption capacities of polar compounds during petroleum migration, and can help uncover factors controlling equilibrium sorption capacities. Without this tool, it would be impossible to quantitatively assess equilibrium sorption capacities of polar compounds in migrating petroleum and to establish reliable molecular indices for tracing petroleum migration. The lack of a quantitative assessment tool is also the primary reason why many of the previously proposed molecular indices failed to provide reliable information about SPM. Application of this approach to quantitative assessment of equilibrium sorption capacities of alkylcarbazoles has resulted in the discovery of the previously unrecognized stripping and impeding effects that significantly reduce the equilibrium sorption capacities of polar compounds. These findings have led to the reclassification of the polar compounds according to their sorption capacities. Based on the reclassification of the polar compounds, we established eighteen new ratios. As demonstrated in our case studies, these new indices provide reliable information about petroleum migration (i.e. migration directions, routes and distances). Therefore, this approach is the key to tracing secondary petroleum migration and can be applied to petroliferous basins around the world, to reveal distribution patterns of petroleum reservoirs, which would help to find more petroleum and decrease environmental risks of exploration by reducing unsuccessful wells.
Moreover, the concept of RSC and its evaluation method developed in this study should be applicable in hydrological and environmental studies (as well as other disciplines) to trace the movement of pollutants and water (and other geofluids) (Supplementary Text S-1.10).
## Methods
The equilibrium sorption of a polar molecule or an adsorbable element in a natural migration system of petroleum or other geofluids can be described by the linear isotherm model if its concentration is sufficiently low5,8,30,31. In this physicochemical model, the sorption coefficient Kd (cm3/g) represents the sorption amount of a polar compound or an adsorbable element at a given concentration and saturation of petroleum or geofluid8,30,31 (see Supplementary Equation (S5) in Zhang et al.)5. This amount may describe the equilibrium sorption capacity of the compound or element, according to Delle Site8. However, the Kd values determined in laboratories are not necessarily applicable to natural migration systems, due to differences in size, time and distance between laboratory experiments and natural migration systems. Moreover, lab experimental studies for the determination of sorption coefficients are expensive and time consuming, and the results may not be accurate, especially when concentrations are low8. Above all, Kd is also controlled by many factors, such as the porosity and density of carrier beds and the average velocity of migration (Supplementary Text S-1.2). Therefore, the sorption coefficient Kd cannot be used directly to describe the equilibrium sorption capacities of polar organic compounds or trace elements during lateral migration.
To evaluate equilibrium sorption capacities of polar compounds (or adsorbable elements) in natural migration systems, we introduce a new concept of relative sorption coefficient (RSC):
$${K}_{r}=\frac{{K}_{d}-{K}_{dmin}}{{K}_{dmax}-{K}_{dmin}}\times 100( \% )$$
(1)
where Kr is the RSC; Kd is the sorption coefficient (cm3/g); Kdmax is the maximum value in a series of Kd values of polar compounds in petroleum (or adsorbable elements in other geofluids); and Kdmin is the minimum value. The range of Kr values is 0–100%. Kr can be used quantitatively to evaluate equilibrium sorption capacities. High Kr values indicate strong equilibrium sorption capacities.
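Equation (1) is straightforward to compute for a measured series of Kd values; the numbers below are hypothetical, for illustration only.

```python
def relative_sorption_coefficients(kd_values):
    """Eq. (1): min-max normalize a series of sorption coefficients.

    Returns Kr values in percent (0-100); the compound with the largest
    Kd gets Kr = 100%, i.e. the strongest equilibrium sorption capacity.
    """
    kd_min, kd_max = min(kd_values), max(kd_values)
    span = kd_max - kd_min
    return [100.0 * (kd - kd_min) / span for kd in kd_values]

# Hypothetical Kd series (cm^3/g) for four polar compounds:
kr = relative_sorption_coefficients([2.0, 5.0, 8.0, 10.0])
# kr == [0.0, 37.5, 75.0, 100.0]
```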
For the linear isotherm model of the equilibrium sorption in natural migration systems5,8,30,31, we can derive the following equation from Supplementary Equation (S8) in Zhang et al.5:
$${K}_{d}=({R}_{d}-1)\frac{n}{{n}_{s}\cdot {\rho }_{s}}\,$$
(2)
where Rd represents the retardation factor of a polar compound in migrating petroleum or an adsorbable trace element in migrating groundwater (a dimensionless constant), being related to the sorption of the compound or the element and the average migrating velocity of petroleum or groundwater (Supplementary Text S-1.2); n is the porosity of the carrier bed (%); ns = 100 − n (%); ρs is the density of the solids (g/cm3).
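Equation (2) can be evaluated directly once Rd, the porosity and the solid density are known; the input values below are illustrative, not taken from the paper.

```python
def sorption_coefficient(rd, porosity_pct, solid_density):
    """Eq. (2): Kd = (Rd - 1) * n / (ns * rho_s).

    rd is the dimensionless retardation factor, porosity_pct is n (%),
    ns = 100 - n (%), and solid_density is rho_s (g/cm^3).
    """
    ns = 100.0 - porosity_pct
    return (rd - 1.0) * porosity_pct / (ns * solid_density)

# Illustrative numbers only: Rd = 3, n = 20%, rho_s = 2.65 g/cm^3
kd = sorption_coefficient(3.0, 20.0, 2.65)  # ≈ 0.189 cm^3/g
```

Note that Rd = 1 (no retardation) yields Kd = 0, i.e. a compound that migrates with the bulk fluid and is not sorbed.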
Migration of petroleum (or other geofluids) usually occurred in past geological times. Therefore, the current porosity and density of carrier beds do not represent the porosity and density during migration, as these lithological properties most likely have changed over time during diagenesis. Consequently, quantitative measurements of the porosity and density of carrier beds during migration can rarely be obtained. However, these parameters are the same for different compounds or for different elements in a migration system, and thus can be eliminated (Supplementary Text S-1.2) when Eq. (2) is substituted into Eq. (1):
$${K}_{r}=\frac{{R}_{d}-{R}_{dmin}}{{R}_{dmax}-{R}_{dmin}}\times 100( \% )$$
(3)
where Rdmax is the maximum value in a series of Rd values of polar compounds or elements; Rdmin is the minimum value. Rd is also controlled by the average velocity of migration and the difference in relative variation rates of concentrations with time at the starting point of a migration pathway between polar compounds. However, it is demonstrated that the RSC can also eliminate these two kinds of influences when Supplementary Equations (S11–S13) are substituted into Eq. (3) (see Supplementary Text S-1.2 for details):
$${K}_{r}=\frac{{a}_{\lambda max}-{a}_{\lambda }}{{a}_{\lambda max}-{a}_{\lambda min}}\times 100( \% )$$
(4)
where $${a}_{\lambda }$$ is a constant controlling migration-sorption fractionation (km−1) and can be derived from Supplementary Equation (S9); $${a}_{\lambda max}$$ is the maximum in a series of $${a}_{\lambda }$$ values of polar compounds (km−1); $${a}_{\lambda min}$$ is the minimum (km−1). Equation (4) provides a workable means to quantitatively evaluate sorption capacities of polar organic compounds or adsorbable trace elements.
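Equation (4) can likewise be computed for a series of aλ values; the numbers below are arbitrary illustrations, not fitted constants from the study.

```python
def kr_from_a_lambda(a_values):
    """Eq. (4): Kr = (a_max - a_lambda) / (a_max - a_min) * 100 (%).

    Per the sign convention of Eq. (4), the compound with the smallest
    migration-sorption constant a_lambda receives Kr = 100%.
    """
    a_min, a_max = min(a_values), max(a_values)
    return [100.0 * (a_max - a) / (a_max - a_min) for a in a_values]

# Arbitrary illustrative a_lambda values:
print(kr_from_a_lambda([1.0, 3.0, 5.0]))  # [100.0, 50.0, 0.0]
```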
To quantify equilibrium sorption capacities of polar organic compounds in migrating petroleum, we have established a new method for computing RSC (Kr values) of polar compounds in naturally migrating petroleum, on the basis of Eq. (4) (Supplementary Text S-1.2). The method for computing the relative sorption coefficient involves the following steps:
The 1st step is to conduct regression analysis using Supplementary Equation (S3) instead of Eq. (1) in Zhang et al.5, to obtain estimates of the values for the constants $${a}_{1}$$, $${a}_{2}$$, $${a}_{3}$$ and $${a}_{4}$$ that are more accurate than achievable with the previous equation in Zhang et al.5. The data preparation and the subsequent non-linear regression analyses are presented in Zhang et al.5. However, the non-linear regression analyses herein are conducted in an iterative manner (Supplementary Text S-1.2) to obtain more rational regression equations.
The 2nd step is to calculate the λ ratios (λ is the relative variation rate of the concentration at the reference point for a given polar compound) from Supplementary Equation (S8), the migration-sorption factor $${a}_{\lambda }$$ (a constant controlling migration-sorption fractionation) and finally the relative sorption coefficient Kr with Supplementary Equation (S9) and Eq. (4), respectively.
The Kr values of the alkylcarbazoles in the petroleum in the Xifeng Oilfield were calculated and are listed in Supplementary Table S2 (Supplementary Text S-1.2).
## References
1. Larter, S. R. et al. Molecular indicators of secondary oil migration distances. Nature 383, 593–597 (1996).
2. Larter, S. R. et al. An experimental investigation of geochromatography during secondary migration of petroleum performed under subsurface conditions with a real rock. Geochem. Trans. 9, 1–7 (2000).
3. Li, M., Larter, S. R., Stoddart, D. & Bjorøy, M. Fractionation of pyrrolic nitrogen compounds in petroleum during migration: derivation of migration related geochemical parameters. Geol. Soci. London Spe. Pub. 86, 103–123 (1995).
4. Yang, Y. L., Aplin, A. C. & Larter, S. R. Mathematical models of the distribution of geotracers during oil migration and accumulation. Petrol. Geosci. 11(1), 67–78 (2005).
5. Zhang, L., Li, M., Wang, Y., Yin, Q.-Z. & Zhang, W. A novel molecular index for secondary oil migration distance. Sci. Rep. 3, 2487 (2013).
6. Larter, S. R. & Aplin, A. C. Reservoir geochemistry: methods, applications and opportunities. Geol. Soci. London Spe. Pub. 86, 5–32 (1995).
7. Li, M., Larter, S. R. & Frolov, Y. B. Adsorptive interactions between petroleum nitrogen compounds and organic/mineral phases in subsurface rocks as models for compositional fractionation of pyrrolic nitrogen compounds in petroleum during petroleum migration. J. High Res. Chrom. 17, 230–236 (1994).
8. Delle Site, A. Factors affecting sorption of organic compounds in natural sorbent/water systems and sorption coefficients for selected pollutants. A review. J. Phys. Chem. Ref. Data 30(1), 187–439 (2001).
9. Larter, S. R. et al. Reservoir geochemistry: a link between reservoir geology and engineering? SPE Reser. Eng. 12(1), 12–17 (1997).
10. Li, M. Quantification of petroleum secondary migration distances: fundamentals and case histories. Petrol. Explor. Develop. 27, 11–19 (2000).
11. Clegg, H., Wilkes, H. & Horsfield, B. Carbazole distributions in carbonate and clastic source rocks. Geochim. Cosmochim. Acta 61, 5335–5345 (1997).
12. Clegg, H., Wilkes, H., Santamaria-Orozco, D. & Horsfield, B. Influence of maturity on carbazole and benzocarbazole distributions in crude oils and source rocks from the Sonda de Campeche, Gulf of Mexico. Org. Geochem. 29, 183–194 (1998).
13. Li, M., Yao, H., Stasiuk, L. D., Fowler, M. G. & Larter, S. R. Effect of maturity and petroleum expulsion on pyrrolic nitrogen compound yields and distributions in Duvernay Formation petroleum source rocks in central Alberta, Canada. Org. Geochem. 26, 731–744 (1997).
14. Galimberti, R., Ghiselli, C. & Chiaramonte, M. A. Acidic polar compounds in petroleum: a new analytical methodology and applications as molecular migration indices. Org. Geochem. 31, 1375–1386 (2000).
15. Bennett, B., Chen, M., Brincat, D., Gelin, F. J. P. & Larter, S. R. Fractionation of benzocarbazoles between source rocks and petroleums. Org. Geochem. 33, 545–559 (2002).
16. Hwang, R. J., Heidrick, T., Mertani, B. Q. & Li, M. Correlation and migration studies of North Central Sumatra oils. Org. Geochem. 33, 1361–1379 (2002).
17. Bakr, M. M. Y. & Wilkes, H. The influence of facies and depositional environment on the occurrence and distribution of carbazoles and benzocarbazoles in crude oils: a case study from the Gulf of Suez, Egypt. Org. Geochem. 33, 561–580 (2002).
18. Huang, H., Bowler, B. F. J., Zhang, Z., Oldenburg, T. B. P. & Larter, S. R. Influence of biodegradation on carbazole and benzocarbazole distributions in oil columns from the Liaohe Basin, NE China. Org. Geochem. 34, 951–969 (2003).
19. Lager, A., Russell, C. A., Love, G. D. & Larter, S. R. Hydropyrolysis of algae, bacteria, archaea and lake sediments: insights into the origin of nitrogen compounds in petroleum. Org. Geochem. 35, 1427–1439 (2004).
20. Bennett, B. & Olsen, S. D. The influence of source depositional conditions on the hydrocarbon and nitrogen compounds in petroleum from central Montana, USA. Org. Geochem. 38, 935–956 (2007).
21. Yamamoto, M., Taguchi, T. & Sasaki, K. Basic nitrogen compounds in bitumen and crude oils. Chem. Geol. 93, 193–206 (1991).
22. Yamamoto, M. Fractionation of azarenes during oil migration. Org. Geochem. 19, 389–402 (1992).
23. Joonaki, E., Buckman, J., Burgass, R. & Tohidi, B. Water versus asphaltenes; liquid–liquid and solid–liquid molecular interactions unravel the mechanisms behind an improved oil recovery methodology. Sci. Rep. 9, 11369 (2019).
24. Pradilla, D., Simon, S. & Sjöblom, J. Mixed interfaces of asphaltenes and model demulsifiers part I: Adsorption and desorption of single components. Colloids & Surfaces A: Physicochem. Eng. Aspects 466, 45–56 (2015).
25. Bai, Y. et al. Effects of the N, O, and S heteroatoms on the adsorption and desorption of asphaltenes on silica surface: A molecular dynamics simulation. Fuel 240, 252–261 (2019).
26. Yang, H. & Zhang, W. Leading effect of the seventh member high-quality source rock of the Yanchang Formation in the Ordos Basin during the enrichment of low-penetrating oil-gas accumulation: geology and geochemistry. Geochimica 34(2), 147–154 (2005).
27. Wu, M., Zhang, L., Luo, X., Mao, M. & Yang, Y. Analysis of hydrocarbon migration stages in the 8th member of the Yanchang Formation in the Xifeng Oilfield. Oil Gas. Geology 27(1), 33–36 (2006).
28. Zhang, L., Wu, M., Yang, W., Luo, X. & Chen, Z. A new model for the formation of the low-permeability oilfields in the Triassic Yanchang Formation of the Ordos Basin. In Coexisting Mechanisms, Accumulation and Distribution Laws of Oil, Gas, Coal and Uranium in the Ordos Basin (eds Liu, C. & Wu, B.) 601–631 (Science Press, Beijing, 2016).
29. Duan, Y., Wu, B., Zhang, H., Zheng, C. & Wang, C. Geochemistry and genesis of crude oils of the Xifeng Oilfield in the Ordos Basin. Acta Geol. Sinica 80(2), 301–310 (2006).
30. Wu, Y. Contamination Transportation Dynamics in Porous Media 86–93 (Shanghai Jiaotong University Press, Shanghai, 2007).
31. Chen, G. Applied Physical Chemistry 86–90 (Chemical Industry Press, Beijing, 2008).
32. Kurahashi, M., Fukuyo, M. & Shimada, A. The crystal and molecular structure of carbazole. Bull. Chem. Soci. Japan 42, 2174–2179 (1969).
33. Song, H. Organic Chemistry 2–4 (China Medical Sci. & Tech. Press, Beijing, 2005).
34. Chen, C., Bao, J. & Zhu, C. Molecular structural parameters and thermal stability of 1-methylcarbazole. Geol. Sci. Tech. Infor. 25(2), 57–59 (2006).
## Acknowledgements
We are grateful to Drs. Quan Shi and Minghui Wu who kindly provided assistance in oil sampling and analyses. Special thanks to Professor Lloyd Snowdon for his reviewing and English editing of the manuscript before submission. We also thank Professor Simon George, three anonymous reviewers and the journal editor for their insightful and constructive comments and suggestions. This research was supported by Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB10020203), China National Major Science 973 Projects (Nos. 2003CB214608 and 2014CB239101) and Guangdong Province Higher Education Pearl River Scholar Program.
## Author information
L.Z. designed research, originated the concept, developed its computation method. L.Z., Y.W., M. L. and Q.-Z. Y. analyzed the data and wrote the paper. W.Z. analyzed the data.
Correspondence to L. Zhang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Zhang, L., Wang, Y., Li, M. et al. Relative sorption coefficient: Key to tracing petroleum migration and other subsurface fluids. Sci Rep 9, 16845 (2019). doi:10.1038/s41598-019-52259-6
http://www.sciforums.com/threads/oil-crisis.41614/page-6

# Oil Crisis
Discussion in 'Earth Science' started by ck27, Oct 17, 2004.
Not open for further replies.
3. ### Ophiolite (Valued Senior Member)
If I notice you about to step out in front of a truck and cry out 'Look out. A truck.' thus causing you to stop your step towards oblivion, is that a worthless chicken little antic, or the reason you are still around to carry out your next dumb, life threatening move?
The question is rhetorical. You can sense my implied answer I'm sure.
5. ### TygerMoth (Registered Member)
Before I stepped onto the street, I noticed that you were blind, so should I pay attention to you?
7. ### suzukisfrog (Registered Senior Member)
these indians & chinese could always just order a pizza. when i'm real hungry i just get extra pepperoni & bacon.
8. ### Ophiolite (Valued Senior Member)
Your option, but as a blind person my sense of hearing is excellent. I know my other senses are limited and that this might be someone playing truck sounds on a high quality sound system, but I would be remiss if I did not issue the warning. You would be pretty stupid to ignore it. (Although I ignore the little voice that says 'let the idiots perish'.)
9. ### TygerMoth (Registered Member)
If your track record is as bad as the Club of Rome types', I would be safer ignoring your shouts of warning. What is bothering you? Is it the fact that India went from a country which was experiencing famine to a net exporter of food while doubling the size of its population, something which directly contradicted the warning shouts (predictions)?
10. ### Ophiolite (Valued Senior Member)
What is bothering me is your misinterpretation, through ignorance or intent, of the nature of the warning. The authors made it very clear what they were, and more importantly what they were not, taking into account. It was not a panic laden "we're all doomed" cry, but rather a sober, factual account of where certain trends would lead if corrective action were not taken. They were not responsible for the chicken little attitude adopted by elements of the media.
11. ### TygerMoth (Registered Member)
"It was not a panic laden "we're all doomed" cry, but rather a sober, factual account of where certain trends would lead if corrective action were not taken."
What do you think of the methodologies and models they used to make these predictions? Shortly after they published their findings, "Limits to Growth", the oil crisis of 1973 happened, which was caused purely by politics, not lack of resources. So, why did the authors let their study be misused by the energy industry? Why didn't they publish a series of articles in 1973 or 1974 stating that the oil crisis was not created by lack of resources but by politics? They could have cleared up the media confusion if they wanted to.
12. ### Ophiolite (Valued Senior Member)
Groundbreaking and innovative at the time. Simplistic and provincial today. We would not be working with the sophisticated economic and environmental models we do today if this, and similar, pioneering studies had not been conducted.
Politics and economics. The price was too low.
In what way did the energy industry misuse the study? Please be specific.
It is not the job of researchers to run around countering claims made by industry or media based on an erroneous interpretation of their work.
13. ### TruthSeeker (Fancy Virtual Reality Monkey, Valued Senior Member)
Exportation of food doesn't imply that the domestic market is healthy.
14. ### TygerMoth (Registered Member)
LOL, the only "groundbreaking and innovative at the time" thing they did was to use a super-computer to run simple exponential-increase calculations of "what if" scenarios. The same calculations can be run on your Excel spreadsheet today. At that time, there were better models which took into account feedback loops and self-correction. The Club of Rome presented the most simplistic model, based on erroneous assumptions, to make their point that eventually we will run out of resources. Neither their models nor their methodologies were innovative or groundbreaking.
"The price was too low" - it was based on market forces. The only way the oil price goes up is if there is market manipulation, not a lack of oil. If the price of crude oil goes up too high, then a wide range of alternate forms of energy become viable.
But for transportation, liquid hydrocarbon fuel is hard to beat in terms of convenience, range and price. Currently, the price of biodiesel is $1.25/gallon in the USA for large quantities. Biogasoline can be made from the same process as biodiesel with the addition of cracker units in the refining process. With the price of gasoline hovering around $2.00/gal, you may start seeing more interest in biogasoline production in the USA.
"In what way did the energy industry misuse the study? Please be specific." - The oil industry used the Club of Rome book as an excuse to put into action a range of policies, ranging from the invasion of other countries to something as mundane as raising the price of oil. But as you have already admitted, that report is "Simplistic and provincial today". Unfortunately, this flawed and simplistic report has been misused by a wide variety of people, ranging from race supremacists to doomsayers, for many years.
"It is not the job of researchers to run around countering claims made by industry or media based on an erroneous interpretation of their work." - lol, any serious researcher would go out of his/her way to correct misinterpretations of their work, especially if they were caused by limitations, mistakes or omissions in that work.
15. ### TygerMoth (Registered Member)
Truthseeker, I gave an example of experts being wrong in the case of India's ability to feed her growing population because of posts such as this
and the post you replied to,
I am new to this forum, so I did not have a chance to read your poverty thread.
An unhealthy domestic market in an exporting nation indicates a distribution problem, not a production problem. Proper resource allocation to fix these distribution problems is largely a matter of policy decisions, which are in turn based on our decision makers' personal interests.
16. ### TruthSeeker (Fancy Virtual Reality Monkey, Valued Senior Member)
Well, how much technology do you think they have in India? Yes, they export food. I've heard they are a big exporter of it. But I also heard of people starving in the streets. Distribution is always the problem. We have a very rich world here. You know.... one of the things that the poverty thread shows is that the richest 20% on the planet consume more than 80% of all the world's resources while the poorest 20% consume only 1.3%. There is an obvious lack of fair distribution of resources. How many people do you think Bush could have helped with the money he spent on that useless war? And of course, just feeding people is not going to do anything. We need to invest in their education as well as their ability to produce and sustain themselves. And population growth is also a key ingredient in the whole thing. Ever studied calculus? If you have, you can probably remember problems such as finding the limit of a population given this much in resources. If resources are scarce, bigger populations only worsen the problem.
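The "limit of a population" problem alluded to above is the logistic model from calculus, in which growth stalls at a resource-determined carrying capacity. A toy discrete-time sketch (all numbers illustrative, not a demographic model):

```python
def logistic_population(p0, r, K, steps):
    """Discrete logistic growth: p <- p + r * p * (1 - p / K).

    K is the carrying capacity set by available resources; the growth
    term shrinks to zero as the population p approaches K.
    """
    p = p0
    for _ in range(steps):
        p += r * p * (1.0 - p / K)
    return p

# Starting from 1 unit of population with 10% intrinsic growth,
# the population levels off just below the carrying capacity of 100.
final = logistic_population(1.0, 0.1, 100.0, 200)
```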
Ophiolite could order pizza with little pieces of chicken.
18. ### Ophiolite (Valued Senior Member)
You are trying to infect me with Asian Bird Flu, aren't you!
20. ### TruthSeeker (Fancy Virtual Reality Monkey, Valued Senior Member)
Hey Golgo.... are there graphs and statistics about that in this book?
21. ### Agitprop (Registered Senior Member)
Peak oil is likely a real phenomenon, with all of the attendant smoke and mirrors like price fixing, wars on terror, and rampant corruption. Trickery and reality aren't mutually exclusive, and trickery is most successful when occurring in tandem with a real event. It would be in the best interest of egocentrics to stop ridiculing others' concerns long enough to seriously consider the idea that ultimately the earth is finite, and will remain so, and technology is limited in its ability to correct such a stark physical reality.
The snide references to Malthusians and Club of Romers, is limited, narrow and tragically unhip.
22. ### Godless (Objectivist Mind, Registered Senior Member)
Such limited mentality was also common in the Dark Ages; look at us now. You can't predict what will happen to humanity after YOU perish. But the human spirit to continue is stronger than any freaking oil crisis; many wars will be fought, some won, some lost. We can't predict the future outcome of technology. If I were to explain to a simpleton 200 years ago that we would reach outer space and have vessels that reach the Moon, they would think of me as a madman.
We are barely scratching the surface of human ingenuity, however the disease of the mind, which is rampant around the globe, may be our doom. What is that disease? Mysticism.
Mysticism is the evil that has held us for so long, and mysticism will be our undoing.
Get rid of the disease, and human ingenuity would soar beyond the capacity of "Your" limited imagination.
Godless.
23. ### Golgo 13 (The Professional, Registered Senior Member)
I'm sure the spirit to continue was strong on Easter Island as well. It still didn't stop people from dying when they depleted their key resource.
You can't eat willpower.
The root of the problem is too many people, too much exponential growth, and not enough resources to sustain it. The ultimate solutions are to either
1. Find an infinite source of energy in this finite world so growth can be sustained until the Earth is 1 person per square meter on the dry-land surface of the planet and the globe glows hot from electricity usage, or
2. Controlling the population, or
3. Let nature take care of the population problem.
Either way, the problem is going to be taken care of. It's just a hell of a lot easier if we're the ones doing the controlling.
Any of you guys that are still under the illusion that we're going to have anywhere near the energy capacity of today post-peak are missing the issue entirely.
Energy resources must produce more energy than they consume, otherwise they are called "sinks" (this is known as the "net energy" principle). About 735 joules of energy is required to lift 15 kg of oil 5 meters out of the ground just to overcome gravity -- and the higher the lift, the greater the energy requirements. The most concentrated and most accessible oil is produced first; thereafter, more and more energy is required to find and produce oil. At some point, more energy is spent finding and producing oil than the energy recovered -- and the "resource" has become a "sink".
There is an enormous difference between the net energy of the "highly-concentrated" fossil fuel that power modern industrial society, and the "dilute" alternative energy we will be forced to depend upon as fossil fuel resources become sinks.
No so-called "renewable" energy system has the potential to generate more than a tiny fraction of the power now being generated by fossil fuels!
- Energy Synopsis
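The 735-joule figure in the quoted passage is just gravitational potential energy, E = mgh. A quick sanity check (g ≈ 9.8 m/s² assumed):

```python
def lift_energy_joules(mass_kg, height_m, g=9.8):
    """Minimum energy to raise a mass against gravity: E = m * g * h."""
    return mass_kg * g * height_m

# The post's figure: lifting 15 kg of oil 5 m costs about
# 15 * 9.8 * 5 ≈ 735 J, before any pumping losses.
energy = lift_energy_joules(15, 5)
```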
The end of growth in consumption of energy means the end of growth. That means the population cannot grow, energy use cannot grow, and the economy cannot grow.
The same also holds true for decline in energy resources. Everything follows.
Last edited: Mar 15, 2005
http://docs.astropy.org/en/latest/api/astropy.modeling.functional_models.TrapezoidDisk2D.html

# TrapezoidDisk2D
class astropy.modeling.functional_models.TrapezoidDisk2D(amplitude=1, x_0=0, y_0=0, R_0=1, slope=1, **kwargs)[source]
Two dimensional circular Trapezoid model.
Parameters
amplitude : float
Amplitude of the trapezoid
x_0 : float
x position of the center of the trapezoid
y_0 : float
y position of the center of the trapezoid
R_0 : float
Radius of the constant part of the trapezoid.
slope : float
Slope of the tails of the trapezoid in x direction.
Other Parameters
fixed : dict, optional
A dictionary {parameter_name: boolean} of parameters to not be varied during fitting. True means the parameter is held fixed. Alternatively the fixed property of a parameter may be used.
tied : dict, optional
A dictionary {parameter_name: callable} of parameters which are linked to some other parameter. The dictionary values are callables providing the linking relationship. Alternatively the tied property of a parameter may be used.
bounds : dict, optional
A dictionary {parameter_name: value} of lower and upper bounds of parameters. Keys are parameter names. Values are a list or a tuple of length 2 giving the desired range for the parameter. Alternatively, the min and max properties of a parameter may be used.
eqcons : list, optional
A list of functions of length n such that eqcons[j](x0, *args) == 0.0 in a successfully optimized problem.
ineqcons : list, optional
A list of functions of length n such that ieqcons[j](x0, *args) >= 0.0 in a successfully optimized problem.
Attributes Summary
R_0
amplitude
input_units — This property is used to indicate what units or sets of units the evaluate method expects, and returns a dictionary mapping inputs to units (or None if any units are accepted).
param_names
slope
x_0
y_0
Methods Summary
evaluate(x, y, amplitude, x_0, y_0, R_0, slope) — Two dimensional Trapezoid Disk model function
Attributes Documentation
R_0 = Parameter('R_0', value=1.0)
amplitude = Parameter('amplitude', value=1.0)
input_units
This property is used to indicate what units or sets of units the evaluate method expects, and returns a dictionary mapping inputs to units (or None if any units are accepted).
Model sub-classes can also use function annotations in evaluate to indicate valid input units, in which case this property should not be overridden since it will return the input units based on the annotations.
param_names = ('amplitude', 'x_0', 'y_0', 'R_0', 'slope')
slope = Parameter('slope', value=1.0)
x_0 = Parameter('x_0', value=0.0)
y_0 = Parameter('y_0', value=0.0)
Methods Documentation
static evaluate(x, y, amplitude, x_0, y_0, R_0, slope)[source]
Two dimensional Trapezoid Disk model function
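The radial profile is simple to state: constant amplitude out to R_0, then a linear ramp down to zero. Below is a minimal pure-Python sketch of that profile; it is an illustrative reimplementation, not astropy's code, and the function name is mine:

```python
import math

def trapezoid_disk_2d(x, y, amplitude=1.0, x_0=0.0, y_0=0.0, R_0=1.0, slope=1.0):
    """Radially symmetric trapezoid: flat top of radius R_0, linear tails."""
    r = math.hypot(x - x_0, y - y_0)
    if r <= R_0:
        return amplitude
    # The tail falls off linearly and reaches zero at r = R_0 + amplitude / slope.
    return max(amplitude - slope * (r - R_0), 0.0)
```

With the default parameters the disk has value 1 inside the unit circle and vanishes beyond r = 2.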
https://nemocas.github.io/AbstractAlgebra.jl/ytabs.html | Partitions and Young tableaux
# Partitions and Young tableaux
AbstractAlgebra.jl provides basic support for computations with Young tableaux, skew diagrams and the characters of permutation groups (implemented in src/generic/YoungTabs.jl). All functionality of permutations is accessible in the Generic submodule.
## Partitions
The basic underlying object for those concepts is Partition of a number $n$, i.e. a sequence of positive integers $n_1, \ldots, n_k$ which sum to $n$. Partitions in AbstractAlgebra.jl are represented internally by non-increasing Vectors of Ints. Partitions are printed using the standard notation, i.e. $9 = 4 + 2 + 1 + 1 + 1$ is shown as $4_1 2_1 1_3$ with the subscript indicating the count of a summand in the partition.
Partition(part::Vector{<:Integer}[, check::Bool=true]) <: AbstractVector{Int}
Represent integer partition in the non-increasing order.
part will be sorted, if necessary. Checks for validity of input can be skipped by calling the (inner) constructor with false as the second argument.
Functionally Partition is a thin wrapper over Vector{Int}.
Fieldnames:
• n::Int - the partitioned number
• part::Vector{Int} - a non-increasing sequence of summands of n.
Examples:
julia> p = Partition([4,2,1,1,1])
4₁2₁1₃
julia> p.n == sum(p.part)
true
source
### Array interface
Partition is a concrete subtype of AbstractVector{Int} and implements the following standard Array interface:
size(p::Partition)
Return the size of the vector which represents the partition.
Examples:
julia> p = Partition([4,3,1]); size(p)
(3,)
source
getindex(p::Partition, i::Integer)
Return the i-th part (in decreasing order) of the partition.
source
setindex!(p::Partition, v::Integer, i::Integer)
Set the i-th part of partition p to v. setindex! will throw an error if the operation violates the non-increasing assumption.
source
These functions work on the level of the p.part vector. Additionally, setindex! will try to prevent uses which would result in invalid partition vectors (i.e. ones violating the non-increasing assumption).
One can easily iterate over all partitions of $n$ using the AllParts type:
AllParts(n::Int)
Return an iterator over all integer Partitions of n. Partitions are produced in ascending order according to RuleAsc (Algorithm 3.1) from
Jerome Kelleher and Barry O’Sullivan, Generating All Partitions: A Comparison Of Two Encodings ArXiv:0909.2331
See also Combinatorics.partitions(1:n).
Examples
julia> ap = AllParts(5);
julia> collect(ap)
7-element Array{AbstractAlgebra.Generic.Partition,1}:
1₅
2₁1₃
3₁1₂
2₂1₁
4₁1₁
3₁2₁
5₁
source
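The ascending-order generation scheme is short enough to sketch directly. The following is an illustrative Python transcription of Kelleher and O'Sullivan's RuleAsc (Algorithm 3.1), not the Julia implementation used by AllParts:

```python
def rule_asc(n):
    """Yield every partition of n as an ascending composition (RuleAsc)."""
    a = [0] * (n + 1)
    k = 1
    a[1] = n
    while k != 0:
        x = a[k - 1] + 1   # smallest allowed next part
        y = a[k] - 1       # remainder to split up
        k -= 1
        while x <= y:
            a[k] = x
            y -= x
            k += 1
        a[k] = x + y
        yield a[:k + 1]
```

For n = 5 this yields the 7 partitions shown above, smallest parts first.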
The number of all partitions can be computed by the hidden function _numpart. A much faster implementation is available in Nemo.jl.
_numpart(n::Integer)
Returns the number of all distinct integer partitions of n. The function uses Euler's pentagonal number theorem for a recursive formula. For more details see OEIS sequence A000041. Note that _numpart(0) = 1 by convention.
source
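Euler's pentagonal number theorem gives the recurrence $p(n) = \sum_{k\ge 1}(-1)^{k-1}\bigl(p(n - k(3k-1)/2) + p(n - k(3k+1)/2)\bigr)$ with $p(0) = 1$. A memoized Python sketch of what _numpart computes (the function name here is mine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def numpart(n):
    """Number of integer partitions of n, via Euler's pentagonal recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while True:
        g1 = k * (3 * k - 1) // 2  # generalized pentagonal numbers
        g2 = k * (3 * k + 1) // 2
        if g1 > n and g2 > n:
            break
        sign = -1 if k % 2 == 0 else 1
        total += sign * (numpart(n - g1) + numpart(n - g2))
        k += 1
    return total
```

As a sanity check, numpart(5) agrees with the 7 partitions listed by AllParts(5) above.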
Since Partition is a subtype of AbstractVector, generic functions which operate on vectors should work in general. However, the meaning of conj has been changed to agree with the traditional understanding of conjugation of partitions:
conj(part::Partition)
Returns the conjugated partition of part, i.e. the partition corresponding to the Young diagram of part reflected through the main diagonal.
Examples:
julia> p = Partition([4,2,1,1,1])
4₁2₁1₃
julia> conj(p)
5₁2₁1₂
source
conj(part::Partition, v::Vector)
Returns the conjugated partition of part together with permuted vector v.
source
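Conjugation just counts column lengths: the j-th part of the conjugate is the number of parts that are at least j. As a short Python sketch (illustrative, not the library code):

```python
def conjugate(part):
    """Conjugate of a non-increasing integer partition given as a list."""
    if not part:
        return []
    # Column j of the Young diagram has one box for every row of length >= j.
    return [sum(1 for p in part if p >= j) for j in range(1, part[0] + 1)]
```

conjugate([4, 2, 1, 1, 1]) gives [5, 2, 1, 1], matching the 5₁2₁1₂ shown above, and conjugation is an involution.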
## Young Diagrams and Young Tableaux
Mathematically speaking, a Young diagram is a diagram which consists of rows of square boxes such that the number of boxes in each row is no greater than the number of boxes in the previous row. For example, the partition $4_1 3_2 1_1$ represents the following diagram.
┌───┬───┬───┬───┐
│ │ │ │ │
├───┼───┼───┼───┘
│ │ │ │
├───┼───┼───┤
│ │ │ │
├───┼───┴───┘
│ │
└───┘
A Young tableau is formally a bijection between the set of boxes of a Young diagram and the set $\{1, \ldots, n\}$. If the bijection is increasing along rows and columns of the diagram, it is referred to as standard. For example
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┼───┤
│ 8 │ 9 │10 │
├───┼───┴───┘
│11 │
└───┘
is a standard Young tableau of $4_1 3_2 1_1$, where the bijection assigns consecutive natural numbers to consecutive (row-major) cells.
### Constructors
In AbstractAlgebra.jl Young tableaux are implemented as essentially row-major sparse matrices, i.e. YoungTableau <: AbstractArray{Int,2}, but only the defining Partition and the (row-major) fill vector are stored.
YoungTableau(part::Partition[, fill::Vector{Int}=collect(1:sum(part))]) <: AbstractArray{Int, 2}
Returns the Young tableau of partition part, filled linearly by the fill vector. Note that the fill vector is in row-major format.
Fields:
• part - the partition defining Young diagram
• fill - the row-major fill vector: the entries of the diagram.
Examples:
julia> p = Partition([4,3,1]); y = YoungTableau(p)
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> y.part
4₁3₁1₁
julia> y.fill
8-element Array{Int64,1}:
1
2
3
4
5
6
7
8
source
For convenience there exists an alternative constructor of YoungTableau, which accepts a vector of integers and constructs Partition internally.
YoungTableau(p::Vector{Integer}[, fill=collect(1:sum(p))])
### Array interface
To make YoungTableaux array-like we implement the following functions:
size(Y::YoungTableau)
Return size of the smallest array containing Y, i.e. the tuple of the number of rows and the number of columns of Y.
Examples:
julia> y = YoungTableau([4,3,1]); size(y)
(3, 4)
source
getindex(Y::YoungTableau, n::Integer)
Return the column-major linear index into the size(Y)-array. If a box is outside of the array return 0.
Examples:
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> y[1]
1
julia> y[2]
5
julia> y[4]
2
julia> y[6]
0
source
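The index arithmetic behind this lookup can be spelled out in a few lines: convert the column-major linear index to a (row, column) pair, check the box against the partition, then read the row-major fill vector. A hypothetical Python sketch (names are mine):

```python
def yt_getindex(part, fill, n):
    """Column-major linear index into a tableau with row lengths `part`
    and row-major fill vector `fill`; returns 0 outside the diagram."""
    rows = len(part)
    i = (n - 1) % rows + 1   # 1-based row
    j = (n - 1) // rows + 1  # 1-based column
    if j > part[i - 1]:
        return 0             # box (i, j) lies outside the Young diagram
    # Offset of box (i, j) in the row-major fill vector.
    return fill[sum(part[:i - 1]) + j - 1]
```

For part = [4, 3, 1] and fill = 1..8 this reproduces y[1] == 1, y[2] == 5, y[4] == 2 and y[6] == 0 from the example above.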
Also the double-indexing corresponds to (row, column) access to an abstract array.
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> y[1,2]
2
julia> y[2,3]
7
julia> y[3,2]
0
Functions defined for AbstractArray type based on those (e.g. length) should work. Again, as in the case of Partition the meaning of conj is altered to reflect the usual meaning for Young tableaux:
conj(Y::YoungTableau)
Returns the conjugated tableau, i.e. the tableau reflected through the main diagonal.
Examples
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> conj(y)
┌───┬───┬───┐
│ 1 │ 5 │ 8 │
├───┼───┼───┘
│ 2 │ 6 │
├───┼───┤
│ 3 │ 7 │
├───┼───┘
│ 4 │
└───┘
source
### Pretty-printing
Similarly to permutations we have two methods of displaying Young Diagrams:
setyoungtabstyle(format::Symbol)
Select the style in which Young tableaux are displayed (in REPL or in general as string). This can be either
• :array - as matrices of integers, or
• :diagram - as filled Young diagrams (the default).
The difference is purely aesthetic.
Examples:
julia> Generic.setyoungtabstyle(:array)
:array
julia> p = Partition([4,3,1]); YoungTableau(p)
1 2 3 4
5 6 7
8
julia> Generic.setyoungtabstyle(:diagram)
:diagram
julia> YoungTableau(p)
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
source
### Utility functions
matrix_repr(Y::YoungTableau)
Construct sparse integer matrix representing the tableau.
Examples:
julia> y = YoungTableau([4,3,1]);
julia> matrix_repr(y)
3×4 SparseMatrixCSC{Int64,Int64} with 8 stored entries:
[1, 1] = 1
[2, 1] = 5
[3, 1] = 8
[1, 2] = 2
[2, 2] = 6
[1, 3] = 3
[2, 3] = 7
[1, 4] = 4
source
fill!(Y::YoungTableau, V::Vector{<:Integer})
Replace the fill vector Y.fill by V. No check that the resulting tableau is standard (i.e. increasing along rows and columns) is performed.
Examples:
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> fill!(y, [2:9...])
┌───┬───┬───┬───┐
│ 2 │ 3 │ 4 │ 5 │
├───┼───┼───┼───┘
│ 6 │ 7 │ 8 │
├───┼───┴───┘
│ 9 │
└───┘
source
## Characters of permutation groups
Irreducible characters (at least over a field of characteristic $0$) of the full group of permutations $S_n$ correspond via Specht modules to partitions of $n$.
character(lambda::Partition)
Return the $\lambda$-th irreducible character of the permutation group on sum(lambda) symbols. The returned character function is of the following signature:
chi(p::perm[, check::Bool=true]) -> BigInt
The check (whether p belongs to the appropriate group) can be switched off by calling chi(p, false). The values computed by $\chi$ are cached in a look-up table.
The computation follows the Murnaghan-Nakayama formula: $\chi_\lambda(\sigma) = \sum_{\text{rimhook }\xi\subset \lambda}(-1)^{ll(\lambda\backslash\xi)} \chi_{\lambda \backslash\xi}(\tilde\sigma)$ where $\lambda\backslash\xi$ denotes the skew diagram of $\lambda$ with $\xi$ removed, $ll$ denotes the leg-length (i.e. the number of rows minus 1) and $\tilde\sigma$ is the permutation obtained from $\sigma$ by the removal of the longest cycle.
For more details see e.g. Chapter 2.8 of Group Theory and Physics by S.Sternberg.
Examples
julia> G = PermutationGroup(4)
Permutation group over 4 elements
julia> chi = character(Partition([3,1])) # character of the regular representation
(::char) (generic function with 2 methods)
julia> chi(G())
3
julia> chi(perm"(1,3)(2,4)")
-1
source
character(lambda::Partition, p::perm, check::Bool=true) -> BigInt
Returns the value of the lambda-th irreducible character of the permutation group on permutation p.
source
character(lambda::Partition, mu::Partition) -> BigInt
Returns the value of the lambda-th irreducible character on the conjugacy class represented by partition mu.
source
The values computed by characters are cached in an internal dictionary Dict{Tuple{BitVector,Vector{Int}}, BigInt}. Note that all of the above functions return BigInts. If you are sure that the computations do not overflow, variants of the last two functions using Int are available:
character(::Type{Int}, lambda::Partition, p::perm[, check::Bool=true])
character(::Type{Int}, lambda::Partition, mu::Partition[, check::Bool=true])
The dimension $\dim \lambda$ of the irreducible module corresponding to partition $\lambda$ can be computed using Hook length formula
rowlength(Y::YoungTableau, i, j)
Return the row length of Y at box (i,j), i.e. the number of boxes in the i-th row of the diagram of Y located to the right of the (i,j)-th box.
Examples
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> Generic.rowlength(y, 1,2)
2
julia> Generic.rowlength(y, 2,3)
0
julia> Generic.rowlength(y, 3,3)
0
source
collength(Y::YoungTableau, i, j)
Return the column length of Y at box (i,j), i.e. the number of boxes in the j-th column of the diagram of Y located below of the (i,j)-th box.
Examples
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> Generic.collength(y, 1,1)
2
julia> Generic.collength(y, 1,3)
1
julia> Generic.collength(y, 2,4)
0
source
hooklength(Y::YoungTableau, i, j)
Return the hook-length of an element in Y at position (i,j), i.e. the number of cells in the i-th row to the right of the (i,j)-th box, plus the number of cells in the j-th column below the (i,j)-th box, plus 1.
Return 0 for (i,j) not in the tableau Y.
Examples
julia> y = YoungTableau([4,3,1])
┌───┬───┬───┬───┐
│ 1 │ 2 │ 3 │ 4 │
├───┼───┼───┼───┘
│ 5 │ 6 │ 7 │
├───┼───┴───┘
│ 8 │
└───┘
julia> hooklength(y, 1,1)
6
julia> hooklength(y, 1,3)
3
julia> hooklength(y, 2,4)
0
source
dim(Y::YoungTableau) -> BigInt
Returns the dimension (using the hook-length formula) of the irreducible representation of the permutation group $S_n$ associated with the partition Y.part.
Since the computation overflows easily, BigInt is returned. You may perform the computation of the dimension in a different type by calling dim(Int, Y).
Examples
julia> dim(YoungTableau([4,3,1]))
70
julia> dim(YoungTableau([3,1])) # the regular representation of S_4
3
source
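The hook length formula itself is $\dim\lambda = n!\,/\prod_{(i,j)\in\lambda} h(i,j)$, where each hook length decomposes as rowlength + collength + 1. A Python sketch of the computation (illustrative helper, assuming a non-empty partition; the name is mine):

```python
from math import factorial

def hook_dim(part):
    """Dimension of the S_n irreducible for partition `part`, via hook lengths."""
    # Conjugate partition gives the column lengths of the diagram.
    cols = [sum(1 for p in part if p >= j) for j in range(1, part[0] + 1)]
    prod = 1
    for i, row_len in enumerate(part, start=1):
        for j in range(1, row_len + 1):
            # hook length at (i, j) = arm + leg + 1
            prod *= (row_len - j) + (cols[j - 1] - i) + 1
    return factorial(sum(part)) // prod
```

hook_dim([4, 3, 1]) returns 70 and hook_dim([3, 1]) returns 3, agreeing with the dim calls above.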
The character associated with Y.part can also be used to compute the dimension, but, as expected, the Murnaghan-Nakayama formula is much slower, even though (due to caching) consecutive calls are fast:
julia> λ = Partition(collect(12:-1:1))
12₁11₁10₁9₁8₁7₁6₁5₁4₁3₁2₁1₁
julia> @time dim(YoungTableau(λ))
0.224430 seconds (155.77 k allocations: 7.990 MiB)
9079590132732747656880081324531330222983622187548672000
julia> @time dim(YoungTableau(λ))
0.000038 seconds (335 allocations: 10.734 KiB)
9079590132732747656880081324531330222983622187548672000
julia> G = PermutationGroup(sum(λ))
Permutation group over 78 elements
julia> @time character(λ, G())
24.154105 seconds (58.13 M allocations: 3.909 GiB, 42.84% gc time)
9079590132732747656880081324531330222983622187548672000
julia> @time character(λ, G())
0.001439 seconds (195 allocations: 24.453 KiB)
9079590132732747656880081324531330222983622187548672000
### Low-level functions and characters
As mentioned above character functions use the Murnaghan-Nakayama rule for evaluation. The implementation follows
Dan Bernstein, The computational complexity of rules for the character table of $S_n$ Journal of Symbolic Computation, 37 (6), 2004, p. 727-748,
implementing the following functions. For precise definitions and meaning please consult the paper cited.
partitionseq(lambda::Partition)
Returns a sequence (as BitVector) of falses and trues constructed from lambda: tracing the lower contour of the Young Diagram associated to lambda from left to right a true is inserted for every horizontal and false for every vertical step. The sequence always starts with true and ends with false.
source
partitionseq(seq::BitVector)
Returns the essential part of the sequence seq, i.e. a subsequence starting at first true and ending at last false.
source
isrimhook(R::BitVector, idx::Int, len::Int)
R[idx:idx+len] forms a rim hook in the Young Diagram of partition corresponding to R iff R[idx] == true and R[idx+len] == false.
source
MN1inner(R::BitVector, mu::Partition, t::Int, [charvals])
Returns the value of $\lambda$-th irreducible character on conjugacy class of permutations represented by partition mu, where R is the (binary) partition sequence representing $\lambda$. Values already computed are stored in charvals::Dict{Tuple{BitVector,Vector{Int}}, Int}. This is an implementation (with slight modifications) of the Murnaghan-Nakayama formula as described in
Dan Bernstein,
"The computational complexity of rules for the character table of Sn"
_Journal of Symbolic Computation_, 37(6), 2004, p. 727-748.
source
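The same recursion can be sketched without the binary partition sequence by working with beta-numbers (first-column hook lengths): removing a rim hook of length t replaces a beta-number b by b - t, with sign determined by how many beta-numbers lie strictly between them. A hypothetical Python version, not the library's BitVector implementation:

```python
def mn_character(lam, mu):
    """chi_lam evaluated on the conjugacy class of cycle type mu (|lam| == sum(mu))."""
    if sum(lam) == 0:
        return 1
    t = mu[0]
    k = len(lam)
    beta = [lam[i] + (k - 1 - i) for i in range(k)]  # distinct beta-numbers
    bs = set(beta)
    total = 0
    for b in beta:
        nb = b - t
        if nb < 0 or nb in bs:
            continue  # no rim hook of length t can be removed here
        # Leg length = number of beta-numbers strictly between nb and b.
        leg = sum(1 for c in beta if nb < c < b)
        new_beta = sorted((bs - {b}) | {nb}, reverse=True)
        m = len(new_beta)
        new_lam = [new_beta[i] - (m - 1 - i) for i in range(m)]
        new_lam = [p for p in new_lam if p > 0]
        total += (-1) ** leg * mn_character(new_lam, list(mu[1:]))
    return total
```

For λ = [3, 1] this gives 3 on the identity class 1⁴ (the dimension) and -1 on the class 2², matching the character examples earlier in this section.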
## Skew Diagrams
Skew diagrams are formally differences of two Young diagrams. Given $\lambda$ and $\mu$, two partitions of $n+m$ and $m$ respectively, suppose that each cell of $\mu$ is a cell of $\lambda$ (i.e. the parts of $\mu$ are no greater than the corresponding parts of $\lambda$). Then the skew diagram, denoted by $\lambda/\mu$, is the set-theoretic difference of the sets of boxes, i.e. a diagram with exactly $n$ boxes:
SkewDiagram(lambda::Partition, mu::Partition) <: AbstractArray{Int, 2}
Implements a skew diagram, i.e. a difference of two Young diagrams represented by partitions lambda and mu (below, dots symbolise the removed entries).
Examples
julia> l = Partition([4,3,2])
4₁3₁2₁
julia> m = Partition([3,1,1])
3₁1₂
julia> xi = SkewDiagram(l,m)
3×4 AbstractAlgebra.Generic.SkewDiagram:
⋅ ⋅ ⋅ 1
⋅ 1 1
⋅ 1
source
SkewDiagram implements array interface with the following functions:
size(xi::SkewDiagram)
Return the size of array where xi is minimally contained. See size(Y::YoungTableau) for more details.
source
in(t::Tuple{T,T}, xi::SkewDiagram) where T<:Integer
Checks if box at position (i,j) belongs to the skew diagram xi.
source
getindex(xi::SkewDiagram, n::Integer)
Return 1 if linear index n corresponds to (column-major) entry in xi.lam which is not contained in xi.mu. Otherwise return 0.
source
The support for skew diagrams is very rudimentary. The following functions are available:
isrimhook(xi::SkewDiagram)
Checks if xi represents a rim-hook diagram, i.e. its diagram is edge-connected and contains no $2\times 2$ squares.
source
leglength(xi::SkewDiagram[, check::Bool=true])
Computes the leglength of a rim-hook xi, i.e. the number of rows with non-zero entries minus one. If check is false, the function will not check whether xi is actually a rim-hook.
source
matrix_repr(xi::SkewDiagram)
Returns a sparse representation of the diagram xi, i.e. a sparse array A where A[i,j] == 1 if and only if (i,j) is in xi.lam but not in xi.mu.
source | 2018-10-17 21:29:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6654866933822632, "perplexity": 2965.5397693506793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511216.45/warc/CC-MAIN-20181017195553-20181017221053-00087.warc.gz"} |
https://blog.givewell.org/2011/06/21/guest-post-from-eric-friedman/ | # Guest post from Eric Friedman
This is a guest post from Eric Friedman about how he decided what charity to support for his most recent donation. We requested this post along the lines of earlier posts by Jason Fehr, Ian Turner and Dario Amodei.
In 2003, I decided that I wanted to increase the amount of money I gave away, shortly after a two-week trip to India. It was there that I saw a level of poverty beyond the scope of anything I had ever seen in America, and I privately vowed that I could not stand by idly.
When I returned home to Chicago, I tried to figure out the most effective way to give and the best organizations to support. Before my trip to India, I had made a few $100-$200 gifts that I later regretted, and this time around I was not going to give away a dime until I was convinced that my donation would be used well. Unfortunately, I was not able to find the information I needed to be comfortable making a donation. Despite the promise I had made to myself, I gave nothing.
Fast forward a year, and the big tsunami hit east Asia in December 2004. The images on tv motivated me to give $1,000 to an organization well-known for disaster relief, and that reminded me that I had not done what I planned on doing after returning from India. Although I restarted my research to find an organization I wanted to give more to, I couldn’t figure out which organizations were strong. Everything seemed like fluff, and there was no “Consumer Reports” for nonprofits. I barely gave anything in 2005.
By 2006, I came to realize that there wasn’t much high-quality information on which nonprofits performed best, so if I wanted to give, I’d have to make do without it. Inaction was unacceptable, so I developed a plan. I started with the American Institute of Philanthropy, which rates organizations on financial efficiency metrics such as percentage of costs that go towards fundraising and overhead. This provided a starting point to identify organizations that might be good, then I reviewed their websites to pick three that I liked. I knew that this was not a particularly rigorous screen, but it was the best I knew how to do at the time. In 2006, I gave $2,500 to CARE, $2,500 to Africare, and $1,000 to Freedom from Hunger.
I figured that donations of this size might draw enough attention to have a serious discussion with their staff. I spoke with each organization, but was unsatisfied with all of the conversations. Whenever I asked them about how to provide the most help for people or evaluate them against other organizations, their responses were inadequate. They were filled with anecdotes about individuals they’ve helped and inspirational stories, but not much information that would genuinely help answer my questions. I was quite surprised when some of them actually asked what I liked most among their set of programs (e.g. education, clean water, healthcare, emergency relief, etc). Weren’t they supposed to be more knowledgeable than me about which of these is most effective?
I asked if other donors asked these types of questions, and they said that it was rare. I asked if big, sophisticated foundations ask these types of questions. They didn’t either. I found that to be exceptionally odd. (Ed note: this was similar to GiveWell staff’s experience before starting GiveWell. See Elie’s blog post from February 2007) (Since then, I’ve asked several different nonprofits about the grant-making process and post-grant relationships they have with big foundations, and there appears to be a surprisingly small amount of value-added in the process. In some cases, the foundations are actually requiring the nonprofits to spend the grants on things the nonprofits don’t think will best help the intended recipients.)
I also spoke with a couple of philanthropy consultants, who I expected to be better at evaluating organizations and selecting priorities. I was disappointed. Usually they would turn the question on me and ask what types of programs and organizations interested me. I explained that I wanted to support programs that were most effective at helping people and organizations that were best at executing those programs, and I was looking for information on how to do that. Other than that, I didn’t really care what type of program or organization it was. While that seemed relatively basic to me, it appeared as if I was speaking a different language. One told me that I needed to figure out what my objectives were, though I thought I had stated them clearly. A couple were somewhat condescending—implying that they were the experts and I was the one who needed help. Their responses to my questions didn’t give me much confidence in their expertise. I wondered if I was the only donor trying to structure my giving around what the world needed rather than my personal interests. (Ed note: for more, see this Tactical Philanthropy post on the rarity of issue agnostic giving.)
Maybe I was being too critical. I knew that I wasn’t asking easy questions, but they also didn’t seem unfair. I wasn’t expecting an objectively perfect answer, but just something better than I got.
A turning point happened a few weeks later. My contact at Freedom from Hunger called me to see if I wanted to meet their CEO, who was going to be in my hometown for a conference. We had a great discussion for about an hour and a half in a hotel lobby, and it became clear to me that he understood what I was trying to do and thought about many of the same issues when running Freedom from Hunger. While I still didn’t know how to evaluate the quality of programs at different organizations, I found one that had shared many aspects of my philosophy. In 2007, I donated $25,000. Based on the information I had at the time, there was no organization I believed in more than Freedom from Hunger.
In early 2007, my life took a turn for the better: I met the woman who is now my wife. In 2008, Freedom from Hunger offered us the opportunity to join them (at our expense) on a site visit to some of their programs in Ghana. During our time there, we spent a significant amount of time with some of their senior staff (including the CEO), three board members, program staff, and their clients. We saw the programs in action, which increased our conviction in what they do. We gave another $28,000 to Freedom from Hunger in 2008.
I started following GiveWell in 2009. It was clear from their blog that we shared similar values, and I loved what they were trying to do. But I disagreed with some aspects of their approach. Their emphasis on measurement seemed excessive. This approach had a built-in bias towards smaller, single-program organizations that could measure their impact more precisely. I wasn’t convinced that there weren’t economies of scale in international development. And their focus on scaling up existing solutions and excluding funding unproven innovations seemed incomplete. While I liked what they were doing, I still had more conviction in my own ability to pick organizations.
We gave Freedom from Hunger another $27,000 in 2009. As I continued to follow GiveWell’s blog in 2010, I became more persuaded toward their views on areas where I previously disagreed. There were still differences of opinion, but I was also coming to realize that their skill in selecting organizations far exceeded mine. It was humbling to realize that someone else is better at something I had put so much thought into.
At that time, GiveWell had not evaluated Freedom from Hunger. For the first time in several years, I wasn’t sure where to donate to. I had a very candid conversation with Freedom from Hunger about this predicament, and they offered to have GiveWell evaluate them. GiveWell was also willing to do the evaluation—they already wanted to learn more about Freedom from Hunger independent of me. The evaluation resulted in a “notable” rating, which is better than the vast majority of the organizations GiveWell considers, but also not nearly as strong as Gold.
My wife and I liked the people at Freedom from Hunger and had become personally connected with them, especially with the site visit to Ghana. They are extraordinary people who have devoted their lives to helping others, and they are really good at what they do. They might be the best at their specific niche in the nonprofit world. Despite this, GiveWell’s review suggested that there might be organizations in different niches that have a greater possibility of generating results. If helping others were a sport, Freedom from Hunger is good enough to qualify for the Olympics, but it didn’t win the Gold. My wife and I had several conversations about what to do. Freedom from Hunger did nothing wrong and we had no regrets about our prior donations. They were our friends, and we had enough of a relationship with them that if we shifted our donations elsewhere, we’d have to explain why.
Eventually, we decided that there was one fundamental principle we should apply: giving was primarily about helping the less fortunate, not our friendships or personal interests. Breaking up with Freedom from Hunger would be hard. I explained our reasoning and they took it in stride, demonstrating that they care more about the less fortunate than their own institutional growth. They are a good group. But in 2010, we gave about $31,000 to GiveWell’s donor advised fund to ultimately be distributed as they recommended.
I imagine that there are other donors who read this blog, but donate to many organizations that are not recommended by GiveWell. While I don’t want to oversimplify the decision-making involved with large charitable gifts or pretend that I have all the answers, I will offer two pieces of advice.
First, know what you’re trying to do. I’ve heard many people say that philanthropy is very personal. I understand that view, and my own giving is close to my heart. But if giving is primarily about helping others, then the most important component of giving should be about other people. That is, the donor’s personal friendships, interests, and passions should take a back seat. Although you may feel a close connection to a school you attended, an illness that affected a family member, or a community you live in, those may not be the areas positioned to provide the most help for others. Instead, donors primarily focused on helping others should identify the greatest areas of need and the most effective solutions. It can be tough to put other people’s needs over ours, but, ironically, it makes most donors feel better about giving in the end. I certainly do.
Second, know when someone else has more expertise than you. I originally viewed Freedom from Hunger as the best organization I could identify based on the information available to me at the time. And I had thought about it a lot. So it was personally challenging for me to acknowledge that GiveWell is better at evaluating charitable organizations. Neither my wife nor I agree with every aspect of GiveWell’s philosophy and approach—I doubt there is anyone who does—but the strengths they have seemed more than enough to outweigh any weaknesses we perceived. There is a certain pride of ownership many donors (including me) have as they develop their own philanthropic paths, and I’d encourage them to critically self-evaluate to make sure pride of ownership doesn’t get in the way of incorporating the expertise of others.
I am extremely appreciative for the work GiveWell has done to provide resources that were not available at the time I started giving. I get more personal satisfaction from knowing that my giving is doing more to help others, and I will have fewer reservations about opening my wallet wider in the future. To be completely frank, one thing that confuses me is why foundations and mega-donors making million-plus dollar gifts apparently make little use of GiveWell. I hope and expect this to change over time.
• Vipul Naik on June 23, 2011 at 9:39 am said:
Spelling error in your first Ed Note: you wrote “staring” instead of “starting”
• Chuck S'r on June 26, 2011 at 12:41 pm said:
Dear Mr. Friedman, the human and intelligent description of your frustrations following your donations in 2006 is very plausible and is for me somewhat touching.
Throughout your post there are expressions of a gracious human being.
Thank you.
• Chris Dunford, Freedom from Hunger on July 1, 2011 at 8:57 pm said:
Dear Eric,
You and I already have had a deep discussion by phone about this matter long before you wrote this blog post, but I would like to state publicly my personal admiration for your thoughtful, careful approach to philanthropy. Too often, donors (even institutional grantmakers, as you suggest) act on what seems like whimsical emotion rather than rational calculation. You two are to be applauded for putting evidence before personal relationship as the decisive factor in your decision-making. And it is true that GiveWell appears to offer the most evidence-based approach to determining which organizations are most effectively helping the less fortunate.
However, “helping the less fortunate” is a multi-dimensional task, and I do not believe GiveWell has yet captured this complexity in its assessments of organizational effectiveness. What GiveWell is especially good at is identifying organizations that have built programs based on solid evidence of impact and then have used that scientific evidence to construct meaningful quality assurance systems to make sure that their programs consistently deliver services in the way that the impact research has shown to be effective.
That there are so few such organizations identified by GiveWell is troubling, however. One could say this is a very poor reflection on the state of international development efforts. But it may also be a negative reflection on the suitability of the GiveWell approach. That is, only the Cadillacs and Rolls Royces can capture the gold, and there are few of these on the road for some good reasons, not just because international development practitioners are sloppy stewards of donor funds.
GiveWell’s system overweights the one dimension of impact and seems to ignore two other key dimensions — scale and sustainability. The Cadillac programs in international development typically are small in scale of outreach to intended beneficiaries, because they are expensive. There is a trade-off between the high cost of proving and constantly assuring impact (though I believe this is crucial and therefore applaud GiveWell’s attention to this dimension) and the ability of organizations to reduce their costs per person served in order to reach large numbers of people in need. Moreover, this trade-off is even more pronounced if the organization is trying to sustain its outreach over years or decades without endless dependence on philanthropic subsidy. I would argue that Freedom from Hunger does not qualify for better than “notable” rating by GiveWell, because we are trying to support a variety of local organizations to pursue a multi-dimensional balance between assurance of impact, scale of outreach and sustainability of operations over long periods of time. Being “notable” in each of these dimensions would be high praise indeed.
Personal relationships may not be valid determinants of philanthropic action, but personal interests cannot be eliminated. In fact, they are indispensable for making the very tough, stubbornly intuitive decisions about one’s philanthropic strategy in a multi-dimensional and still uncertain world of people striving both to do better (the poor) and to do good (the better off who would help the poor). A donor should have a theory of change, but each theory comes with some assumptions and biases about what will help most. There is no one theory and certainly no absolutes–development is all about people, and therefore harder than rocket science.
By depending solely on GiveWell’s assessments (which is a reasonable decision), you are choosing a particular bias and set of assumptions constituting a theory of change that overweights impact at the expense of scale and sustainability. This is a subjective rather than an objective choice, and I deeply respect your transparency and thoughtfulness in making that choice.
Your donations have done and will continue to do a great deal of good for the very poor of this world (even if you cannot always be sure of that!). Thank you!
Warmly,
Chris
• Samuel Lee on July 2, 2011 at 3:21 am said:
Eric, thanks for sharing. I also wrestled with rejecting my own work in favor of GiveWell’s analysis. But I did, and I think the world is a tiny bit better for it. And that’s all that matters.
• Jonathan Lewis on July 3, 2011 at 12:32 am said:
Hi, Eric. Let me salute your very thoughtful, considered blog post. An eye-opener for many, I am sure. And, so reflective of the process that so many of us experience as well.
I am in the process of blogging at the Huffington Post on this very subject (http://www.huffingtonpost.com/jonathan-lewis) so I won’t duplicate that effort. Please do read my thoughts.
My only caution is that you have elected to outsource your own judgment as well as your trusted confidence in programs you know in favor of robotic analysis. Poverty is multi-disciplinary in nature, and narrow analytics may not be the evaluation cure-all you seek. Think about it.
• Holden on July 6, 2011 at 10:42 am said:
Chris, thanks for the thoughtful comment. We agree that some of the key decisions here come down to one’s subjective worldview rather than simply to facts, and we’re glad that you’ve highlighted this and shared your own perspective.
• Boris Yakubchik on July 6, 2011 at 12:14 pm said:
Eric, thank you for sharing your story.
I wholeheartedly agree with your advice: “But if giving is primarily about helping others, then that the most important component of giving should be about other people” and I too am very thankful that GiveWell provides support to those with a similar mindset.
• mtporter on August 16, 2013 at 2:54 pm said:
I found my way to this essay after reading about Friedman’s forthcoming book on this topic. I’m a follower of Peter Singer’s work, and I’ve seen comparisons between his work and Friedman’s utilitarian-rationalist analytic approach.
Based on this essay (though I look forward to seeing these points developed further in the book itself), what I suspect may be missing from the utilitarian assessment is the role that personal gratification may play in influencing individuals’ willingness to commit more of their resources to philanthropy. As Friedman says above, philanthropy is principally about helping others — but for most donors, it is rarely exclusively so. Furthermore, sudden “conversions” like Friedman’s, from a low level of interest in giving, to a much larger commitment, are relatively rare. Instead, most individual givers begin at low levels, and their generosity increases over time in a manner commensurate with their personal enthusiasm for the work their money is doing.
Given that, while it’s valuable to encourage well-established givers to rely more heavily on resources like GiveWell, in order to link their efforts to sound empirical analyses of the results they’ll achieve, it may be counterproductive to counsel Friedman’s own exclusive commitment to impartial impact analyses within the larger community of casual philanthropists. Expert empirical assessments are a valuable tool for inexperienced givers, but so is passion, and that sense of personal connection that may create the motivation to give in the first place.
Encouraging prospective donors to dispense with these latter motivations altogether may mean a higher impact at the initial level of contribution — but it may also mean contributions that grow much more slowly over time, if donors are less likely to develop the same level of interest in and engagement with the work their money is doing. In the end, someone who gives 10% of their income to a charity whose impact is only 50% of the optimum still does more good than someone who makes more efficient choices but whose commitment to giving stagnates at only 3% of income.
With a view to nurturing sustained and increased philanthropic commitments in the longer term, advice to would-be charitable givers must teach them to consider efficiency, but should not push them to disregard passion. Notably, GiveWell’s own site does contain some tools for accommodating a more balanced approach, providing information about charities operating in a variety of regions and issue areas, and offering best recommendations for users committed to a particular focus — even while also highlighting greater needs and opportunities that may exist elsewhere, so as to potentially engage donors’ interests in additional or alternative charitable projects. The approach of offering information about “notable” as well as “gold” charities also recognizes that donors may have their own reasons for wishing to select from a larger pool of options, and makes it easier to balance personal gratification with some degree of efficiency in giving. Moreover, striking that balance may well result in most donors’ doing more good with their money in the long run. 
# Pulmonary Function and Related Tests
Authors: Lori A. Wilken and Min J. Joo
## OBJECTIVES
After completing this chapter, the reader should be able to
• Identify common pulmonary function tests and list their purpose and limitations
• Describe how pulmonary function tests are performed and discuss factors affecting the validity of the results
• Interpret commonly used pulmonary function tests, given clinical information
• Discuss how pulmonary function tests provide objective measurement to aid in the diagnosis of pulmonary diseases
• Discuss how pulmonary function tests assist with monitoring efficacy and toxicity of various drug therapies
Pulmonary function tests (PFTs) provide objective and quantifiable measures of lung function and are useful in diagnosing, evaluating, and monitoring respiratory disease. Diagnosing and monitoring many pulmonary diseases, including diseases of gas exchange, often require measuring the flow or volume of air inhaled and exhaled by the patient. Spirometry, a test that measures the movement of air into and out of the lungs during various breathing maneuvers, is the most frequently used PFT. Clinicians use spirometry to aid in the diagnosis of respiratory diseases such as asthma and chronic obstructive pulmonary disease (COPD). Other tests of lung function include lung volume assessment, carbon monoxide diffusion capacity (DLCO), exercise testing, and bronchial provocation tests. Arterial blood gases (ABGs) can be measured with PFTs and are useful for assessing lung function. (Interpretation of arterial blood gases is discussed in Chapter 13.) This chapter discusses the mechanics and interpretation of PFTs.
## ANATOMY AND PHYSIOLOGY OF LUNGS1
The purpose of the lungs is to take oxygen from the atmosphere and exchange it for carbon dioxide in the blood. The movement of air in and out of the lungs is called ventilation; the movement of blood through the lungs is termed perfusion.
Air enters the body through the mouth and nose and travels through the pharynx to the trachea. The trachea splits into the left and right main stem bronchi, which deliver inspired air to the respective lungs. The left and right lungs are in the pleural cavity of the thorax. These two spongy, conical structures are the primary organs of respiration. The right lung has three lobes, whereas the left lung has only two lobes, thus leaving space for the heart. Within the lungs, the main bronchi continue to split successively into smaller bronchi, bronchioles, terminal bronchioles, and finally alveoli. In the alveoli, carbon dioxide is exchanged for oxygen across a thin membrane separating capillary blood from inspired air.
The thoracic cavity is separated from the abdominal cavity by the diaphragm. The diaphragm, a thin sheet of dome-shaped muscle, contracts and relaxes during breathing. The lungs are contained within the rib cage but rest on the diaphragm. Between the ribs are two sets of intercostal muscles, which attach to each upper and lower rib. During inhalation, the intercostal muscles and the diaphragm contract, which enlarges the thoracic cavity. This action generates a negative intrathoracic pressure, allowing air to rush in through the nose and mouth down into the pharynx, trachea, and lungs. During exhalation, these muscles relax, and a positive intrathoracic pressure causes air to be pushed out of the lungs. Normal expiration is a passive process that results from the natural recoil of the expanded lungs. However, in people with rapid or labored breathing or airflow limitation, the accessory muscles and abdominal muscles often must contract to help force air out of the lungs more quickly or completely.
The ability of the lungs to expand and contract to inhale and exhale air is affected by the compliance of the lungs, which is a measure of the ease of expansion of the lungs and thorax. Processes that result in scarring of lung tissue (eg, pulmonary fibrosis) can decrease compliance, thus decreasing the flow and volume of air moved by the lungs, and increase the work to breathe. The degree of ease in which air travels through the airways is known as resistance. The length and radius of the airways as well as the viscosity of the gas inhaled determine resistance. A patient with a high degree of airway resistance may not be able to take a full breath in or exhale fully (some air may become trapped in the lungs).
To have an adequate exchange of the gases, there must be a matching of ventilation (V) and perfusion (Q) at the alveolar level. An average V:Q ratio, determined by dividing total alveolar ventilation (4 L/min) by cardiac output (5 L/min), is 0.8. A mismatch of ventilation and perfusion may result from a shunt or dead space. A shunt occurs when there is flow of blood adjacent to alveoli that are not ventilated. This could be physiologic (eg, at rest, some alveoli are collapsed or partially opened but perfused) or pathologic when alveoli are filled with fluid (eg, heart failure) or cellular debris (eg, pneumonia) or are collapsed (eg, atelectasis). A shunt can also occur when airways are obstructed by mucus or collapse on exhalation (eg, COPD). In a shunt, blood moves from the venous circulation to the arterial circulation without being oxygenated.
Dead space occurs when there is ventilation of functional lung tissue without adjacent blood flow for gas exchange. Dead space can be physiologic (eg, the trachea) or pathologic because of obstruction of blood flow (eg, pulmonary embolism). The body uses a few mechanisms to normalize the V:Q ratio, such as hypoxic vasoconstriction and bronchoconstriction. When the V:Q ratio is low, hypoxic vasoconstriction leads to decreased perfusion to the hypoxic regions of the lungs, thus redirecting perfusion to functional areas of the lungs, which leads to an increase in the V:Q ratio. When the V:Q ratio is high, the bronchi constrict in areas that are not well perfused, which leads to a decrease in the amount of ventilation to areas that are not well perfused, a decrease in the amount of alveolar dead space, and a decrease in the V:Q ratio.
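The average V:Q ratio quoted above is simple arithmetic; a one-line sketch with the typical resting values from the text (variable names are illustrative):

```python
# Average ventilation-perfusion (V:Q) ratio at rest, using the typical
# values quoted in the text: 4 L/min alveolar ventilation over
# 5 L/min cardiac output.
alveolar_ventilation = 4.0   # V, L/min
cardiac_output = 5.0         # Q, L/min

vq_ratio = alveolar_ventilation / cardiac_output
print(vq_ratio)  # 0.8
```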
For the respiration process to be complete, gas diffusion must occur between the alveoli and the pulmonary capillaries. By the diffusion mechanism, gases equilibrate from areas of high concentration to areas of low concentration. Hemoglobin (Hgb) releases carbon dioxide and adsorbs oxygen as it diffuses through the alveolar walls. If these walls thicken, diffusion is hampered, potentially causing carbon dioxide retention, hypoxia, or both. Membrane formation with secondary thickening of the alveolar wall may result from an acute or chronic inflammatory process such as interstitial pneumonia and pulmonary fibrosis. The pulmonary diffusing capacity is also reduced in the presence of a V:Q mismatch, loss of lung surface areas (eg, emphysema, lung resection), or decrease in oxygen-carrying capacity (eg, anemia). The various PFTs can measure airflow in or out of the lungs, indicate how much air is in the lungs, and provide information on gas diffusion or specific changes in airway tone or reactivity.
## CLINICAL USE OF PULMONARY FUNCTION TESTING
Pulmonary function tests are useful in many clinical situations.2 They aid in the diagnostic differentiation of various pulmonary diseases. PFT results are divided into two types of pulmonary abnormalities: obstructive and restrictive lung diseases. Obstructive diseases (eg, asthma and COPD) decrease the flow rate of air (liters/minute) out of the lungs but have less impact on the total volume of air per breath. In restrictive diseases (eg, kyphosis or sarcoidosis), the lungs are limited in the amount of air they can contain. Restrictive diseases usually decrease the total volume of air per breath in a similar ratio to the flow rate of air. Table 14-1 summarizes common pulmonary disease states with PFT results.
TABLE 14-1.

### Pulmonary Disease States and Common PFT Results

| Pulmonary Abnormality | Pathophysiology | Disease State Examples | FEV1/FVC | FEV1 | FVC | RV | TLC |
|---|---|---|---|---|---|---|---|
| Obstructive lung disease, chronic | Fixed airflow limitation | Asthma with fixed airflow limitation, COPD, cystic fibrosis, bronchiectasis | Decreased | Decreased | Normal or decreased | Normal or increased | Normal or increased |
| Obstructive lung disease, reversible and stable | Reversible (eg, bronchoconstriction) | Asthma | Normal | Normal | Normal | Normal | Normal |
| Restrictive lung disease | Parenchymal infiltration or fibrosis | Idiopathic pulmonary fibrosis and other idiopathic interstitial pneumonias, drug induced, secondary to autoimmune diseases, sarcoidosis | Normal or increased | Decreased | Decreased | Decreased | Decreased |
| Restrictive lung disease | Extrathoracic compression | Kyphosis, morbid obesity, ascites, chest wall deformities, pregnancy | Normal or increased | Decreased | Decreased | Decreased | Decreased |
| Restrictive lung disease | Neuromuscular causes | Guillain-Barré syndrome, myasthenia gravis, muscular dystrophy, amyotrophic lateral sclerosis | Normal or increased | Decreased | Decreased | Decreased | Decreased |
| Mixed obstructive and restrictive | Combinations of restrictive and obstructive processes | Both restrictive and obstructive diseases | Decreased | Decreased | Decreased | Increased, normal, or decreased | Decreased |

FEV1 = forced expiratory volume in 1 second; FVC = forced vital capacity; RV = residual volume; TLC = total lung capacity
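The broad patterns in Table 14-1 can be summarized as a rough screening sketch. This is didactic only, not a clinical tool: the function name and the fixed cutoffs (the 0.70 ratio convention used by GOLD, and an illustrative 80%-of-predicted FVC threshold) are assumptions for illustration; formal interpretation compares measurements against the LLN, and restriction must be confirmed with lung volumes.

```python
def classify_pattern(fev1_fvc_ratio, fvc_pct_predicted):
    """Rough spirometry pattern per Table 14-1 (didactic sketch only).

    fev1_fvc_ratio: FEV1/FVC as a fraction (eg, 0.65)
    fvc_pct_predicted: FVC as a percent of the predicted value
    """
    obstructed = fev1_fvc_ratio < 0.70   # airflow limitation (GOLD convention)
    low_volume = fvc_pct_predicted < 80  # restrictive *pattern* only;
                                         # lung volumes confirm restriction
    if obstructed and low_volume:
        return "mixed obstructive/restrictive pattern"
    if obstructed:
        return "obstructive pattern"
    if low_volume:
        return "restrictive pattern (confirm with lung volumes)"
    return "normal spirometry"

print(classify_pattern(0.64, 82))  # obstructive pattern
print(classify_pattern(0.82, 65))  # restrictive pattern (confirm with lung volumes)
```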
In addition, serial PFTs allow tracking of the progression of pulmonary diseases and the need for or response to various treatments. They also help to establish a baseline of respiratory function before surgical, medical, or radiation therapy. Subsequent serial measurements then aid in the detection and tracking of changes in lung function caused by these therapies. Similarly, serial PFTs can be used to evaluate the risk of lung damage from exposure to environmental or occupational hazards. Table 14-2 summarizes the selected uses of PFTs.
TABLE 14-2.
### Selected Uses of PFTs
**Diagnosis**

- Evaluate signs and symptoms of respiratory disease
- Screen at-risk individuals for pulmonary disease

**Evaluation**

- Assess the health status before initiating physical activity or rehabilitation
- Determine preoperative risk of having pulmonary-related issues during surgery

**Monitoring**

- Describe the course of lung function from a respiratory disease
- Monitor respiratory changes for occupational or environmental exposure to toxins
- Assess therapeutic drug effectiveness (eg, inhaled corticosteroids or bronchodilators for asthma)
- Monitor adverse drug effects on pulmonary function (eg, amiodarone)
## PULMONARY FUNCTION TESTS AND MEASUREMENTS
Pulmonary function tests use equations based on an individual’s age, height, sex, and race (when available) to calculate reference values from the population. The reference equations most commonly used for spirometry are the National Health and Nutrition Examination Survey III and, more recently, the Global Lung Function Initiative (GLI)-2012.3 The individual’s measurement is then compared with the calculated reference values and the lower limit of normal (LLN). The LLN is set at the fifth percentile, indicating that if the measured value is less than the lower fifth percentile of a normal population, it is considered reduced and may be associated with disease. Using both the reference measurement and the LLN helps decrease overdiagnosis by removing the age bias seen with fixed-value cutoffs.3
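The percent-predicted and LLN comparison described above can be sketched as follows. The helper name is hypothetical, and the `predicted` and `lln` inputs would come from published reference equations (eg, GLI-2012), which are not reproduced here:

```python
def interpret_measurement(measured, predicted, lln):
    """Express a PFT measurement as percent of predicted and flag values
    below the lower limit of normal (fifth percentile of the reference
    population). `predicted` and `lln` must come from reference equations
    for the patient's age, height, sex, and race (when available)."""
    pct_predicted = 100.0 * measured / predicted
    below_lln = measured < lln
    return pct_predicted, below_lln

# Hypothetical patient: measured FEV1 1.8 L, predicted 3.0 L, LLN 2.4 L
pct, reduced = interpret_measurement(1.8, 3.0, 2.4)
print(round(pct, 1), reduced)  # 60.0 True
```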
### Spirometry
Spirometry is a PFT that helps detect airflow limitation that can be manifested in asthma or COPD. Spirometry measures the flow of air in volume per time. The physical forces of the airflow and the total amount of air inhaled and exhaled are converted by transducers to electrical signals, which are displayed on a computer screen.
During this maneuver, a volume-time curve—a plot of the volume exhaled against time—and a flow-volume curve or flow-volume loop—a diagram with flow (liters/second) on the vertical axis and volume on the horizontal axis (liters)—are generated as the report (Figure 14-1). After the data are generated, the patient’s spirometry results are compared with the reference values. The flow-volume curve is visually useful for diagnosing airflow limitation. The Global Initiative for Chronic Obstructive Lung Disease (GOLD) strategy suggests suspecting COPD in patients >40 years old with symptoms and/or risk factors and recommends spirometry to definitively diagnose COPD.5 Once diagnosed with COPD, spirometry, in conjunction with symptoms and history of exacerbations, can be used to monitor disease state severity.5 When asthma is suspected, spirometry can be used to assess for airflow variation and is recommended at the time of diagnosis, 3 to 6 months after starting treatment, at least every 1 to 2 years, and as needed to assess ongoing risk of exacerbations.6
### Spirometry Measurements
Spirometry routinely assesses forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), and FEV1/FVC.
#### Forced Vital Capacity
The FVC is the total volume of air, measured in liters, forcefully and rapidly exhaled in one breath (from maximum inhalation to end of forced expiration). End of forced expiration is achieved when there is less than a 0.025 L change in volume for at least 1 second, or the forced expiratory time has reached 15 seconds, or the FVC is within 0.150 L of another FVC measurement if the patient is older than 6 years of age. When the full inhalation-exhalation procedure is repeated slowly—instead of forcefully and rapidly—it is called the slow vital capacity (SVC). This value is the maximum amount of air exhaled after a full and complete inhalation. In patients with normal airway function, FVC and SVC are usually similar and constitute the vital capacity. In patients with diseases such as COPD, the FVC may be lower than the SVC due to collapse of narrowed or floppy airways during forced expiration. Because of this, some interpretive strategies recommend using the FEV1/SVC ratio to determine the presence of airflow limitation, especially for pronounced airflow limitation.5
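The end-of-forced-expiration criteria above can be collected into a small check (a sketch with a hypothetical helper name; real spirometer software evaluates the continuous volume-time signal):

```python
def end_of_forced_expiration(volume_change_last_second_l,
                             forced_expiratory_time_s,
                             fvc_l,
                             best_other_fvc_l=None,
                             age_years=18):
    """True when any end-of-test criterion from the text is met:
    a plateau (<0.025 L change in volume over at least 1 second),
    a forced expiratory time of 15 seconds, or (for patients older
    than 6 years) an FVC within 0.150 L of another FVC measurement."""
    plateau = volume_change_last_second_l < 0.025
    timed_out = forced_expiratory_time_s >= 15.0
    repeatable = (age_years > 6
                  and best_other_fvc_l is not None
                  and abs(fvc_l - best_other_fvc_l) <= 0.150)
    return plateau or timed_out or repeatable

print(end_of_forced_expiration(0.010, 8.2, 3.41))  # True (plateau reached)
```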
#### Forced Expiratory Volume in One Second
The full, forced inhalation-exhalation procedure was already described as the FVC. During this maneuver, the computer can discern the amount of air exhaled at specific time intervals of the FVC. By convention, FEV0.5, FEV0.75, FEV1, FEV3, and FEV6 are the amounts of air exhaled after one-half, three-fourths, 1, 3, and 6 seconds, respectively. Usually, a patient’s value is described in liters and as a percentage of a predicted value based on reference values adjusted for age, height, sex, and race. Of these measurements, FEV1 has the most clinical relevance, primarily as an indicator of airway function. A value ≥80% of the predicted normal value or greater than the LLN is considered normal. Normal values can be seen in patients with asthma when the disease is mild or well controlled. FEV1 is an important value for predicting clinical outcomes, such as mortality, hospitalizations, and lung transplantation.5 For children aged 6 years and younger, FEV0.75 is used instead of FEV1 if the forced expiratory time is less than 1 second.4
#### Forced Expiratory Volume in One Second/Forced Vital Capacity
The ratio of FEV1 to the FVC is used to estimate the presence and amount of airflow limitation in the airways. This ratio indicates the amount of air mobilized in 1 second as a percentage of the total amount of movable air. Normal, healthy individuals can exhale approximately 50% of their FVC in the first one-half second, about 80% in 1 second, and about 98% in 3 seconds. Patients with obstructive disease usually show a decreased ratio, and the actual percentage reduction varies with the severity of airflow limitation. In COPD, the GOLD strategy defines persistent airflow limitation as a postbronchodilator FEV1/FVC ratio <0.70.5 Table 14-3 summarizes the definition of airflow limitation severity for COPD. Minicase 1 discusses how spirometry is used to diagnose COPD.
TABLE 14-3.

##### Severity of Airflow Limitation for COPD with the Postbronchodilator FEV1/FVC <0.7

| GOLD Grade | Severity | Postbronchodilator FEV1 (% Predicted) |
|---|---|---|
| 1 | Mild | ≥80 |
| 2 | Moderate | 50–79 |
| 3 | Severe | 30–49 |
| 4 | Very severe | <30 |
Refer to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2022 Report4 for more information.
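Table 14-3 translates directly into a small grading helper (a sketch; the function name is illustrative, and it assumes a postbronchodilator FEV1/FVC <0.70 has already been confirmed):

```python
def gold_grade(postbd_fev1_pct_predicted):
    """GOLD airflow-limitation grade per Table 14-3.

    Input is the postbronchodilator FEV1 as a percent of predicted.
    Assumes postbronchodilator FEV1/FVC < 0.70 is already confirmed.
    The >= cutoffs reproduce the table's integer ranges (eg, 50-79)."""
    if postbd_fev1_pct_predicted >= 80:
        return 1, "Mild"
    if postbd_fev1_pct_predicted >= 50:
        return 2, "Moderate"
    if postbd_fev1_pct_predicted >= 30:
        return 3, "Severe"
    return 4, "Very severe"

print(gold_grade(65.69))  # (2, 'Moderate') -- Debra T.'s value in Minicase 1
```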
Spirometry can also show airflow variability necessary for the diagnosis of asthma. However, frequency of asthma symptoms, quick-relief medication use, and level of medications required to control symptoms are also necessary to assess asthma severity.
Generally, the FEV1/FVC is normal (or high) in patients with restrictive diseases. In mild restriction, the FVC alone may be decreased, resulting in a high ratio. Often in restrictive lung disease, both the FVC and FEV1 are similarly reduced compared with predicted values, resulting in a normal ratio. This pattern is consistent with restriction on spirometry, but lung volumes are needed to confirm the diagnosis.
#### Flow-Volume Curves
Figure 14-2 shows several flow-volume curves in which the expiratory flow is plotted against the exhaled volume. As explained earlier, these curves are graphic representations of inspiration and expiration. The shape of the curve can indicate both the type of disease and the severity of airflow limitation. Obstructive changes result in decreased airflow, revealing a characteristic concave appearance. Restrictive changes result in a shape similar to that of a healthy individual, but the size is considerably smaller. The flow-volume loop also reveals mixed obstructive and restrictive disease by a combination of the two patterns.
#### Standardization of Spirometry Measurements
Spirometry is performed by having a person breathe into a tube (mouthpiece) connected to a machine (spirometer) that measures the amount and flow of inhaled and exhaled air. Prior to performing spirometry, relative contraindications such as recent brain, eye, or sinus surgery are assessed.4 Spirometry results depend greatly on the completeness and speed of the patient’s inhalation and exhalation, so the importance of completely filling and emptying the lungs of air during the test is emphasized. During spirometry, nose clips are worn to minimize air loss through the nose. The patient is seated comfortably without leaning or slumping, and any restrictive clothing (such as ties or tight belts) is loosened or removed. The patient is coached to take a full deep breath in and then blast the air out as quickly and forcefully as possible and to keep blowing the air out, while maintaining an upright posture, until all the air is exhaled. The patient is then instructed to inspire with maximal effort until completely full. This maneuver can be repeated up to eight times in adults if needed to meet testing standards.
##### Using Spirometry to Diagnose Asthma and COPD
Debra T. is a 56-year-old woman who reports chronic cough and shortness of breath when walking up a few stairs. She has been admitted several times each year for COPD exacerbations and pneumonia. She has a 40 pack-year-history of tobacco use. She is allergic to dust mites and dogs and has had a history of asthma since childhood. Today, on exam, she is wheezing and has nasal congestion.
QUESTION: How do the results from this patient’s spirometry test support the diagnosis of asthma and COPD?
DISCUSSION: A postbronchodilator FEV1/FVC <0.70 is consistent with COPD using the GOLD criteria4 in the right clinical setting. Her postbronchodilator FEV1/FVC of 0.643 is <0.70, consistent with a diagnosis of COPD. A postbronchodilator FEV1 of 65.69% of predicted is considered moderate airflow limitation, or GOLD Grade 2 COPD.
Her FEV1 increased by more than 12% and 200 mL, which are the criteria for a positive bronchodilator test. Patients with asthma and COPD can have a positive bronchodilator test, but patients with asthma usually have a more extreme response. In addition, her clinical picture substantiates a diagnosis of both asthma and COPD: significant smoking history and shortness of breath on exertion are common with COPD, whereas allergies are associated more with asthma. Many patients, like Debra T., have both asthma and COPD that can be detected with PFTs and need to be treated appropriately.
| PFT | LLN | Prebronchodilator Measured | Prebronchodilator % Predicted | Postbronchodilator Measured | Postbronchodilator % Predicted | % Change |
|---|---|---|---|---|---|---|
| FVC (L) | 2.07 | 1.70 | 66 | 2.13 | 82.48 | +24.97 |
| FEV1 (L) | 1.59 | 1.15 | 55.18 | 1.37 | 65.69 | +19.04 |
| FEV1/FVC | 0.696 | 0.676 | — | 0.643 | — | — |
Like most medical tests, spirometry has seen changes over the years in equipment, computer support, and recommendations for standardization. In an effort to maximize the usefulness of spirometry results, the American Thoracic Society (ATS), in conjunction with the European Respiratory Society (ERS), developed standardizations of spirometry testing.3,4 These recommendations are intended to decrease the variability of spirometry testing by improving the performance of the test. The recommendations cover equipment, quality control, the training and education of people conducting the test, and the training of patients performing the test. The recommendations also provide criteria for acceptability and usability of the patient’s spirometry efforts and guidelines on interpreting the spirometry test results.

For acceptability and usability of both the FEV1 and FVC results, three criteria must be met. The first criterion is that the back-extrapolated volume (BEV) must be ≤5% of the FVC or 0.100 L, whichever is greater. BEV is the volume of gas exhaled before the start of forced expiration; it would be too high if a patient leaked out air before maximal exhalation. Spirometers display the BEV calculation. The second criterion is that the measurement must have no evidence of a faulty zero-flow setting, or airflow through the mouthpiece sensor before the start of the test; newer spirometers use technology to detect this error and alert the user. The last criterion is that there must be no glottic closure in the first second of expiration. Glottic closure appears as a flat line on the volume-time graph and shows a stop in airflow. Because the results of spirometry depend on the patient’s effort, at least three acceptable efforts are obtained, with a goal of having the two highest measurements of FVC and FEV1 vary ≤0.150 L if the patient is >6 years of age, or ≤0.100 L or 10% of the highest value (whichever is greater) for patients 6 years of age and younger.4
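Two of the quantitative checks above — the BEV limit and the between-effort repeatability goal — can be sketched as follows (function names are illustrative; a spirometer applies these alongside the zero-flow and glottic-closure checks):

```python
def bev_acceptable(bev_l, fvc_l):
    """Back-extrapolated volume must be <=5% of the FVC or 0.100 L,
    whichever is greater."""
    return bev_l <= max(0.05 * fvc_l, 0.100)

def repeatability_met(fvc_values_l, age_years):
    """The two highest FVC values should agree within 0.150 L (>6 years)
    or within the greater of 0.100 L or 10% of the highest value
    (6 years and younger)."""
    top_two = sorted(fvc_values_l, reverse=True)[:2]
    diff = top_two[0] - top_two[1]
    if age_years > 6:
        return diff <= 0.150
    return diff <= max(0.100, 0.10 * top_two[0])

print(bev_acceptable(0.08, 3.2))                  # True (limit is 0.16 L here)
print(repeatability_met([3.41, 3.35, 3.28], 45))  # True (3.41 - 3.35 = 0.06 L)
```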
#### Bronchodilator Responsiveness Test
One of many tests that may be useful in the diagnostic workup of asthma is spirometry with bronchodilator responsiveness to assess airflow variability. Before the testing day, the patient is asked to hold short-acting β2-agonists (SABA) for at least 4 hours; twice daily long-acting β2-agonists (LABA) for at least 12 hours and once-daily LABA for at least 24 hours; and ultra-LABA for 36 hours and long-acting muscarinic antagonists for 48 hours.4 Spirometry is performed at baseline and then again 15 to 30 minutes after the administration of an inhaled SABA. A positive bronchodilator response is defined as an improvement of the postbronchodilator FEV1 and FVC by at least 12% and 200 mL from the prebronchodilator measurement.7 The Global Initiative for Asthma defines airway reversibility in adults as an increase in FEV1 of at least 12% and 200 mL from the prebronchodilator measurement, with more confidence of a positive test with an increase in FEV1 of at least 15% and 400 mL from baseline. For children, an increase of at least 12% of predicted for the FEV1 is considered positive.6 Minicase 1 illustrates how spirometry is used to confirm COPD and how the bronchodilator reversibility study is useful for diagnosing asthma.
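The 12%-and-200-mL rule above can be expressed as a short check. This is an illustrative sketch only; the function name and structure are not from the source:

```python
def positive_bronchodilator_response(pre_fev1_l: float, post_fev1_l: float) -> bool:
    """Positive response per the text: post-bronchodilator FEV1 improves
    by at least 12% AND at least 200 mL (0.200 L) over baseline."""
    gain_l = post_fev1_l - pre_fev1_l
    gain_pct = gain_l / pre_fev1_l * 100
    return gain_l >= 0.200 and gain_pct >= 12.0

# Pre 2.00 L -> post 2.30 L: +0.30 L and +15%, so positive
print(positive_bronchodilator_response(2.00, 2.30))  # True
# Pre 2.00 L -> post 2.15 L: +0.15 L fails the 200-mL criterion
print(positive_bronchodilator_response(2.00, 2.15))  # False
```

Both the absolute and the percentage criterion must be met, so a large relative gain on a very small baseline FEV1 does not count by itself.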
#### Peak Expiratory Flow
The peak expiratory flow (PEF) is measured with a simple, inexpensive hand-held peak flow meter that can be used at a patient’s home, in the clinician’s office, or in the emergency department. Because PEF measures airflow through the upper airway over a shorter time and readings can vary depending on the patient’s efforts and meter type, it is not the preferred test to detect airflow limitation.
When spirometry is not available, PEF can be used to help diagnose asthma by assessing diurnal variability.6,8 Diurnal variability is indicative of asthma when it is >10% after 1 to 2 weeks in adults and >13% in children.6 One can calculate diurnal variability by dividing [day’s highest PEF minus day’s lowest PEF] by the mean of these two values and then averaging these daily variability results over 1 week.9 PEF is measured twice daily: in the morning before using inhalers and again in the afternoon or evening. Variability decreases after about 3 months of use of an inhaled corticosteroid.
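The diurnal variability calculation described above can be sketched as follows; the readings and function name are hypothetical:

```python
from statistics import mean

def diurnal_variability_pct(daily_pef_pairs):
    """Average daily PEF variability, per the method in the text:
    (day's highest - day's lowest) divided by the mean of the two,
    averaged over the monitoring period. Readings in L/min."""
    daily = []
    for am, pm in daily_pef_pairs:
        hi, lo = max(am, pm), min(am, pm)
        daily.append((hi - lo) / mean([hi, lo]) * 100)
    return mean(daily)

# One hypothetical week of (morning, evening) readings
week = [(400, 360), (410, 350), (420, 370), (400, 355),
        (415, 365), (405, 360), (410, 358)]
print(round(diurnal_variability_pct(week), 1))  # 12.7 -> >10%, suggestive of asthma in adults
```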
Peak flow meters are designed for both pediatric and adult patients, with a peak expiratory flow rate (PEFR) range of 60 to 400 L/min for children and 60 to 850 L/min for adults. PEFR is measured by having a patient perform the following steps:
1. Stand up.
2. Move the indicator on the peak flow meter to the end nearest the mouthpiece.
3. Hold the meter and avoid blocking the movement of the indicator and the holes on the end of the meter.
4. Take a deep breath in and then seal the mouth around the mouthpiece.
5. Blow out into the meter as hard and as fast as possible without coughing into the meter (like blowing out candles on a cake).
6. Examine the indicator on the meter to identify the number corresponding to the peak flow measurement.
7. Repeat the test two more times (remembering to move the indicator to the base of the meter each time).
8. Record the highest value of the three measurements in a diary (readings in the morning and afternoon are ideal).
To establish a patient’s personal best peak flow rate, measure the peak flow rate over a 2-week period when asthma symptoms and treatment are stable. The highest value over the 2-week period is the personal best. Using the individual patient’s personal best peak flow instead of predicted peak flow values is considered best practice. Asthma patients typically have lower peak flow readings than the healthy population, and using predicted values may result in overtreatment.
Using PEF for long-term monitoring is helpful for patients who have sudden asthma exacerbations or severe asthma. Improved asthma outcomes have been seen with asthma action plans that incorporate the personal best PEF. PEF is also helpful for identifying environmental and occupational asthma triggers and for differentiating between asthma and anxiety symptoms.6
##### Six-Minute Walk Test
Charles O. is a 65-year-old man diagnosed with World Health Organization group 1 pulmonary arterial hypertension after completing a right-heart catheterization. He reports a persistent and progressive dyspnea on exertion with walking from room to room in his home with use of oxygen via nasal cannula at 6 L/min with exertion only. His medications were changed 6 months ago because he was experiencing side effects.
Test 1.
##### Baseline

| | SpO2 (%) | HEART RATE (beats/min) | BLOOD PRESSURE (mm Hg) | SUPPLEMENTAL OXYGEN (L/min) |
| --- | --- | --- | --- | --- |
| Baseline on room air | 89 | 69 | 100/69 | Room air |
| Baseline on supplemental oxygen | | | | |
| Minute 1 | 90 | 96 | | Room air |
| Minute 2 | 86 | 104 | | Room air |
| Minute 3 | 83, 85 | 109, 112 | | 2, then 4 L/min O2 NC |
| Minute 4 | 89 | 114 | | 6 L/min O2 NC |
| Minute 5 | 89 | 115 | | 6 L/min O2 NC |
| Minute 6 | 90 | 115 | 104/69 | 6 L/min O2 NC |
| Recovery 2nd minute | 89 | 65 | 106/75 | Room air |

Distance walked: 274.3 m
Number of rests: 0

Borg scale self-rate:

| | PRETEST | POSTTEST |
| --- | --- | --- |
| Dyspnea | 2 | 5 |
| Fatigue | 2 | 5 |

NC = nasal cannula.
Test 2.
##### 6 Months Later

| | SATURATION (%) | HEART RATE (beats/min) | BLOOD PRESSURE (mm Hg) | SUPPLEMENTAL OXYGEN (L/min) |
| --- | --- | --- | --- | --- |
| Baseline on room air | 90 | 81 | 109/78 | Room air |
| Baseline on supplemental oxygen | | | | |
| Minute 1 | 91 | 88 | | Room air |
| Minute 2 | 88 | 103 | | Room air |
| Minute 3 | 86 | 113 | | 3 L/min O2 NC |
| Minute 4 | 88 | 115 | | 4 L/min O2 NC |
| Minute 5 | 90 | 116 | | 6 L/min O2 NC |
| Minute 6 | 90 | 118 | 108/81 | 6 L/min O2 NC |
| Recovery 2nd minute | 90 | 70 | 112/79 | Room air |

Distance walked: 320 m
Number of rests: 0

Borg scale self-rate:

| | PRETEST | POSTTEST |
| --- | --- | --- |
| Dyspnea | 2 | 3 |
| Fatigue | 2 | 4 |

NC = nasal cannula.
QUESTION: Do the results from this patient’s 6-month 6MWT show an improvement in distance walked after his medications were changed?
DISCUSSION: According to the first 6MWT, the patient walked 274.3 m and required 6 L/min of oxygen with exercise; his distance walked was 48% of the LLN. The patient became tachycardic with exertion, which improved after a 2-minute recovery period. The results of the second 6MWT show the patient walked 320 m with no rests, again requiring 6 L/min supplemental oxygen with exertion. Comparing the two tests, after starting the new medication regimen the patient’s 6-minute walk distance improved by 45.7 m and his oxygen requirement remained unchanged. An improvement in walk distance greater than 30 m is considered clinically important.20 The patient has improved with the medication change.
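The 30-m minimal important difference used in this discussion can be checked mechanically; this sketch uses hypothetical names:

```python
def clinically_important_6mwt_change(baseline_m: float, followup_m: float,
                                     mid_m: float = 30.0) -> bool:
    """True when the 6-minute walk distance improves by more than the
    ~30 m minimal important difference cited in the text."""
    return (followup_m - baseline_m) > mid_m

# Charles O.: 274.3 m at baseline vs 320 m at 6 months (+45.7 m)
print(clinically_important_6mwt_change(274.3, 320.0))  # True
```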
### Body Plethysmography
Body plethysmography is a method used to obtain lung volume measures. Lung volume tests indicate the amount of gas contained in the lungs at the various stages of inflation. The lung volumes and capacities may be obtained by several methods, including body plethysmography, gas dilution, and imaging techniques.10 Different methods can have small but significant effects on the values reported. Gas dilution methods only measure ventilated areas, whereas body plethysmography measures both ventilated and nonventilated areas. Therefore, body plethysmography values may be larger in patients with nonventilated or poorly ventilated lung areas. Computed tomography (CT) and magnetic resonance imaging can estimate lung volumes with additional detail of the lung tissue. Because body plethysmography is the most used method, this technique is discussed in more detail.
In body plethysmography, a patient sits in an airtight box and is instructed to inhale and exhale to functional residual capacity (FRC), the volume of gas remaining at the end of a normal breath. Inside, the mouthpiece contains a pressure transducer to measure the change in pressure during respiration; it senses the intrathoracic pressure generated when the patient rapidly and forcefully puffs against the closed mouthpiece. These data are then placed into the equation for Boyle’s law:
$P_1 \times V_1 = P_2 \times V_2$
where P1 is the pressure inside the box in which the patient is seated (atmospheric pressure), V1 is the volume of the box, P2 is the intrathoracic pressure generated by the patient, and V2 is the calculated volume of the box at the end of chest expansion. The difference between this V2 and the initial volume of the box is the change in the volume of the box, which is the same as the change in the volume of the chest. Because temperature (T1 and T2) is constant throughout testing, it is not included in the calculations.
Using this change in volume in Boyle’s law again, this test provides a measure of the FRC. Once the FRC is determined, the other lung volumes and capacities can be calculated based on this FRC and volumes obtained in static spirometry. After these data are generated, the patient’s plethysmography results are usually compared with references from a presumed normal population. This comparison necessitates the generation of predicted values for that patient if he or she were completely normal and healthy. Through complex mathematical formulas, sitting and standing height, age, sex, race, barometric pressure, and altitude are factored in to give predicted values for the pulmonary functions being assessed. The patient’s results are compared with the percentage of predicted values based on the results of these calculations.
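A minimal numeric sketch of the Boyle’s-law step is shown below; all names and numbers are illustrative, not from the source:

```python
def chest_volume_change_l(p1: float, v1_l: float, p2: float) -> float:
    """Solve Boyle's law P1*V1 = P2*V2 for V2, then return V1 - V2:
    the change in box volume, which equals the change in chest volume.
    p1 and p2 must be in the same pressure units."""
    v2_l = p1 * v1_l / p2
    return v1_l - v2_l

# Illustrative only: box pressure rises from 1000 to 1010 units as the
# chest expands inside a 500 L box
print(round(chest_volume_change_l(1000.0, 500.0, 1010.0), 2))  # 4.95
```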
### Lung Volumes and Lung Capacities
Lung volumes include tidal volume (TV), inspiratory reserve volume (IRV), expiratory reserve volume (ERV), and residual volume (RV). These four volumes in various combinations make up lung capacities, which include inspiratory capacity, FRC, SVC, and total lung capacity (TLC).
#### Tidal Volume, Functional Residual Capacity, Expiratory Reserve Volume, and Residual Volume
The tidal volume is the amount of air inhaled and exhaled at rest in a normal breath. It is usually a small proportion of the lung volume and is infrequently used as a measure of respiratory disease. The IRV is the volume measured from the “top” of the TV (ie, initial point of normal exhalation) to the maximal inspiration. During exhalation, the ERV is the volume from the “bottom” of the TV (ie, initial point of normal inhalation) to maximal expiration. The RV is the volume of air left in the lungs at the end of forced expiration to the bottom of ERV. Without the RV, the lungs would collapse like deflated balloons. In diseases characterized by airflow limitation that trap air in the lungs (eg, COPD), the RV increases and the patient is less able to mobilize trapped air out of the lung. These four lung volumes are depicted in Figure 14-3.
#### Inspiratory Capacity, Functional Residual Capacity, Slow Vital Capacity, and Total Lung Capacity
The inspiratory capacity is the volume measured from the point of the TV at which inhalation normally begins to maximal inspiration, and it is a summation of TV and IRV. The functional residual capacity is the sum of the ERV and RV, and it is the volume of gas remaining in the lungs at the end of the TV. It also may be defined as a balance point between chest wall forces that increase volume and lung parenchymal forces that decrease volume. An increased FRC represents hyperinflation of the lungs and usually indicates airflow limitation. The FRC may be decreased in diseases that affect many alveoli (eg, pneumonia) or by restrictive changes, especially those due to fibrotic pulmonary tissue changes. The SVC is the volume of air that is exhaled as much as possible after inhaling as much as possible. It is a summation of the IRV, TV, and ERV and is described in more detail in the Spirometry Measurements section. The total lung capacity is the summation of all four lung volumes (IRV + TV + ERV + RV). It is the total amount of gas contained in the lungs at maximal inhalation. Restrictive lung disease is defined as a TLC below the 5th percentile of normal predicted value with a normal FEV1/VC ratio.7
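The volume/capacity relationships in this section can be summarized in code; the example values are representative adult figures assumed for illustration:

```python
def lung_capacities_l(irv: float, tv: float, erv: float, rv: float) -> dict:
    """Combine the four primary lung volumes (in liters) into the
    capacities defined in the text."""
    return {
        "IC": tv + irv,               # inspiratory capacity
        "FRC": erv + rv,              # functional residual capacity
        "SVC": irv + tv + erv,        # slow vital capacity
        "TLC": irv + tv + erv + rv,   # total lung capacity
    }

# Assumed representative adult values (L): IRV 3.0, TV 0.5, ERV 1.1, RV 1.2
caps = lung_capacities_l(3.0, 0.5, 1.1, 1.2)
print(caps["TLC"])  # about 5.8 L at maximal inhalation
```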
### Diffusion Capacity Tests
Tests of gas exchange measure the ability of gases to cross (diffuse) the alveolar-capillary membrane and are useful in assessing interstitial lung disease.11 Typically, these tests measure the per-minute transfer of a gas, usually carbon monoxide (CO), from the alveoli to the blood. CO is used because it is a gas that is not normally present in the lung, has a high affinity for Hgb in red blood cells, and is easily delivered and measured. The diffusion capacity may be lessened after losses in the surface area of the alveoli or thickening of the alveolar-capillary membrane. Membrane thickening may be due to infiltration of inflammatory cells or fibrotic changes. These test results can be confounded by a loss of diffusion capacity due to poor ventilation, which may be related to closed or partially closed airways (as with airflow limitation) or to a ventilation-perfusion mismatch (as with pulmonary emboli or pulmonary hypertension). The diffusion capacity of the lungs to CO can be measured by either a single-breath test or steady-state test.
In the single-breath test, the patient deeply inhales—to vital capacity—a mixture of 0.3% CO, a tracer gas such as 10% helium or 0.3% methane, and balanced air. After holding his or her breath for 10 seconds, the patient exhales fully, and the concentrations of CO and helium are measured during the end of expiration (ie, alveolar flow). These concentrations are compared with the inspired concentrations to determine the amount diffusing across the alveolar membrane. The mean value for CO is about 25 to 30 mL/min/mm Hg.
A normal DLCO cutoff as a percent of predicted has not been standardized, but 70% to 75% is often utilized. A DLCO is also considered normal when it is greater than the LLN for the patient. Diffusion capacity is decreased in diseases that cause alveolar fibrotic changes. Changes may be idiopathic, such as those seen with sarcoidosis; related to environmental or occupational disease (asbestosis and silicosis); or induced by drugs (eg, nitrofurantoin, amiodarone, and bleomycin).12,13 Anything that alters Hgb, decreases the red blood cell Hgb concentration, or changes diffusion across the red blood cell membrane may alter the DLCO. The DLCO also reflects the pulmonary capillary blood volume. An increase in this volume (pulmonary edema or asthma) may increase the DLCO. Minicase 3 describes how PFTs are used to diagnose restrictive airway disease.
### Specialized Tests
#### Bronchial Challenge Tests
Bronchial challenge tests (BCTs) are used to (1) aid in the diagnosis of asthma when the more common tests (symptom history, spirometry with reversibility) cannot confirm or reject the diagnosis, (2) evaluate the effects of drug therapy on airway hyperreactivity, and (3) evaluate potential drug effectiveness. A BCT measures the reactivity of the airways to known concentrations of agents that induce airway narrowing. A negative BCT is more useful for excluding the diagnosis of asthma than a positive test is for confirming it. Using this technique in research, the magnitude and duration of the effect of different drugs on the airways may be compared. BCTs are often referred to as challenges because the airways are challenged with increasing doses of methacholine, a synthetic derivative of acetylcholine, in a protocolized manner to determine whether there is a drop in the FEV1. A decrease in the FEV1 of 20% at specified doses is considered a positive test result.14 The ERS has published guidelines for methacholine challenge testing to enhance the safety, accuracy, and validity of the test.14
##### Using Pulmonary Function Tests to Evaluate a Patient with Interstitial Lung Disease
Jacob K. is a 59-year-old man who presents to the medicine clinic with reports of progressive dyspnea on exertion and minimal dry cough for the past 3 months. He has a history of rheumatoid arthritis and was started on methotrexate 4 months ago. CT of the chest shows diffuse ground glass opacities consistent with active inflammation and some minimal fibrosis at the bases. He had a PFT performed over a year ago that was completely normal. A repeat PFT is ordered and includes spirometry, lung volumes, and diffusion capacity. His PFT reveals the following results and the flow-volume curve in Figure 14-2 labeled “Restriction.”
QUESTION: How are these PFTs useful in the diagnosis, evaluation, and management of this patient?
DISCUSSION: Looking at his PFT in the following table, the FVC is 56% of predicted (reduced), the FEV1 is 60% of predicted (reduced), and the FEV1/FVC ratio is 0.85 (normal). This is consistent with a restrictive pattern. A TLC is 54% of predicted (reduced), verifying a restrictive pulmonary defect, and the DLCO is 50% of predicted (normal range is ≥70–75% of predicted). These findings are helpful in the diagnosis of interstitial lung disease in the setting of abnormal results on CT scan and change from previous normal spirometry. The severity of restriction can also be determined by the amount of decrease in TLC.6
| PFT | LLN | PREBRONCHODILATOR MEASURED | % PREDICTED | POSTBRONCHODILATOR MEASURED | % CHANGE |
| --- | --- | --- | --- | --- | --- |
| FVC (L) | 4.09 | 3.03 | 56 | 3.11 | +2 |
| FEV1 (L) | 3.10 | 2.48 | 60 | 2.65 | +6 |
| FEV1/FVC | 0.82 | 0.85 | | | +3 |
| SVC (L) | 4.09 | 3.06 | 57 | | |
| TLC (L) | 6.54 | 4.36 | 54 | | |
| RV (L) | 1.71 | 1.30 | 51 | | |
| DLCO (mL/min/mm Hg) | 22.06 | 15.99 | 50 | | |
He is diagnosed with methotrexate-induced lung disease. The methotrexate is discontinued, and he is treated with prednisone. After 3 months of therapy, a repeat PFT is performed. The FVC is 75% of predicted, the FEV1 is 72% of predicted, and the FEV1/FVC ratio is 0.80. The TLC has increased to 65%, and the DLCO has increased to 60% of predicted. The repeat PFT shows that he is improved. He reports improvement in his symptoms. The follow-up PFT is used to help evaluate the response to discontinuing the offending medication and establish a new pulmonary function status.
Bronchial challenge testing begins by measuring baseline spirometry parameters to ensure it is safe to conduct the test. A BCT should not be performed if the FEV1 is <60% of predicted.14 Most BCTs then begin with nebulization of a solution of phosphate-buffered saline. This both serves as a placebo to assess the airway effect of nebulization and establishes the baseline airway function from which the fall in pulmonary function is calculated. After each dose, spirometry efforts are performed based on ATS/ERS criteria. The challenge data are then summarized into a single number, the provocative dose causing a 20% fall in forced expiratory volume in 1 second (PD20), expressed in mcg.
For methacholine, a PD20FEV1 of <6 mcg indicates severe airway hyperresponsiveness (AHR); 6–25 mcg, moderate AHR; 25–100 mcg, mild AHR; 100–400 mcg, borderline AHR; and >400 mcg is a normal test that excludes asthma. During a BCT, patients may experience transient respiratory symptoms such as cough, shortness of breath, wheezing, and chest tightness. An inhaled SABA or short-acting muscarinic antagonist may be administered to alleviate symptoms and quicken the return of the FEV1 to the baseline value. Because BCTs can elicit severe, life-threatening bronchospasm, trained personnel and medications to treat severe bronchospasm should be on hand in the testing area.
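The PD20 cutoffs above map naturally to a small classifier. This is a sketch; because the text gives overlapping range endpoints (eg, 25 mcg appears in two ranges), the boundary assignment here is an assumption:

```python
def classify_ahr(pd20_mcg: float) -> str:
    """Map a methacholine PD20FEV1 (mcg) to the AHR categories in the
    text: <6 severe, 6-25 moderate, 25-100 mild, 100-400 borderline,
    >400 normal (excludes asthma). Boundary handling is an assumption."""
    if pd20_mcg < 6:
        return "severe AHR"
    if pd20_mcg <= 25:
        return "moderate AHR"
    if pd20_mcg <= 100:
        return "mild AHR"
    if pd20_mcg <= 400:
        return "borderline AHR"
    return "normal (excludes asthma)"

print(classify_ahr(15))   # moderate AHR
print(classify_ahr(500))  # normal (excludes asthma)
```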
#### Exercise Challenge Test
The exercise challenge test is used to confirm or rule out exercise-induced bronchospasm (EIB) and to evaluate the effectiveness of medications used to treat or prevent EIB, which occurs usually in patients with normal PFTs who become symptomatic with exercise. The etiology of EIB is thought to be related to the cooling and drying of the airways caused by rapid breathing during exercise.
Exercise tests are usually done with a motor-driven treadmill (with adjustable speed and grade) or an electromagnetically braked cycle ergometer. Heart rate should be monitored throughout the test, nose clips should be worn, and the room air should be dry and cool to promote water loss from the airways during the exercise test. In most patients, symptoms are effectively blocked by use of an inhaled bronchodilator immediately before beginning exercise or other exertion causing the problem. After obtaining baseline spirometry, the exercise test is started at a low speed that is gradually increased over 2 to 4 minutes until the heart rate is 80% to 90% of the predicted maximum or the work rate is at 100%. The duration of the exercise is age and tolerance dependent. Children <12 years generally take 6 minutes, while older children and adults take 8 minutes to complete the test. After the exercise is completed, the patient does serial spirometry at 5-minute intervals for 20 to 30 minutes. FEV1 is the primary outcome variable. A 10% or more decrease in FEV1 from baseline is generally accepted as an abnormal response, although some clinicians feel a 15% decrease is more diagnostic of EIB.15
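The post-exercise FEV1 criterion can be sketched as follows; the names are illustrative, and the 10% vs 15% thresholds come from the text:

```python
def eib_positive(baseline_fev1_l: float, lowest_post_fev1_l: float,
                 threshold_pct: float = 10.0) -> bool:
    """True when FEV1 falls from baseline by at least threshold_pct.
    The text cites >=10% as generally accepted, with some clinicians
    preferring 15% as more diagnostic of EIB."""
    fall_pct = (baseline_fev1_l - lowest_post_fev1_l) / baseline_fev1_l * 100
    return fall_pct >= threshold_pct

print(eib_positive(3.00, 2.60))                      # 13.3% fall -> True
print(eib_positive(3.00, 2.60, threshold_pct=15.0))  # False at the stricter cutoff
```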
#### Six-Minute Walk Test
The six-minute walk test (6MWT) is a test used to measure the distance a patient can walk on a flat, hard surface in 6 minutes.16 The results of the test are used to determine if a patient requires continuous oxygen at home. The results have also been correlated to a patient’s quality of life and abilities to complete daily activities. The results of the 6MWT also help predict morbidity and mortality for patients with congestive heart failure, COPD, and primary pulmonary hypertension.17–19 Pulmonary hypertension studies use this test to monitor the efficacy of interventions with medications.20 Minicase 2 is an example of how a 6MWT is used to monitor a patient with pulmonary hypertension.
While performing the 6MWT, the patient is educated that the goal of the test is to walk as far as possible in 6 minutes, allowing the patient to select the intensity of exercise. Stopping and resting is allowed during the test. Reference equations for healthy adults have been published; however, large variations in the predicted values exist.21 The minimal important difference for the 6MWT is 30 meters for adults with chronic respiratory disease.21 Continuous pulse oximetry is recommended to capture the lowest arterial oxygen saturation (SpO2). The lowest SpO2 is a marker for prognosis and disease severity. The test is discontinued if the SpO2 decreases to <80%. Heart rate measurements and heart rate recovery are recorded during the test. Poorer outcomes have been associated with reduced heart rate recovery in the first minute after the test. Use of the Borg scale to document dyspnea and fatigue before and after the 6MWT has good reliability when determining exercise limitations in patients with chronic respiratory disease. Practice tests, younger age, taller height, less weight, male sex, longer corridor length, and encouragement all improve test results. Unstable angina and myocardial infarction in the past 3 to 5 days and syncope and arterial oxygen saturation by pulse oximetry ≤85% are all contraindications for performing the 6MWT.22 In practice, the 6MWT is also used to assess the amount of oxygen needed with exertion. Patients with mild-to-moderate pulmonary disease may have normal oxygen saturation at rest but poor saturation with exertion. An oxygen saturation of ≤88% indicates the need for supplemental oxygen.
#### Carbon Monoxide Breath Test
Carbon monoxide is a poisonous gas emitted from anything burning, including cigarette smoke. As discussed in the Diffusion Capacity Tests section, CO binds more readily to Hgb than oxygen, causing increased fatigue and shortness of breath. With a simple breath test by a handheld CO meter, the patient can see how much CO is in the body (parts per million [ppm]) and in the blood (% COHgb). In clinical studies, a result of ≤10 ppm often defines a nonsmoker; however, in clinical practice, depending on the meter used, the level is usually lower. A Cochrane Review found no significant increase in smoking abstinence rates with CO measurements.23
Testing exhaled CO is a simple breath test in which the patient holds his or her breath for 15 seconds and then exhales into a meter. The meter indicates how much CO is in the patient’s lungs and estimates how much is attached to Hgb in the patient’s blood. The test provides an objective value through which patients can see the effects of inhaling smoke when higher values of CO are detected. After 8 to 12 hours without smoking, CO levels become undetectable.
#### Fractional Exhaled Nitric Oxide
Measurement of exhaled concentrations of nitric oxide is a noninvasive biomarker test of airway inflammation for both diagnosing and monitoring eosinophilic airway inflammation.24 Various handheld devices using the patient’s breath are available that include an electrochemical sensor to determine the exhaled nitric oxide concentration. Fractional exhaled nitric oxide (FENO) results of >50 parts per billion (ppb) in adults and >35 ppb in children younger than 12 years indicate eosinophilic inflammation with a high likelihood to respond to corticosteroids and certain add-on biologic asthma treatments.25,26 As a monitoring test, FENO results are best interpreted as changes from baseline for each patient rather than using population normal readings. FENO measurements have many confounding factors, including smoking history, age, and sex. The test should be used in context with the patient history and as a tool with other diagnostic and monitoring tests.
## SUMMARY
This chapter discusses the importance of pulmonary function testing as it relates to the diagnosis, treatment, and monitoring of respiratory disease states. After a review of the anatomy and physiology of the lungs, the mechanics of obtaining PFTs were emphasized. By understanding these mechanics, a clinician can better understand the interpretation of PFTs, use findings from different PFTs to help differentiate among diagnoses, and assist in making optimal therapeutic recommendations. PFT results are not interpreted in isolation but need to be assessed within the context of the other findings from the medical history and from other laboratory or clinical test results. Clearly, PFTs are an important tool to aid the clinician in decision-making.
## LEARNING POINTS
1. What is a PFT?
ANSWER: A PFT is any test used to assess the function of the lungs (eg, spirometry, body plethysmography, 6MWT). The component of the PFT to be ordered is determined by the information needed. For example, spirometry is performed to reveal the presence of obstructive lung disease. Lung volumes determine the presence of restrictive lung disease, and the diffusion capacity test ascertains the adequacy of gas exchange.
2. Why is spirometry an important test in the diagnosis of COPD?
ANSWER: In COPD, postbronchodilator spirometry is necessary to determine the presence of persistent airflow limitation and the degree of disease severity. In the absence of COPD, other causes of symptoms should be considered. Physical exam and history alone are often not adequate to detect airflow limitation. Therefore, an objective test with spirometry is needed to confirm a clinical suspicion.
3. Does a significant bronchodilator response on spirometry testing differentiate asthma from COPD?
ANSWER: Traditionally, reversibility after bronchodilator use was considered a criterion to differentiate asthma and COPD. However, evidence has shown that a significant bronchodilator response is common in COPD as well, and this assessment is no longer used to differentiate between asthma and COPD. It is still a useful test when used in conjunction with a clinical history (eg, risk factors for COPD, presentation of shortness of breath, evidence of atopy), physical exam, and other PFTs to support a clinical suspicion of asthma and/or COPD.
4. How is restrictive lung disease diagnosed?
ANSWER: It is important to note that spirometry can only provide evidence consistent with restrictive disease, such as a decrease in FEV1 and FVC with a normal or elevated FEV1/FVC ratio. Restriction, however, is a decrease in lung volume as defined by a decrease in the TLC, which is obtained by lung volume tests such as body plethysmography. Lung volume test results are therefore needed to diagnose restrictive lung disease.
## REFERENCES
• 1.
Milavetz G, Teresi M. Pulmonary function and related tests. In Lee M, ed., Basic Skills in Interpreting Laboratory Data, 3rd ed. Bethesda, MD: American Society of Health-System Pharmacists; 2004.
• Export Citation
• 2.
Crapo RO. Pulmonary-function testing. N Engl J Med. 1994;331(1):2530.PubMed
• 3.
Culver BH, Graham BL, Coates AL, et al.American Thoracic Society Committee on Proficiency Standards for Pulmonary Function Report. An official American Thoracic Society technical statement. Am J Respir Crit Care Med. 2017;196(11):14631472.PubMed
• Export Citation
• 4.
Graham BL, Steenbruggen I, Miller MR, et al.Standardization of spirometry 2019 update: an official American Thoracic Society and European Respiratory Society technical statement. Am J Respir Crit Care Med. 2019;200(8):e70e88.PubMed
• Export Citation
• 5.
Global Strategy for the Diagnosis, Management, and Prevention of Chronic Obstructive Pulmonary Disease. Global Initiative for Chronic Obstructive Lung Disease (GOLDCOPD) 2022 Report. http://www.goldcopd.org. Accessed February 1, 2022.
• 6.
Global Initiative for Asthma. Global strategy for asthma management and prevention, 2020. http://www.ginasthma.org. Accessed Aug 3, 2020.
• 7.
Pellegrino R, Viegi G, Brusasco V, et al.Interpretative strategies for lung function tests. Eur Respir J. 2005;26(5):948968.PubMed
• 8.
Dekker FW, Schrier AC, Sterk PJ, et al.Validity of peak expiratory flow measurement in assessing reversibility of airflow obstruction. Thorax. 1992;47(3):162166.PubMed
• Export Citation
• 9.
Reddel HK, Taylor DR, Bateman ED, et al.An official American Thoracic Society/European Respiratory Society statement: asthma control and exacerbations: standardizing endpoints for clinical asthma trials and clinical practice. Am J Respir Crit Care Med. 2009;180(1):5999.PubMed
• Export Citation
• 10.
Wanger J, Clausen JL, Coates A, et al.Standardisation of the measurement of lung volumes. Eur Respir J. 2005;26(3):511522.PubMed
• 11.
Graham BL, Brusasco V, Burgos F, et al.2017 ERS/ATS standards for single-breath carbon monoxide uptake in the lung. Eur Respir J. 2017;49(1):1600016.PubMed
• Export Citation
• 12.
Cooper JA Jr, White DA, Matthay RA. Drug-induced pulmonary disease. Part 1: cytotoxic drugs. Am Rev Respir Dis. 1986;133(2):321340.PubMed
• Export Citation
• 13.
Cooper JA Jr, White DA, Matthay RA. Drug-induced pulmonary disease. Part 2: noncytotoxic drugs. Am Rev Respir Dis. 1986;133(3):488505.PubMed
• Export Citation
• 14.
Coates AL, Wanger J, Cockcroft DW, et al.ERS technical standard on bronchial challenge testing: general considerations and performance of methacholine challenge tests. Eur Respir J. 2017;49(5):1601526.PubMed
• Export Citation
• 15.
Parsons JP, Hallstrand TS, Mastronarde JG, et al.An official American Thoracic Society clinical practice guideline: exercise-induced bronchoconstriction. Am J Respir Crit Care Med. 2013;187(9):10161027.PubMed
• Export Citation
• 16.
Holland AE, Spruit MA, Troosters T, et al.An official European Respiratory Society/American Thoracic Society technical standard: field walking tests in chronic respiratory disease. Eur Respir J. 2014;44(6):14281446.PubMed
• Export Citation
• 17.
Cahalin LP, Mathier MA, Semigran MJ, et al.The six-minute walk test predicts peak oxygen uptake and survival in patients with advanced heart failure. Chest. 1996;110(2):325332.PubMed
• Export Citation
• 18.
Kessler R, Faller M, Fourgaut G, et al.Predictive factors of hospitalization for acute exacerbation in a series of 64 patients with chronic obstructive pulmonary disease. Am J Respir Crit Care Med. 1999;159(1):158164.PubMed
• Export Citation
• 19.
Kadikar A, Maurer J, Kesten S. The six-minute walk test: a guide to assessment for lung transplantation. J Heart Lung Transplant. 1997;16(3):313319.PubMed
• Export Citation
• 20.
Klinger JR, Elliott CG, Levine DJ, et al. Therapy for pulmonary arterial hypertension in adults: update of the CHEST guideline and expert panel report. Chest. 2019;155(3):565-586.
• 21.
Singh SJ, Puhan MA, Andrianopoulos V, et al. An official systematic review of the European Respiratory Society/American Thoracic Society: measurement properties of field walking tests in chronic respiratory disease. Eur Respir J. 2014;44(6):1447-1478.
• 22.
American Thoracic Society, American College of Chest Physicians. ATS/ACCP statement on cardiopulmonary exercise testing. Am J Respir Crit Care Med. 2003;167:211-277.
• 23.
Bize R, Burnand B, Mueller Y, et al. Biomedical risk assessment as an aid for smoking cessation. Cochrane Database Syst Rev. 2012;12:CD004705.
• 24.
American Thoracic Society. Recommendations for standardized procedures for the online and offline measurement of exhaled lower respiratory nitric oxide and nasal nitric oxide in adults and children: 1999. Am J Respir Crit Care Med. 1999;160:21042117. | 2022-07-04 22:24:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.397875040769577, "perplexity": 7010.641157030319}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104496688.78/warc/CC-MAIN-20220704202455-20220704232455-00322.warc.gz"} |
https://calendar.math.illinois.edu/?year=2021&month=01&day=29&interval=day | Department of
# Mathematics
Seminar Calendar
for events the day of Friday, January 29, 2021.
Questions regarding events or the calendar should be directed to Tori Corkery.
Friday, January 29, 2021
4:00 pm in Zoom, Friday, January 29, 2021
#### Organizational Meeting
###### Brannon (UIUC)
Abstract: We will be having our first organizational meeting. Please email basilio3(at)illinois(dot)edu for the Zoom information.
4:00 pm in Zoom, Friday, January 29, 2021
#### The Mathematics Behind Two Puzzles
###### Jared Bronski (UIUC)
Abstract: I plan to talk about two puzzles with very elegant mathematical solutions. I would encourage you to think about (but not Google!) these puzzles, particularly the easier version of the first puzzle, before the talk. All of them can be found in the attachment below. We'll also need volunteers for a demonstration on Friday, so if you'd like to help please email undergradseminar@math.illinois.edu. For Zoom info, please email undergradseminar@math.illinois.edu | 2022-07-03 18:15:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3348652720451355, "perplexity": 598.8724849324146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00076.warc.gz"} |
https://deeplizard.com/learn/video/0bt0SjbS3xc | Reinforcement Learning - Introducing Goal Oriented Intelligence
with deeplizard.
Training a Deep Q-Network - Reinforcement Learning
December 1, 2018 by
Blog
Training a deep Q-network with replay memory
What’s up, guys? In this post, we’ll continue our discussion of deep Q-networks and focus in on the complete algorithmic details of the underlying training process. With this, we’ll see exactly how the replay memory that was introduced in the previous post is utilized during training as well. So, let’s get to it!
What do we know so far about deep Q-learning? Well, we know about the deep Q-network architecture, and we have also been introduced to replay memory. We're now going to see exactly how the training process works for a DQN by utilizing this replay memory.
Here is a snapshot summary of everything we went over before we ended last time.
1. Initialize replay memory capacity.
2. Initialize the network with random weights.
3. For each episode:
1. Initialize the starting state.
2. For each time step:
1. Select an action.
• Via exploration or exploitation
2. Execute selected action in an emulator.
3. Observe reward and next state.
4. Store experience in replay memory.
Make sure you've got an understanding of all this. All of these steps have occurred before the actual training of the neural network starts. At this point, we're inside of a single time step within a single episode. Now, we'll pick up right where we left off after the experience is stored in replay memory to discuss what exactly happens during training.
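Step 4 of the summary, storing experiences for later sampling, is commonly implemented as a fixed-capacity buffer. Here is a minimal sketch (hypothetical Python, not the code from this series):

```python
import random
from collections import deque

class ReplayMemory:
    def __init__(self, capacity):
        # oldest experiences are evicted automatically once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # uniform random batch, as described in the post
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

A `deque` with `maxlen` keeps the memory bounded without any bookkeeping for which experience to drop.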
The policy network
After storing an experience in replay memory, we then sample a random batch of experiences from replay memory. For ease of understanding, though, we're going to explain the remaining process for a single sample, and then you can generalize the idea to an entire batch.
Alright, so from a single experience sample from replay memory, we then preprocess the state (grayscale conversion, cropping, scaling, etc.), and pass the preprocessed state to the network as input. Going forward, we’ll refer to this network as the policy network since its objective is to approximate the optimal policy by finding the optimal Q-function.
The input state data then forward propagates through the network, using the same forward propagation technique that we’ve discussed for any other general neural network. The model then outputs an estimated Q-value for each possible action from the given input state.
The loss is then calculated. We do this by comparing the Q-value output from the network for the action in the experience tuple we sampled and the corresponding optimal Q-value, or target Q-value, for the same action.
Remember, the target Q-value is calculated using the expression from the right hand side of the Bellman equation. So, just as we saw when we initially learned about plain Q-learning earlier in this series, the loss is calculated by subtracting the Q-value for a given state-action pair from the optimal Q-value for the same state-action pair.
\begin{eqnarray*} q_{\ast}\left(s,a\right) - q(s,a) &=& loss \\ E\left[R_{t+1}+\gamma \max_{a^{\prime}} q_{\ast}\left(s^{\prime},a^{\prime}\right)\right] - E\left[\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1}\right] &=& loss \end{eqnarray*}
Calculating the $$\max$$ term
When we are calculating the optimal Q-value for any given state-action pair, notice from the equation for calculating loss that we used above, we have this term here that we must compute:
$$\max_{a^{\prime }}q_{\ast }\left( s^\prime,a^{\prime }\right)$$
Recall that $$s^\prime$$ and $$a^{\prime}$$ are the state and action that occur in the following time step. Previously, we were able to find this $$\max$$ term by peeking in the Q-table, remember? We'd just look to see which action gave us the highest Q-value for a given state.
Well that's old news now with deep Q-learning. In order to find this $$\max$$ term now, what we do is pass $$s^\prime$$ to the policy network, which will output the Q-values for each state-action pair using $$s^\prime$$ as the state and each of the possible next actions as $$a^\prime$$. Given this, we can obtain the $$\max$$ Q-value over all possible actions taken from $$s^\prime$$, giving us $$\max_{a^{\prime}}q_{*}(s^\prime,a^{\prime})$$.
Once we find the value of this $$\max$$ term, we can then calculate this term for the original state input passed to the policy network.
\begin{eqnarray*} E\left[R_{t+1}+\gamma \max_{a^{\prime}} q_{\ast}\left(s^{\prime},a^{\prime}\right)\right] \end{eqnarray*}
Why do we need to calculate this term again?
Ah, yes, this term enables us to compute the loss between the Q-value given by the policy network for the state-action pair from our original experience tuple and the target optimal Q-value for this same state-action pair.
So, to quickly touch base, note that we first forward passed the state from our experience tuple to the network and got the Q-value for the action from our experience tuple as output. We then passed the next state contained in our experience tuple to the network to find the $$\max$$ Q-value among the next actions that can be taken from that state. This second step was done just to aid us in calculating the loss for our original state-action pair.
This may seem a bit odd, but let it sink in for a minute and see if the idea clicks.
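For a single experience, the target computation just described can be sketched as follows (hypothetical helpers; `next_q_values` stands in for the policy network's output for the next state):

```python
def bellman_target(reward, next_q_values, gamma=0.99, terminal=False):
    # right-hand side of the Bellman equation: r + gamma * max_a' Q(s', a')
    if terminal:
        return reward  # no next state to bootstrap from at episode end
    return reward + gamma * max(next_q_values)

def sample_loss(predicted_q, target_q):
    # squared error between the network's Q-value and the Bellman target
    return (predicted_q - target_q) ** 2
```

The second forward pass of the network only exists to supply `next_q_values`; the gradient step itself is taken with respect to `predicted_q`.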
Training the policy network
Alright, so after we're able to calculate the optimal Q-value for our state-action pair, we can calculate the loss from our policy network between the optimal Q-value and the Q-value that was output from the network for this state-action pair.
Gradient descent is then performed to update the weights in the network in attempts to minimize the loss, just like we’ve seen in all other previous networks we've covered on this channel. In this case, minimizing the loss means that we’re aiming to make the policy network output Q-values for each state-action pair that approximate the target Q-values given by the Bellman equation.
Up to this point, everything we've gone over was all for one single time step. We then move on to the next time step in the episode and do this process again and again time after time until we reach the end of the episode. At that point, we start a new episode, and do that over and over again until we reach the max number of episodes we’ve set. We’ll want to keep repeating this process until we’ve sufficiently minimized the loss.
Wrapping up
Admittedly, between the last post and this one, that was quite a number of steps, so let's go over this summary to bring it all together.
1. Initialize replay memory capacity.
2. Initialize the network with random weights.
3. For each episode:
1. Initialize the starting state.
2. For each time step:
1. Select an action.
• Via exploration or exploitation
2. Execute selected action in an emulator.
3. Observe reward and next state.
4. Store experience in replay memory.
5. Sample random batch from replay memory.
6. Preprocess states from batch.
7. Pass batch of preprocessed states to policy network.
8. Calculate loss between output Q-values and target Q-values.
• Requires a second pass to the network for the next state
9. Gradient descent updates weights in the policy network to minimize loss.
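Steps 5-9 above can be made concrete with a toy stand-in in which the "policy network" is a single hand-updated linear layer (illustrative only; a real DQN uses a deep network and an autograd library):

```python
import random

random.seed(0)
N_FEATURES, N_ACTIONS, GAMMA, LR = 4, 2, 0.9, 0.1
# toy linear "policy network": Q(s, a) = sum_i s[i] * W[i][a]
W = [[random.uniform(-0.1, 0.1) for _ in range(N_ACTIONS)]
     for _ in range(N_FEATURES)]

def q_values(state):
    return [sum(state[i] * W[i][a] for i in range(N_FEATURES))
            for a in range(N_ACTIONS)]

def train_step(state, action, reward, next_state, terminal):
    q_pred = q_values(state)[action]  # first pass: Q(s, a)
    # second pass uses the next state to build the Bellman target
    target = reward if terminal else reward + GAMMA * max(q_values(next_state))
    error = q_pred - target
    for i in range(N_FEATURES):  # gradient descent on 0.5 * error**2
        W[i][action] -= LR * error * state[i]
    return 0.5 * error ** 2
```

Repeating `train_step` on the same transition drives the loss toward zero, which is exactly the behavior the full algorithm aims for across many sampled experiences.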
Take some time to go over this algorithm and see if you now have the full picture for how policy networks, experience replay, and training all come together. Let me know your thoughts in the comments!
In the next video, we'll see what kind of problems could be introduced by the process we covered here. Anyone want to try to guess? Given the problems that we'll discuss next time, we'll see how we can actually improve the training process by introducing a second network. Yes, two neural networks being used at the same time. Well kind of, we'll just have to wait and see. I’ll see ya in the next one!
Description | 2020-01-27 15:17:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5882076025009155, "perplexity": 905.3427734307526}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251700988.64/warc/CC-MAIN-20200127143516-20200127173516-00239.warc.gz"} |
https://cs.stackexchange.com/questions/4729/how-many-possible-assignments-does-a-cnf-sentence-have | # How many possible assignments does a CNF sentence have?
I'm having some trouble understanding the following:
When we look at satisfiability problems in conjunctive normal form, an underconstrained problem is one with relatively few clauses constraining the variables. For example, here is a randomly generated 3-CNF sentence with five symbols and five clauses. (Each clause contains 3 randomly selected distinct symbols, each of which is negated with 50% probability.)
(¬D ∨ ¬B ∨ C) ∧ (B ∨ ¬A ∨ ¬C) ∧ (¬C ∨ ¬B ∨ E) ∧ (E ∨ ¬D ∨ B) ∧ (B ∨ E ∨ ¬C)
16 of the 32 possible assignments are models of this sentence, so, on an average, it would take just 2 random guesses to find the model.
I don't understand the last line- saying that there are 32 possible assignments. How is it 32? And how are only 16 of them models of the sentence? Thanks.
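One way to check the quoted numbers is brute-force enumeration (a Python sketch, not part of the original exchange):

```python
from itertools import product

def sentence(A, B, C, D, E):
    # the five clauses of the randomly generated 3-CNF sentence
    return ((not D or not B or C) and
            (B or not A or not C) and
            (not C or not B or E) and
            (E or not D or B) and
            (B or E or not C))

assignments = list(product([False, True], repeat=5))  # 2^5 = 32 assignments
models = [a for a in assignments if sentence(*a)]
print(len(assignments), len(models))  # 32 16
```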
There are 5 (Boolean) variables in the formula. Each of these could be either true or false. This means that there are $2^5=32$ ways of assigning values to these variables. | 2019-12-11 12:44:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535737931728363, "perplexity": 520.8590783869371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00262.warc.gz"} |
https://blog.csdn.net/Biuasm/article/details/79945497 | Frogger
Freddy Frog is sitting on a stone in the middle of a lake. Suddenly he notices Fiona Frog who is sitting on another stone. He plans to visit her, but since the water is dirty and full of tourists' sunscreen, he wants to avoid swimming and instead reach her by jumping.
Unfortunately Fiona's stone is out of his jump range. Therefore Freddy considers to use other stones as intermediate stops and reach her by a sequence of several small jumps.
To execute a given sequence of jumps, a frog's jump range obviously must be at least as long as the longest jump occurring in the sequence.
The frog distance (humans also call it minimax distance) between two stones therefore is defined as the minimum necessary jump range over all possible paths between the two stones.
You are given the coordinates of Freddy's stone, Fiona's stone and all other stones in the lake. Your job is to compute the frog distance between Freddy's and Fiona's stone.
Input
The input will contain one or more test cases. The first line of each test case will contain the number of stones n (2<=n<=200). The next n lines each contain two integers xi,yi (0 <= xi,yi <= 1000) representing the coordinates of stone #i. Stone #1 is Freddy's stone, stone #2 is Fiona's stone, the other n-2 stones are unoccupied. There's a blank line following each test case. Input is terminated by a value of zero (0) for n.
Output
For each test case, print a line saying "Scenario #x" and a line saying "Frog Distance = y" where x is replaced by the test case number (they are numbered from 1) and y is replaced by the appropriate real number, printed to three decimals. Put a blank line after each test case, even after the last one.
Sample Input
2
0 0
3 4
3
17 4
19 4
18 5
0
Sample Output
Scenario #1
Frog Distance = 5.000
Scenario #2
Frog Distance = 1.414
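The expected outputs above can be reproduced with a compact minimax Floyd-Warshall in Python (an illustrative sketch, separate from the C++ solutions that follow):

```python
import math

def frog_distance(stones):
    # stones[0] is Freddy's stone, stones[1] is Fiona's
    n = len(stones)
    d = [[math.dist(p, q) for q in stones] for p in stones]
    # Floyd-Warshall variant: a path's "length" is its longest single jump,
    # and we minimise that over all paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return d[0][1]

print(f"{frog_distance([(0, 0), (3, 4)]):.3f}")             # 5.000
print(f"{frog_distance([(17, 4), (19, 4), (18, 5)]):.3f}")  # 1.414
```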
#include<stdio.h>//Floyd-Warshall
#include<math.h>
#include<algorithm>
using namespace std;
int main()
{
int n,g=1;
double x[250],y[250],a[250][250];
while(~scanf("%d",&n)&&n)
{
for(int i=0;i<n;i++)
scanf("%lf %lf",&x[i],&y[i]);
for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j]=a[j][i]=sqrt((x[i]-x[j])*(x[i]-x[j])+(y[i]-y[j])*(y[i]-y[j]));
// Floyd-Warshall variant: a path's "length" is its longest jump,
// minimised over all intermediate stones k
for(int k=0;k<n;k++)
for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j]=min(a[i][j],max(a[i][k],a[j][k]));
printf("Scenario #%d\n",g++);
printf("Frog Distance = %.3f\n\n",a[0][1]);
}
return 0;
}
#include<stdio.h>//Dijkstra
#include<math.h>
#include<algorithm>
#define inf 0x3f3f3f
using namespace std;
int main()
{
int n,g=1;
double x[250],y[250],dis[250],a[250][250];
int vis[250];
while(~scanf("%d",&n),n)
{
for(int i=0;i<n;i++)
scanf("%lf %lf",&x[i],&y[i]);
for(int i=0;i<n;i++)
for(int j=0;j<n;j++)
a[i][j]=a[j][i]=sqrt((x[i]-x[j])*(x[i]-x[j])+(y[i]-y[j])*(y[i]-y[j]));
for(int i=0;i<n;i++)
{
dis[i]=a[0][i];
vis[i]=0;
}
vis[0]=1;
for(int i=0;i<n-1;i++)
{
int k;
double mi=inf;
for(int j=0;j<n;j++)
{
if(vis[j]==0&&dis[j]<mi)
{
mi=dis[j];
k=j;
}
}
vis[k]=1;
// relax: reaching j via k needs a jump range of max(dis[k], a[k][j])
for(int j=0;j<n;j++)
dis[j]=min(dis[j],max(dis[k],a[k][j]));
}
printf("Scenario #%d\n",g++);
printf("Frog Distance = %.3f\n\n",dis[1]);
}
return 0;
}
#include<stdio.h>
#include<math.h>
#include<queue>
#include<algorithm>
#define inf 0x3f3f3f
using namespace std;
int n,g=1,vis[250];
double x[250],y[250],dis[250],a[250][250];
void spfa()
{
queue<int>q;
int i,j;
for(i=1; i<n; i++)
{
dis[i]=inf;
vis[i]=0;
}
dis[0]=0;
q.push(0);
vis[0]=1;
while(!q.empty())
{
i=q.front();
q.pop();
vis[i]=0;
for(j=0; j<n; j++)
{
if(dis[j]>max(dis[i],a[i][j]))
{
dis[j]=max(dis[i],a[i][j]);
if(vis[j]==0)
{
q.push(j);
vis[j]=1;
}
}
}
}
}
int main()
{
while(~scanf("%d",&n)&&n)
{
for(int i=0; i<n; i++)
scanf("%lf %lf",&x[i],&y[i]);
for(int i=0; i<n; i++)
for(int j=0; j<n; j++)
a[i][j]=a[j][i]=sqrt((x[i]-x[j])*(x[i]-x[j])+(y[i]-y[j])*(y[i]-y[j]));
spfa();
printf("Scenario #%d\n",g++);
printf("Frog Distance = %.3f\n\n",dis[1]);
}
return 0;
} | 2018-12-11 16:43:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45026934146881104, "perplexity": 7227.340450643115}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823657.20/warc/CC-MAIN-20181211151237-20181211172737-00184.warc.gz"} |
https://plainmath.net/11498/burial-cloth-egyptian-estimated-contain-radioactive-materialcarbon | Question
# The burial cloth of an Egyptian mummy is estimated to contain 560 g of the radioactive material carbon-14, which has a
Exponential models
The burial cloth of an Egyptian mummy is estimated to contain 560 g of the radioactive material carbon-14, which has a half life of 5730 years.
a. Complete the table below. Make sure you justify your answer by showing all the steps.
$$\begin{array}{|l|l|}\hline t(\text{in years})&m(\text{amount of radioactive material})\\\hline 0&\\\hline 5730&\\\hline 11460&\\\hline 17190&\\\hline\end{array}$$
b. Find an exponential function that models the amount of carbon-14 in the cloth, y, after t years. Make sure you justify your answer by showing all the steps.
c. If the burial cloth is estimated to contain 49.5% of the original amount of carbon-14, how long ago was the mummy buried. Give exact answer. Make sure you justify your answer by showing all the steps.
2021-02-13
Given that
Initially the radioactive carbon-14 present in the burial cloth of the egyptian mummy is 560g.
The half life of carbon-14 is 5730 years.
The total amount of carbon-14 decayed after t years is given by
$$\displaystyle{A}={A}_{{0}}{\left({0.5}\right)}^{{{\frac{{{t}}}{{{h}}}}}}$$ (1)
Where, $$\displaystyle{A}_{{0}}$$ initial amount of carbon -14
t-time in years,
h-Half life of carbon -14
a) To complete table using the following calculation:
Now,
1) t=0 years, h=5730 years,$$\displaystyle{A}_{{0}}={560}$$g
$$\displaystyle{A}={A}_{{0}}{\left({0.5}\right)}^{{{\frac{{{t}}}{{{h}}}}}}$$
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{{\frac{{{0}}}{{{5730}}}}}}$$
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{0}}={560}{\left({1}\right)}$$
$$\displaystyle{A}={560}$$g
2) $$t=5730$$ years, $$h=5730$$ years,$$\displaystyle{A}_{{0}}={560}$$g
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{{\frac{{{5730}}}{{{5730}}}}}}$$
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{1}}={560}{\left({0.5}\right)}$$
$$\displaystyle{A}={280}$$g
3) $$t=11460$$ years, $$h=5730$$ years,$$\displaystyle{A}_{{0}}={560}$$g
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{{\frac{{{11460}}}{{{5730}}}}}}$$
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{2}}={560}{\left({0.25}\right)}$$
$$\displaystyle{A}={140}$$g
4) $$t=17190$$ years, $$h=5730$$ years,$$\displaystyle{A}_{{0}}={560}$$g
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{{\frac{{{17190}}}{{{5730}}}}}}$$
$$\displaystyle{A}={560}{\left({0.5}\right)}^{{3}}={560}{\left({0.125}\right)}$$
$$\displaystyle{A}={70}$$g
The table becomes
$$\begin{array}{|l|l|}\hline t(\text{in years})&m(\text{amount of radioactive material})\\\hline 0&560\\\hline 5730&280\\\hline 11460&140\\\hline 17190&70\\\hline\end{array}$$
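The completed table and the part (c) calculation can be checked numerically; the sketch below mirrors equation (1) (helper names are illustrative):

```python
import math

HALF_LIFE = 5730.0  # years
A0 = 560.0          # grams of carbon-14 initially

def amount(t):
    # equation (1): A = A0 * (0.5)**(t / h)
    return A0 * 0.5 ** (t / HALF_LIFE)

for t in (0, 5730, 11460, 17190):
    print(t, amount(t))        # 560.0, 280.0, 140.0, 70.0 grams

# part (c): solve 0.495 = 0.5**(t / 5730) for t
t = HALF_LIFE * math.log(0.495) / math.log(0.5)
print(round(t, 1))             # about 5813 years
```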
b)To find the exponential function that models the amount of carbon-14 in the cloth, y, after t years:
$$\displaystyle{A}={A}_{{0}}{e}^{{-{k}{t}}}$$
Here, at $t=0, y=560$ g,
k - rate of decay,
t - time in years.
The half life determines k: since $$280=560e^{-5730k}$$, we get $$k=\frac{\ln 2}{5730}\approx 0.000121$$.
The exponential function is
$$\displaystyle{y}={560}{e}^{{-{0.000121}{t}}}$$
c) To find t when the cloth contains 49.5% of the original amount of carbon-14:
49.5% of original amount of carbon-14 is
$$\displaystyle{49.5}\%\times{560}={\frac{{{49.5}}}{{{100}}}}\times{560}$$
$$\displaystyle={49.5}\times{5.6}$$
$$\displaystyle{49.5}\%\times{560}={277.2}$$g
Now, $$A_{0} =560$$, $$A= 277.2$$ g, $$h=5730$$ years, from (1)
$$\displaystyle{A}={A}_{{0}}{\left({0.5}\right)}^{{{\frac{{{t}}}{{{h}}}}}}$$
$$\displaystyle{277.2}={560}{\left({0.5}\right)}^{{{\frac{{{t}}}{{{5730}}}}}}$$
$$\displaystyle{\left({0.5}\right)}^{{{\frac{{{t}}}{{{5730}}}}}}={\frac{{{277.2}}}{{{560}}}}$$
$$\displaystyle{\left({0.5}\right)}^{{{\frac{{{t}}}{{{5730}}}}}}={0.495}$$
$$\displaystyle{\frac{{{t}}}{{{5730}}}}{\ln{{0.5}}}={\ln{{0.495}}}$$
$$\displaystyle{t}={5730}{\left({\frac{{{\ln{{0.495}}}}}{{{\ln{{0.5}}}}}}\right)}$$
$$\displaystyle{t}={5730}{\left({\frac{{-{0.70320}}}{{-{0.69315}}}}\right)}$$
$$\displaystyle{t}={5730}{\left({1.01450}\right)}$$
$$\displaystyle{t}\approx{5813.1}$$ years
Thus, the mummy was buried about 5813 years ago. | 2021-07-30 11:39:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43503010272979736, "perplexity": 2405.2833567159296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153966.52/warc/CC-MAIN-20210730091645-20210730121645-00155.warc.gz"}
https://www.transtutors.com/questions/5-76-lo-10-auditing-standards-indicate-that-if-the-preliminary-control-risk-assessme-1357320.htm | # 5-76 LO 10 Auditing standards indicate that if the preliminary control risk assessment is low,...
5-76 LO 10 Auditing standards indicate that if the preliminary control risk assessment is low, the auditor must gain assurance that the controls are operating effectively.
a. What is meant by testing the operating effectiveness of control procedures? How does an auditor decide which controls to test?
b. How is the auditor’s assessment of control risk affected if a docu- mented control procedure is not operating effectively? Explain the effect of such an assessment on substantive audit procedures. | 2018-07-19 17:33:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8315189480781555, "perplexity": 5220.269827519226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591150.71/warc/CC-MAIN-20180719164439-20180719184439-00079.warc.gz"} |
https://escholarship.org/uc/item/5796c2vf | The [C II]/[N II] ratio in 3 < z < 6 sub-millimetre galaxies from the South Pole Telescope survey
Open Access Publications from the University of California
## The [C II]/[N II] ratio in 3 < z < 6 sub-millimetre galaxies from the South Pole Telescope survey
• Author(s): Cunningham, DJM
• Chapman, SC
• Aravena, M
• De Breuck, C
• Bethermin, M
• Chen, Chian-Chou
• Dong, Chenxing
• Gonzalez, AH
• Greve, TR
• Litke, KC
• Ma, J
• Malkan, M
• Marrone, DP
• Miller, T
• Reuter, C
• Rotermund, K
• Spilker, JS
• Stark, AA
• Strandet, M
• Vieira, JD
• Weiss, A
• et al.
## Published Web Location
https://doi.org/10.1093/mnras/staa820
Abstract
ABSTRACT We present Atacama Compact Array and Atacama Pathfinder Experiment observations of the [N ii] 205 μm fine-structure line in 40 sub-millimetre galaxies lying at redshifts z = 3–6, drawn from the 2500 deg2 South Pole Telescope survey. This represents the largest uniformly selected sample of high-redshift [N ii] 205 μm measurements to date. 29 sources also have [C ii] 158 μm line observations allowing a characterization of the distribution of the [C ii] to [N ii] luminosity ratio for the first time at high redshift. The sample exhibits a median L$_{{\rm{[C\,{\small II}]}}}$/L$_{{\rm{[N\,{\small II}]}}}$ ≈ 11.0 and interquartile range of 5.0 –24.7. These ratios are similar to those observed in local (Ultra)luminous infrared galaxies (LIRGs), possibly indicating similarities in their interstellar medium. At the extremes, we find individual sub-millimetre galaxies with L$_{{\rm{[C\,{\small II}]}}}$/L$_{{\rm{[N\,{\small II}]}}}$ low enough to suggest a smaller contribution from neutral gas than ionized gas to the [C ii] flux and high enough to suggest strongly photon or X-ray region dominated flux. These results highlight a large range in this line luminosity ratio for sub-millimetre galaxies, which may be caused by variations in gas density, the relative abundances of carbon and nitrogen, ionization parameter, metallicity, and a variation in the fractional abundance of ionized and neutral interstellar medium.
Many UC-authored scholarly publications are freely available on this site because of the UC's open access policies. Let us know how this access is important for you. | 2020-08-05 17:19:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.59642094373703, "perplexity": 11628.58758171419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735963.64/warc/CC-MAIN-20200805153603-20200805183603-00165.warc.gz"} |
http://www.ck12.org/book/Basic-Algebra/r1/section/8.4/ | <meta http-equiv="refresh" content="1; url=/nojavascript/">
You are reading an older version of this FlexBook® textbook: CK-12 Algebra - Basic Go to the latest version.
# 8.4: Scientific Notation
Difficulty Level: At Grade Created by: CK-12
Sometimes in mathematics numbers are huge. They are so huge that we use what is called scientific notation. It is easier to work with such numbers when we shorten their decimal places and multiply them by 10 to a specific power. In this lesson, you will learn how to use scientific notation by hand and on a calculator.
Powers of 10:
100,000 = 10^5
10,000 = 10^4
1,000 = 10^3
100 = 10^2
10 = 10^1
## Using Scientific Notation for Large Numbers
Example: If we divide 643,297 by 100,000 we get 6.43297. If we multiply this quotient by 100,000, we get back to our original number. But we have just seen that 100,000 is the same as $10^5$, so if we multiply 6.43297 by $10^5$, we should also get our original answer. In other words, $6.43297 \times 10^5=643,297$. Because there are five zeros, the decimal moves over five places.
Solution: Look at the following examples:
$2.08 \times 10^4 = 20,800$
$2.08 \times 10^3 = 2,080$
$2.08 \times 10^2 = 208$
$2.08 \times 10^1 = 20.8$
$2.08 \times 10^0 = 2.08$
The power tells how many decimal places to move; positive powers mean the decimal moves to the right. A positive 4 means the decimal moves four positions to the right.
Example 1: Write in scientific notation.
653,937,000
Solution: $653,937,000 = 6.53937000 \times 100,000,000 = 6.53937 \times 10^8$
Oftentimes we do not keep more than a few decimal places when using scientific notation, and we round the number to the nearest whole number, tenth, or hundredth depending on what the directions say. Rounding Example 1 could look like $6.5 \times 10^8$.
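The move-the-decimal rule above is easy to express as a short program. This sketch (my own illustration, not part of the lesson) returns the mantissa and exponent of a number in scientific notation:

```python
import math

def to_scientific(x, digits=5):
    """Return (mantissa, exponent) such that x = mantissa * 10**exponent,
    with exactly one nonzero digit before the decimal point."""
    exponent = math.floor(math.log10(abs(x)))
    mantissa = round(x / 10**exponent, digits)
    return mantissa, exponent

print(to_scientific(653_937_000))  # (6.53937, 8), as in Example 1
print(to_scientific(0.0000004))    # (4.0, -7), as in Example 2 below
```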
## Using Scientific Notation for Small Numbers
We’ve seen that scientific notation is useful when dealing with large numbers. It is also good to use when dealing with extremely small numbers.
Look at the following examples:
$2.08 \times 10^{-1} = 0.208$
$2.08 \times 10^{-2} = 0.0208$
$2.08 \times 10^{-3} = 0.00208$
$2.08 \times 10^{-4} = 0.000208$
Example 2: The time taken for a light beam to cross a football pitch is 0.0000004 seconds. Write in scientific notation.
Solution: $0.0000004 = 4 \times 0.0000001 = 4 \times \frac{1}{10,000,000} = 4 \times \frac{1}{10^7} = 4 \times 10^{-7}$
## Evaluating Expressions Using Scientific Notation
When evaluating expressions with scientific notation, it is easiest to keep the powers of 10 together and deal with them separately.
Example: $(3.2 \times 10^6) \cdot (8.7 \times 10^{11}) = 3.2 \times 8.7 \cdot 10^6 \times 10^{11} = 27.84 \times 10^{17} = 2.784 \times 10^1 \times 10^{17} = 2.784 \times 10^{18}$
Solution: It is best to keep one number before the decimal point. In order to do that, we had to make 27.84 become $2.784 \times 10^1$ so we could evaluate the expression more simply.
Example 3: Evaluate the following expression.
(a) $(1.7 \times 10^6) \cdot (2.7 \times 10^{-11})$
(b) $(3.2 \times 10^6) \div (8.7 \times 10^{11})$
Solution:
(a) $(1.7 \times 10^6) \cdot (2.7 \times 10^{-11}) = 1.7 \times 2.7 \cdot 10^6 \times 10^{-11} = 4.59 \times 10^{-5}$
(b) $(3.2 \times 10^6) \div (8.7 \times 10^{11}) = \frac{3.2 \times 10^6}{8.7 \times 10^{11}} = \frac{3.2}{8.7} \times \frac{10^6}{10^{11}} = 0.368 \times 10^{6-11} = 3.68 \times 10^{-1} \times 10^{-5} = 3.68 \times 10^{-6}$
You must remember to keep the powers of ten together, and have 1 number before the decimal.
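The keep-the-powers-of-ten-together procedure can likewise be sketched in code (my own illustration, not part of the lesson): multiply the mantissas, add the exponents, then renormalize so one digit sits before the decimal point.

```python
def sci_multiply(m1, e1, m2, e2):
    """Multiply (m1 x 10^e1) by (m2 x 10^e2), keeping the powers of ten
    together and renormalizing to one digit before the decimal point."""
    mantissa, exponent = m1 * m2, e1 + e2
    while abs(mantissa) >= 10:        # e.g. 27.84 -> 2.784, exponent + 1
        mantissa, exponent = mantissa / 10, exponent + 1
    while 0 < abs(mantissa) < 1:      # e.g. 0.368 -> 3.68, exponent - 1
        mantissa, exponent = mantissa * 10, exponent - 1
    return round(mantissa, 10), exponent

print(sci_multiply(3.2, 6, 8.7, 11))    # (2.784, 18), matching the worked example
print(sci_multiply(1.7, 6, 2.7, -11))   # (4.59, -5), matching Example 3(a)
```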
## Scientific Notation Using a Calculator
Scientific and graphing calculators make scientific notation easier. To compute scientific notation, use the [EE] button. This is [2nd] [,] on some TI models, or $[10^x]$, which is [2nd] [log].
For example, to enter $2.6 \times 10^5$, enter 2.6 [EE] 5.
When you hit [ENTER], the calculator displays 2.6E5 if it is set in Scientific mode, OR it displays 260,000 if it is set in Normal mode.
## Solving Real-World Problems Using Scientific Notation
Example: The mass of a single lithium atom is approximately one percent of one millionth of one billionth of one billionth of one kilogram. Express this mass in scientific notation.
Solution: We know that percent means we divide by 100, and so our calculation for the mass (in kg) is $\frac{1}{100} \times \frac{1}{1,000,000} \times \frac{1}{1,000,000,000} \times \frac{1}{1,000,000,000} = 10^{-2} \times 10^{-6} \times 10^{-9} \times 10^{-9}$
Next, we use the product of powers rule we learned earlier in the chapter.
$10^{-2} \times 10^{-6} \times 10^{-9} \times 10^{-9} = 10^{(-2)+(-6)+(-9)+(-9)} = 10^{-26} \ \text{kg}$
The mass of one lithium atom is approximately $1 \times 10^{-26} \ \text{kg}$.
## Practice Set
Write the numerical value of the following expressions.
1. $3.102 \times 10^2$
2. $7.4 \times 10^4$
3. $1.75 \times 10^{-3}$
4. $2.9 \times 10^{-5}$
5. $9.99 \times 10^{-9}$
6. $(3.2 \times 10^6) \cdot (8.7 \times 10^{11})$
7. $(5.2 \times 10^{-4}) \cdot (3.8 \times 10^{-19})$
8. $(1.7 \times 10^6) \cdot (2.7 \times 10^{-11})$
9. $(3.2 \times 10^6) \div (8.7 \times 10^{11})$
10. $(5.2 \times 10^{-4}) \div (3.8 \times 10^{-19})$
11. $(1.7 \times 10^6) \div (2.7 \times 10^{-11})$
Write the following numbers in scientific notation.
1. 120,000
2. 1,765,244
3. 63
4. 9,654
5. 653,937,000
6. 1,000,000,006
7. 12
8. 0.00281
9. 0.000000027
10. 0.003
11. 0.000056
12. 0.00005007
13. 0.00000000000954
14. The moon is approximately a sphere with radius $r = 1.08 \times 10^3$ miles. Use the formula $\text{Surface Area} = 4 \pi r^2$ to determine the surface area of the moon, in square miles. Express your answer in scientific notation, rounded to two significant figures.
15. The charge on one electron is approximately $1.60 \times 10^{-19}$ coulombs. One Faraday is equal to the total charge on $6.02 \times 10^{23}$ electrons. What, in coulombs, is the charge on one Faraday?
16. Proxima Centauri, the next closest star to our Sun, is approximately $2.5 \times 10^{13}$ miles away. If light from Proxima Centauri takes $3.7 \times 10^4$ hours to reach us from there, calculate the speed of light in miles per hour. Express your answer in scientific notation, rounded to two significant figures.
Mixed Review
1. 14 milliliters of a 40% sugar solution was mixed with 4 milliliters of pure water. What is the concentration of the mixture?
2. Solve the system $\begin{cases} 6x+3y+18 \\ -15=11y-5x \end{cases}$.
3. Graph the function by creating a table: $f(x)=2x^2$. Use the following values for $x$: $-5 \le x \le 5$.
4. Simplify $\frac{5a^6 b^2 c^{-6}}{a^{11} b}$. Your answer should have only positive exponents.
5. Each year Americans produce about 230 million tons of trash (Source: http://www.learner.org/interactives/garbage/solidwaste.html). There are 307,006,550 people in the United States. How much trash is produced per person per year?
6. The volume of a 3-dimensional box is given by the formula $V=l(w)(h)$, where $l$ = length, $w$ = width, and $h$ = height of the box. The box holds 312 cubic inches and has a length of 12 inches and a width of 8 inches. How tall is the box?
## Quick Quiz
1. Simplify: $\frac{(2x^{-4}y^3)^{-3} \ \cdot \ x^{-3} y^{-2}}{-2x^0y^2}$.
2. The formula $A=1,500(1.0025)^t$ gives the total amount of money in a bank account with a balance of \$1,500.00, earning 0.25% interest, compounded annually. How much money would be in the account five years in the past?
3. True or false? $\left(\frac{5}{4}\right)^{-3} = -\frac{125}{64}$
http://mathhelpforum.com/differential-equations/164349-first-order-nonlinear-de-print.html

# First-order nonlinear DE
• November 25th 2010, 05:26 AM
Greg98
First-order nonlinear DE
Hello,
I tried searching similar DE problems, but I couldn't find one. The problem is the following differential equation:
$e^{x}y'+xe^{-y}=0$
So, it seems to be a first-order nonlinear DE. But I don't know how to solve it. I couldn't separate it, though I managed to separate simpler equations, but I think that's the right way to go.
Just got it to form,
$e^{-y}(e^{y+x}y'+x)=0$,
which obviously doesn't help anything...
Any help is appreciated. Thank you!
• November 25th 2010, 05:35 AM
harish21
$y'=-\dfrac{xe^{-x}}{e^x}=-\dfrac{x}{e^{2x}}$
• November 25th 2010, 05:40 AM
Sudharaka
Quote:
Originally Posted by Greg98
Hello,
I tried searching similar DE-problems, but I couldn't find one. The problem is following differential equation:
$e^{x}y'+xe^{-x}=0$
So, it seems to be first-order nonlinear DE. But, I don't know how to solve it. I couldn't separe it, though I managed to separate simpler equations, but I think that's the right way to go.
Just got it to form,
$e^{-y}(e^{y+x}y'+x)=0$,
which obviously doesn't help anything...
Any help is appreciated. Thank you!
Dear Greg98,
$e^{x}y'+xe^{-x}=0$
$\frac{dy}{dx}=-xe^{-2x}$
$y=-\int{xe^{-2x}}dx$
This integration could be done by integration by part method.....
Hope this helps.
• November 25th 2010, 08:00 AM
Greg98
Thanks for the help! Because of my omnipotent typing skills, I still need some advice (see original post). Sorry... I tried some logarithms to eliminate $e^{-y}$, but that didn't help.
• November 25th 2010, 08:29 AM
harish21
$e^{x}y'+xe^{-y}=0$
$e^{x}y'+\dfrac{x}{e^{y}}=0$
divide both sides by $e^x$
$y'+\dfrac{x}{e^{x}\;e^{y}}=0$
multiply both sides by $e^y$
$y'\;e^y +\dfrac{x}{e^x}=0$
$e^y \frac{dy}{dx}=-\dfrac{x}{e^x}$
$e^y\;dy=-xe^{-x}\;dx$
finish...
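For completeness, the remaining step (my own continuation, not part of the original thread) goes through by integration by parts on the right-hand side:

```latex
\int e^{y}\,dy = -\int x e^{-x}\,dx
\;\Longrightarrow\;
e^{y} = x e^{-x} + e^{-x} + C = (x+1)e^{-x} + C,
\qquad
y = \ln\bigl((x+1)e^{-x} + C\bigr).
```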
http://tex.stackexchange.com/questions/4832/ruled-lines-on-the-bottom-half-of-each-page/4849

# Ruled lines on the bottom half of each page?
I'm trying to typeset a version of a document that will have the document on the top half of each page, but I want the bottom half to have thin ruled lines for note taking. Does anyone have thoughts about how I could do this automatically? Could I somehow make a custom footer for each page?
In ConTeXt, the page layout has a parameter bottom that goes below the footer. You can set its value just like the value of a footer. That way, you can still have your regular footers and put anything else at the bottom of the page. Here is a complete solution:
\setuplayout
[height=fit,
bottomdistance=1in,
bottom=4.8in]
\startsetups bottom:rule
\vbox
{\dorecurse{32}{\blackrule[width=\textwidth,height=1pt]\blank[12pt]}}
\stopsetups
\setupbottomtexts[\setups{bottom:rule}]
\showframe
\starttext
\dorecurse{10}{\input knuth \par}
\stoptext
In \setuplayout the option bottom=4.8in allocates 4.8in for the bottom. The bottom area is placed 1in from the bottom of the text area (option bottomdistance=1in). The option height=fit asks ConTeXt to recalculate the height of the text area so that everything fits on the page.
The \setupbottomtexts sets what should be printed in the bottom area (this is just like \setupfootertexts for footers). The setups mechanism is just a nice way to abstract out complicated pieces of code.
Depending on what you want the 'document on one half of each page' to look like, you might be able to use pgfpages, as in http://www.guidodiepen.nl/2009/07/creating-latex-beamer-handouts-with-notes/. This approach will work if you want existing pages of the document to be scaled down to half the page size.
Otherwise I would use geometry to create a large bottom margin, and then something like atbegshi to add the ruled lines to each page (\AtBeginShipout{...})
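To make the second suggestion concrete, here is a minimal LaTeX sketch using geometry and atbegshi. The margin sizes, rule length, line spacing, and line count are my own assumed values for US letter paper, not tested against any particular layout:

```latex
\documentclass{article}
\usepackage[top=1in,bottom=6in,left=1in,right=1in]{geometry}% large bottom margin
\usepackage{atbegshi}

% Draw ruled note lines into the bottom half of every shipped-out page.
\newcommand{\notelines}{%
  \setlength{\unitlength}{1in}%
  \begin{picture}(0,0)
    % 12 rules, 0.35in apart, starting 0.75in above the bottom of an
    % 11in-tall page (coordinates run from the page's upper-left corner).
    \multiput(1,-10.25)(0,0.35){12}{\rule{6.5in}{0.4pt}}%
  \end{picture}}

\AtBeginShipout{\AtBeginShipoutUpperLeft{\notelines}}

\begin{document}
Document text fills the top half; the lines below are for note taking.
\end{document}
```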
https://stacks.math.columbia.edu/tag/0BND

Theorem 53.6.2. Let $X$ be a connected scheme. Let $\overline{x}$ be a geometric point of $X$.
1. The fibre functor $F_{\overline{x}}$ defines an equivalence of categories
$\textit{FÉt}_ X \longrightarrow \textit{Finite-}\pi _1(X, \overline{x})\textit{-Sets}$
2. Given a second geometric point $\overline{x}'$ of $X$ there exists an isomorphism $t : F_{\overline{x}} \to F_{\overline{x}'}$. This gives an isomorphism $\pi _1(X, \overline{x}) \to \pi _1(X, \overline{x}')$ compatible with the equivalences in (1). This isomorphism is independent of $t$ up to inner conjugation.
3. Given a morphism $f : X \to Y$ of connected schemes denote $\overline{y} = f \circ \overline{x}$. There is a canonical continuous homomorphism
$f_* : \pi _1(X, \overline{x}) \to \pi _1(Y, \overline{y})$
such that the diagram
$\xymatrix{ \textit{FÉt}_ Y \ar[r]_{\text{base change}} \ar[d]_{F_{\overline{y}}} & \textit{FÉt}_ X \ar[d]^{F_{\overline{x}}} \\ \textit{Finite-}\pi _1(Y, \overline{y})\textit{-Sets} \ar[r]^{f_*} & \textit{Finite-}\pi _1(X, \overline{x})\textit{-Sets} }$
is commutative.
Proof. Part (1) follows from Lemma 53.5.5 and Proposition 53.3.10. Part (2) is a special case of Lemma 53.3.11. For part (3) observe that the diagram
$\xymatrix{ \textit{FÉt}_ Y \ar[r] \ar[d]_{F_{\overline{y}}} & \textit{FÉt}_ X \ar[d]^{F_{\overline{x}}} \\ \textit{Sets} \ar@{=}[r] & \textit{Sets} }$
is commutative (actually commutative, not just $2$-commutative) because $\overline{y} = f \circ \overline{x}$. Hence we can apply Lemma 53.3.11 with the implied transformation of functors to get (3). $\square$
https://zbmath.org/?q=an%3A1188.53090

## Intrinsic formulation of geometric integrability and associated Riccati system generating conservation laws. (English) Zbl 1188.53090
The aim of the paper is to study, firstly, the formulation of Bäcklund transformations based on a Pfaffian system for the case of nonlinear evolution equations which describe pseudospherical surfaces, that is, surfaces with constant negative Gauss curvature, and secondly, the determination of conservation laws for such equations.
Starting from the structure equations of a surface with Gauss curvature equal to $$-1$$, the author is able to transform them into an associated system of differential equations in Riccati form and to formulate the equivalent linear problem. All this is done in an intrinsic way.
Finally, it is shown that geometrical properties of a pseudospherical surface provide a systematic method for obtaining an infinite number of conservation laws.
### MSC:
- 53C80 Applications of global differential geometry to the sciences
- 53C21 Methods of global Riemannian geometry, including PDE methods; curvature restrictions
- 35Q53 KdV equations (Korteweg-de Vries equations)
- 53A10 Minimal surfaces in differential geometry, surfaces with prescribed mean curvature
https://www.sanfoundry.com/statistical-quality-control-basic-questions-answers/

# Statistical Quality Control Questions and Answers – Gauge and Measurement System Capability Studies – 2
This set of Basic Statistical Quality Control Questions and Answers focuses on “Gauge and Measurement System Capability Studies – 2”.
1. What of these can be used as a reasonable model for measurement system capability studies? Here y,x and ε denote the observed measurement, true measurement, and the measurement error respectively.
a) y=x-2ε
b) y=x+ε
c) y=x-ε
d) y=x+2ε
Explanation: The reasonable model, which can be used for measurement system capability is expressed as,
y=x+ε
2. The variance of the total observed measurement is expressed by __________
a) $$\sigma_{Total}^2=\sigma_P^2-\sigma_{Gauge}^2$$
b) $$\sigma_{Total}^2=\sigma_P+\sigma_{Gauge}^2$$
c) $$\sigma_{Total}^2=\sigma_P^2+\sigma_{Gauge}$$
d) $$\sigma_{Total}^2=\sigma_P^2+\sigma_{Gauge}^2$$
Explanation: The variance of the total observed measurement is the sum of the product variance and the gauge variance. It is calculated by,
$$\sigma_{Total}^2=\sigma_P^2+\sigma_{Gauge}^2$$
3. The P/T ratio stands for ___________
a) Probability to tolerance ratio
b) Precision to time ratio
c) Probability to total ratio
d) Precision to tolerance ratio
Explanation: The P/T ratio is used in the measurement system capability analysis. In P/T ratio, P/T refers to Precision to tolerance ratio.
4. What is the value of the P/T ratio?
a) $$\frac{P}{T}=\frac{1.5k\hat{σ_p}}{USL-LSL}$$
b) $$\frac{P}{T}=\frac{k\hat{σ}_{gauge}}{USL-LSL}$$
c) $$\frac{P}{T}=\frac{2k\hat{σ_p}}{USL+LSL}$$
d) $$\frac{P}{T}=\frac{k\hat{σ}_{gauge}}{USL+LSL}$$
Explanation: The P/T ratio is calculated for the evaluation of the gauge capability. It uses the estimated gauge standard deviation $$\hat{\sigma}_{gauge}$$. The P/T ratio is given by,
$$\frac{P}{T}=\frac{k\hat{σ}_{gauge}}{USL-LSL}$$
5. If k is taken to be the number of standard deviations between the usual natural tolerance limits of a normal distribution, what is the value used for k in the P/T ratio?
a) 5.15
b) 8
c) 6
d) 5.60
Explanation: The value k=6 in the P/T ratio corresponds to the number of standard deviations, between the usual natural tolerance limits for a normal distribution.
6. For a process, which has, USL and LSL equal to 60, and 5 respectively, and the value of $$\hat{σ}_{gauge}$$ = 0.887, what will be the value of P/T ratio when k=6?
a) 0.087
b) 0.077
c) 0.067
d) 0.097
Explanation: We know that,
$$\frac{P}{T}=\frac{k\hat{σ}_{gauge}}{USL-LSL}$$
Putting the values in the question, we get P/T=0.097.
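The calculation in questions 4 through 6 is simple to script; a minimal sketch (the function name is my own, not from the quiz):

```python
def pt_ratio(sigma_gauge, usl, lsl, k=6):
    """Precision-to-tolerance ratio: k * sigma_gauge / (USL - LSL)."""
    return k * sigma_gauge / (usl - lsl)

print(round(pt_ratio(0.887, 60, 5), 3))  # 0.097, as in question 6
```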
7. Which of these indicate an adequate measurement system?
a) $$\frac{P}{T}≤0.1$$
b) $$\frac{P}{T}≤0.5$$
c) $$\frac{P}{T}≥0.1$$
d) $$\frac{P}{T}=0.3$$
Explanation: If the value of the P/T ratio is less than or equal to 0.1, the measurement system is predicted to be adequate for the selected process.
8. The options are the P/T ratios for different measurement systems. Which of these shows an adequate measurement system?
a) 0.21
b) 0.13
c) 0.18
d) 0.06
Explanation: A P/T ratio less than 0.1 indicates an adequate measurement system for any process. So here, 0.06 < 0.1, which makes this an example of an acceptable measurement system.
9. The reason a measurement system is called adequate when its P/T ratio is less than or equal to 0.1 is ____________
a) A measurement device should be calibrated in units one-tenth as large as the accuracy required in the final measurement
b) A measurement device should be calibrated in units one-third as large as the accuracy required in the final measurement
c) A measurement device should be calibrated in units one-fourth as large as the accuracy required in the final measurement
d) A measurement device should be calibrated in units three-tenths as large as the accuracy required in the final measurement
Explanation: Values of the estimated P/T ratio of 0.1 or less indicate an adequate measurement system. This is based upon the general rule, which requires the measurement device to be calibrated in units one-tenth as large as the accuracy required in the final measurement.
10. Which of these can be used as the estimate of the variance of the total variability, which includes both the product variability and the gauge variability?
a) The sample mean
b) The sample variance
c) The sample standard deviation
d) Number of defects in the sample
Explanation: The sample variance can be used as the estimate of the variance of the total variability, which includes both the product variability and the gauge variability.
11. If the sample variance of a process is 10.05, and the gauge capability standard deviation is estimated to be 0.79, what will be the value of the estimate of the standard deviation of the product variability?
a) 9.26
b) 3.04
c) 2.03
d) 8.91
Explanation: As we know that,
$$\sigma_{Total}^2=\sigma_P^2+\sigma_{Gauge}^2$$
The same relation holds for the corresponding estimates. Putting in the given values, $$\hat{\sigma}_P^2 = 10.05 - 0.79^2 = 9.43$$, so $$\hat{\sigma}_P \approx 3.07$$, which is closest to option (b).
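The subtraction of variances used here can also be scripted (function name my own; note the exact value is 3.07, which the answer key rounds to its closest option):

```python
import math

def product_sigma(total_variance, sigma_gauge):
    """Estimate sigma_P from sigma_Total^2 = sigma_P^2 + sigma_Gauge^2."""
    return math.sqrt(total_variance - sigma_gauge**2)

print(round(product_sigma(10.05, 0.79), 2))  # 3.07
```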
12. Which of these shows a correct expression for ρp?
a) $$ρ_p=\frac{\sigma_p^2}{2σ_{Gauge}^2}$$
b) $$ρ_p=\frac{\sigma_{gauge}^2}{σ_{Total}^2}$$
c) $$ρ_p=\frac{\sigma_P^2}{σ_{Gauge}^2}$$
d) $$ρ_p=\frac{\sigma_{gauge}^2}{2σ_{total}^2}$$
Explanation: The gauge capability ratio ρp is the ratio of the variances of the product error and the gauge errors. It is expressed as,
$$ρ_p=\frac{\sigma_P^2}{σ_{Gauge}^2}$$
13. The gauge capability ratio ρM is expressed as ____________
a) $$ρ_M=\frac{\sigma_p^2}{2σ_{Gauge}^2}$$
b) $$ρ_M=\frac{\sigma_{gauge}^2}{σ_{Total}^2}$$
c) $$ρ_M=\frac{\sigma_P^2}{σ_{Gauge}^2}$$
d) $$ρ_M=\frac{\sigma_{gauge}^2}{2σ_{total}^2}$$
Explanation: The gauge capability ratio ρM is the ratio of the variance of the gauge errors to the variance of the total observed errors. It is expressed as,
$$ρ_M=\frac{\sigma_{gauge}^2}{σ_{Total}^2}$$
14. The general rule that defines a measurement system as adequate when the P/T ratio is less than or equal to 0.1 can be used without caution.
a) True
b) False
Explanation: Caution should be used in accepting this general rule of thumb in all cases. A gauge must be capable of measuring product accurately enough and precisely enough for the analyst to make a correct decision. This may not necessarily require $$\frac{P}{T} \le 0.1$$.
15. ρP=1-ρM.
a) True
b) False
Explanation: $$ρ_M=\frac{\sigma_{Gauge}^2}{\sigma_{Total}^2};\quad ρ_P=\frac{\sigma_P^2}{\sigma_{Gauge}^2};\quad \sigma_{Total}^2=\sigma_P^2+\sigma_{Gauge}^2$$
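The variance decomposition implies that the gauge share and the product share of the total variance sum to one, which is the sense in which the identity ρP = 1 − ρM holds (taking ρP as the share of total variance due to the product, σP²/σTotal²). A quick numerical check, using the sigma values from question 11:

```python
def variance_shares(sigma_p, sigma_gauge):
    """Split total variance sigma_Total^2 = sigma_P^2 + sigma_Gauge^2
    into the gauge share (rho_M) and the product share (rho_P)."""
    total_var = sigma_p**2 + sigma_gauge**2
    rho_m = sigma_gauge**2 / total_var
    rho_p = sigma_p**2 / total_var
    return rho_m, rho_p

rho_m, rho_p = variance_shares(3.07, 0.79)  # sigma values from question 11
print(abs(rho_p - (1 - rho_m)) < 1e-12)     # True: rho_P = 1 - rho_M
```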
https://www.intechopen.com/chapters/48108

Open access peer-reviewed chapter
# Enhancing Biomass Utilization for Bioenergy — Crop Rotation Systems and Alternative Conversion Processes
Written By
Ronald Hatfield
Submitted: June 10th, 2014 Reviewed: November 11th, 2014 Published: September 30th, 2015
DOI: 10.5772/59883
From the Edited Volume
## Biofuels
Edited by Krzysztof Biernat
## 1. Introduction
With ever increasing global populations there is a rising demand for energy to support even modest changes in lifestyle. It has been recognized for some time now that with decreasing oil reserves on a global scale there is a need for alternative energy sources. Many of our needs for energy utilizing electricity can be met by alternatives to petroleum and coal-based power generation. Of particularly high potential is the efficient utilization of solar energy. According to Lewis and Nocera [1], the earth receives approximately 7000 times more energy from the sun than is utilized by all of mankind. Several technologies are being utilized, ranging from photovoltaics to focusing mirrors that superheat fluids for steam generation in the production of electricity. The continued development of these technologies, along with other types such as wind-driven turbines, geothermal, hydroelectric, and ocean wave motion for electricity production, will greatly lessen the demand on petroleum-based energy. However, a critical need is liquid fuels for transportation. The movement of people and goods over great distances is a vital part of the world economy.
Part of the answer may still lie in the utilization of solar energy; not in a direct manner to power vehicles (cars, trucks, trains, and airplanes), but in what it has been doing for billions of years: providing energy to growing plants. Conversion of plant biomass to energy or the production of bio-based liquid fuels (biofuels) has received greater attention in the last couple of decades. Although there is a tremendous amount of potential energy stored in the total plant biomass as it goes through its normal life cycle, much of the current technology has focused on the utilization of grains (corn, cereals, and soybeans) or sugars from storage organs of specialty plants (sugar cane, sugar beets). This has allowed a rapid ramping up of liquid fuel production in the form of ethanol. The technology needed for this production was not something that required a lot of development, but was basically a matter of scale. After all, the brewing industry has been utilizing this process for centuries. For corn grain and cereals, it is a matter of converting starch to glucose, a simple enzymatic process, followed by the fermentation of glucose by yeast to ethanol. In the case of sugar cane or sugar beets, the same technology was already being utilized to efficiently remove the sugar (sucrose) from plant biomass and easily convert it to sugars fermentable with yeast [2]. Even for the production of plant-derived biodiesel, the grains from oil-producing crops are pressed to release oils in which the fatty acids can be methyl- or ethyl-esterified, producing a suitable diesel alternative. Biodiesel lags well behind other types of biofuel production systems and seems to be focused primarily on the utilization of waste products from the food industry [3].
With current scenarios, the ethanol industry will have to compete with increasing demands on grains for feed and food [2]. A concern has been the diversion of land from food production to energy production, and rightly so with increasing world populations. With this in mind, much attention has been directed to the conversion of cellulosic biomass to liquid fuels. This subject has been highly reviewed in the past few years, addressing a wide range of concerns and potential advantages. It is clear that crop residues will play a key role in meeting the projected total biomass needed to provide the amount of liquid fuel required to meet the goal of replacing 30% of U.S. petroleum consumption by 2030 [2]. Dedicated biofuel crops such as switchgrass and fast-growing poplar also figure prominently into meeting this goal. It is envisioned that the dedicated energy crops could be grown on marginal lands poorly suited for the high-capacity needs of feed and food [4]. Recently, Schmer et al. [5] demonstrated that switchgrass grown in areas considered to be marginal cropland could be an effective source of biomass for biofuels. It has been proposed that establishment of low-input, man-made prairies could be an economical way of producing biomass for biofuels [6]. Although this could be a way to supply some of the required biomass, it may fall well short of the amount needed per acre to make it a practical enterprise for harvest and transportation. Well-managed switchgrass plots on marginal croplands supplied higher estimated ethanol yields per acre (93% greater than poor management) [5]. Genetic improvement is a critical component to establish switchgrass as a major biomass source that can meet the demands for more biofuels [7]. It should be kept in mind that biofuel programs must fit into an agricultural system that maximizes the production potential of each acre of farmland while protecting the environment.
In this respect, switchgrass on marginal croplands could also provide a nutrient sink for nitrogen waste from animal production. Switchgrass needs little nitrogen input, but, as with any crop, production increases with the application of nitrogen [5]. Well-managed switchgrass plots could extend the useful life of croplands no longer fit for typical row crop production. Perennial grasses such as switchgrass can also provide runoff protection as buffer strips along streams and rivers to keep nutrients out of waterways and lakes, thus providing dual benefits.
Although a wide range of crop residues has been proposed to contribute to the total biomass needed for biofuel production, corn stover would be the largest contributor. It has been estimated that corn stover could contribute as much as 20% of the total biomass requirement [2]. One concern with removing crop residues is the long-term impact upon soils. Removing large portions of the residues leaves the soil surface vulnerable to wind and water erosion. Guidelines have been proposed for leaving sufficient biomass on the fields to keep this from becoming too serious a problem [8]. In addition, removing large portions of the biomass leads to a depletion of soil organic carbon levels [9]. If sufficient amounts were left in place to meet these demands, this in turn would limit the amount of biomass for biofuel production [10]. With anticipated small profit margins, especially in the early going, there will be a temptation to remove more of the biomass, leaving the soils vulnerable to erosion and risking soil organic carbon depletion. Once these soils have reached high depletion levels, productivity will be severely restricted and returning them to better productivity will be a monumental task. Switching these lands to crops such as switchgrass that do well in marginal soils would help the biofuels industry, but some of the most productive farmland for food and feed would be lost. This would most certainly sharpen the debate over land use for biofuels vs. food. No matter the approach, it is clear multiple scenarios will need to be investigated to meet biomass-for-biofuel needs in a sustainable manner. The driving force behind future directions should be maintaining our existing high-production lands while capturing increased value from lands that should not be in continuous crop production.
The challenge moving forward is to develop farming systems that are both economically and environmentally sustainable while meeting the increasing demands for food, feed, fiber, and now bioenergy. There is no doubt that crop residues, especially corn stalks, play a major part in making this vision a reality, but, as already pointed out, doing so means walking a fine line between productivity and maintaining soil health.
## 2. The role of crop rotations
At one time, crop rotations utilizing nitrogen-fixing legumes were much more prevalent on the landscape due to the cost and availability of commercial fertilizers. Once commercial fertilizers became readily available, there was no longer a need to rely on legume forages, which are particularly good at fixing nitrogen, to support subsequent crop production. In the most productive regions of the United States, particularly the Midwest Breadbasket, there is economic pressure to produce monocultures of crops such as corn. This is made possible by the relatively cheap source of commercial nitrogen-based fertilizer [11] and by the development of pesticides and herbicides. The Haber-Bosch process to produce ammonia requires large amounts of energy and appropriate catalysts to complete the transformation of hydrogen and nitrogen into ammonia. The commercialization of this process has been referred to as the detonator for the world population explosion because lands could now produce much higher levels of food to support increased populations [12]. Although this has allowed increased grain production, the cost of nitrogen fertilizers increased 8- to 14-fold from a low in the early 1970s to 2013 (USDA-REE statistics, http://www.ers.usda.gov/dataproducts/fertilizer-use-and-price.aspx#.VDwPcOe9i-Q). Much of the increased cost of nitrogen-based commercial fertilizers has been driven by rising energy costs, not only for production of anhydrous ammonia but also for transportation. As demand for fossil-based fuels continues to grow and supplies at some point become limiting, the price of fertilizers will continue to rise (see fertilizer price trends, USDA-REE statistics), putting greater pressure on the value of crops produced on each acre of land. An alternative is to find other methods of increasing soil fertility. In farming regions where animal production is an integral part of the farming system, animal waste provides a valuable nutrient source (e.g., dairy production).
Although animal waste is a good source of nitrogen-based nutrients for crops, good management is critical to maintaining nutrient availability for crop production and preventing excessive soil erosion.
Production of forage legumes in rotation with row crops provides opportunities for increasing nitrogen for crop production while stabilizing and improving the environment (Figure 1). In 2010, a workshop (organized by the National Alfalfa & Forage Alliance, Pioneer, USDA-Agricultural Research Service, and the National Corn Growers Association) was held to discuss the feasibility and benefits of establishing alfalfa-corn rotations to meet food and feed demands, as well as to provide biomass for biofuel production (proceedings available online: www.alfalfa-forage.org). Workshop attendees evaluated the feasibility of using crop rotations to maintain soil fertility while providing sufficient biomass for biofuel production. Jung [13] reported that alfalfa (Medicago sativa L.) is a deep-rooted perennial legume forage typically used as a feed source for ruminant animal production. Because of its high capacity to fix nitrogen, there is no need for the addition of nitrogen fertilizer for its own growth. Nitrogen stored in the roots after two years of growth would be sufficient to supply approximately 75% of the next two years of corn production [13]. This would have several positive environmental impacts: 1) decreased greenhouse gas emissions from reduced dependence upon commercial fertilizers; 2) reduced soil erosion; 3) reduced nutrient run-off; and 4) improved carbon sequestration [13]. A potential advantage of such a rotation system would be the accumulation of soil organic carbon if proper soil/plant management were put into place [14] (Figure 1). However, Baker [15] cautions that assessing changes in soil organic carbon is not easy in a rotation system due to the relatively short duration of the alfalfa in its rotation sequence, especially in the early years of adoption of such a farming system.
Having the organic matter incorporated into the soil already in the form of extensive root systems eliminates the need for soil tillage to assist in moving organic matter in crop residue to the soil biome.
Accumulation of fixed nitrogen in alfalfa is substantial (152 kg N ha-1 over a range of environments and soil types) [16]. This decreases the need for application of commercial fertilizer, whose production depends upon fossil fuels in the form of methane. As a perennial legume, alfalfa’s early spring growth as well as late fall growth provides cover for soils when row crops would be planted and after harvest, when soils are most vulnerable to erosion. While this does not remove the need for good management practices during the corn production part of the cycle, the severity of erosion is greatly reduced compared to continuous corn or a corn-soybean rotation. According to Vadas et al. [17], alfalfa-corn rotations for bioenergy production can have significant advantages over continuous corn, mostly in terms of efficiency of energy production, decreased soil erosion, and less nitrogen leaching. The bottom line was that continuous corn had the greatest production costs but also the greatest profit potential; this comparison does not assign a cost to the soil erosion. Scientists at the U.S. Dairy Forage Research Center in conjunction with University of Wisconsin-Madison researchers Grabber, Renz, and Lauer have shown that inter-seeding alfalfa with corn can double the first-year yields from the alfalfa [18]. Such a practice would ensure cover-crop availability once the corn is harvested and would provide a jumpstart on the production of alfalfa the following spring [19]. The use of alfalfa as a cover crop would appear to impose some drag on total corn production during the establishment year, but alfalfa production would be significantly increased during the first full year of production. Most importantly, the soil would be better protected during the last year of corn production and during alfalfa establishment, decreasing soil erosion potential.
Additionally, since alfalfa is a deep-rooted perennial, it can recover nitrogen that has leached beyond the limited root zone of corn, helping prevent further leaching and contamination of ground water.
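The nitrogen numbers above lend themselves to a quick sanity check. A minimal sketch follows; the per-year corn nitrogen demand used here is an illustrative assumption, not a figure from the chapter (only the 152 kg N ha-1 value comes from [16]):

```python
# Sketch: how far alfalfa-fixed nitrogen goes toward corn's N demand.
ALFALFA_FIXED_N = 152.0          # kg N/ha accumulated under alfalfa [16]
CORN_N_DEMAND_PER_YEAR = 100.0   # kg N/ha/yr -- assumed, for illustration only

def fertilizer_offset_fraction(years: int) -> float:
    """Fraction of corn's N demand over `years` covered by fixed N (capped at 1)."""
    demand = CORN_N_DEMAND_PER_YEAR * years
    return min(ALFALFA_FIXED_N / demand, 1.0)

print(f"Two-year offset: {fertilizer_offset_fraction(2):.0%}")  # Two-year offset: 76%
```

Under this assumed demand, the fixed nitrogen covers roughly three quarters of a two-year corn requirement, consistent with the approximately 75% figure reported by Jung [13].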
From 1993 to 2000, a pilot program was run to test the feasibility of alfalfa-corn rotation for energy production [13]. The alliance involved the University of Minnesota, USDA-Agricultural Research Service, Minnesota Valley Alfalfa Producers, and the DOE. The proposed system utilized dry baled alfalfa from which stems were mechanically separated from the leaves, creating two feedstock components: high-fiber stems for energy production and leaf meal as a high-protein fraction. Feeding trials with the alfalfa leaf meal found that it could successfully replace other protein sources such as soybean meal in diets of calves, dairy cows, and feedlot steers [13]. Although this early work indicated the feasibility and advantages of alfalfa-corn rotations in a bioenergy production system, the project fell apart and was abandoned before it could move to the next stages of testing. However, these initial results indicated an existing infrastructure for handling alfalfa that could be easily adapted to a biofuel production program.
There is no doubt that rotation of corn and alfalfa would have significant environmental benefits over continuous corn. But what is the economic and environmental impact upon available biomass for biofuels and the need for feed and food? Alfalfa leaves can contain 30% or more protein as a fraction of total dry matter. Typically during plant development, the stem becomes an increasing proportion of the total biomass; because the stem is lower in protein, total plant protein decreases [20]. Harvesting schemes currently in place require cutting the alfalfa at the early-bud stage of development to keep the fiber content as low as possible and the protein content as high as possible. The downside to this harvesting practice is the need for frequent trips over the field to catch plant development at the early-bud stage. This may be reasonable for feed production for ruminant animals, but does not lend itself to practices that would be widely adopted in corn-alfalfa rotations. However, due to the high protein content of the leaves, separation of leaves from stems results in a rich source of protein for a potentially wide range of uses (Figure 2).
Earlier work using a dry fractionation system to separate leaves from stems resulted in an alfalfa leaf meal (pellets) with an estimated value of $200/ton [21]. However, there are few, if any, existing processing plants in North America today to determine whether the value would be more or less than this prediction [22]. A newly proposed system for harvesting alfalfa separates the leaves from the stems as they are harvested in the field, producing two components: one fraction rich in protein (leaves) and the other rich in fiber (stems) [23]. The leaf fraction could be used in a wide range of applications, including direct ensiling for high-protein feed, or dehydrated as alfalfa meal or other value-added products requiring high-protein materials [22]. The stems could be used as a source of biomass for biofuel production or for feed, depending upon the fiber needs of the ruminant’s diet. Because the alfalfa leaf does not change appreciably in protein content over the development of the plant, harvest can be delayed to allow greater amounts of total biomass accumulation [24]. According to Shinners, the advantages of field harvesting and fractionation include 1) production of a high-value protein fraction that avoids losses due to weather, 2) fractionation at harvest so no further processing steps or equipment are needed, 3) low capital costs for fractionation equipment, 4) fractionation on the farm so only the desired fractions need leave the farm, and 5) ruminant feeds that can be recombined to produce high-quality rations [22]. This system would provide an alternative to the harvesting/marketing system available today for alfalfa and may provide the farmer with a cash-crop incentive to produce more alfalfa in conjunction with corn (see Figure 3). It is envisioned that harvesting alfalfa using in-field fractionation creates two product streams, enhancing the total value of the alfalfa crop.
Prototype machines have been built to effectively remove the leaves from stems, creating two alfalfa components at harvest [23]. One of the real advantages of this type of harvest system is the ability to open the harvest window to avoid bad weather and to decrease the total number of harvests. A prototype leaf stripper was used to harvest alfalfa leaves and stems during the summer of 2013 to test the feasibility of creating high-quality diets for dairy cows when harvesting late in plant development (full-bloom stage). The idea is to decrease the number of harvests per season to limit production costs, yet be able to recombine the two fractions in appropriate amounts of stems and leaves to meet the needs of a high-producing dairy cow. Results of the feeding trial indicated that total milk production and the quality of the milk remained the same, and excess stems could be used for other applications such as biofuel production [25]. Although this was centered around a feeding trial, it demonstrated the feasibility of a viable harvest system that creates two value components from the alfalfa plant. Energy inputs into such a harvest system are less than what is required under normal production scenarios [22]. Separation of leaves from the stems also allows additional in-field processing to render the stems more digestible. Maceration breaks the stem material open, allowing easier access of enzymes or microbes to enhance degradability/digestibility [26]. Processing the stems separately from the leaves does not risk the loss of protein from the leaf due to juicing this material during the maceration process. Hence, the high-protein fraction is preserved and the high-fiber fraction is processed in the field, requiring less post-harvest processing at the biofuel production sites. The genetic makeup of alfalfa has been studied over the past 20 years to maximize quality and digestibility.
A key component of this research in the past has been genetic selection for alfalfa germplasm that can withstand frequent cuttings, as opposed to accumulation of large amounts of biomass. Now there is interest in exploiting the genetic potential to produce more biomass than is currently available from alfalfa. Efforts to genetically select for a biomass-type alfalfa that produces larger stems and more branching with greater total yields have been successful [13, 24, 27]. According to Lamb et al. [24, 27], alfalfa genetically selected for increased biomass production and managed to maximize yields resulted in a 40% increase in tons per acre. Revised management techniques amounted to decreased stand density, providing more space for individual plant growth and development, coupled with a delayed harvest, i.e., switching from the early-bud stage to plants at 50% bloom or later. This allows the biomass-type alfalfa plant to accumulate higher amounts of total plant material, both leaves and stems. With the larger, more robust stems, lodging is minimized compared to the typical hay-type alfalfa [13]. Coupled with the new harvesting technique of in-field fractionation, this could improve the amount of biomass for biofuels while still producing a high-protein fraction for value-added products. The theoretical ethanol yield for alfalfa stems would be 137 gal/acre compared to 174 gal/acre for corn stover, assuming only half of the stover is removed to maintain soil health and long-term productivity [13]. Including the grain for ethanol production (473 gal/acre), corn far outpaces the ethanol potential of alfalfa. However, the estimated protein yield per acre would be 0.49 tons/acre for alfalfa leaves, zero for corn stover, and 0.34 tons/acre for corn grain [13]. In the face of growing world populations, protein production will be of increasing concern.
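The per-acre figures above can be tallied with a minimal sketch (all values are quoted from [13]; the variable names are illustrative labels only):

```python
# Per-acre theoretical yields quoted in the text [13].
# "corn_stover_half" reflects removing only half the stover for soil health.
ethanol_gal_per_acre = {"alfalfa_stems": 137, "corn_stover_half": 174, "corn_grain": 473}
protein_tons_per_acre = {"alfalfa_leaves": 0.49, "corn_stover_half": 0.0, "corn_grain": 0.34}

corn_ethanol = ethanol_gal_per_acre["corn_stover_half"] + ethanol_gal_per_acre["corn_grain"]
corn_protein = protein_tons_per_acre["corn_stover_half"] + protein_tons_per_acre["corn_grain"]

print(f"Corn: {corn_ethanol} gal/acre ethanol, {corn_protein} tons/acre protein")
print(f"Alfalfa: {ethanol_gal_per_acre['alfalfa_stems']} gal/acre ethanol, "
      f"{protein_tons_per_acre['alfalfa_leaves']} tons/acre protein")
# Corn wins on ethanol (647 vs 137 gal/acre); alfalfa wins on protein (0.49 vs 0.34 tons/acre).
```

The tally makes the trade-off explicit: corn dominates on fuel per acre, while the alfalfa leaf fraction dominates on protein per acre.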
In terms of outright biomass production, a system of crop rotations between corn and alfalfa lags behind year-after-year corn production. From an economic perspective, alfalfa-corn rotations provide several advantages for the corn production following alfalfa: 1) a yield benefit of $30 to $60/acre, 2) lower fertilizer nitrogen inputs (over a 2-year time frame) worth $75 to $150/acre, and 3) no insecticide required in the first year of corn production, worth $15/acre [13]. This results in a cumulative savings potential of $120 to $225/acre. The rotation system does provide a more sustainable system, both environmentally and economically, primarily by decreasing the application of commercial fertilizers by 75% over two years of production. These economic values do not take into account the impact of carbon sequestration, which would help offset aggressive removal of corn stover during that phase of the rotation cycle.
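The cumulative savings figure is simply the sum of the three component ranges; a minimal sketch (ranges from [13], dictionary keys are illustrative labels):

```python
# Per-acre savings ranges for corn following alfalfa, from [13].
savings = {
    "yield_benefit": (30, 60),       # $/acre
    "fertilizer_N_2yr": (75, 150),   # $/acre over two years
    "no_insecticide_yr1": (15, 15),  # $/acre, first corn year only
}
low = sum(lo for lo, _ in savings.values())
high = sum(hi for _, hi in savings.values())
print(f"Cumulative savings: ${low} to ${high} per acre")  # $120 to $225 per acre
```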
## 3. Alternatives for biofuel production
Current technologies rely primarily on the yeast-ethanol platform to create liquid fuels. The process has been well studied and continues to undergo development to utilize more of the cell wall sugars in addition to the cellulosic glucose. Much of the current biofuel industry is based on yeast fermentation of glucose derived from starch, primarily from corn grain, although any cereal grain could be used. Brazil has adopted a slightly different approach and has based much of its ethanol production on sugarcane using yeast fermentation. These systems are not sustainable in the long run due to ever-increasing populations with increasing demands for food. Capturing biomass for conversion to biofuels is a big part of the vision for decreasing dependence upon fossil fuels. Biomass to biofuels does not directly compete with production needs for food and feed, and it provides opportunities to maximize utilization of our landscape in ways that are sustainable and improve productivity. However, converting biomass to biofuels efficiently is a critical part of the story.
| Cell Wall Component (% Dry Matter) | Alfalfa Stem (N=153) | Corn Stover (N=32) | Cob (N=56) |
|---|---|---|---|
| Glucose | 18-37 | 23-34 | 20-33 |
| Other Hexoses | 21-41 | 26-36 | 23-34 |
| Xylose | 5-13 | 15-23 | 18-33 |
| Other Pentoses | 6-15 | 18-27 | 22-35 |
| Lignin | 7-22 | 6-12 | 3-15 |
### Table 1.
Cell wall composition of alfalfa stems compared to corn stover and corncobs. Other hexoses include the C6 sugars galactose and mannose; other pentoses refers primarily to the C5 sugar arabinose. Data from [13] and [55].
Ethanol is not the only biofuel under consideration as a product from biomass. Alternative systems for the conversion of biomass to biofuel are the syngas platform (details of this system can be found on the National Renewable Energy Laboratory website: www.nrel.gov/biomass/biorefinery.html) and the carboxylate platform. The syngas platform requires large inputs of energy to produce effective amounts of a useful biofuel. The carboxylate platform requires undefined mixed bacterial cultures under anaerobic conditions [31] (Figure 5). One of the big advantages of this system is the flexibility of the undefined mixed bacterial cultures to handle a wide range of substrates going into the system. More importantly, they do not require a sterile environment in which to function. Popular sources of mixed anaerobic cultures are sewage sludge digesters and marine sediments [31, 32]. The carboxylate platform works by anaerobic degradation of carbohydrates to produce volatile fatty acids (VFAs), primarily acetic (C2), propionic (C3), and butyric (C4) acids, although other VFAs can be produced.
An advantage of the carboxylate platform is the generally low inputs needed to obtain materials that can be modified to produce biofuels or biorefinery products. Pre-treatments are minimized and may be confined to particle size reduction or mild chemical treatments, which provides a significant advantage [31]. Most importantly, the carboxylate platform does not require an antiseptic environment in which to operate, greatly simplifying handling of raw materials going into digesters. Significant work has been done on carboxylate platforms utilizing mixed cultures from sewage sludge treatments [31, 33]. Such systems have a great deal of flexibility when it comes to handling a wide range and complexity of crop residues or other carbon-based materials from agricultural practices. These organic materials may be relatively abundant and of relatively low value in their present form before fermentation to VFAs. A disadvantage of the sewage sludge inoculum is the generally slow conversion rate and the presence of methanogens, which produce large amounts of methane [31]. In the case of manure or other organic waste digesters, where time is not a limiting factor, this is quite acceptable, and the methane can be easily captured and used as an energy source. With the right type of microbial mix, it is possible to produce the longer-chain carboxylates caproate (C6) and caprylate (C8) from acetate, in addition to the typical acetate, propionate, and butyrate, through a process referred to as reverse β-oxidation [34]. The potential downside of this approach is that the process tends to be slow and requires inhibition of methanogens to force the system to produce larger quantities of the longer-chain VFAs, e.g., n-caproate (C6) and n-caprylate (C8). Inhibition of methanogens can be efficiently achieved with compounds like bromoethane sulfonic acid, but this is relatively expensive and would be prohibitive on a large scale [31].
An alternative source of anaerobic microbes for a carboxylate platform converting plant biomass would be the cow’s rumen. In comparison to waste stream anaerobic microbes, the rumen is a more specialized system, having evolved to extract nutrient value out of a wide range of plant materials [35]. Although cell wall degradation and total feed utilization by dairy and beef cows can be improved, the microbial community in these ruminants has evolved to degrade fibrous plant material relatively quickly to supply needed nutrients to the animal [36]. The rumen hosts a mixed culture of anaerobic organisms that effectively degrades the carbohydrates, proteins, and fats present in feed mixtures to produce short-chain VFAs. The efficiency of this ruminal system appears to be much greater than that of typical waste stream systems [37]. The advantage of a ruminant-based carboxylate platform is the ability to degrade all of the organic materials (polysaccharides, proteins, fats, and oils), with the exception of lignin, within short time periods of 24-72 hours. A high-producing ruminant like the dairy cow must be able to extract sufficient energy from feed materials within 48 hours to support her maintenance and milk production. Ruminal microbial communities have evolved over time to handle a diversity of substrates (i.e., from easily degraded starch to more recalcitrant fiber materials). These communities are quite complex, with redundancy in the types of hydrolytic abilities that may come into play as the substrates coming into the cow change [36]. Due to the relatively short incubation times, slower-growing acetogens (which convert C3-C6 VFAs to acetate) and methanogens (which convert acetate to methane) do not have a chance to become well established. This in turn restricts methane production (8-15% of total energy) in this type of carboxylate platform, avoiding the need to add specific methane inhibitors [36].
The small amount of methane that is produced could be captured and utilized as an energy input to maintain incubation temperatures.
Recently, Weimer et al. (2014) [38] demonstrated the ability of rumen microbial cultures to produce large amounts of valeric and caproic acids in short incubation periods of 48-72 hours. It has been demonstrated that the addition of dilute amounts of ethanol to mixed-culture fermentations in the carboxylate platform results in the extension of short-chain VFAs to medium-length molecules, thus capturing the fuel value of ethanol in a form that could be more easily recovered [34, 39]. What is unique and promising about the work of Weimer et al. is the ability to speed up this process using ruminal mixed-culture fermentations as opposed to the typical source of sewage digesters [38]. In addition, they found that supplementing the ruminal mixture with the ethanol-utilizing bacterium Clostridium kluyveri resulted in production levels of 4.9-6.1 g/L of caproate in 48-72 hours using either switchgrass or alfalfa stems as the substrate. The level of caproate production seen by the Weimer group is similar to what others have achieved [34, 40], but in a 10- to 30-fold shorter incubation time frame. Being able to generate longer VFAs increases the energy density of each molecule, increasing the value of the material for liquid fuels. In addition, the longer-chain VFAs are easier to extract from the fermentation media, decreasing recovery costs [38, 39]. For any biomass-to-biofuel production process, a key element is being able to produce sufficient amounts of fuel molecules in short periods of time and with limited inputs. The carboxylate platform based on ruminal microbes supplemented with additional strains of more specialized bacteria (e.g., Clostridium kluyveri) appears to hold a great deal of promise for biomass conversion. Little sample preparation was needed to treat the switchgrass and alfalfa stems for biofuel production using the ruminal microbial system. The fermentation process described here could also be combined with other platforms that produce ethanol.
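As a rough check, the reported titers and incubation times imply a volumetric productivity that can be computed directly (a sketch only; the cited work reports titers, and the productivity derived here is my own back-of-the-envelope calculation):

```python
# Caproate titers of 4.9-6.1 g/L over 48-72 h incubations, as quoted from [38].
def productivity_g_per_l_h(titer_g_per_l: float, hours: float) -> float:
    """Volumetric productivity (g/L/h) implied by a final titer and incubation time."""
    return titer_g_per_l / hours

best = productivity_g_per_l_h(6.1, 48)   # highest titer, shortest incubation
worst = productivity_g_per_l_h(4.9, 72)  # lowest titer, longest incubation
print(f"{worst:.3f} to {best:.3f} g/L/h")  # 0.068 to 0.127 g/L/h
# Reaching a similar titer over a 10- to 30-fold longer incubation, as in the
# sewage-digester systems, cuts this productivity proportionally.
```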
For example, the concept of consolidated bioprocessing (CBP) [36, 41] is considered a possible avenue for the production of ethanol from biomass that avoids the need for the addition of expensive hydrolytic enzymes. In most cases the CBP system does not produce sufficient ethanol to be cost effective [41]. However, coupled with a ruminal-microbe-based carboxylate platform, the limited ethanol production could be effectively utilized to produce longer-chain VFAs, increasing the energy density of each molecule [38].
Volatile fatty acids must be converted to a form that increases their volatility to be good energy molecules. The medium-length VFAs can be recovered by extraction [42] to allow additional modifications. Conversion of VFAs can be accomplished in different ways depending upon the desired end product and its potential use. Possible conversion practices could utilize pure cultures of specific bacteria, or electrochemical and thermochemical processes. Useful end products that could serve as energy, solvents, or other biorefinery intermediates include ketones, aldehydes, alcohols, and alkanes (Figure 6). Due to this flexibility in the type of end product, there are several avenues available to reach the desired outcome, using single or multiple conversion steps. Ketones can be produced from VFAs using catalytic coupling [43], or ketones and secondary alcohols as in the MixAlco process [33]. Volatile esters can be formed as demonstrated by Lange et al. [44] and Levy et al. [45], or by using microbial systems [46]. Production of alkanes can be achieved by decarboxylation using pure cultures of microbes [47] or by electrochemical processes such as the Kolbe and/or Hofer-Moest processes [48]. The conversion of VFAs, especially the medium-length ones (C4-C6), increases volatility and at the same time decreases miscibility with water, improving the extraction processes used to isolate the biofuel molecules. The added advantage of VFA production (C2-C6 or longer) coupled with conversion technologies is the flexibility to produce a wide range of molecules that can be used as higher-energy-density fuel molecules or as starting molecules for other organic materials.
Typically, biomass-to-biofuel systems are envisioned with a centrally located processing plant to handle large amounts of biomass. Unlike grain ethanol production systems, in which the grain is of relatively high density in terms of potential energy per volume, biomass tends to be much bulkier unless it is pelletized to increase bulk density [49]. When considering the utilization of corn stover and/or alfalfa stems, these materials can be field-processed into relatively high-density bales to improve the efficiency of shipping [50]. This is just one step in the complete process of collecting and moving biomass to centralized points for conversion to biofuels [51]. The challenge is keeping the costs of collection, improving bulk density, and transportation to minimal levels to help final economic returns and to minimize the carbon footprint associated with biomass to biofuels [50]. Perhaps it would be feasible to consider on-farm conversion, at least for the initial steps of the conversion process. In this scenario, the harvested plant material (corn stover, alfalfa stems, switchgrass, etc.) would be stored on the farm using an ensiling process rather than dry storage. This provides an opportunity to add enzymes or dilute chemicals to enhance the subsequent digestion of the materials. Size reduction could also be incorporated into the process, and storing materials wet eliminates the need for rehydration for fermentation. It could be envisioned that small on-farm digesters could be used to process the biomass materials to produce VFAs (with select additions of pure cultures and ethanol to create products for special uses) that would be recovered and transported to conversion sites. Processing on the farm eliminates the need for consolidating biomass for shipment to centralized processing plants and opens opportunities for other types of storage that could enhance conversion efficiency.
Recovery of the VFAs, or conversion on site to intermediates followed by extraction, results in improvements in energy density and allows materials to be shipped greater distances for further processing into molecules that provide the greatest benefit, either as biofuels or as precursors for other organic-based materials.
One of the challenges of any biomass conversion platform is dealing with the residual materials from fermentation. Lignin is a primary component of the fermentation waste, and in many schemes it is recovered and burned to supply energy for other steps in the complete process. With the carboxylate platform based upon mixed ruminal microbes, one of the byproducts could be microbial protein as a value-added material. In the normal rumination process, formation of microbial protein is an important component in supplying needed protein to the animal. In dairy production, microbial protein helps supply critical amino acids required for milk production, especially methionine and lysine, which are often low or lacking in many forage-based diets [52]. Harvesting the microbial protein after biomass conversion to biofuels could provide an important protein supplement for dairy cow diets that is enriched in methionine and lysine. The microbial proteins would be insoluble, along with the typically insoluble materials, i.e., lignin and other cell wall components, so recovery of these insoluble materials would be relatively straightforward. As an alternative, the lignin-microbial-carbohydrate residue from the fermentation process could be used to replace phenol-formaldehyde based adhesives [53]. Many of the ruminal microbes contain glycocalyx materials surrounding the individual cells that help them adhere to plant materials during digestion; the glycocalyx is a glycoprotein-polysaccharide complex that surrounds the cell membrane of some bacteria [54]. It has been demonstrated that the lignin-microbial residues from ruminal fermentations, as proposed for the carboxylate platform, could replace phenol-formaldehyde compounds as adhesives in the production of plywood composites [53]. Up to 70% of the typical phenol-formaldehyde formulation could be replaced by these more environmentally friendly residues, which are byproducts of ruminal-based fermentations.
Even if it were not possible to replace all of the phenol-formaldehyde adhesive, displacing a significant amount of this material would provide for healthier composites by decreasing formaldehyde outgassing, which is a human health concern [53]. Key to the effectiveness of fermentation residues is creating the correct balance of lignin, the blend of rumen microbes and the types of glycocalyx material, and other minor phenolic materials in the plant materials.
## 4. Conclusion
This chapter is not meant to be a comprehensive assessment of biomass to biofuels, but rather a look at unconventional approaches that would enhance the sustainability of the entire process. Meeting the goals of biofuel production by 2030 will require optimizing land use for food, feed, and bioenergy production. It should be approached from the standpoint of developing a viable biofuel production system that increases the amount of energy stored in the molecules making up the biofuels, i.e., longer-chain molecules with more energy per unit of fuel. To be sustainable into the future we must be willing to develop alternative systems that supply a range of biomaterials. Although producing energy alternatives is of major concern at the present time, we should be evaluating and developing bioenergy systems that allow flexibility not only in the feedstock going in, but also in the products coming out. Development of biomass-to-biofuels systems should consider how to maximize the value of the total process, that is, optimize land use, embrace farming systems that decrease or eliminate soil/nutrient losses, improve the economics of production, utilize value-added products, and maximize total energy production versus inputs. The entire process must also be sustainable from an environmental standpoint and provide economic advantages to the producer. Our vision into the future should be one of maximizing the productivity of each acre of farmland while meeting the needs for feed, food, and energy along with improving the soil for future generations. Decisions made today should not be overly influenced solely by short-term economic gains.
## References
1. Lewis, N. S.; Nocera, D. G., Powering the planet: Chemical challenges in solar energy utilization. Proceedings of the National Academy of Sciences of the United States of America 2006, 103, (43), 15729-15735.
2. Dhugga, K. S., Maize biomass yield and composition for biofuels. Crop Science 2007, 47, (6), 2211-2227.
3. Youngs, H.; Somerville, C., Development of feedstocks for cellulosic biofuels. F1000 Reports Biology 2012, 4, 10.
4. Somerville, C.; Youngs, H.; Taylor, C.; Davis, S. C.; Long, S. P., Feedstocks for Lignocellulosic Biofuels. Science 2010, 329, (5993), 790-792.
5. Schmer, M. R.; Vogel, K. P.; Mitchell, R. B.; Perrin, R. K., Net energy of cellulosic ethanol from switchgrass. Proceedings of the National Academy of Sciences of the United States of America 2008, 105, (2), 464-469.
6. Tilman, D.; Hill, J.; Lehman, C., Carbon-negative biofuels from low-input high-diversity grassland biomass. Science 2006, 314, (5805), 1598-1600.
7. (a) Casler, M. D.; Vogel, K. P., Selection for Biomass Yield in Upland, Lowland, and Hybrid Switchgrass. Crop Science 2014, 54, (2), 626-636; (b) Price, D. L.; Casler, M. D., Divergent Selection for Secondary Traits in Upland Tetraploid Switchgrass and Effects on Sward Biomass Yield. Bioenergy Research 2014, 7, (1), 329-337; (c) Price, D. L.; Casler, M. D., Predictive Relationships between Plant Morphological Traits and Biomass Yield in Switchgrass. Crop Science 2014, 54, (2), 637-645.
8. Wilhelm, W. W.; Johnson, J. M. F.; Hatfield, J. L.; Voorhees, W. B.; Linden, D. R., Crop and soil productivity response to corn residue removal: A literature review. Agronomy Journal 2004, 96, (1), 1-17.
9. Wilhelm, W. W.; Johnson, J. M. E.; Karlen, D. L.; Lightle, D. T., Corn stover to sustain soil organic carbon further constrains biomass supply. Agronomy Journal 2007, 99, (6), 1665-1667.
10. Lane, J., A looming cellulosic feedstock shortage? In Biofuels Digest, Ascension Publishing Inc.: 2014; pp 1-4.
11. Smil, V., Enriching the Earth: Fritz Haber, Carl Bosch, and the Transformation of World Food Production. MIT Press: Cambridge, MA, 2000.
12. Smil, V., Detonator of the population explosion. Nature 1999, 400, (6743), 415.
13. Jung, H. G., In Alfalfa: A Companion Crop With Corn, Alfalfa/Corn Rotations for Sustainable Cellulosic Biofuels Production, 2010; available online: http://www.alfalfa-forage.org: 2010.
14. Su, Y. Z., Soil carbon and nitrogen sequestration following the conversion of cropland to alfalfa forage land in northwest China. Soil & Tillage Research 2007, 92, (1-2), 181-189.
15. Baker, J., In Soil Carbon, Alfalfa/Corn Rotations for Sustainable Cellulosic Biofuels Production, Johnston, Iowa, 2010; p 13.
16. Russelle, M. P.; Birr, A. S., Large-scale assessment of symbiotic dinitrogen fixation by crops: Soybean and alfalfa in the Mississippi river basin. Agronomy Journal 2004, 96, (6), 1754-1760.
17. Vadas, P. A.; Barnett, K. H.; Undersander, D. J., Economics and Energy of Ethanol Production from Alfalfa, Corn, and Switchgrass in the Upper Midwest, USA. Bioenergy Research 2008, 1, (1), 44-55.
18. Holin, F., Jump start Alfalfa by Interseeding it Into Corn. Hay and Forage Grower, February 9, 2014, p 1.
19. Grabber, J. H., Interseeding Alfalfa With Corn. Madison, 2014.
20. Albrecht, K. A.; Wedin, W. F.; Buxton, D. R., Cell-wall composition and digestibility of alfalfa stems and leaves. Crop Sci. 1987, 27, (4), 735-41.
21. Gray, A.; Kaan, D., Feasibility Study: Alfalfa Leaf Meal as a Value-added Crop and Alfalfa Stems as Biomass Fuel. 1996; National Technical Information Service Document No. PB97-105548.
22. Shinners, K., In Harvest, Storage, and Fractionation of Alfalfa, Alfalfa/Corn Rotations for Sustainable Cellulosic Biofuels Production, 2010; available online: http://www.alfalfa-forage.org: 2010.
23. Shinners, K. J.; Herzmann, M. E.; Binversie, B. N.; Digman, M. F., Harvest fractionation of alfalfa. Transactions of the ASABE 2007, 50, (3), 713-718.
24. Lamb, J. F. S.; Jung, H. J. G.; Sheaffer, C. C.; Samac, D. A., Alfalfa leaf protein and stem cell wall polysaccharide yields under hay and biomass management systems. Crop Science 2007, 47, (4), 1407-1415.
25. Hatfield, R. D.; Hall, M. B.; Muck, R. E.; Radloff, W. J.; Shinners, K. J., Recombined, late harvested ensiled alfalfa leaves and stems give comparable performance to normally harvested alfalfa silage. Journal of Dairy Science 2014, Book of Abstracts, 70.
26. (a) Hong, B. J.; Broderick, G. A.; Koegel, R. G.; Shinners, K. J.; Straub, R. J., Effect of shredding alfalfa on cellulolytic activity, digestibility, rate of passage, and milk production. J. Dairy Sci. 1988, 71, 1546-1555; (b) Koegel, R. G.; Straub, R. J.; Shinners, K. J.; Broderick, G. A.; Mertens, D. R., An Overview of Physical Treatments of Lucerne Performed at Madison, Wisconsin, for Improving Properties. Journal of Agricultural Engineering Res. 1992, 52, (3), 183-191.
27. Lamb, J. F. S.; Sheaffer, C. C.; Samac, D. A., Population density and harvest maturity effects on leaf and stem yield in alfalfa. Agronomy Journal 2003, 95, (3), 635-641.
28. Alvira, P.; Tomas-Pejo, E.; Ballesteros, M.; Negro, M. J., Pretreatment technologies for an efficient bioethanol production process based on enzymatic hydrolysis: A review. Bioresource Technology 2010, 101, (13), 4851-4861.
29. (a) Ralph, J.; Grabber, J. H.; Hatfield, R. D., Lignin-ferulate crosslinks in grasses: active incorporation of ferulate polysaccharide esters into ryegrass lignins. Carbohydrate Research 1995, 275, (1), 167-178; (b) Ralph, J.; Hatfield, R. D.; Grabber, J. H.; Jung, H. G.; Quideau, S.; Helm, R. F., Cell wall cross-linking in grasses by ferulates and diferulates. In Lignin and Lignan Biosynthesis, Lewis, N. G.; Sarkanen, S., Eds. American Chemical Society: Washington, DC, 1998; Vol. 697, Amer. Chem. Soc. Symp. Ser., pp 209-236.
30. Zhou, S. F.; Weimer, P. J.; Hatfield, R. D.; Runge, T. M.; Digman, M., Improving ethanol production from alfalfa stems via ambient-temperature acid pretreatment and washing. Bioresource Technology 2014, 170, 286-292.
31. Agler, M. T.; Wrenn, B. A.; Zinder, S. H.; Angenent, L. T., Waste to bioproduct conversion with undefined mixed cultures: the carboxylate platform. Trends in Biotechnology 2011, 29, (2), 70-78.
32. Chang, H. N.; Kim, N. J.; Kang, J.; Jeong, C. M., Biomass-derived Volatile Fatty Acid Platform for Fuels and Chemicals. Biotechnology and Bioprocess Engineering 2010, 15, (1), 1-10.
33. Holtzapple, M. T.; Granda, C. B., Carboxylate Platform: The MixAlco Process Part 1: Comparison of Three Biomass Conversion Platforms. Applied Biochemistry and Biotechnology 2009, 156, (1-3), 525-536.
34. Steinbusch, K. J. J.; Hamelers, H. V. M.; Plugge, C. M.; Buisman, C. J. N., Biological formation of caproate and caprylate from acetate: fuel and chemical production from low grade biomass. Energy & Environmental Science 2011, 4, (1), 216-224.
35. Mertens, D. R., Creating a system for meeting the fiber requirements of dairy cows. Journal of Dairy Science 1997, 80, (7), 1463-1481.
36. Weimer, P. J.; Russell, J. B.; Muck, R. E., Lessons from the cow: What the ruminant animal can teach us about consolidated bioprocessing of cellulosic biomass. Bioresource Technology 2009, 100, (21), 5323-5331.
37. Weimer, P. J., The relevance of ruminant animals to chemical conversion and biofuels technologies. In Proceedings 2nd International Conference on Microbiology and Biotechnology, Minas Gerais, Brazil, 2013.
38. Weimer, P. J.; Nerdahl, M.; Brandl, D. J., Production of medium-chain volatile fatty acids by mixed ruminal microorganisms is enhanced by ethanol in co-culture with Clostridium kluyveri. Bioresource Technology 2014, in press.
39. (a) Agler, M. T.; Spirito, C. M.; Usack, J. G.; Werner, J. J.; Angenent, L. T., Chain elongation with reactor microbiomes: upgrading dilute ethanol to medium-chain carboxylates. Energy & Environmental Science 2012, 5, (8), 8189-8192; (b) Vasudevan, D.; Richter, H.; Angenent, L. T., Upgrading dilute ethanol from syngas fermentation to n-caproate with reactor microbiomes. Bioresource Technology 2014, 151, 378-382.
40. Grootscholten, T. I. M.; dal Borgo, F. K.; Hamelers, H. V. M.; Buisman, C. J. N., Promoting chain elongation in mixed culture acidification reactors by addition of ethanol. Biomass & Bioenergy 2013, 48, 10-16.
41. Lynd, L. R.; Weimer, P. J.; van Zyl, W. H.; Pretorius, I. S., Microbial cellulose utilization: Fundamentals and biotechnology (vol 66, pg 506, 2002). Microbiology and Molecular Biology Reviews 2002, 66, (4), 739.
42. Singhania, R. R.; Patel, A. K.; Christophe, G.; Fontanille, P.; Larroche, C., Biological upgrading of volatile fatty acids, key intermediates for the valorization of biowaste through dark anaerobic fermentation. Bioresource Technology 2013, 145, 166-174.
43. Gaertner, C. A.; Serrano-Ruiz, J. C.; Braden, D. J.; Dumesic, J. A., Catalytic coupling of carboxylic acids by ketonization as a processing step in biomass conversion. Journal of Catalysis 2009, 266, (1), 71-78.
44. Lange, J. P.; Price, R.; Ayoub, P. M.; Louis, J.; Petrus, L.; Clarke, L.; Gosselink, H., Valeric Biofuels: A Platform of Cellulosic Transportation Fuels. Angewandte Chemie-International Edition 2010, 49, (26), 4479-4483.
45. (a) Levy, P. F.; Sanderson, J. E.; Kispert, R. G.; Wise, D. L., Biorefining of Biomass to Liquid Fuels and Organic-Chemicals. Enzyme and Microbial Technology 1981, 3, (3), 207-215; (b) Sanderson, J. E.; Barnard, G. W.; Levy, P. F., Conversion of Biomass-Derived Organic-Acids to Liquid Fuels by Electrochemical Oxidation in Aqueous-Solutions. Journal of the Electrochemical Society 1981, 128, (3), C123.
46. Park, Y. C.; Shaffer, C. E. H.; Bennett, G. N., Microbial formation of esters. Applied Microbiology and Biotechnology 2009, 85, (1), 13-25.
47. Schirmer, A.; Rude, M. A.; Li, X. Z.; Popova, E.; del Cardayre, S. B., Microbial Biosynthesis of Alkanes. Science 2010, 329, (5991), 559-562.
48. (a) Levy, P. F.; Sanderson, J. E.; Wise, D. L., Development of a Process for Production of Liquid Fuels from Biomass. Biotechnology and Bioengineering 1981, 239-248; (b) Kuhry, A. B.; Weimer, P. J., Biological/Electrolytic Conversion of Biomass to Hydrocarbons. US8518680 B2, 2013.
49. Kaliyan, N.; Morey, R. V.; Schmidt, D. R., Roll press compaction of corn stover and perennial grasses to increase bulk density. Biomass & Bioenergy 2013, 55, 322-330.
50. Morey, R. V.; Kaliyan, N.; Tiffany, D. G.; Schmidt, D. R., A Corn Stover Supply Logistics System. Applied Engineering in Agriculture 2010, 26, (3), 455-461.
51. Sokhansanj, S.; Kumar, A.; Turhollow, A. F., Development and implementation of integrated biomass supply analysis and logistics model (IBSAL). Biomass & Bioenergy 2006, 30, (10), 838-847.
52. (a) Reynal, S. M.; Broderick, G. A., Optimal nutrient intake and digestion for ruminal microbial protein and milk yields in lactating dairy cows. Journal of Animal Science 2006, 84, 81; (b) Broderick, G. A.; Reynal, S. M.; Patton, R. A.; Heimbeck, W.; Lodi, P., Use of plasma concentrations to estimate bioavailability of methionine in rumen-protected products fed to dairy cows. Journal of Dairy Science 2010, 93, 236.
53. (a) Weimer, P. J.; Koegel, R. G.; Lorenz, L. F.; Frihart, C. R.; Kenealy, W. R., Wood adhesives prepared from lucerne fiber fermentation residues of Ruminococcus albus and Clostridium thermocellum. Applied Microbiology and Biotechnology 2005, 66, (6), 635-640; (b) Weimer, P. J.; Conner, A. H.; Lorenz, L. F., Solid residues from Ruminococcus cellulose fermentations as components of wood adhesive formulations. Applied Microbiology and Biotechnology 2003, 63, (1), 29-34.
54. Weimer, P. J.; Price, N. P. J.; Kroukamp, O.; Joubert, L. M.; Wolfaardt, G. M.; Van Zyl, W. H., Studies of the extracellular glycocalyx of the anaerobic cellulolytic bacterium Ruminococcus albus 7. Applied and Environmental Microbiology 2006, 72, (12), 7559-7566.
55. Hatfield, R. D., Carbohydrate composition of alfalfa cell walls isolated from stem sections differing in maturity. Journal of Agricultural and Food Chemistry 1992, 40, (3), 424-430.
Written By
Ronald Hatfield
Submitted: June 10th, 2014 Reviewed: November 11th, 2014 Published: September 30th, 2015
https://economics.stackexchange.com/questions/15685/metric-for-evaluating-sales-with-dynamic-pricing | # Metric for evaluating sales with dynamic pricing
Suppose you have a sausage maker. He buys batches of ground meat, then makes and sells sausages. Suppose each batch of ground meat makes N sausages, and each batch has a specific level of quality that dictates the price of the sausage, so each batch of sausages is priced differently. Let's also assume there isn't any shortage of ground meat, so the sausage maker can always make more sausage.
The sausages are also priced dynamically in that the price you pay per sausage decreases as you increase the size of your order (i.e. buy 1 for $2 and 3 for $5).
Let's also assume that consumers have perfect information about the quality of the sausage. The sausage maker doesn't know the best way to price the sausage based on the quality, so the sausage maker wants to evaluate the performance of each batch of ground meat to try to model the consumer's demand as a function of quality. What would be the best metric?
Naively one could look at the lifetime value of the batch (i.e. how much total revenue was generated from sales from this batch). But this seems wrong, because it fails to take into account the time it took to sell it, and it rewards batches that consumers buy less of in bulk because the price per unit is higher (when in reality consumers buying more of a batch in bulk is an indication of high quality).
So revenue per day (or some time period) seems like the next logical metric, but this also seems naive.
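To make the contrast concrete, here is a minimal sketch (the batch names and all figures are invented for illustration) showing that the two candidate metrics can rank the same pair of batches in opposite orders:

```python
# Toy comparison of the two metrics discussed above: lifetime revenue of a
# batch versus revenue per day. All numbers are hypothetical.
batches = {
    "batch_A": {"revenue": 180.0, "days_to_sell": 3},    # sold out quickly
    "batch_B": {"revenue": 240.0, "days_to_sell": 10},   # lingered longer
}

def lifetime_revenue(rec):
    return rec["revenue"]

def revenue_per_day(rec):
    return rec["revenue"] / rec["days_to_sell"]

# The two metrics disagree about which batch performed better:
best_by_lifetime = max(batches, key=lambda b: lifetime_revenue(batches[b]))
best_by_rate = max(batches, key=lambda b: revenue_per_day(batches[b]))
```

Here batch_B wins on lifetime revenue (240 vs 180) while batch_A wins on revenue per day (60 vs 24), which is exactly the ambiguity being raised.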
I am certain that this sort of problem is in the economics literature, but I am not familiar enough with it to find it. Any guidance would be greatly appreciated.
https://repository.uantwerpen.be/link/irua/98302 | Publication
Title
Study of high-$p_{T}$ charged particle suppression in PbPb compared to pp collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
Author
Institution/Organisation
CMS Collaboration
Abstract
The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at $\sqrt{s_{NN}} = 2.76$ TeV by the CMS experiment at the LHC. In the transverse momentum range $p_T$ = 5-10 GeV/c, the charged particle yield in the most central PbPb collisions is suppressed by up to a factor of 7 compared to the pp yield scaled by the number of incoherent nucleon-nucleon collisions. At higher $p_T$, this suppression is significantly reduced, approaching roughly a factor of 2 for particles with $p_T$ in the range 40-100 GeV/c.
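The suppression quoted in the abstract is conventionally expressed through the nuclear modification factor $R_{AA}$, the ratio of the PbPb yield to the pp yield scaled by the number of binary nucleon-nucleon collisions. A minimal sketch of that ratio follows; the yields and $N_{coll}$ value are illustrative placeholders, not CMS measurements:

```python
# Sketch of the nuclear modification factor R_AA used to quantify the
# suppression described in the abstract. All numbers are hypothetical.

def r_aa(yield_pbpb, yield_pp, n_coll):
    """PbPb yield divided by the pp yield scaled by the number of
    binary nucleon-nucleon collisions."""
    return yield_pbpb / (n_coll * yield_pp)

n_coll = 1600.0                          # hypothetical N_coll for central PbPb
yield_pp = 2.0e-4                        # hypothetical pp yield at some p_T
yield_pbpb = n_coll * yield_pp / 7.0     # "suppressed by up to a factor of 7"

raa_low_pt = r_aa(yield_pbpb, yield_pp, n_coll)   # = 1/7, about 0.14
```

A suppression "by a factor of 7" thus corresponds to $R_{AA} \approx 0.14$, and "roughly a factor of 2" at high $p_T$ to $R_{AA} \approx 0.5$.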
Language
English
Source (journal)
European physical journal : C : particles and fields. - Berlin
Publication
Berlin : 2012
ISSN
1434-6044
1434-6052
Volume/pages
72:3(2012), p. 1-22
Article Reference
1945
ISI
000302540000003
Medium
E-only publication
http://mathoverflow.net/feeds/question/66588 | # Number of spanning forests in a graph (MathOverflow)

**Question** (Aleks Vlasev, 2011-05-31): I have two questions that have been bugging me recently. The first is about the number of spanning forests in a graph and the second is about enumerating these with edge labels.

Q1: I am aware of Kirchhoff's Matrix-Tree theorem regarding the number of spanning trees in a graph. I was wondering if there is a generalization of this theorem that counts the number of spanning k-forests in a graph. What I am mostly interested in is this: is there a method of finding the number of k-forests in a graph by taking a determinant of some matrix?

Q2: Suppose you label each edge as $e_{i,j}$, meaning that you are taking the undirected edge from $v_i$ to $v_j$ in the graph. Then in the Laplacian matrix, if you plug in the sum of the $e_{i,j}$'s instead of $\deg(v_i)$ and $-e_{i,j}$ instead of $-1$ when that edge connects vertices $i$ and $j$, you get the combinatorial Laplacian. Taking the determinant of a minor of this matrix gives the Kirchhoff polynomial, which is an enumeration of the spanning trees of the graph, where each monomial contains the variables for all the edges in the given tree. My question is whether we can generalize this to spanning forests.

**Answer** (David Speyer): Two points.

(1) There is a result which is very similar to this. Let $G$ be a graph. Let $A$ be the matrix whose rows and columns are indexed by the vertices of $G$, where $A_{ij}$ is negative the number of edges from $i$ to $j$, for $i \neq j$, and where $A_{ii} = \lambda_i + \deg_i(G)$. Then the coefficient of $\lambda_{i_1} \cdots \lambda_{i_k}$ in $\det A$ is the number of spanning forests of $G$ with $k$ components such that $i_1$, $i_2$, ..., $i_k$ are in separate components. Notice that the case $k=1$ is the matrix-tree theorem: the coefficient of $\lambda_{i_1}$ is clearly the determinant of the Laplacian of $G$ with the $i_1$-st row and column deleted. Basically any proof of the matrix-tree theorem should prove this as well.

If you set all the $\lambda_i$'s equal to each other, then the coefficient of $\lambda^k$ in $\det(A)$ is the weighted number of $k$-spanning-forests, where the weight of a forest with components $T_1, T_2, \ldots, T_k$ is $\prod |T_i|$ (because that is the number of ways to choose $k$ vertices, one in each component). Perhaps this is good enough for your purposes.

(2) It is highly unlikely that there is a reasonable formula which gives an exact count. More precisely, counting spanning forests is a #P-complete problem: see "On the Computational Complexity of the Jones and Tutte Polynomials" and note that counting spanning forests is equivalent to evaluating the Tutte polynomial at $(1,2)$. This means that, assuming $P \neq NP$, there is no polynomial-time algorithm to count spanning forests. Now, "reasonable formula" is not a precisely defined notion, but I suspect that anything you would consider reasonable would give a polynomial-time algorithm; in particular, computing an $n \times n$ determinant, all of whose entries have at most $n$ digits, is polynomial time.

**Answer** (Simon): With regards to your Q2, in quantum field theory there is a commonly used generalization of the Kirchhoff polynomial to 2-trees (2-component spanning forests). It is normally called the 2nd Symanzik polynomial, as the 1st Symanzik polynomial is basically identical to the Kirchhoff polynomial. I am not sure if it can generalize to $k$-spanning forests. To calculate the 2nd Symanzik polynomial you need to associate a variable with each vertex (in QFT this is the incoming momentum at that vertex). There is a nice recent review article which discusses some of this: "Feynman graph polynomials" (arXiv:1002.3458v3). I also made a Mathematica demonstration, "Scalar Feynman Diagrams And Symanzik Polynomials", that lets you draw graphs and calculates the polynomials. The classic reference is N. Nakanishi, Graph Theory and Feynman Integrals, Newark, NJ: Gordon and Breach, 1971.

**Answer** (Aaron Meyerowitz): Here is a theorem, a story, and a result somewhat along the lines you want.

Theorem: If $\mathcal{F}$ is a forest with $n$ (labelled) vertices and $k$ connected components of sizes $n_1, n_2, \cdots, n_k$, then the number $T(\mathcal{F})$ of completions to a labelled tree is $n_1 n_2 \cdots n_k n^{k-2}$.

The proof is an easy induction on $k \ge 1$ and includes, at the other extreme of $k=n$, the usual enumeration of labelled trees (see below).

Story: I discovered this as an undergraduate. Of course I knew that there are like 100 proofs of the $n^{n-2}$ theorem, but I thought the induction on the number of undetermined edges might be a slightly new twist. I had the occasion to show it to Frank Harary, who said: "This is well written and deserves a place in the literature; I am editing a new journal, submit it." So I did. A few months later it came back with a letter from Harary: "I got the referees report, never submit this anywhere again!" and I didn't (until now!).

So for your Q2: label an edge from $v_i$ to $v_j$ as $e_{i,j}$ (with $i \lt j$). Then in the Laplacian matrix, if you plug in $n-1+\sum_j e_{i,j}$ instead of $\deg(v_i)$ and $-1-e_{i,j}$ instead of $-1$ when that edge connects vertices $i$ and $j$, you get a modified combinatorial Laplacian. Taking the determinant of any minor of this matrix gives a modified Kirchhoff polynomial, which is a weighted enumeration of the spanning forests of the graph: each term is a monomial containing the variables for all the edges in a given forest $\mathcal{F}$, and the coefficient is $T(\mathcal{F})$. So this polynomial spits out all the spanning forests, including the empty one (times $n^{n-2}$), each of the spanning trees, and everything in between. Looking over the comments, I see that one could just add the identity to the original combinatorial Laplacian (i.e., set all the $\lambda_{ii}=1$) and get a sum over the spanning forests with positive weights.

The proof: The case $k=1$ is obvious (alternately, start with $k=2$). Suppose now that the result is true for forests made of $k-1$ trees and that $\mathcal{F}$ is a forest with $n$ (labelled) vertices and $k$ connected components $T_1, T_2, \cdots, T_k$ of sizes $n_1, n_2, \cdots, n_k$. We will find $(k-1)T(\mathcal{F})$, the number of ways to add a distinguished edge (obtaining a forest with $k-1$ components) and then complete that to a tree. If the distinguished edge goes from $T_i$ to $T_j$ then, by assumption, there are $(n_i+n_j)\,n_1 n_2\cdots\widehat{n_i}\cdots\widehat{n_j}\cdots n_k\, n^{k-3}$ such completions. Since there are $n_i n_j$ such edges, the number mentioned is
$$\sum_{1\le i<j \le k}(n_i+n_j)\left(n_1\cdots n_k n^{k-3}\right)=(k-1)\sum_{i=1}^k n_i\left( n_1\cdots n_k n^{k-3}\right)=(k-1)n\left( n_1\cdots n_k n^{k-3}\right).$$
Hence, $T(\mathcal{F})=n_1 n_2\cdots n_k n^{k-2}$.

**Answer** (Nitin Bhagat): Please refer to the following articles: C. J. Liu and Yutze Chow, Enumeration of Connected Spanning Subgraphs of a Planar Graph, Acta Mathematica Hungarica, 41(3):27–36, 1983; C. J. Liu and Yutze Chow, On Operator and Formal Sum Methods for Graph Enumeration Problems, SIAM Journal of Control and Optimization, 5(3):384–406, 1984; C. J. Liu and Yutze Chow, Enumeration of Connected Spanning Subgraphs of a Planar Graph, Acta Mathematica Hungarica, 60(1):81–91, 1992; C. J. Liu and Yutze Chow, Enumeration of Forests in a Graph, Proceedings of the American Mathematical Society, 83(3):659–662, 1981.

**Answer** (Abdelmalek Abdesselam): I am surprised no one mentioned the work of Alan Sokal and coworkers on precisely this issue of weighted enumeration of spanning forests, which is related to the $q\rightarrow 0$ limit of the Potts model as well as the multivariate Tutte polynomial. A determinant expression corresponds to a Fermionic (Grassmann/Berezin) integral with a quadratic "action" in the exponential. There is an analogue of the matrix-tree theorem for spanning forests with a Fermionic integral with quartic action: see arXiv:cond-mat/0403271 (which appeared in PRL) and follow-ups such as arXiv:0706.1509. Note that there was a whole semester at the Isaac Newton Institute revolving around this topic (http://www.newton.ac.uk/programmes/CSM/); one can even watch the videos of the talks. The one perhaps most relevant to this question is the talk by Andrea Sportiello in the fourth workshop.
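Both the matrix-forest statement (coefficients of $\det(L+\Lambda)$) and the forest-completion count $n_1\cdots n_k\,n^{k-2}$ are easy to check by brute force on small examples. The sketch below (plain Python; the example graph and all names are my own choices) expands the determinant symbolically via the Leibniz formula and compares one coefficient against direct enumeration of forests, then counts labelled trees on 5 vertices containing a fixed forest via Prüfer sequences:

```python
from itertools import permutations, combinations, product

ZERO4 = (0, 0, 0, 0)

def poly_mul(p, q):
    """Multiply multivariate polynomials stored as {exponent-tuple: coeff}."""
    r = {}
    for ea, ca in p.items():
        for eb, cb in q.items():
            e = tuple(a + b for a, b in zip(ea, eb))
            r[e] = r.get(e, 0) + ca * cb
    return r

def perm_sign(perm):
    sign, seen = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j, length = perm[j], length + 1
            if length % 2 == 0:
                sign = -sign
    return sign

def det_poly(M):
    """Leibniz-formula determinant of a matrix of polynomials."""
    n, total = len(M), {}
    for perm in permutations(range(n)):
        term = {tuple([0] * n): perm_sign(perm)}
        for i in range(n):
            term = poly_mul(term, M[i][perm[i]])
        for e, c in term.items():
            total[e] = total.get(e, 0) + c
    return total

def n_components(n, edge_subset):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    for a, b in edge_subset:
        parent[find(a)] = find(b)
    labels = [find(v) for v in range(n)]
    return len(set(labels)), labels

# Example graph on 4 vertices: path 0-1-2-3 plus chord 0-2.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
deg = [2, 2, 3, 1]
lam = [tuple(1 if j == i else 0 for j in range(4)) for i in range(4)]
A = [[{ZERO4: deg[i], lam[i]: 1} if i == j
      else {ZERO4: -sum(1 for e in edges if set(e) == {i, j})}
      for j in range(4)] for i in range(4)]
D = det_poly(A)

def brute_forests(roots):
    """Spanning forests with len(roots) components, roots pairwise separated."""
    k, count = len(roots), 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            ncomp, label = n_components(4, sub)
            if ncomp != 4 - r:          # edge subset contains a cycle
                continue
            if ncomp == k and len({label[v] for v in roots}) == k:
                count += 1
    return count

coeff_0 = D.get((1, 0, 0, 0), 0)    # matrix-tree case: spanning trees
coeff_02 = D.get((1, 0, 1, 0), 0)   # 2-forests separating vertices 0 and 2
assert coeff_0 == brute_forests((0,))
assert coeff_02 == brute_forests((0, 2))

def prufer_decode(seq, n):
    """Decode a Prüfer sequence into the edge set of a labelled tree."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    tree = set()
    for x in seq:
        leaf = min(v for v in range(n) if degree[v] == 1)
        tree.add((min(leaf, x), max(leaf, x)))
        degree[leaf] -= 1; degree[x] -= 1
    u, v = [w for w in range(n) if degree[w] == 1]
    tree.add((u, v))
    return tree

# Forest {0-1}, {2}, {3-4} on 5 vertices: sizes 2,1,2 and k=3, so it should
# extend to 2*1*2*5^(3-2) = 20 labelled trees.
completions = sum(1 for s in product(range(5), repeat=3)
                  if {(0, 1), (3, 4)} <= prufer_decode(s, 5))
assert completions == 2 * 1 * 2 * 5
```

The asserts pass on this graph (3 spanning trees; 2 two-forests separating vertices 0 and 2), but this is of course a spot check on one small example, not a proof.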
http://offshoremechanics.asmedigitalcollection.asme.org/article.aspx?articleid=1456674
Research Papers
# Multihull and Surface-Effect Ship Configuration Design: A Framework for Powering Minimization
[+] Author and Article Information
Ronald W. Yeung1
Department of Mechanical Engineering, University of California, Berkeley, CA 94720rwyeung@berkeley.edu
Hui Wan
Department of Mechanical Engineering, University of California, Berkeley, CA 94720wanh@berkeley.edu
We note in that paper the value of $∀=64.8$ on page 160 and Fig. 23 should have been stated as 129.6.
1
Correspondence author.
J. Offshore Mech. Arct. Eng 130(3), 031005 (Jul 15, 2008) (9 pages) doi:10.1115/1.2904590 History: Received March 28, 2007; Revised July 10, 2007; Published July 15, 2008
## Abstract
The powering issue of a high-speed marine vehicle with multihulls and air-cushion support is addressed, since there is often a need to quickly evaluate the effects of several configuration parameters in the early stage of the design. For component hulls with given geometry, the parameters considered include the relative locations of individual hulls and the relative volumetric ratios. Within the realm of linearized theory, an interference-resistance expression for hull-to-hull interaction is first reviewed, and then a new formula for hull-and-pressure distribution interference is derived. Each of these analytical expressions is expressed in terms of the Fourier signatures or Kochin functions of the interacting component hulls, with the separation, stagger, and speed as explicit parameters. Based on this framework, an example is given for assessing the powering performance of a catamaran (dihull) as opposed to a tetrahull system. Also examined is the wave resistance of a surface-effect ship of varying cushion support in comparison with that of a base line catamaran, subject to the constraint of constant total displacement.
## Figures
Figure 11
Total wave-resistance coefficient CwT of SES
Figure 12
Cw of base line catamaran and SES at Λ=0.4
Figure 13
Powering of base line catamaran and SES at Λ=0.4
Figure 1
Coordinate systems for two hulls with separation and stagger
Figure 2
Entry web page for the MULTIRES (MULTI-RES) code
Figure 3
Isometric views of a tetrahull, the SS Lin–Day (left), and a catamaran (dihull, right) of the same displacement for a comparative powering study. The elemental geometry is a normalized form of that used by Lin and Day (16).
Figure 4
Wave-resistance coefficients (top) of the SS Lin–Day tetrahull and a catamaran (dihull) of the same displacement versus Froude number based on Lt with interference-resistance coefficients corresponding to Rw–intf also shown. Resistance and powering requirements of the two alternatives (bottom) are shown with the effects of skin friction based on ITTC (14).
Figure 5
Pressure distribution P(x,y) of an air cushion
Figure 6
RwP generated by a pressure P(x,y), α=5, β=20
Figure 7
Comparison of resistances of an air cushion and a monohull of the same displacement
Figure 8
Configuration of pressure cushion and a single hull
Figure 9
Configuration of a pressure cushion and a catamaran
Figure 10
Contours of hull-alone resistance coefficient CwH (left) and cushion-hull interference-resistance coefficient CwPH (right)
https://applying-maths-book.com/chapter-5/chapter-5-G-Q35-44.html | # Questions 35 - 44#
## Q35 Lennard-Jones potential#
The Lennard-Jones 6-12 potential, which describes the intermolecular energy between a pair of molecules, has a minimum at $$\displaystyle r_e = 2^{1/6}\sigma$$. The potential is
$\displaystyle U(r)= -4\epsilon\left[\left (\frac{\sigma}{r}\right)^6 - \left (\frac{\sigma}{r}\right)^{12} \right]$
where $$\epsilon$$ is the depth of the energy well and $$\sigma$$ the diameter of a molecule.
(a) Show that the minimum energy is $$-\epsilon$$.
(b) Expand the potential about the minimum energy at intermolecular separation $$r_e$$ using a Taylor series.
(c) Calculate the approximate Hooke’s law force constant, $$k$$, around the minimum energy. This is the slope of the derivative of the potential with extension $$x$$, i.e. $$\displaystyle dU/dx = -kx$$. Calculate the approximate classical vibrational frequency in the bottom of the potential using parameters for Xe; $$\epsilon = 20.0$$ meV and $$\sigma = 398$$ pm.
(d) To check your calculation, plot the equation and the expansion to the second power of $$(r - r_e)$$.
Strategy: (b) Use a Taylor expansion about the minimum separation $$r_e$$, and then ignore terms in higher powers of $$(r - r_e)$$ as the change from the equilibrium position is small. Because you have to take the derivative of the potential to find the force constant, expand the potential at least to quadratic terms.
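A quick numerical check of part (c), assuming a Xe$$\cdots$$Xe pair so that the reduced mass is half the atomic mass of xenon (the mass value and the unit conversions are assumptions not given in the question):

```python
import math

def lj_curvature(eps, sigma):
    # analytic second derivative of U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
    # evaluated at the minimum r_e = 2**(1/6)*sigma
    re = 2**(1/6) * sigma
    return 4*eps*(156*sigma**12/re**14 - 42*sigma**6/re**8)

eV = 1.602176634e-19            # J
eps = 20.0e-3 * eV              # well depth for Xe (from the question)
sigma = 398e-12                 # m
k = lj_curvature(eps, sigma)    # Hooke's-law force constant, N/m

u = 1.66053906660e-27           # atomic mass unit, kg
mu = 131.29*u/2                 # reduced mass of a Xe...Xe pair (assumed)
nu = math.sqrt(k/mu)/(2*math.pi)    # classical vibrational frequency, Hz

print(f"k  = {k:.3f} N/m")      # ~1.16 N/m
print(f"nu = {nu:.3e} Hz")      # ~5.2e11 Hz, i.e. roughly 17 cm^-1
```

Plotting the potential together with $$-\epsilon + \frac{1}{2}k(r-r_e)^2$$ then gives the comparison asked for in part (d).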
## Q36 Dipole - ion interaction#
A dipole $$q^+ - q^-$$ will interact with an ion in solution because the electric field of the ion will extend through the solution and so cause a force to exist between them. The electric field strength $$E$$ around a charge is the force / unit charge, or $$E = f/q$$. Because there is a force between the charges, energy is needed to place the dipole and ion at any given separation. Your textbook will state that this force varies as the inverse cube of the separation from the dipole when the separation is larger than the size of the dipole itself. However, two isolated point charges, $$q_1\, q_2$$, will interact with a force given by the inverse square of their separation,
$f=\frac{q_1q_2}{(4\pi\epsilon_0)}\frac{1}{\epsilon r^2}$
where $$\epsilon_0$$ is the permittivity of free space and $$\epsilon$$ is the relative permittivity (dielectric constant) of the intervening medium, such as the solvent. Force written in this way has SI units of J m$$^{-1}$$. The interaction energy in joules between two point charges $$q_1,\, q_2$$ at separation $$r$$ is
$\displaystyle U=\frac{q_1q_2}{(4\pi\epsilon_0)}\frac{1}{\epsilon r}$
(a) By calculating the electric field at the ion situated along the x-axis with charge +$$z$$, show that the ion-dipole interaction varies with separation as $$1/x^3$$; the next figure illustrates the geometry.
(b) What is the interaction energy at separation $$x$$? Determine that it has the correct units.
The figure below shows the geometry of the ion-dipole interaction; the dipole length is $$2d$$.
Figure 5. Geometry of the ion-dipole interaction. A more complete, and more complicated calculation, would allow the ion to be at any angle to the dipole and the results averaged, but the result is qualitatively the same. The dipole length is $$2d$$.
Strategy: Calculate the electric field using charges +$$q$$ and $$-q$$ then calculate the energy. In electrostatic calculations the field and energy is always calculated as the sum of the individual contributions between each pair of charges. As the separation is large compared to the size of the ion or dipole expand the field in terms of the fractional separation. Using the diagram, the dipole has charges +$$q$$ and $$-q$$ and the ion +$$z$$. Note that $$E$$ is used to represent the electric field.
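Before doing the algebra it is worth checking the claimed limit numerically: with the dipole charges at $$\pm d$$ on the x-axis, the exact on-axis field should approach a $$1/x^3$$ form once $$x \gg d$$. A short sketch (the charge and length values below are arbitrary illustrations, not part of the question):

```python
k = 8.9875517923e9     # 1/(4*pi*eps0), N m^2 C^-2
q = 1.602e-19          # dipole charge, C (illustrative value)
d = 1e-10              # half-length of the dipole, m (dipole length is 2d)

def field(x):
    # exact on-axis field of the two point charges at +d and -d
    return k*q/(x - d)**2 - k*q/(x + d)**2

# for x >> d the field should tend to the expansion result 4*k*q*d/x**3
for x in (5e-9, 1e-8, 2e-8):
    print(x, field(x)/(4*k*q*d/x**3))   # ratios approach 1 as x grows
```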
## Q37 Dipole-dipole energy#
The energy of two interacting dipoles, with the geometry shown below, is
$U=\frac{q^2}{(4\pi\epsilon_0)}\left[ \frac{1}{x}+\frac{1}{x+d_2-d_1}-\frac{1}{x-d_1}-\frac{1}{x+d_2} \right]$
Figure 6. Definition of dipole’s geometry.
(a) Explain how this equation is derived.
(b) Show that if $$x \gg d_1$$ and $$d_2$$, the energy varies as $$\mu_1\mu_2/x^3$$ where $$\mu_1, \, \mu_2$$ are the dipole moments equal to $$qd_1$$ and $$qd_2$$ respectively.
(c) Calculate the interaction energy if two dipoles each of $$5$$ D are separated by $$2$$ nm, as shown in the figure. Compare this to thermal energy at room temperature.
Strategy: Because the interaction is electrostatic (or Coulomb) in nature, the energy is always calculated by adding together the interaction between pairs of charges; one charge each end of the dipole on one molecule with each of the charges on the other. The energy is inversely proportional to the separation of each pair of charges so there are four terms to consider.
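Part (c) can be sketched numerically, assuming the collinear geometry of the figure and an (arbitrary) dipole length of $$0.1$$ nm so that $$q = \mu/d$$; note that for this head-to-tail arrangement the leading term of the expansion carries a factor of $$-2$$:

```python
D = 3.33564e-30                  # 1 debye in C m
four_pi_eps0 = 1.112650056e-10   # 4*pi*eps0, C^2 J^-1 m^-1
kB = 1.380649e-23                # J K^-1

mu = 5*D                         # each dipole is 5 D (from the question)
d1 = d2 = 1e-10                  # assumed dipole lengths, m (q*d = mu)
q = mu/d1
x = 2e-9                         # separation, m

# exact four-term charge-pair sum for the geometry of Figure 6
U = q*q/four_pi_eps0 * (1/x + 1/(x + d2 - d1) - 1/(x - d1) - 1/(x + d2))

U_approx = -2*mu*mu/(four_pi_eps0*x**3)   # leading term, collinear dipoles
kT = kB*298

print(U, U_approx)          # both ~ -6.3e-22 J
print(abs(U)/kT)            # ~ 0.15, i.e. well below thermal energy
```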
## Q38 Doppler effect#
The pitch of an ambulance’s siren sounds higher as it speeds towards us and lower as it recedes. This is caused by the Doppler effect: because the source is moving, the separation between the sound waves becomes smaller as the source approaches and longer as it recedes. The Doppler effect is widely used, for example, in radar speed cameras and in flow measurements in pipes or of blood flow in veins and arteries. The Mössbauer effect in the $$\gamma$$-ray region of the spectrum also relies on the Doppler effect: by moving the sample it can be brought into resonance with the source, and transitions are detected by absorption.
In approaching you, the sound frequency appears to be up-shifted from $$f_0$$ to $$\displaystyle f = f_0\left(\frac{v_s+v_0}{v_s-v} \right)$$ where $$v_s$$ is the speed of sound in air, approximately $$331\, \mathrm{m\,s^{-1}}$$ or $$740$$ m.p.h., $$f_0$$ the true frequency of the siren is $$440$$ Hz, $$v$$ the velocity of the ambulance is $$60$$ m.p.h., and $$v_0$$ is your (the observer's) speed.
When the vehicle moves away from you the perceived frequency is lower as now $$\displaystyle f = f_0\left(\frac{v_s+v_0}{v_s+v} \right)$$ ; notice the sign change.
(a) Sketch how the sound frequency perceived by a stationary observer positioned, as shown in the figure, would change as the vehicle passes.
Figure 7. In the figure the stationary vehicle produces sound wave-fronts that are equally spaced from one another in all directions if measured at equal time intervals. The moving vehicle causes the sound waves to appear to close up in the direction of travel, and to move apart in the opposite direction. If you are at the side of the road, the sound is that component of the forward motion in your direction. If you are in the vehicle, the pitch of the sound appears to be the same whether you are moving or stationary because the sound waves are always generated at the same frequency and because they are moving much faster than the vehicle.
(b) Show that the perceived frequency shift $$\Delta f=(f - f_0)/f$$ is proportional to $$v$$, the speed of the ambulance. Assume that your speed $$v_0$$ is small compared to the speed of sound.
Strategy: (a) The frequency heard is higher than normal when the ambulance is approaching and coming directly towards us, but is at exactly frequency $$f_0$$ when it is right in front of us, and falls as it departs. (b) If we were to assume that both $$v_0$$ and $$v$$ are small compared to $$v_s$$, the speed of sound in air, and simply ignore them, then $$f = f_0$$ and the frequency would not change. Experience tells us that the perceived frequency does change, so this assumption cannot be correct because it is too crude. Instead, rearrange the frequency equation into two parts, and ratio the speeds to produce terms such as $$\displaystyle (1 - v/v_s)^{-1}$$ and then expand into a series.
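Both formulas are easy to evaluate for a stationary observer ($$v_0 = 0$$), converting the speeds quoted in the question to SI units:

```python
f0 = 440.0        # true siren frequency, Hz
vs = 331.0        # speed of sound in air, m/s
v  = 60*0.44704   # 60 m.p.h. in m/s
v0 = 0.0          # stationary observer

f_approach = f0*(vs + v0)/(vs - v)
f_recede   = f0*(vs + v0)/(vs + v)

# first-order expansion: (1 - v/vs)**-1 ~ 1 + v/vs, so |Δf|/f0 ~ v/vs
print(f_approach, f_recede)        # ~478.8 Hz approaching, ~407.0 Hz receding
print(v/vs, f_approach/f0 - 1)     # 0.081 first order vs 0.088 exact
```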
## Q39 H atom Lyman-$$\alpha$$ line#
(a) The relativistic red shift observed in the H atom Lyman-$$\alpha$$ line from a star in a distant galaxy is $$\displaystyle \frac{\Delta \lambda}{\lambda}= \sqrt{\frac{1+v/c}{1-v/c}}-1$$ where $$c$$ is the speed of light and $$v$$ the relative velocity of the star.
(b) Show that for a small relative star velocity $$\displaystyle \frac{\Delta \lambda}{\lambda}=\frac{v}{c}$$.
(c) If the laboratory reference transition is $$\Delta \lambda = 0.1$$ nm wide, what is the smallest speed a star must be receding by to separate it from the reference line, assuming that a separation of $$2\Delta \lambda$$ is needed?
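Part (c) can be checked numerically, taking the Lyman-$$\alpha$$ rest wavelength as $$121.567$$ nm (a value not stated in the question), using both the exact relativistic formula and the small-velocity approximation of part (b):

```python
lam = 121.567e-9     # Lyman-alpha rest wavelength, m (assumed)
c = 2.99792458e8     # speed of light, m/s

dlam = 2*0.1e-9      # required shift: twice the 0.1 nm reference linewidth
r = dlam/lam         # fractional shift, ~1.6e-3

v_small = r*c        # small-velocity approximation, v = c*dlam/lam

# invert the exact formula: (1 + r)**2 = (1 + v/c)/(1 - v/c)
s = (1 + r)**2
v_exact = c*(s - 1)/(s + 1)

print(v_small, v_exact)   # both ~4.9e5 m/s: the star must recede at ~490 km/s
```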
## Q40 Coupled molecular energy levels#
Two molecular energy levels of energy $$E_1$$ and $$E_2$$ and separation $$\Delta E$$ interact with a ‘coupling energy’ $$V$$. Perturbation theory applied to quantum mechanics allows us to calculate how these levels are shifted in energy as a result of this interaction. One level rises, the other falls and their new energies are,
$E_\pm = \frac{E_1}{2}+\frac{E_2}{2}\pm\frac{1}{2}\sqrt{\Delta E^2+4V^2}$
and the total energy remains the same, as shown in the figure. The initial two levels interact to form two new levels. Overall the energy is reduced if two or three electrons fill the energy levels; if zero or four fill both levels then there is no energy saving.
Figure 8. Initial levels (left) interact with coupling $$V$$ to produce two new levels (right).
(a) Calculate the total energy before and after the interaction and show that they are the same.
(b) Calculate the two energies when $$V \ll \Delta E$$, both being positive, and when $$V \gg \Delta E$$.
(c) Plot the correct energies if $$E_1 = 2, E_2 = 3$$ and $$V$$ varies from $$0 \to 1$$, and compare them with the approximations from (b).
Strategy: In (b) when $$V \ll \Delta E$$ expand the square root in $$E\pm$$. Do this by rearranging to get a term in $$\displaystyle \sqrt{1+4V^2/\Delta E^2}$$.
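The exact energies and the weak-coupling approximation can be compared directly, e.g. with $$E_1 = 2$$, $$E_2 = 3$$ and $$V = 0.2$$ (a value chosen so that $$V \ll \Delta E$$):

```python
import math

E1, E2, V = 2.0, 3.0, 0.2
dE = E2 - E1

root = 0.5*math.sqrt(dE**2 + 4*V**2)
Ep = (E1 + E2)/2 + root
Em = (E1 + E2)/2 - root

# weak coupling: sqrt(1 + 4V^2/dE^2) ~ 1 + 2V^2/dE^2,
# so the two levels are pushed apart by V^2/dE each
Ep_approx = E2 + V**2/dE
Em_approx = E1 - V**2/dE

print(Ep + Em)           # 5.0 -- the total energy is conserved
print(Ep, Ep_approx)     # 3.0385 vs 3.04
print(Em, Em_approx)     # 1.9615 vs 1.96
```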
## Q41 Madelung constant#

Crystals of simple salts consist of ordered lattices of anions and cations where the forces are predominantly due to the Coulomb electrostatic interaction. As there are many ions, the total interaction acting upon any one of them is due to the effect of all the others. The energy between any two ions 1 and 2 separated by a distance $$d$$ is $$\displaystyle U_{12}=\frac{q_1q_2}{(4\pi \epsilon_0)d}$$ where the charge on an ion is $$q=eZ$$, and $$e$$ is the electronic charge $$1.6022 \cdot 10^{-19}$$ C. The charge number $$Z$$ can be positive or negative.
(a) Find the total energy of a positively charged ion in a linear chain of alternating positive and negatively charged ions with charges $$Z$$ and $$-Z$$. Find the Madelung constant $$M$$, which is the numerical factor that contributes to the energy and is due solely to the positions and charges of the ions. The total energy is $$\displaystyle U_{12}=M\frac{q_1q_2}{(4\pi \epsilon_0)d}$$.
Figure 9a. Ions placed on a line.
(b) Repeat the calculation on a square grid of alternating charges as shown below where the diagonal shown has a length of $$d\sqrt{13}$$. Now the summation has to be evaluated numerically. You will need very many terms (thousands) to make the addition converge. The result is $$-1.612$$ but a reasonable number of terms produce $$-1.6$$.
Figure 9b. Some of the ions whose charges alternate on a square lattice of atoms with grid spacing $$d$$. The diagonal shown has length $$d\sqrt{13}$$.
Strategy: The total energy of several charged species, of any sort, is always the sum of the individual pair-wise interactions, + to +, - to - and + to - as appropriate. For example, the interaction between any two ions 1 and 2 is $$\displaystyle U_{12}=\frac{q_1q_2}{(4\pi \epsilon_0)}\frac{1}{d_1 - d_2}$$ where $$d_1 - d_2$$ is their separation, and in a line or on grid, the nearest separation is always $$d$$. Consider only the interaction of any two species at a time, and if there are many charges these add up as pair-wise contributions ignoring any intervening or other nearby charges.
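Both sums can be evaluated directly; the 1D sum has the closed form $$-2\ln 2 \approx -1.386$$, and the 2D sum is taken over an expanding square window, as the question suggests, until it settles near $$-1.6$$:

```python
import math

# 1D chain: the neighbour at distance n*d carries charge (-1)^n
# relative to the central ion
N = 10_000
M1 = 2*sum((-1)**n/n for n in range(1, N + 1))   # factor 2: both directions
# analytic limit: -2*ln(2) = -1.38629...

# 2D square lattice: charge (-1)^(i+j) at grid point (i, j);
# the central ion itself is excluded
L = 200
M2 = sum((-1)**(i + j)/math.hypot(i, j)
         for i in range(-L, L + 1) for j in range(-L, L + 1)
         if (i, j) != (0, 0))

print(M1, M2)   # ~ -1.386 and ~ -1.61
```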
## Q42 Lennard-Jones potential#
The Lennard-Jones potential between a pair of atoms with separation $$r$$ is
$\displaystyle U=-4\epsilon \left[\left(\frac{\sigma}{r}\right)^6 -\left(\frac{\sigma}{r}\right)^{12} \right]$
The potential acts mainly at short range and $$\epsilon$$ is the strength of the intermolecular interaction and $$\sigma$$ scales the interaction and is approximately $$0.3$$ nm for solids of the noble gases. The interaction energy $$\epsilon$$ is $$0.0031$$ eV for Ne and $$0.020$$ eV for Xe. When there are many atoms in a solid the cohesive energy $$U_c$$ is calculated as the sum of the pair-wise interactions between atoms $$i$$ and $$j$$:
$\displaystyle U_c = -4\epsilon \sum\limits_{j \ne i}\left[\left(\frac{\sigma}{r_{ij}}\right)^6 -\left(\frac{\sigma}{r_{ij}}\right)^{12} \right]$
In a cubic crystal the separation of any pair of atoms is represented in terms of multiples of the near neighbour separation, $$R$$, where $$r_{ij} = \alpha_{ij}R$$. The number $$\alpha$$, which need not be an integer, clearly depends on the crystal geometry. The summation becomes
$\displaystyle U_c=-4\epsilon \sum\limits_{j \ne i} \left[ \left( \frac{\sigma}{\alpha_{ij}R}\right)^6 -\left(\frac{\sigma}{\alpha_{ij}R} \right)^{12} \right]$
Calculate the lattice sums $$\displaystyle A_6=\sum\limits_{j \ne i} \alpha_{ij}^{-6}$$ and $$\displaystyle A_{12} = \sum\limits_{j \ne i} \alpha_{ij}^{-12}$$ for a simple cubic crystal lattice using the diagram below which shows a simple cubic structure with near neighbours ($$A$$), and some of the next near neighbours ($$B$$) and ($$C$$). Atoms in the other adjacent unit cells, which are not shown, will also contribute to the summation.
Calculate the value for a unit cell then use Python to calculate the sum over as many cells as necessary to achieve two decimal places of accuracy.
Figure 10. A simple cubic structure with near neighbours (A), and some of the next near neighbours (B ) and (C ). Atoms in the other adjacent unit cells, which are not shown, will also contribute to the summation.
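A sketch of the two lattice sums for the simple cubic structure, summing over all integer lattice vectors inside a cubic cut-off; the cut-off of $$20$$ is an arbitrary choice that is ample for two-decimal accuracy, since the $$\alpha^{-6}$$ tail falls off rapidly:

```python
A6 = A12 = 0.0
L = 20                    # cut-off in units of the near-neighbour distance R
for i in range(-L, L + 1):
    for j in range(-L, L + 1):
        for k in range(-L, L + 1):
            if (i, j, k) == (0, 0, 0):
                continue
            a2 = i*i + j*j + k*k     # alpha squared for this neighbour
            A6  += a2**-3            # contributes alpha^-6
            A12 += a2**-6            # contributes alpha^-12

print(A6, A12)   # ~8.40 and ~6.20 for the simple cubic lattice
```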
## Q43 Dipole selection rules#
The electric dipole selection rules for vibrational transitions in diatomic molecules are described by expanding the dipole moment in a Taylor series about the equilibrium bond length $$R_e$$, and then evaluating the transition dipole moment, which is the integral
$\displaystyle M=\int \psi_f^*\mu \psi_i dx$
This integral must not be zero if a transition is allowed. The dipole moment is $$\mu$$, and $$\psi_i$$ and $$\psi_f$$ are the initial and final wavefunctions with vibrational quantum numbers $$i$$ and $$f$$ respectively. The displacement of the nuclei from equilibrium, which is the bond extension, is $$R - R_e = x$$.
(a) Show that, in the harmonic oscillator, the selection rule for a transition is such that only adjacent energy levels are linked with a photon. If the initial vibrational level has quantum number $$i$$ the final one is $$f = i \pm 1$$, provided $$i \ne 0$$, i.e. $$\Delta v = f - i = \pm 1$$.
(b) Next show that in the anharmonic oscillator the selection rule is additionally that the level $$i$$ can undergo an optical transition to $$i \pm 2$$.
Notes: In a harmonic oscillator, the dipole varies linearly with bond extension, but in the anharmonic oscillator, the dipole $$\mu$$ varies in a non-linear fashion with extension. The vibrational wavefunctions are orthonormal, therefore $$\displaystyle \int \psi_f^*\psi_i dx =\delta_{if}$$ where $$\delta_{if}$$ is the Kronecker delta function which is $$1$$ if $$i = f$$, otherwise it is zero. The wavefunctions have alternatively odd - even symmetry character, which means that
$\begin{split}\displaystyle\int \psi_f^*\,x\,\psi_i dx \ne 0\quad\text{ if }\quad f = i +1 \\ \int \psi_f^*\,x^2\,\psi_i dx \ne 0\quad \text{ if }\quad f=i + 2\end{split}$
and otherwise the integrals are zero. These results can be confirmed by direct integration using the equations for the Hermite polynomials.
Strategy: It is hard to know where to start as we are not told much about $$\mu$$. All we know is that it is a dipole, so it is, by definition charge times distance, and in this case the distance is the bond extension $$x$$. These facts mean that $$\mu$$ can be expanded as a function of $$x$$ about $$x = 0$$, which corresponds to the equilibrium bond extension, as suggested in the question. The expansion is rather like generating an equation out of nothing or, figuratively, pulling a rabbit out of a hat!
The importance of integration ‘odd’ and ‘even’ functions is clear; if the function is odd the integral over all space is always zero, if even the integral generally is not zero. In the more general sense, group theory should be used to determine if the integral belongs to the totally symmetric representation of the point group of the molecule, which, if it does, the integral is finite. See Chapter 7 (Matrices) for a fuller discussion.
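The symmetry results quoted in the notes can be confirmed numerically using harmonic oscillator wavefunctions built from Hermite polynomials (dimensionless units assumed throughout):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def psi(n):
    # normalized harmonic oscillator wavefunction, dimensionless coordinates
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0/math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermval(x, c) * np.exp(-x*x/2)

def moment(f, i, power):
    # <psi_f | x^power | psi_i> by simple numerical quadrature
    return float(np.sum(psi(f) * x**power * psi(i)) * dx)

print(moment(1, 0, 1))   # nonzero (1/sqrt(2)): Δv = ±1 allowed, linear term
print(moment(2, 0, 1))   # zero: Δv = ±2 forbidden for a linear dipole
print(moment(2, 0, 2))   # nonzero (1/sqrt(2)): Δv = ±2 allowed via x² term
```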
## Q44 Hellmann - Feynman theorem#
The Hellmann - Feynman theorem states that for a property $$q$$ the energy of a molecule $$U$$ and its Hamiltonian $$H$$ are related as $$\displaystyle \frac{dU}{dq}=\left<\frac{dH}{dq}\right>$$. The angle brackets indicate an average value is measured.
Suppose the property $$q$$ is an external electric field $$E$$ then $$q \equiv E$$, and in the presence of this field, the Hamiltonian is $$\displaystyle H = -\bar\mu\cdot \bar E$$ where $$\bar\mu$$, and $$\bar E$$ are vector quantities. To simplify matters suppose that the field only exists along the z-axis then $$H = -\mu_zE$$.
(a) Calculate $$dU/dE$$.
(b) Use a Taylor series to expand the molecular energy $$U$$ in terms of the electric field $$E$$ about the energy $$U_0$$ in a field, which is zero.
(c) If $$\displaystyle \langle \mu_z \rangle = \mu_{z0} + \alpha E + \beta E^2 /2 + \cdots$$ where $$\alpha$$ is the polarizability and $$\beta$$ the hyper-polarizability, find expressions for $$\alpha$$ and $$\beta$$ as derivatives of the energy with field strength.
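A numerical sketch of the theorem, using made-up coefficients for $$\mu_{z0}$$, $$\alpha$$ and $$\beta$$: differentiating the Taylor form of $$U(E)$$ numerically should reproduce $$\langle\mu_z\rangle$$, which is the content of parts (a)–(c):

```python
mu0, alpha, beta = 0.5, 2.0, 0.3     # illustrative (made-up) coefficients

def U(E):
    # Taylor form of the energy: U = U0 - mu0*E - (alpha/2)E^2 - (beta/6)E^3
    return 1.0 - mu0*E - alpha*E**2/2 - beta*E**3/6

def mu_z(E):
    # induced dipole: <mu_z> = mu0 + alpha*E + (beta/2)E^2
    return mu0 + alpha*E + beta*E**2/2

E, h = 0.4, 1e-6
dUdE = (U(E + h) - U(E - h))/(2*h)   # central finite difference
print(-dUdE, mu_z(E))                # the two agree: <mu_z> = -dU/dE
```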
https://stats.stackexchange.com/questions/306325/can-i-fold-in-the-data-detection-probabililtes-in-maximum-likelihood-fitting | # Can i fold in the data detection probabililtes in Maximum likelihood fitting?
I have a set of data points ($x_1,x_2,x_3, \ldots$) and would like to use maximum likelihood estimation (MLE) to fit a density function $f(x,a_1,a_2)$ to $x$, with parameters $a_1, a_2$.
Normally, to do MLE I calculate the probability of each data point under the function given specific parameter values, and maximize the joint likelihood function, say, $L= \prod p(x_i)$.
Imagine now that for each of these data points I also have a detection probability $p_{det}(x_i)$ to indicate whether it should be included in this dataset; a high value indicates that it is a very reliable data point. This probability is pre-determined elsewhere.
How should I incorporate this probability into my MLE function fitting? I cannot just multiply it with the above, say $L= \prod p(x_i)\, p_{det}(x_i)$, because then the two probabilities will be completely separable and the maximum of the likelihood function will be identical.
Is there a way to do this? Thanks a lot!
• Would you provide more mathematical details as to what the detection/inclusion/reliability probabilities are? If the "inclusion" probabilities are known, then under the model you're considering is there a positive probability that all data points will be selected and also that no data points are selected? – JimB Oct 16 '17 at 5:47
https://zbmath.org/?q=an%3A1186.93076 | ## Identification for multirate multi-input systems using the multi-innovation identification theory.(English)Zbl 1186.93076
Summary: This paper considers identification problems of multirate multi-input sampled-data systems. Using the continuous-time system discretization technique with zero-order holds, the mapping relationship (state-space model) between available multirate input and output data is set up. The multi-innovation identification theory is applied to estimate the parameters of the obtained multirate models and to present a multi-innovation stochastic gradient algorithm for the multirate systems from the multirate input-output data. Furthermore, the convergence properties of the proposed algorithm are analyzed. An illustrative example is given.
### MSC:
93E12 Identification in stochastic control theory
https://howlingpixel.com/i-en/Absorptance | # Absorptance
Absorptance of the surface of a material is its effectiveness in absorbing radiant energy. It is the fraction of incident electromagnetic power that is absorbed at an interface, in contrast to the absorption coefficient, which is the ratio of the absorbed to incident electric field.[1] This should not be confused with absorbance.
## Mathematical definitions
### Hemispherical absorptance
Hemispherical absorptance of a surface, denoted A, is defined as[2]
${\displaystyle A={\frac {\Phi _{\mathrm {e} }^{\mathrm {a} }}{\Phi _{\mathrm {e} }^{\mathrm {i} }}},}$
where
• Φea is the radiant flux absorbed by that surface;
• Φei is the radiant flux received by that surface.
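As a quick numerical illustration (all flux values below are invented for the example, not taken from the article), hemispherical absorptance is just the absorbed-to-received flux ratio; together with reflectance and transmittance it satisfies the standard energy balance A + R + T = 1:

```python
# Sketch: hemispherical absorptance from measured fluxes.
# The flux values are made-up illustrative numbers.

def hemispherical_absorptance(flux_absorbed, flux_incident):
    """A = Phi_e^a / Phi_e^i (dimensionless, between 0 and 1)."""
    if flux_incident <= 0:
        raise ValueError("incident flux must be positive")
    return flux_absorbed / flux_incident

phi_incident = 100.0   # W, radiant flux received by the surface (assumed)
phi_absorbed = 62.0    # W, radiant flux absorbed (assumed)
phi_reflected = 30.0   # W (assumed)
phi_transmitted = 8.0  # W (assumed)

A = hemispherical_absorptance(phi_absorbed, phi_incident)
R = phi_reflected / phi_incident
T = phi_transmitted / phi_incident

print(A)          # 0.62
print(A + R + T)  # ~1.0 (energy conservation: A + R + T = 1)
```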
### Spectral hemispherical absorptance
Spectral hemispherical absorptance in frequency and spectral hemispherical absorptance in wavelength of a surface, denoted Aν and Aλ respectively, are defined as[2]
${\displaystyle A_{\nu }={\frac {\Phi _{\mathrm {e} ,\nu }^{\mathrm {a} }}{\Phi _{\mathrm {e} ,\nu }^{\mathrm {i} }}},}$
${\displaystyle A_{\lambda }={\frac {\Phi _{\mathrm {e} ,\lambda }^{\mathrm {a} }}{\Phi _{\mathrm {e} ,\lambda }^{\mathrm {i} }}},}$
where
• Φe,νa is the spectral radiant flux in frequency absorbed by that surface;
• Φe,νi is the spectral radiant flux in frequency received by that surface;
• Φe,λa is the spectral radiant flux in wavelength absorbed by that surface;
• Φe,λi is the spectral radiant flux in wavelength received by that surface.
### Directional absorptance
Directional absorptance of a surface, denoted AΩ, is defined as[2]
${\displaystyle A_{\Omega }={\frac {L_{\mathrm {e} ,\Omega }^{\mathrm {a} }}{L_{\mathrm {e} ,\Omega }^{\mathrm {i} }}},}$
where
• Le,Ωa is the radiance absorbed by that surface;
• Le,Ωi is the radiance received by that surface.
### Spectral directional absorptance
Spectral directional absorptance in frequency and spectral directional absorptance in wavelength of a surface, denoted Aν,Ω and Aλ,Ω respectively, are defined as[2]
${\displaystyle A_{\nu ,\Omega }={\frac {L_{\mathrm {e} ,\Omega ,\nu }^{\mathrm {a} }}{L_{\mathrm {e} ,\Omega ,\nu }^{\mathrm {i} }}},}$
${\displaystyle A_{\lambda ,\Omega }={\frac {L_{\mathrm {e} ,\Omega ,\lambda }^{\mathrm {a} }}{L_{\mathrm {e} ,\Omega ,\lambda }^{\mathrm {i} }}},}$
where
• Le,Ω,νa is the spectral radiance in frequency absorbed by that surface;
• Le,Ω,νi is the spectral radiance in frequency received by that surface;
• Le,Ω,λa is the spectral radiance in wavelength absorbed by that surface;
• Le,Ω,λi is the spectral radiance in wavelength received by that surface.
## SI radiometry units
| Quantity | Symbol[nb 1] | Unit | Dimension | Notes |
|---|---|---|---|---|
| Radiant energy | Qe[nb 2] | joule (J) | ML2T−2 | Energy of electromagnetic radiation. |
| Radiant energy density | we | joule per cubic metre (J/m3) | ML−1T−2 | Radiant energy per unit volume. |
| Radiant flux | Φe[nb 2] | watt (W = J/s) | ML2T−3 | Radiant energy emitted, reflected, transmitted or received, per unit time. This is sometimes also called "radiant power". |
| Spectral flux | Φe,ν[nb 3] or Φe,λ[nb 4] | watt per hertz (W/Hz) or watt per metre (W/m) | ML2T−2 or MLT−3 | Radiant flux per unit frequency or wavelength. The latter is commonly measured in W⋅nm−1. |
| Radiant intensity | Ie,Ω[nb 5] | watt per steradian (W/sr) | ML2T−3 | Radiant flux emitted, reflected, transmitted or received, per unit solid angle. This is a directional quantity. |
| Spectral intensity | Ie,Ω,ν[nb 3] or Ie,Ω,λ[nb 4] | watt per steradian per hertz (W⋅sr−1⋅Hz−1) or watt per steradian per metre (W⋅sr−1⋅m−1) | ML2T−2 or MLT−3 | Radiant intensity per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅nm−1. This is a directional quantity. |
| Radiance | Le,Ω[nb 5] | watt per steradian per square metre (W⋅sr−1⋅m−2) | MT−3 | Radiant flux emitted, reflected, transmitted or received by a surface, per unit solid angle per unit projected area. This is a directional quantity. This is sometimes also confusingly called "intensity". |
| Spectral radiance | Le,Ω,ν[nb 3] or Le,Ω,λ[nb 4] | watt per steradian per square metre per hertz (W⋅sr−1⋅m−2⋅Hz−1) or watt per steradian per square metre, per metre (W⋅sr−1⋅m−3) | MT−2 or ML−1T−3 | Radiance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅sr−1⋅m−2⋅nm−1. This is a directional quantity. This is sometimes also confusingly called "spectral intensity". |
| Irradiance (flux density) | Ee[nb 2] | watt per square metre (W/m2) | MT−3 | Radiant flux received by a surface per unit area. This is sometimes also confusingly called "intensity". |
| Spectral irradiance (spectral flux density) | Ee,ν[nb 3] or Ee,λ[nb 4] | watt per square metre per hertz (W⋅m−2⋅Hz−1) or watt per square metre, per metre (W/m3) | MT−2 or ML−1T−3 | Irradiance of a surface per unit frequency or wavelength. This is sometimes also confusingly called "spectral intensity". Non-SI units of spectral flux density include the jansky (1 Jy = 10−26 W⋅m−2⋅Hz−1) and the solar flux unit (1 sfu = 10−22 W⋅m−2⋅Hz−1 = 104 Jy). |
| Radiosity | Je[nb 2] | watt per square metre (W/m2) | MT−3 | Radiant flux leaving (emitted, reflected and transmitted by) a surface per unit area. This is sometimes also confusingly called "intensity". |
| Spectral radiosity | Je,ν[nb 3] or Je,λ[nb 4] | watt per square metre per hertz (W⋅m−2⋅Hz−1) or watt per square metre, per metre (W/m3) | MT−2 or ML−1T−3 | Radiosity of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅m−2⋅nm−1. This is sometimes also confusingly called "spectral intensity". |
| Radiant exitance | Me[nb 2] | watt per square metre (W/m2) | MT−3 | Radiant flux emitted by a surface per unit area. This is the emitted component of radiosity. "Radiant emittance" is an old term for this quantity. This is sometimes also confusingly called "intensity". |
| Spectral exitance | Me,ν[nb 3] or Me,λ[nb 4] | watt per square metre per hertz (W⋅m−2⋅Hz−1) or watt per square metre, per metre (W/m3) | MT−2 or ML−1T−3 | Radiant exitance of a surface per unit frequency or wavelength. The latter is commonly measured in W⋅m−2⋅nm−1. "Spectral emittance" is an old term for this quantity. This is sometimes also confusingly called "spectral intensity". |
| Radiant exposure | He | joule per square metre (J/m2) | MT−2 | Radiant energy received by a surface per unit area, or equivalently irradiance of a surface integrated over time of irradiation. This is sometimes also called "radiant fluence". |
| Spectral exposure | He,ν[nb 3] or He,λ[nb 4] | joule per square metre per hertz (J⋅m−2⋅Hz−1) or joule per square metre, per metre (J/m3) | MT−1 or ML−1T−2 | Radiant exposure of a surface per unit frequency or wavelength. The latter is commonly measured in J⋅m−2⋅nm−1. This is sometimes also called "spectral fluence". |
| Hemispherical emissivity | ε | 1 | | Radiant exitance of a surface, divided by that of a black body at the same temperature as that surface. |
| Spectral hemispherical emissivity | εν or ελ | 1 | | Spectral exitance of a surface, divided by that of a black body at the same temperature as that surface. |
| Directional emissivity | εΩ | 1 | | Radiance emitted by a surface, divided by that emitted by a black body at the same temperature as that surface. |
| Spectral directional emissivity | εΩ,ν or εΩ,λ | 1 | | Spectral radiance emitted by a surface, divided by that of a black body at the same temperature as that surface. |
| Hemispherical absorptance | A | 1 | | Radiant flux absorbed by a surface, divided by that received by that surface. This should not be confused with "absorbance". |
| Spectral hemispherical absorptance | Aν or Aλ | 1 | | Spectral flux absorbed by a surface, divided by that received by that surface. This should not be confused with "spectral absorbance". |
| Directional absorptance | AΩ | 1 | | Radiance absorbed by a surface, divided by the radiance incident onto that surface. This should not be confused with "absorbance". |
| Spectral directional absorptance | AΩ,ν or AΩ,λ | 1 | | Spectral radiance absorbed by a surface, divided by the spectral radiance incident onto that surface. This should not be confused with "spectral absorbance". |
| Hemispherical reflectance | R | 1 | | Radiant flux reflected by a surface, divided by that received by that surface. |
| Spectral hemispherical reflectance | Rν or Rλ | 1 | | Spectral flux reflected by a surface, divided by that received by that surface. |
| Directional reflectance | RΩ | 1 | | Radiance reflected by a surface, divided by that received by that surface. |
| Spectral directional reflectance | RΩ,ν or RΩ,λ | 1 | | Spectral radiance reflected by a surface, divided by that received by that surface. |
| Hemispherical transmittance | T | 1 | | Radiant flux transmitted by a surface, divided by that received by that surface. |
| Spectral hemispherical transmittance | Tν or Tλ | 1 | | Spectral flux transmitted by a surface, divided by that received by that surface. |
| Directional transmittance | TΩ | 1 | | Radiance transmitted by a surface, divided by that received by that surface. |
| Spectral directional transmittance | TΩ,ν or TΩ,λ | 1 | | Spectral radiance transmitted by a surface, divided by that received by that surface. |
| Hemispherical attenuation coefficient | μ | reciprocal metre (m−1) | L−1 | Radiant flux absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Spectral hemispherical attenuation coefficient | μν or μλ | reciprocal metre (m−1) | L−1 | Spectral radiant flux absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Directional attenuation coefficient | μΩ | reciprocal metre (m−1) | L−1 | Radiance absorbed and scattered by a volume per unit length, divided by that received by that volume. |
| Spectral directional attenuation coefficient | μΩ,ν or μΩ,λ | reciprocal metre (m−1) | L−1 | Spectral radiance absorbed and scattered by a volume per unit length, divided by that received by that volume. |
See also: SI · Radiometry · Photometry
1. ^ Standards organizations recommend that radiometric quantities should be denoted with suffix "e" (for "energetic") to avoid confusion with photometric or photon quantities.
2. Alternative symbols sometimes seen: W or E for radiant energy, P or F for radiant flux, I for irradiance, W for radiant exitance.
3. Spectral quantities given per unit frequency are denoted with suffix "ν" (Greek)—not to be confused with suffix "v" (for "visual") indicating a photometric quantity.
4. Spectral quantities given per unit wavelength are denoted with suffix "λ" (Greek).
5. ^ a b Directional quantities are denoted with suffix "Ω" (Greek).
## References
1. ^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "Absorptance". doi:10.1351/goldbook.A00035
2. ^ a b c d "Thermal insulation — Heat transfer by radiation — Physical quantities and definitions". ISO 9288:1989. ISO catalogue. 1989. Retrieved 2015-03-15.
Absorptivity
In science, the term absorptivity may refer to:
Molar absorptivity, in chemistry, a measurement of how strongly a chemical species absorbs light at a given wavelength
Absorptance, in physics, the fraction of radiation absorbed at a given wavelength. See also the "absorptivity" section of the "emissivity" article for the radiometric aspect.
Attenuation coefficient
For "attenuation coefficient" as it applies to electromagnetic theory and telecommunications see Attenuation constant. For the "mass attenuation coefficient", see Mass attenuation coefficient.The attenuation coefficient or narrow-beam attenuation coefficient characterizes how easily a volume of material can be penetrated by a beam of light, sound, particles, or other energy or matter. A large attenuation coefficient means that the beam is quickly "attenuated" (weakened) as it passes through the medium, and a small attenuation coefficient means that the medium is relatively transparent to the beam. The SI unit of attenuation coefficient is the reciprocal metre (m−1). Extinction coefficient is an old term for this quantity but is still used in meteorology and climatology. Most commonly, the quantity measures the number of downward e-foldings of the original intensity that will be had as the energy passes through a unit (e.g. one meter) thickness of material, so that an attenuation coefficient of 1 m-1 means that after passing through 1 metre, the radiation will be reduced by a factor of e, and for material with a coefficient of 2 m-1, it will be reduced twice by e, or e2. Other measures may use a different factor than e, such as the decadic attenuation coefficient below.
Emissivity
The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. Thermal radiation is electromagnetic radiation and may include both visible radiation (light) and infrared radiation, which is not visible to human eyes. The thermal radiation from very hot objects is easily visible to the eye. Quantitatively, emissivity is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature as given by the Stefan–Boltzmann law. The ratio varies from 0 to 1. The surface of a perfect black body (with an emissivity of 1) emits thermal radiation at the rate of approximately 448 watts per square metre at room temperature (25 °C, 298.15 K); all real objects have emissivities less than 1.0, and emit radiation at correspondingly lower rates. Emissivities are important in several contexts:
insulated windows. – Warm surfaces are usually cooled directly by air, but they also cool themselves by emitting thermal radiation. This second cooling mechanism is important for simple glass windows, which have emissivities close to the maximum possible value of 1.0. "Low-E windows" with transparent low emissivity coatings emit less thermal radiation than ordinary windows. In winter, these coatings can halve the rate at which a window loses heat compared to an uncoated glass window.
solar heat collectors. – Similarly, solar heat collectors lose heat by emitting thermal radiation. Advanced solar collectors incorporate selective surfaces that have very low emissivities. These collectors waste very little of the solar energy through emission of thermal radiation.
thermal shielding. – For the protection of structures from high surface temperatures, such as reusable spacecraft or hypersonic aircraft, high emissivity coatings (HECs), with emissivity values near 0.9, are applied on the surface of insulating ceramics. This facilitates radiative cooling and protection of the underlying structure and is an alternative to ablative coatings, used in single-use reentry capsules.
planetary temperatures. – The planets are solar thermal collectors on a large scale. The temperature of a planet's surface is determined by the balance between the heat absorbed by the planet from sunlight, heat emitted from its core, and thermal radiation emitted back into space. Emissivity of a planet is determined by the nature of its surface and atmosphere.
temperature measurements. – Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; no actual contact with the object is needed. The calibration of these instruments involves the emissivity of the surface that's being measured.
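The "approximately 448 watts per square metre at room temperature" figure quoted above can be checked directly from the Stefan–Boltzmann law M = εσT⁴:

```python
# Check of the black-body exitance figure quoted in the text, using the
# Stefan-Boltzmann law M = epsilon * sigma * T^4.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_k, emissivity=1.0):
    """Radiant exitance in W/m^2 of a surface at the given temperature."""
    return emissivity * SIGMA * temperature_k ** 4

print(round(radiant_exitance(298.15), 1))  # 448.1 -- consistent with the ~448 W/m^2 quoted above
```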
Exposure (photography)
In photography, exposure is the amount of light per unit area (the image plane illuminance times the exposure time) reaching a photographic film or electronic image sensor, as determined by shutter speed, lens aperture and scene luminance. Exposure is measured in lux seconds, and can be computed from exposure value (EV) and scene luminance in a specified region.
In photographic jargon, an exposure is a single shutter cycle. For example: a long exposure refers to a single, protracted shutter cycle to capture enough low-intensity light, whereas a multiple exposure involves a series of relatively brief shutter cycles; effectively layering a series of photographs in one image. For the same film speed, the accumulated photometric exposure (Hv) should be similar in both cases.
Intensity (physics)
In physics, intensity is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2). It is used most frequently with waves (e.g. sound or light), in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler.
The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech.
Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density).
Irradiance
In radiometry, irradiance is the radiant flux (power) received by a surface per unit area. The SI unit of irradiance is the watt per square metre (W/m2). The CGS unit erg per square centimetre per second (erg·cm−2·s−1) is often used in astronomy. Irradiance is often called intensity because it has the same physical dimensions, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity.
Spectral irradiance is the irradiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The two forms have different dimensions: spectral irradiance of a frequency spectrum is measured in watts per square metre per hertz (W·m−2·Hz−1), while spectral irradiance of a wavelength spectrum is measured in watts per square metre per metre (W·m−3), or more commonly watts per square metre per nanometre (W·m−2·nm−1).
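The two spectral forms are related through ν = c/λ: since Eν|dν| = Eλ|dλ|, one has Eλ = Eν·c/λ². A small conversion sketch (the input value is illustrative, not from the article):

```python
# Sketch: converting spectral irradiance per unit frequency (W m^-2 Hz^-1)
# to per unit wavelength (W m^-3), using E_lambda = E_nu * c / lambda^2.

C = 299_792_458.0  # speed of light, m/s

def per_frequency_to_per_wavelength(e_nu, wavelength_m):
    """Convert W m^-2 Hz^-1 to W m^-3 at the given wavelength."""
    return e_nu * C / wavelength_m ** 2

e_nu = 1e-12             # W m^-2 Hz^-1 (assumed value)
wavelength = 500e-9      # 500 nm
e_lambda = per_frequency_to_per_wavelength(e_nu, wavelength)
print(e_lambda)          # ~1.199e9 W m^-3, i.e. about 1.2 W m^-2 nm^-1
```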
Optical depth
In physics, optical depth or optical thickness is the natural logarithm of the ratio of incident to transmitted radiant power through a material, and spectral optical depth or spectral optical thickness is the natural logarithm of the ratio of incident to transmitted spectral radiant power through a material. Optical depth is dimensionless, and in particular is not a length, though it is a monotonically increasing function of optical path length, and approaches zero as the path length approaches zero. The use of the term "optical density" for optical depth is discouraged.

In chemistry, a closely related quantity called "absorbance" or "decadic absorbance" is used instead of optical depth: the common logarithm of the ratio of incident to transmitted radiant power through a material, that is, the optical depth divided by ln 10.
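A quick numeric sketch of these two definitions (the flux values are made-up): optical depth is the natural log of the incident-to-transmitted power ratio, and decadic absorbance is that same quantity divided by ln 10.

```python
import math

# Sketch: optical depth tau = ln(Phi_in / Phi_out), and the chemist's
# decadic absorbance A10 = tau / ln(10). Flux values are illustrative.

phi_incident = 100.0     # W (assumed)
phi_transmitted = 10.0   # W (assumed)

tau = math.log(phi_incident / phi_transmitted)  # natural logarithm
absorbance_10 = tau / math.log(10)              # = log10(phi_in / phi_out)

print(round(tau, 4))            # 2.3026 -- optical depth
print(round(absorbance_10, 4))  # 1.0    -- one decade of attenuation
```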
Photometry (optics)
Photometry is the science of the measurement of light, in terms of its perceived brightness to the human eye. It is distinct from radiometry, which is the science of measurement of radiant energy (including light) in terms of absolute power. In modern photometry, the radiant power at each wavelength is weighted by a luminosity function that models human brightness sensitivity. Typically, this weighting function is the photopic sensitivity function, although the scotopic function or other functions may also be applied in the same way.
Radiance
In radiometry, radiance is the radiant flux emitted, reflected, transmitted or received by a given surface, per unit solid angle per unit projected area. Spectral radiance is the radiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. These are directional quantities. The SI unit of radiance is the watt per steradian per square metre (W·sr−1·m−2), while that of spectral radiance in frequency is the watt per steradian per square metre per hertz (W·sr−1·m−2·Hz−1) and that of spectral radiance in wavelength is the watt per steradian per square metre, per metre (W·sr−1·m−3)—commonly the watt per steradian per square metre per nanometre (W·sr−1·m−2·nm−1). The microflick is also used to measure spectral radiance in some fields. Radiance is used to characterize diffuse emission and reflection of electromagnetic radiation, or to quantify emission of neutrinos and other particles. Historically, radiance is called "intensity" and spectral radiance is called "specific intensity". Many fields still use this nomenclature. It is especially dominant in heat transfer, astrophysics and astronomy. "Intensity" has many other meanings in physics, with the most common being power per unit area.
Radiant energy
In physics, and in particular as measured by radiometry, radiant energy is the energy of electromagnetic and gravitational radiation. As energy, its SI unit is the joule (J). The quantity of radiant energy may be calculated by integrating radiant flux (or power) with respect to time. The symbol Qe is often used throughout literature to denote radiant energy ("e" for "energetic", to avoid confusion with photometric quantities). In branches of physics other than radiometry, electromagnetic energy is referred to using E or W. The term is used particularly when electromagnetic radiation is emitted by a source into the surrounding environment. This radiation may be visible or invisible to the human eye.
Radiant energy density
In radiometry, radiant energy density is the radiant energy per unit volume. The SI unit of radiant energy density is the joule per cubic metre (J/m3).
Radiant exitance
In radiometry, radiant exitance or radiant emittance is the radiant flux emitted by a surface per unit area, whereas spectral exitance or spectral emittance is the radiant exitance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. This is the emitted component of radiosity. The SI unit of radiant exitance is the watt per square metre (W/m2), while that of spectral exitance in frequency is the watt per square metre per hertz (W·m−2·Hz−1) and that of spectral exitance in wavelength is the watt per square metre per metre (W·m−3)—commonly the watt per square metre per nanometre (W·m−2·nm−1). The CGS unit erg per square centimeter per second (erg·cm−2·s−1) is often used in astronomy. Radiant exitance is often called "intensity" in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.
Radiant exposure
In radiometry, radiant exposure or fluence is the radiant energy received by a surface per unit area, or equivalently the irradiance of a surface, integrated over time of irradiation, and spectral exposure or is the radiant exposure per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant exposure is the joule per square metre (J/m2), while that of spectral exposure in frequency is the joule per square metre per hertz (J⋅m−2⋅Hz−1) and that of spectral exposure in wavelength is the joule per square metre per metre (J/m3)—commonly the joule per square metre per nanometre (J⋅m−2⋅nm−1).
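The "irradiance integrated over time of irradiation" definition can be sketched as a simple numerical integration; the sampled irradiance values below are invented for the example:

```python
# Sketch: radiant exposure H = integral of E(t) dt, approximated with the
# trapezoidal rule over sampled irradiance values (illustrative numbers).

def radiant_exposure(times_s, irradiances_w_m2):
    """Trapezoidal integration of E(t) in W/m^2 over t in seconds -> J/m^2."""
    h = 0.0
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        h += 0.5 * (irradiances_w_m2[i] + irradiances_w_m2[i - 1]) * dt
    return h

# Constant 200 W/m^2 for 10 s gives exactly 2000 J/m^2:
print(radiant_exposure([0, 5, 10], [200.0, 200.0, 200.0]))  # 2000.0
```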
Radiant flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted or received, per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), that is the joule per second (J/s) in SI base units, while that of spectral flux in frequency is the watt per hertz (W/Hz) and that of spectral flux in wavelength is the watt per metre (W/m)—commonly the watt per nanometre (W/nm).
Radiant intensity
In radiometry, radiant intensity is the radiant flux emitted, reflected, transmitted or received, per unit solid angle, and spectral intensity is the radiant intensity per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. These are directional quantities. The SI unit of radiant intensity is the watt per steradian (W/sr), while that of spectral intensity in frequency is the watt per steradian per hertz (W·sr−1·Hz−1) and that of spectral intensity in wavelength is the watt per steradian per metre (W·sr−1·m−1)—commonly the watt per steradian per nanometre (W·sr−1·nm−1). Radiant intensity is distinct from irradiance and radiant exitance, which are often called intensity in branches of physics other than radiometry. In radio-frequency engineering, radiant intensity is sometimes called radiation intensity.
Radiometry
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. Radiometry is distinct from quantum techniques such as photon counting.
The use of radiometers to determine the temperature of objects and gasses by measuring radiation flux is called pyrometry. Handheld pyrometer devices are often marketed as infrared thermometers.
Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term.
Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength.
Reflectance
Reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power that is reflected at an interface. The reflectance spectrum or spectral reflectance curve is the plot of the reflectance as a function of wavelength.
Transmittance
Transmittance of the surface of a material is its effectiveness in transmitting radiant energy. It is the fraction of incident electromagnetic power that is transmitted through a sample, in contrast to the transmission coefficient, which is the ratio of the transmitted to incident electric field.

Internal transmittance refers to energy loss by absorption, whereas (total) transmittance is that due to absorption, scattering, reflection, etc.
Wall-plug efficiency
In optics, wall-plug efficiency or radiant efficiency is the energy conversion efficiency with which the system converts electrical power into optical power. It is defined as the ratio of the radiant flux (i.e., the total optical output power) to the input electrical power.

In laser systems, this efficiency includes losses in the power supply and also the power required for a cooling system, not just the laser itself.
This page is based on a Wikipedia article written by authors (here).
Text is available under the CC BY-SA 3.0 license; additional terms may apply.
Images, videos and audio are available under their respective licenses. | 2019-02-22 04:05:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7949358820915222, "perplexity": 1508.9866509003043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247513222.88/warc/CC-MAIN-20190222033812-20190222055812-00242.warc.gz"} |
http://www.neverendingbooks.org/tag/apple | # Tag:apple
• ## Snow leopard + wordpress + latex problem
Ever since I’ve upgraded to Snow Leopard I’ve been having problems with the webserver. At first there were the ‘obvious’ problems : mysql-connection lost and php-error message. These were swiftly dealt with using the excellent Snow Leopard, Apache, PHP, MySQL and WordPress! advice from ‘tady’. Right now, access to this blog is extremely slow (and […]
About a year ago I did a series of posts on games associated to the Mathieu sporadic group $M_{12}$, starting with a post on Conway’s puzzle M(13) and continuing with a discussion of mathematical blackjack. The idea at the time was to write a book for a general audience, as discussed at the start…
• ## censured post : bloggers’ block
Below an up-till-now hidden post, written november last year, trying to explain the long blog-silence at neverendingbooks during october-november 2007… A couple of months ago a publisher approached me, out of the blue, to consider writing a book about mathematics for the general audience (in Dutch (?!)). Okay, I brought this on myself hinting at…
MacBookAir? Is this really the best Apple could come up with? A laptop you can slide under the door or put in an envelop? Yeez… Probably the hot-air-book is about as thick as an iTouch. The first thing I did was to buy a leather case to protect the vulnerable thing, making it as thick…
• ## top iTouch hacks
So, you did jailbreak your iTouch and did install some fun or useful stuff via the Install.app … but then, suddenly, the next program on your wish-list fails to install ??!! I know you hate to do drastic things to your iTouch, but sooner or later you’ll have to do it, so why not NOW?…
• ## first things first : jailbreak
You may have surmised it from reading this post : Santa brought me an iPod Touch! (( or rather : Santa brought PD2 an iTouch and knowing his jealous nature ordered one for him as well… )) I’ve used an iPod Classic to transfer huge files between home (MacBook) and office (iMac) as well as for…
• ## NeB on Leopard and iPhone
If you have an iPhone or iPod Touch and point your Safari browser to this blog you can now view it in optimised format, thanks to the iWPhone WordPress Plugin and Theme. I’ve only changed the CSS slightly to have the same greeny look-and-feel of the current redoable theme. Upgrading a WordPress-blog running under Tiger…
• ## problema bovinum
Suppose for a moment that some librarian at the Bodleian Library announces that (s)he discovered an old encrypted book attributed to Isaac Newton. After a few months of failed attempts, the code is finally cracked and turns out to use a Public Key system based on the product of two gigantic prime numbers, $2^{32582657}-1$…
• ## The Mathieu groupoid (1)
Conway’s puzzle M(13) is a variation on the 15-puzzle played with the 13 points in the projective plane $\mathbb{P}^2(\mathbb{F}_3)$. The desired position is given on the left where all the counters are placed at at the points having that label (the point corresponding to the hole in the drawing has label 0). A typical…
• ## Conway’s puzzle M(13)
In the series “Mathieu games” we describe some mathematical games and puzzles connected to simple groups. We will encounter Conway’s M(13)-puzzle, the classic Loyd’s 15-puzzle and mathematical blackjack based on Mathieu’s sporadic simple group M(12). | 2022-12-05 07:19:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26679062843322754, "perplexity": 3839.965697235771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00683.warc.gz"} |
https://hal.inria.fr/inria-00436063v2 | Modular Las Vegas Algorithms for Polynomial Absolute Factorization - Archive ouverte HAL Access content directly
Journal Articles Journal of Symbolic Computation Year : 2010
## Modular Las Vegas Algorithms for Polynomial Absolute Factorization
Cristina Bertone (corresponding author), Guillaume Chèze, André Galligo
#### Abstract
Let $f(X,Y) \in \mathbb{Z}[X,Y]$ be an irreducible polynomial over $\mathbb{Q}$. We give a Las Vegas absolute irreducibility test based on a property of the Newton polytope of $f$, or more precisely, of $f$ modulo some prime integer $p$. The same idea of choosing a $p$ satisfying some prescribed properties together with LLL is used to provide a new strategy for absolute factorization of $f(X,Y)$. We present our approach in the bivariate case but the techniques extend to the multivariate case. Maple computations show that it is efficient and promising as we are able to factorize some polynomials of degree up to 400.
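As a small illustration of the abstract's central object (not of the paper's algorithm): the Newton polytope of a bivariate polynomial is the convex hull of the exponent vectors of its monomials, which a few lines of plain Python can compute. The example polynomial below is our own choice.

```python
# Sketch: the Newton polytope of a bivariate polynomial is the convex
# hull of its monomial exponent vectors. The paper's test relies on
# properties of this polytope for f mod p; here we only build the hull.

def convex_hull(points):
    """Andrew's monotone chain convex hull; returns hull vertices."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# f(X, Y) = Y^2 - X^3 - X has support {(0,2), (3,0), (1,0)}
support = [(0, 2), (3, 0), (1, 0)]
print(convex_hull(support))  # -> [(0, 2), (1, 0), (3, 0)]
```

All three exponent points are vertices of the polytope here; an interior exponent point would be dropped by the hull computation.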
#### Domains
Mathematics [math] Algebraic Geometry [math.AG]
### Dates and versions
inria-00436063 , version 1 (25-11-2009)
inria-00436063 , version 2 (28-01-2010)
### Identifiers
• HAL Id : inria-00436063 , version 2
### Cite
Cristina Bertone, Guillaume Chèze, André Galligo. Modular Las Vegas Algorithms for Polynomial Absolute Factorization. Journal of Symbolic Computation, 2010, 45 (12), pp.1280-1295. ⟨10.1016/j.jsc.2010.06.010⟩. ⟨inria-00436063v2⟩
236 View | 2023-02-05 23:48:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4924911558628082, "perplexity": 5289.093141892967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00505.warc.gz"} |
https://intelligencemission.com/free-electricity-using-water-free-recharge-electricity-offer.html | But I will send you the plan for it whenever you are ready. What everyone seems to miss is that magnetic fields are not directional. Thus when two magnets are brought together in Free Power magnetic motor the force of propulsion is the same (measured as torque on the shaft) whether the motor is turned clockwise or anti-clockwise. Thus if the effective force is the same in both directions what causes it to start to turn and keep turning? (Hint – nothing!) Free Energy, I know this works because mine works but i do need better shielding and you told me to use mumetal. What is this and where do you get it from? Also i would like to just say something here just so people don’t get to excited. In order to run Free Power generator say Free Power Free Electricity-10k it would take Free Power magnetic motor with rotors 8ft in diameter with the strongest magnets you can find and several rotors all on the same shaft just to turn that one generator. Thats alot of money in magnets. One example of the power it takes is this.
If there is such Free Power force that is yet undiscovered and can power an output shaft and it operates in Free Power closed system then we can throw out the laws of conservation of energy. I won’t hold my breath. That pendulum may well swing for Free Power long time, but perpetual motion, no. The movement of the earth causes it to swing. Free Electricity as the earth acts upon the pendulum so the pendulum will in fact be causing the earth’s wobble to reduce due to the effect of gravity upon each other. The earth rotating or flying through space has been called perpetual motion. Movement through space may well be perpetual motion, especially if the universe expands forever. But no laws are being bent or broken. Context is what it is all about. Mr. Free Electricity, again I think the problem you are having is semantics. “Perpetual- continuing or enduring forever; everlasting. ” The modern terms being used now are “self-sustaining or sustainable. ” Even if Mr. Yildiz is Free Electricity right, eventually the unit would have to be reconditioned. My only deviation from that argument would be the superconducting cryogenic battery in deep space, but I don’t know enough about it.
Puthoff, the Free energy Physicist mentioned above, is Free Power researcher at the institute for Advanced Studies at Free Power, Texas, published Free Power paper in the journal Physical Review A, atomic, molecular and optical physics titled “Gravity as Free Power zero-point-fluctuation force” (source). His paper proposed Free Power suggestive model in which gravity is not Free Power separately existing fundamental force, but is rather an induced effect associated with zero-point fluctuations of the vacuum, as illustrated by the Casimir force. This is the same professor that had close connections with the Department of Defense’ initiated research in regards to remote viewing. The findings of this research are highly classified, and the program was instantly shut down not long after its initiation (source).
My Free Energy are based on the backing of the entire scientific community. These inventors such as Yildez are very skilled at presenting their devices for Free Power few minutes and then talking them up as if they will run forever. Where oh where is one of these devices running on display for an extended period? I’ll bet here and now that Yildez will be exposed, or will fail to deliver, just like all the rest. A video is never proof of anything. Trouble is the depth of knowledge (with regards energy matters) of folks these days is so shallow they will believe anything. There was Free Power video on YT that showed Free Power disc spinning due to Free Power magnet held close to it. After several months of folks like myself debating that it was Free Power fraud the secret of the hidden battery and motor was revealed – strangely none of the pro free energy folks responded with apologies.
The demos seem well-documented by the scientific community. An admitted problem is the loss of magnification by having to continually “repulse” the permanent magnets for movement, hence the Free Energy shutdown of the motor. Some are trying to overcome this with some ingenious methods. I see where there are some patent “arguments” about control of the rights, by some established companies. There may be truth behind all this “madness. ”
## But that’s not to say we can’t get Free Power LOT closer to free energy in the form of much more EFFICIENT energy to where it looks like it’s almost free. Take LED technology as Free Power prime example. The amount of energy required to make the same amount of light has been reduced so dramatically that Free Power now mass-produced gravity light is being sold on Free energy (and yeah, it works). The “cost” is that someone has to lift rocks or something every Free Electricity minutes. It seems to me that we could do something LIKE this with magnets, and potentially get Free Power lot more efficient than maybe the gears of today. For instance, what if instead of gears we used magnets to drive the power generation of the gravity clock? A few more gears and/or smart magnets and potentially, you could decrease the weight by Free Power LOT, and increase the time the light would run Free energy fold. Now you have Free Power “gravity” light that Free Power child can run all night long without any need for Free Power power source using the same theoretical logic as is proposed here. Free energy ? Ridiculous. “Conservation of energy ” is one of the most fundamental laws of physics. Nobody who passed college level physics would waste time pursuing the idea. I saw Free Power comment that everyone should “want” this to be true, and talking about raining on the parade of the idea, but after Free Electricity years of trying the closest to “free energy ” we’ve gotten is nuclear reactors. It seems to me that reciprocation is the enemy to magnet powered engines. Remember the old Mazda Wankel advertisements?
The machine can then be returned and “recharged”. Another thought is short term storage of solar power. It would be way more efficient than battery storage. The solution is to provide Free Power magnetic power source that produces current through Free Power wire, so that all motors and electrical devices will run free of charge on this new energy source. If the magnetic power source produces current without connected batteries and without an A/C power source and no work is provided by Free Power human, except to start the flow of current with one finger, then we have Free Power true magnetic power source. I think that I have the solution and will begin building the prototype. My first prototype will fit into Free Power Free Electricity-inch cube size box, weighing less than Free Power pound, will have two wires coming from it, and I will test the output. Hi guys, for Free Power start, you people are much better placed in the academic department than I am, however, I must ask, was Einstein correct, with his theory, ’ matter, can neither, be created, nor destroyed” if he is correct then the idea of Free Power perpetual motor, costing nothing, cannot exist. Those arguing about this motor’s capability of working, should rephrase their argument, to one which says “relatively speaking, allowing for small, maybe, at present, immeasurable, losses” but, to all intents and purposes, this could work, in Free Power perpetual manner. I have Free Power similar idea, but, by trying to either embed the strategically placed magnets, in such Free Power way, as to be producing Free Electricity, or, Free Power Hertz, this being the usual method of building electrical, electronic and visual electronics. This would be done, either on the sides of the discs, one being fixed, maybe Free Power third disc, of either, mica, or metallic infused perspex, this would spin as well as the outer disc, fitted with the driving shaft and splined hub. Could anybody, build this? 
Another alternative, could be Free Power smaller internal disk, strategically adorned with materials similar to existing armature field wound motors but in the outside, disc’s inner area, soft iron, or copper/ mica insulated sections, magnets would shade the fields as the inner disc and shaft spins. Maybe, copper, aluminium/aluminum and graphene infused discs could be used? Please pull this apart, nay say it, or try to build it?Lets use Free Power slave to start it spinning, initially!! In some areas Eienstien was correct and in others he was wrong. His Theory of Special Realitivity used concepts taken from Lorentz. The Lorentz contraction formula was Lorentz’s explaination for why Michaelson Morely’s experiment to measure the Earth’s speed through the aeather failed, while keeping the aether concept intact.
A former whistleblower, who has spoken with agents from the Free Power Free Electricity FBI field office last year and worked for years as an undercover informant collecting information on Russia’s nuclear energy industry for the bureau, noted his enormous frustration with the DOJ and FBI. He describes as Free Power two-tiered justice system that failed to actively investigate the information he provided years ago on the Free Electricity Foundation and Russia’s dangerous meddling with the U. S. nuclear industry and energy industry during the Obama administration.
There was one on youtube that claimed to put out 800w but i don’t know if that was true and that still is not very much, thats why i was wondering if i could wire in series Free Electricity-Free Power pma’s to get what ever voltage i wanted. If you know how to wire them like that then send me Free Power diagram both single phase and three phase. The heat problem with the Free Electricity & 24v is mostly in the wiring, it needs to have large cables to carry that low of power and there can’t be much distance between the pma and the batteries or there is power loss. Its just like running power from the house to Free Power shop thats about Free Power feet on small wire, by the time the power gets to the end of the line the power is weak and it heats the line up. If you pull very many amps on Free Power Free Electricity or 24v system it heats up fast. Also, i don’t know the metric system. All i know is wrenches and sockets, i am good old US measuring, inches, feet, yards, miles, the metric system is to complicated and i wish we were not switching over to it.
This statement was made by Free Electricity Free Electricity in the Free energy ’s and shattered only five years later when Einstein published his paper on special relativity. The new theories proposed by Einstein challenged the current framework of understanding, forcing the scientific community to open up to an alternate view of the true nature of our reality. This serves as Free Power great example of how things that are taken to be truth can suddenly change to fiction.
Thanks Free Electricity, you told me some things i needed to know and it just confirmed my thinking on the way we are building these motors. My motor runs but not the way it needs to to be of any real use. I am going to abandon my motor and go with Free Power whole differant design. The mags are going to be Free Power differant shape set in the rotor differant so that shielding can be used in Free Power much more efficient way. Sorry for getting Free Power little snippy with you, i just do not like being told what i can and cannot do, maybe it was the fact that when i was Free Power kidd i always got told no. It’s something i still have Free Power problem with even at my age. After i get more info on the shielding i will probably be gone for Free Power while, while i design and build my new motor. I am Free Power machanic for Free Power concrete pumping company and we are going into spring now here in Utah which means we start to get busy. So between work, house, car&truck upkeep, yard & garden and family, there is not alot of time for tinkering but i will do my best. Free Power, please get back to us on the shielding. Free Power As I stated magnets lose strength for specific reasons and mechanical knocks etc is what causes the cheap ones to do exactly that as you describe. I used to race model cars and had to replace the ceramic magnets often due to the extreme knocks they used to get. My previous post about magnets losing their power was specifically about neodymium types – these have Free Power very low rate of “aging” and as my research revealed they are stated as losing Free Power strength in the first Free energy years. But extreme mishandling will shorten their life – normal use won’t. Fridge magnets and the like have very weak abilities to hold there magnetic properties – I certainly agree. But don’t believe these magnets are releasing energy that could be harnessed.
The complex that results, i. e. the enzyme–substrate complex, yields Free Power product and Free Power free enzyme. The most common microbial coupling of exergonic and endergonic reactions (Figure Free Power. Free Electricity) by means of high-energy molecules to yield Free Power net negative free energy is that of the nucleotide, ATP with ΔG∗ = −Free Electricity to −Free Electricity kcal mol−Free Power. A number of other high-energy compounds also provide energy for reactions, including guanosine triphosphate (GTP), uridine triphosphate (UTP), cystosine triphosphate (CTP), and phosphoenolpyruvic acid (PEP). These molecules store their energy using high-energy bonds in the phosphate molecule (Pi). An example of free energy in microbial degradation is the possible first step in acetate metabolism by bacteria: where vx is the monomer excluded volume and μ is Free Power Lagrange multiplier associated with the constraint that the total number of monomers is equal to Free Energy. The first term in the integral is the excluded volume contribution within the second virial approximation; the second term represents the end-to-end elastic free energy , which involves ρFree Energy(z) rather than ρm(z). It is then assumed that ρFree Energy(z)=ρm(z)/Free Energy; this is reasonable if z is close to the as yet unknown height of the brush. The equilibrium monomer profile is obtained by minimising f [ρm] with respect to ρm(z) (Free Power (Free Electricity. Free Power. Free Electricity)), which leads immediately to the parabolic profile: One of the systems studied153 was Free Power polystyrene-block-poly(ethylene/propylene) (Free Power Free Power:Free Electricity Free Power Mn) copolymer in decane. Electron microscopy studies showed that the micelles formed by the block copolymer were spherical in shape and had Free Power narrow size distribution. Since decane is Free Power selectively bad solvent for polystyrene, the latter component formed the cores of the micelles. 
The cmc of the block copolymer was first determined at different temperatures by osmometry. Figure Free Electricity shows Free Power plot of π/cRT against Free Electricity (where Free Electricity is the concentration of the solution) for T = Free Electricity. Free Power °C. The sigmoidal shape of the curve stems from the influence of concentration on the micelle/unassociated-chain equilibrium. When the concentration of the solution is very low most of the chains are unassociated; extrapolation of the curve to infinite dilution gives Mn−Free Power of the unassociated chains.
Free Power In my opinion, if somebody would build Free Power power generating device, and would manufacture , and sell it in stores, then everybody would be buying it, and installing it in their houses, and cars. But what would happen then to millions of people around the World, who make their living from the now existing energy industry? I think if something like that would happen, the World would be in chaos. I have one more question. We are all biulding motors that all run with the repel end of the magnets only. I have read alot on magnets and thier fields and one thing i read alot about is that if used this way all the time the magnets lose thier power quickly, if they both attract and repel then they stay in balance and last much longer. My question is in repel mode how long will they last? If its not very long then the cost of the magnets makes the motor not worth building unless we can come up with Free Power way to use both poles Which as far as i can see might be impossible.
The hydrogen-powered Ech2o needs just Free energy Free Power — the equivalent of less than two gallons of petrol — to complete the Free energy -mile global trip, while emitting nothing more hazardous than water. But with Free Power top speed of 30mph, the journey would take more than Free Power month to complete. Ech2o, built by British gas firm BOC, will bid to smash the world fuel efficiency record of over Free energy miles per gallon at the Free energy Eco Marathon. The record is currently…. Free Power, 385 km/per liter [over Free Electricity mpg!]. Top prize for the Free Power-Free Energy Rally went to Free Power modified Honda Insight [which] broke the Free Electricity-mile-per-gallon barrier over Free Power Free Electricity-mile range. The car actually got Free Electricity miles-per gallon. St. Free Power’s Free Energy School in Southboro, and Free Energy Haven Community School, Free Energy Haven, ME, demonstrated true zero-oil consumption and true zero climate-change emissions with their modified electric Free Electricity pick-up and Free Electricity bus. Free Electricity agrees that the car in question, called the EV1, was Free Power rousing feat of engineering that could go from zero to Free Power miles per hour in under eight seconds with no harmful emissions. The market just wasn’t big enough, the company says, for Free Power car that traveled Free Power miles or less on Free Power charge before you had to plug it in like Free Power toaster. Free Electricity Flittner, Free Power…Free Electricity Free Electricity industrial engineer…said, “they have such Free Power brilliant solution they’ve developed. They’ve put it on the market and proved it works. Free Energy still want it and they’re taking it away and destroying it. ”Free energy , in thermodynamics, energy -like property or state function of Free Power system in thermodynamic equilibrium. 
Free energy has the dimensions of energy, and its value is determined by the state of the system and not by its history. Free energy is used to determine how systems change and how much work they can produce. It is expressed in two forms: the Helmholtz free energy F, sometimes called the work function, and the Gibbs free energy G. If U is the internal energy of a system, PV the pressure-volume product, and TS the temperature-entropy product (T being the temperature above absolute zero), then F = U − TS and G = U + PV − TS. The latter equation can also be written in the form G = H − TS, where H = U + PV is the enthalpy. Free energy is an extensive property, meaning that its magnitude depends on the amount of a substance in a given thermodynamic state. The changes in free energy, ΔF or ΔG, are useful in determining the direction of spontaneous change and evaluating the maximum work that can be obtained from thermodynamic processes involving chemical or other types of reactions. In a reversible process the maximum useful work that can be obtained from a system under constant temperature and constant volume is equal to the (negative) change in the Helmholtz free energy, −ΔF = −ΔU + TΔS, and the maximum useful work under constant temperature and constant pressure (other than work done against the atmosphere) is equal to the (negative) change in the Gibbs free energy, −ΔG = −ΔH + TΔS. In each case, the TΔS entropy term represents the heat absorbed by the system from a heat reservoir at temperature T under conditions where the system does maximum work. By conservation of energy, the total work done also includes the decrease in internal energy U or enthalpy H as the case may be.
For example, the energy for the maximum electrical work done by a battery as it discharges comes both from the decrease in its internal energy due to chemical reactions and from the heat TΔS it absorbs in order to keep its temperature constant, which is the ideal maximum heat that can be absorbed. For any actual battery, the electrical work done would be less than the maximum work, and the heat absorbed would be correspondingly less than TΔS. Changes in free energy can be used to determine whether changes of state can occur spontaneously. Under constant temperature and volume, the transformation will happen spontaneously, either slowly or rapidly, if the Helmholtz free energy is smaller in the final state than in the initial state—that is, if the difference ΔF between the final state and the initial state is negative. Under constant temperature and pressure, the transformation of state will occur spontaneously if the change in the Gibbs free energy, ΔG, is negative. Phase transitions provide instructive examples, as when ice melts to form water at 0.01 °C (T = 273.16 K), with the solid and liquid phases in equilibrium. Then ΔH = 79.7 calories per gram is the latent heat of fusion, and by definition ΔS = ΔH/T = 0.292 calories per gram∙K is the entropy change. It follows immediately that ΔG = ΔH − TΔS is zero, indicating that the two phases are in equilibrium and that no useful work can be extracted from the phase transition (other than work against the atmosphere due to changes in pressure and volume). Furthermore, ΔG is negative for T > 273.16 K, indicating that the direction of spontaneous change is from ice to water, and ΔG is positive for T < 273.16 K, where the reverse reaction of freezing takes place.
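The melting-ice energetics discussed in this passage reduce to the sign of ΔG = ΔH − TΔS. A minimal numeric check, using standard handbook values for water (assumed here, not taken from this page):

```python
# Sign of ΔG = ΔH − T·ΔS for melting ice (constant pressure).
# Handbook values assumed: latent heat of fusion ΔH = 79.7 cal/g,
# solid/liquid equilibrium at T = 273.16 K, hence ΔS = ΔH / 273.16.
dH = 79.7         # cal/g
T_eq = 273.16     # K
dS = dH / T_eq    # ≈ 0.292 cal/(g·K)

for T in (263.16, 273.16, 283.16):
    dG = dH - T * dS
    print(f"T = {T:.2f} K -> dG = {dG:+.3f} cal/g")
# dG > 0 below 273.16 K (freezing favored), dG = 0 at equilibrium,
# dG < 0 above 273.16 K (melting is spontaneous).
```

The sign flip at the equilibrium temperature is exactly the "direction of spontaneous change" argument made in the text.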
“A century from now, it will be well known that: the vacuum of space which fills the universe is itself the real substratum of the universe; vacuum in a circulating state becomes matter; the electron is the fundamental particle of matter and is a vortex of vacuum with a vacuum-less void at the center and it is dynamically stable; the speed of light relative to vacuum is the maximum speed that nature has provided and is an inherent property of the vacuum; vacuum is a subtle fluid unknown in material media; vacuum is mass-less, continuous, non viscous, and incompressible and is responsible for all the properties of matter; and that vacuum has always existed and will exist forever…. Then scientists, engineers and philosophers will bend their heads in shame knowing that modern science ignored the vacuum in our chase to discover reality for more than a century. ” – Tewari
In this article, we covered Free Electricity different perspectives of what this song is about. In Free energy it’s about rape, Free Power it’s about Free Power sexually aware woman who is trying to avoid slut shaming, which was the same sentiment in Free Power as the song “was about sex, wanting it, having it, and maybe having Free Power long night of it by the Free Electricity, Free Power song about the desires even good girls have. ”
The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and vertical axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to 1. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, G°. The key to understanding the relationship between G° and K is recognizing that the magnitude of G° tells us how far the standard state is from equilibrium. The smaller the value of G°, the closer the standard state is to equilibrium. The larger the value of G°, the further the reaction has to go to reach equilibrium. The relationship between G° and the equilibrium constant for a chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is a shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when a sealed tube containing NO2 gas is immersed in liquid nitrogen. There is a drastic decrease in the amount of NO2 in the tube as it is cooled to −196 °C. Free energy is the idea that a low-cost power source can be found that requires little to no input to generate a significant amount of electricity. Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from the environment, such as quantum foam in the case of zero-point energy devices. Not all “free energy” claims are necessarily bunk, and not to be confused with thermodynamic free energy.
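The link between the standard-state free energy and the equilibrium constant discussed above is ΔG° = −RT ln K. A quick numeric sketch (the function name dG0 and the sample K values are ours):

```python
import math

R = 8.314    # gas constant, J/(mol·K)
T = 298.15   # K

def dG0(K):
    """Standard-state free energy of reaction from an equilibrium constant."""
    return -R * T * math.log(K)

for K in (1e-3, 1.0, 1e3):
    print(f"K = {K:g} -> dG0 = {dG0(K) / 1000:+.1f} kJ/mol")
# K < 1 gives dG0 > 0, K = 1 gives dG0 = 0 (standard state at
# equilibrium), K > 1 gives dG0 < 0, matching the behavior the
# text describes for its table.
```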
There certainly is cheap-ass energy to be had from the environment that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy, providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In 2009 Nokia announced they expect to be able to gather up to 50 milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge a typical mobile phone in standby mode. This may be viewed not so much as free energy, but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect, which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. Maxwell’s Demon — a thought experiment raised by James Clerk Maxwell in which a Demon guards a hole in a diaphragm between two containers of gas. Whenever a molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such a way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had a lower temperature than either of the containers.
Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules enter the hot container and vice versa) and prevent it from decreasing the entropy of the system. In chemistry, a spontaneous process is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of a diamond turning into graphite, which can be written as the following reaction: C(diamond) → C(graphite). Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be a chemical reaction in a beaker. Do we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Gibbs free energy to determine the spontaneity of a process, we are only concerned with changes in G, rather than its absolute value. The change in Gibbs free energy for a process is thus written as ΔG, which is the difference between G_final, the Gibbs free energy of the products, and G_initial, the Gibbs free energy of the reactants.
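The spontaneity rule used throughout this passage (ΔG < 0 at constant temperature and pressure) can be sketched with made-up reaction values:

```python
def is_spontaneous(dH, dS, T):
    """Classify spontaneity from ΔG = ΔH − T·ΔS (constant T and P)."""
    dG = dH - T * dS
    return dG, dG < 0

# Illustrative (made-up) values: an exothermic reaction (ΔH < 0) with
# an entropy decrease (ΔS < 0) is spontaneous only below T = ΔH/ΔS = 400 K.
dH, dS = -40_000.0, -100.0   # J/mol and J/(mol·K)
for T in (300.0, 500.0):
    dG, spontaneous = is_spontaneous(dH, dS, T)
    print(f"T = {T:.0f} K: dG = {dG / 1000:+.0f} kJ/mol, spontaneous={spontaneous}")
```

The enthalpy and entropy terms pull in opposite directions here, so temperature decides the sign of ΔG, which is the point the passage is making.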
I don’t know what to do. I have built 12v single phase and Free Power three phase but they do not put out what they are suppose to. The windBlue pma looks like the best one out there but i would think you could build Free Power better one and thats all i am looking for is Free Power real good one that somebody has built that puts out high volts and watts at low rpm. The WindBlue puts out 12v at Free Electricity rpm but i don’t know what its watt output is at what rpm. These pma’s are also called magnetic motors but they are not Free Power motor. They are Free Power generator. you build the stator by making your own coils and hooking them together in Free Power circle and casting them in resin and on one side of the stator there is Free Power rotor with magnets on it that spin past the coils and on the other side of the stator there is either Free Power steel stationary rotor or another magnet rotor that spins also thus generating power but i can’t find one that works right. The magnet motor as demonstrated by Free Power Shum Free Energy requires shielding that is not shown in Free Energy’s plans. Free Energy’s shielding is simple, apparently on the stator. The Perendev shows each magnet in the Free Energy shielded. Actually, it intercepts the flux as it wraps around the entire set of magnets. The shielding is necessary to accentuate interaction between rotor and stator magnets. Without shielding, the device does not work. Hey Gilgamesh, thanks and i hope you get to build the motor. I did forget to ask one thing on the motor. Are the small wheels made of steel or are they magnets? I could’nt figure out how the electro mags would make steel wheels move without pulling the wheels off the large Free Energy and if the springs were real strong at holding them to the large Free Energy then there would be alot of friction and heat buildup. 
I’ll look forward to hearing from you on the PMA; remember, a real good plan for low rpm and 48Free Power. I thought I would have heard from Free Electricity on this but I guess he is on vacation. Hey Free Power. I know it may take some work to build the plan I e-mailed to you, and you may need to build a few different versions of it also, to find the most efficient working version.
Conservation of energy (energy cannot be created or destroyed, only transferred from one form to another) is maintained. Can we not compare a Magnetic Motor (so called “free energy”) to an Atom Bomb? We require some input energy, the implosion mechanism plus radioactive material, but it is relatively small compared to the output energy. The additional output energy is converted from the extremely strong bonds holding the atom together, which is not directly apparent on the macro level (our visible world). The Magnetic Motor also has relatively minimal input energy to produce a large output energy, amplified from the energy of the magnetic fields. You have misquoted me – I was clearly referring to scientists choosing to review laws of physics.
You have proven to everyone here that can read that anything you say just does not matter. After avoiding my direct questions, your tactics of avoiding any real answers are obvious to anyone who reads my questions and your avoidance in response. Not once have you addressed anything that I’ve challenged you on. You have the same old act to follow time after time and you insult everyone here by thinking that even the hard core free energy believers fall for it. Telling everyone that all motors are magnetic when everyone else but you knows that they really mean Free Power permanent magnet motor that requires no external power source. Free Power you really think you’ve pointed out anything? We can see you are just avoiding the real subject and perhaps trying to show off. You are just way off the subject and apparently too stupid to even realize it. | 2020-08-04 14:32:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5690945982933044, "perplexity": 1180.249658732894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.94/warc/CC-MAIN-20200804131928-20200804161928-00523.warc.gz"} |
https://www.physicsforums.com/threads/cost-estimate-of-hydrogen-fuel-cells-versus-gasoline.372023/ | # Cost estimate of Hydrogen Fuel Cells versus Gasoline
1. Jan 23, 2010
### skiboka33
Although a complete cost analysis is much more complicated, the particular question only requires the following: an energy cost estimate (in terms of $/Joule). This was directed to be done by taking volumetric or mass-based costs of each fuel as well as the associated energy content or energy density. I am finding it very difficult to find any reliable information through google. Where I'm living, gas costs are about $1.02/L right now. My research led me to an energy density of 32 MJ/L for octane, which I used for the gasoline calc.
For hydrogen, most costs seem to be between $3-4/kg with an energy density of around 120 MJ/kg. These numbers result in similar $/Joule results for both fuels, which I'm pretty sure should not be the case. Do those numbers make sense? Is there a reliable source where I can check my data?
Please ignore all other costs and efficiencies associated with the two fuels and engines. This is just a very simple calc once I have the correct data.
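The simple calc the question asks for can be sketched directly from the figures quoted above. The numbers here are the ones from the post ($1.02/L and 32 MJ/L for gasoline; $3.50/kg as a midpoint of the quoted $3-4/kg range and 120 MJ/kg for hydrogen), not independently verified prices:

```python
# Energy cost in $/MJ = price per unit quantity / energy content per unit quantity.

gasoline_price_per_L = 1.02      # $/L, pump price quoted in the question
gasoline_energy_per_L = 32.0     # MJ/L, approximate energy density of octane

hydrogen_price_per_kg = 3.50     # $/kg, midpoint of the 3-4 $/kg range quoted
hydrogen_energy_per_kg = 120.0   # MJ/kg, approximate energy density of H2

gasoline_cost = gasoline_price_per_L / gasoline_energy_per_L    # $/MJ
hydrogen_cost = hydrogen_price_per_kg / hydrogen_energy_per_kg  # $/MJ

print(f"gasoline: {gasoline_cost:.4f} $/MJ")  # ~0.0319 $/MJ
print(f"hydrogen: {hydrogen_cost:.4f} $/MJ")  # ~0.0292 $/MJ
```

On these inputs the two fuels really do land within roughly 10% of each other per joule, so the "suspiciously similar" result is just what the quoted prices and energy densities imply, not an arithmetic error.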
2. Jan 23, 2010
### Q_Goest
Hi skiboka,
I'm not in sales but I work as an engineer for hydrogen economy related systems. There are a few public hydrogen refueling stations, one in Washington DC for example, but I don't know what they're selling hydrogen at. I can tell you that most hydrogen customers are large corporations who buy in bulk. They purchase either liquid hydrogen, which must be compressed on site, or gaseous hydrogen that's delivered in large tube trailers and is used by a fleet of vehicles, buses, or material handling equipment (i.e. forklifts). The going rate for liquid hydrogen is on the order of $4/kg. That's a very rough number from someone that isn't in the sales area. I could talk to our sales folks, but that should get you close.
If the company buying the hydrogen is a resale store such as the Shell station in Washington DC, then I suspect there's a markup on it, but I don't know what that is. For gasoline, I believe the markup is fairly small. Note also that this is for the US market and may not apply to the European market. Also, European countries have a considerable tax on the gasoline as I understand it and I doubt there's an equivalent tax on hydrogen just yet.
3. Jan 25, 2010
### mheslep
http://www.nap.edu/openbook.php?record_id=10922&page=4 | 2018-09-21 09:59:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17664697766304016, "perplexity": 1310.1480445869242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157028.10/warc/CC-MAIN-20180921092215-20180921112615-00096.warc.gz"} |
http://softpanorama.org/Skeptics/Financial_skeptic/inequality.shtml |
# Redistribution of wealth upward as the essence of neoliberalism
### Decline of the middle class in the USA under the neoliberal regime and rise of Economic Royalists ("Let them eat cake")
"I see in the near future a crisis approaching that unnerves me and causes me to tremble for the safety of my country. As a result of the war, corporations have been enthroned and an era of corruption in high places will follow, and the money power of the country will endeavor to prolong its reign by working upon the prejudices of the people until all wealth is aggregated in a few hands and the Republic is destroyed." -- Abraham Lincoln

"Isn’t inequality merely the price of America being No. 1? ... That’s almost certainly false... Prior to about 20 years ago, most economists thought that inequality greased the wheels of progress. Overwhelmingly now, people who study it empirically think that it’s sand in the wheels. ... Inequality breeds conflict, and conflict breeds wasted resources." -- Samuel Bowles, cited from Economist's View: Inequality and Guard Labor

"From 1980 to 2005, more than four-fifths of the total increase in American incomes went to the richest 1 percent." -- Nicholas D. Kristof, NYT, November 6, 2010

"Roughly 1 in 4 Americans is employed to keep fellow citizens in line and protect private wealth from would-be Robin Hoods." -- Guard Labor

"If labor is a commodity like any other, who is the idiot in charge of inventory management?" -- Economist's View
### Introduction
As aptly noted in "Neoliberalism – the ideology at the root of all our problems" (The Guardian, April 15, 2016):
Imagine if the people of the Soviet Union had never heard of communism. The ideology that dominates our lives has, for most of us, no name. Mention it in conversation and you'll be rewarded with a shrug. Even if your listeners have heard the term before, they will struggle to define it. Neoliberalism: do you know what it is?
Its anonymity is both a symptom and a cause of its power. It has played a major role in a remarkable variety of crises: the financial meltdown of 2007‑8, the offshoring of wealth and power, of which the Panama Papers offer us merely a glimpse, the slow collapse of public health and education, resurgent child poverty, the epidemic of loneliness, the collapse of ecosystems, the rejection of the current neoliberal elite by the majority of the American people, and the rise of candidates like Donald Trump. But we respond to these developments as if they emerge in isolation, apparently unaware that they have all been either catalyzed or exacerbated by the same coherent philosophy; a philosophy that has – or had – a name. What greater power can there be than to operate namelessly?
One of the key properties of neoliberalism is that it recasts inequality as virtuous. The market ensures that everyone gets what they deserve. If you deserve to die, so be it. Of course that does not apply to the financial oligarchy, which is above the law and remains unpunished even for very serious crimes. That fate is reserved for the bottom 99% of the population.
Neoliberalism sees competition as the defining characteristic of human relations. In other words, the neoliberal economic model applies the label "unable to compete in the labor market" to poor people in the same way the Nazis used the concept of Untermensch for Slavic people.
That also means that for those outside the top 20% of the population the destiny is brutal exploitation, not that different from that in slave societies. Neoliberalism victimizes poor people and artfully instills a complex of inferiority in them, trying to convince them that they themselves are guilty of their status and that their children do not deserve better. This is why subsidies for colleges are cut. Unfortunately, now even the lower middle class is coming under tremendous pressure and is essentially being moved into poverty. The disappearance of well-paid middle-class "white collar" jobs, such as IT jobs and recently oil sector jobs, and the conversion of many jobs to a temp or outsourcing/off-shoring model, is a fact that can't be denied. The rise in inequality in the USA over the last twenty years of neoliberal domination is simply dramatic, and median income per family has actually dropped.
Everything is moving in the direction of a pretty brutal joke: poor Americans just got new slave-owners. And now the slaves are not distinguished by the color of their skin.
The economic status of Wal-Mart employees (as well as the employees of many other retailers, who are predominantly women) is not that different from that of slaves. In "rich" states like NY and NJ, Wal-Mart cashiers are paid around $9 an hour. That's around $18K a year if you can get 40 hours a week (a big if). You can't survive on that money living alone and renting an apartment. Two people might be able to survive if they share the apartment costs. And forget about it if you have a child (hence "single mothers" as a new face of US poverty). You can survive only with additional social programs like food stamps. In other words, the federal state subsidizes Wal-Mart, increasing their revenue at the taxpayers' expense.
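The arithmetic behind the "$9/hour is about $18K a year" claim is straightforward; a minimal sketch, assuming a full 40-hour week with no unpaid gaps:

```python
def gross_annual_wage(hourly_rate: float,
                      hours_per_week: float = 40.0,
                      weeks_per_year: int = 52) -> float:
    """Gross annual pay before taxes, assuming steady full-time hours."""
    return hourly_rate * hours_per_week * weeks_per_year

# $9/hr full time -> $18,720/yr gross, the rough $18K figure cited above.
print(gross_annual_wage(9.0))
```

Note this is the best case: any week with fewer than 40 scheduled hours pulls the annual figure below $18K, which is the "big if" in the text.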
Piketty thinks a rentier society (which is another definition of a neoliberal society) contradicts the meritocratic worldview of democratic societies and is toxic for democracy, as it enforces a "one dollar, one vote" election process (corporations buy politicians; ordinary people merely legitimize with their votes candidates pre-selected by the elite; see Two Party System as Polyarchy):
“…no ineluctable force standing in the way of extreme concentration of wealth…if growth slows and the return on capital increases [as] tax competition between nations heats up…Our democratic societies rest on a meritocratic worldview, or at any rate, a meritocratic hope, by which I mean a belief in a society in which inequality is based more on merit and effort than on kinship and rents. This belief and hope play a very crucial role in modern society, for a simple reason: in a democracy the professed equality of rights of all citizens contrasts sharply with the very real inequality of living conditions, and in order to overcome this contradiction it is vital to make sure that social inequalities derive from rational and universal principles rather than arbitrary contingencies. Inequalities must therefore be just and useful to all, at least in the realm of discourse and as far as possible in reality as well…Durkheim predicted that modern democratic society would not put up for long with the existence of inherited wealth and would ultimately see to it that the ownership of property ended at death.” p. 422
A neo-liberal point discussed in Raymond Plant's book on neo-liberalism is that if a fortune has been made through no injustice, then it is OK, so we should not condemn the resulting distribution of wealth, as fantastically concentrated as it may be. But that's not true, as such cases always involve some level of injustice, if only by exploiting some loophole in the current laws. Piketty is correct that to the extent that citizens understood the nature of a rentier society they would rise in opposition to it. The astronomical pay of "super-managers" cannot be justified in meritocratic terms. CEOs can capture boards and force their incentive packages to grow faster than company profits. Manipulations with share buybacks are used to meet "targets". So the neoliberal extreme is definitely bad.
At the same time, we now know that equality is not achievable, and communism was a pipe dream that actually inflicted cruelty on a lot of people in the name of an unachievable utopia. But does this mean that inequality, any level of inequality, is OK? It does not look that way, and we can actually argue that the extremes meet.
But the collapse of the USSR led to the triumph of neoliberalism, which is all about rising inequality. Under neoliberalism the wealthy and their academic servants see inequality as a noble outcome. They want to further enrich the top 1%, shrink the middle class, making it less secure, and impoverish the poor. In other words, under the disguise of "free market" Newspeak they promote a type of economy which can be called a plantation economy. In this type of economy all the resources and power are in the hands of a wealthy planter class, who then give preference for easy jobs and the easy life to their loyal toadies. The wealthy elites like cheap labor. And it's much easier to dictate their conditions of employment when unemployment is high. Keynesian economics values the middle class and does not value unemployment or cheap labor. Neoliberals like a system that rewards them for their loyalty to the top 1% with an easier life than they otherwise merit. In a meritocracy where individuals receive public goods and services that allow them to compete on a level playing field, many neoliberal toadies would be losers who cannot compete.
In a 2005 report to investors three analysts at Citigroup advised that “the World is dividing into two blocs—the Plutonomy and the rest … In a plutonomy there is no such animal as “the U.S. consumer” or “the UK consumer", or indeed the “Russian consumer”.
In other words, there are analysts who believe that we are moving toward a replay of the Middle Ages on a new, global level, where there are only the rich, who do the lion's share of the total consumption, and the poor, who do not matter.
We can also state that under the neoliberal regime the sources of American economic inequality are largely political. In other words, they are the result of a deliberate political decision of the US elite to shape markets in neoliberal ways and to dismantle the New Deal.
Part of this "shaping the markets in neoliberal ways" was the corruption of academic economists. Under neoliberalism most economists are engaged in what John Kenneth Galbraith called "the economics of innocent fraud," with the important correction that there is nothing innocent in their activities. Most of them, especially "neoclassical" economists, are prostitutes for the financial oligarchy. So their prescriptions and analyses of the reasons for high unemployment should be taken with due skepticism.
We also know that power corrupts and absolute power corrupts absolutely. That means that the existence of an aristocracy might not be optimal for society "at large". Without the moderating influence of the existence of the USSR on the appetites of the US elite, they engaged in an audacious struggle to accumulate as much power and wealth as possible. In a way, the situation matches that of the 1920s, which was known to be toxic.
But society has moved slowly but steadily in this direction since the mid-80s. According to the official wage statistics for 2012 (http://www.ssa.gov), 40% of the US work force earned less than $20,000, 53% earned less than $30,000, and 73% earned less than $50,000. The median US wage or salary was $27,519 per year. The amounts are in current dollars and they are "total" compensation amounts subject to state and federal income taxes and to Social Security and Medicare payroll taxes. In other words, the take-home pay is less.
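The reason these figures are reported as medians rather than averages is worth illustrating: in a skewed income distribution a handful of very high earners drags the mean up while leaving the median untouched. A toy demonstration with made-up numbers (not census data):

```python
import statistics

# Nine modest incomes plus one very high one -- purely illustrative.
incomes = [22_000] * 9 + [1_000_000]

# The median ignores the outlier entirely; the mean is dragged far above
# what a "typical" member of this group actually earns.
print(statistics.median(incomes))  # 22000
print(statistics.mean(incomes))    # 119800
```

This is exactly the median-versus-average gap discussed below for the 2010 SSA data, where the average sat well above the median for the same reason.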
In other words, the USA has now entered an inequality bubble, a bubble with the financial oligarchy as the new aristocracy, which strives for absolute control of all layers of government. The corruption has a systemic character. It takes not only the traditional form of intermarriage between Wall Street and DC power brokers (aka revolving doors); it also creates a caste of guard labor to protect the oligarchy.
### New global caste structure and stratification of the US society
Some researchers point out that the neoliberal world is increasingly characterized by a three-tiered social structure (net4dem.org):
• The first tier is made up of some 30–40% of the population in core (G7) countries and around 5-10% (the elite) in peripheral countries. It is those who hold “tenured” employment in the global economy and are able to maintain, and even expand, their consumption.
• The second tier, some 30% in the core and 20–30% in the periphery, forms a growing army of “casualized” workers, who face chronic insecurity in the conditions of their employment and the absence of any collective insurance against risks, which previously were offloaded to the welfare state.
• The third tier, some 30% of the population in the core capitalist countries and some 50% or more in peripheral countries, represents those structurally excluded from productive activity and protected neither from the side effects of the dismantling of welfare nor from the cruelty of the police state. They represent the “superfluous” population of global capitalism (see, inter alia, Hutton, 1995; Hoogvelt, 1997).
This process of stratification and fossilization of "haves" and "have-nots" is now pretty much established in the USA. The US population can be partitioned into five distinct classes, or strata:
1. Lower class (poor): bottom 20%. These folks have income close to the official poverty line, which varies from state to state. In "expensive" states like NJ and NY this category ranks much higher than the national level, up to 40%. Official figures from the Census Bureau state that in 2010 twelve states had poverty rates above 17%, up from five in 2009, while ten metropolitan areas had poverty rates over 18%. Texas had the highest poverty rate, at 33.4%, followed by Fresno, California, at 26.8%.
According to figures published by the Social Security Administration in October 2011, the median income for American workers in 2010 was $26,364, just slightly above the official poverty level of $22,025 for a family of four. Most single-parent families with children fall into this category. Many single-earner families belong to this category too.
The median income figure reflects the fact that the salaries of 50% of all workers are less than $26,364, and it gives a much truer picture of the real social conditions in the United States than the more widely publicized average income, which was $39,959 in 2010. The average is considerably higher than the median because the distribution of income is so unequal—a relative handful of ultra-high-income individuals pulls up the average.
• Liquid Asset Poverty Rate: calculated as the "percentage of households without sufficient liquid assets to subsist at the poverty level for three months in the absence of income." In 2009 in the USA it was around 43%. Edward Lambert (Economist's View, video: Eichengreen on Dollar Dominance) stated:
He touched upon the importance of liquidity in the financial markets... but he didn't mention liquidity of households. There is very low household consumption in China.
There is a liquidity problem in the US households. That affects credit.
and maybe household liquidity makes no difference to a currency being a safe haven. Still, if liquidity of financial markets is so important, it should also be important for households.
The liquid asset poverty rate in the US was 43.1% in 2009. What could it be now considering that the savings rate is back to below 4%?
"Liquid Asset Poverty Rate... Definition... Percentage of households without sufficient liquid assets to subsist at the poverty level for three months in the absence of income."
Here is a report on liquid asset poverty in the US...
http://scorecard.assetsandopportunity.org/2012/measure/liquid-asset-poverty-rate
2. Lower middle class (60%). Depending on the class model used, the middle class may constitute anywhere from 25% to 66% of households. Typically includes households with incomes above $46,326 (all households) or $67,348 (dual-earner households) per year. The latter is more realistic. In order for a two-earner family to qualify, each earner should get approximately $34K a year or more (a $17 per hour wage with a 40-hour workweek). Per-household-member income is around $23.5K.

The lower middle class... these are people in technical and lower-level management positions who work for those in the upper middle class as lower managers, craftspeople, and the like. They enjoy a reasonably comfortable standard of living, although it is constantly threatened by taxes and inflation. Generally, they have a Bachelor's and sometimes a Master's college degree. —Brian K. William, Stacy C. Sawyer and Carl M. Wahlstrom, Marriages, Families & Intimate Relationships, 2006 (Adapted from Dennis Gilbert 1997; and Joseph Kahl 1993)[4]

3. Upper middle class (top 20%). This includes households with incomes above $91K per year.
• A large percentage of those are educated two-income families (where both members of the household have a bachelor's degree or better). Most graduates of Ivy League schools belong to this category.
• An important subgroup of the upper middle class is the top 10% (millionaires). There were 3.4m millionaires in the USA in 2013; millionaire households constituted roughly seven percent of all American households. Half of all millionaire households in the US are headed by retirees. There are 12 million people on the planet with investible assets of more than $1 million. Collectively, this group controls $46.2 trillion (2012). A quarter of them live in America (3.4m), followed by almost a sixth in Japan (1.9m) and a twelfth in Germany (over 1m). China and Great Britain round out the top 5.

4. Upper class (elite): top 1%.
Annual incomes (AGI) for this group exceed $380K per year. They are commonly called multimillionaires (net worth of two million or more). In 2010 this group controlled at least 25% of total national income (23.5% in 2007, 8.9% in 1979). The top 1% owns more than the bottom 90% combined, or 33.8% of the nation's private wealth.
5. Super rich (top 0.01%, oligarchs, super-elite, or the top 1000 families). Close to this category of super-rich are billionaires. The US is home to 425 billionaires, while Russia and China have 95 and 96 respectively. The average worth of the world's billionaires is now $3.5 billion, or $500 million more than last year (Forbes).
• To get into the top 400 (Forbes 400 list) in the USA you need $1.3 billion. The top 400 families have as much wealth as the lower half of the US population: the combined net worth of the Forbes 400 wealthiest Americans in 2007 was $1.5 trillion, while the combined net worth of the poorest 50% of American households was $1.6 trillion. The real number of billionaires in the USA is probably much higher. Billionaires want to keep a low profile for lots of reasons -- from personal safety concerns to not wanting financial competitors to know what they're up to.
• The youngest billionaire is the founder of the online social networking site Facebook -- 25-year-old American Mark Zuckerberg, whose net worth is estimated at $9 billion.

The share of consumption for families outside the upper middle class (with income, say, below $91K per year -- 80% of US households) is much less than commonly assumed. That means that in the USA consumer spending is driven by the upper class and as such is pretty much insulated from the decline of wages of the lower 80% of the population. The median household income in the United States is around $50K.

### Possibility of the return to the clan society

The danger of a high level of inequality might be a revival of nationalism and a return to a clan (mafia) society in the form of corporatism or even some form of national socialism. Mark S. Weiner made this point in his book The Rule of the Clan: What an Ancient Form of Social Organization Reveals About the Future of Individual Freedom. From one Amazon review:

Weiner's book is more than worth its price simply as an armchair tour of interesting places and cultures and mores, deftly and briefly described. But he has a more serious and important point to make. While the social cohesion that the values of the clan promote is alluring, they are ultimately at odds with the values of individual autonomy that only the much-maligned modern liberal state can offer. Even the state's modern defenders tend to view it, at best, as a necessary evil. It keeps the peace, upholds (somewhat) international order, and manages the complexity of modern life in ways that allow individuals to get on with their journeys of personal fulfillment. Weiner shows (in too brief but nevertheless eloquent ways) that this reductive view of the state is insufficient to resist the seductive appeal of the clan, and that it will be for the worse if we can't find ways to combat this allure within the legal structures of modern liberalism.
Read alongside James Ault's masterful participant study of fundamentalist Baptism, Spirit and Flesh, and draw your own conclusions.

### Dramatic increase in the use of guard labor and conversion of the state into National Security State

Of course the elite is worried about the security of its ill-gotten gains. And that's partially why the USA needs such a huge, totally militarized police force and an outsized military. Police and military are typical guard labor that protects the private wealth of the US plutocrats. Add to this an equally strong private army of security contractors. Others have suggested that not only the USA but the global neoliberal society is deeply sick with the same disease that US society experienced in the 1920s (and as previously with the globalism of the robber barons age, the triumph of neoliberalism in the 1990s was and is a global phenomenon). High inequality logically leads to a dramatic increase of guard labor and the inevitable conversion of the state into a National Security State, which entails total surveillance over the citizens as a defining factor. The ruling elite is always paranoid, but the neoliberal elite proved to be borderline psychopathic. They do not want merely security; they want to crush all resistance. Butler Shaffer wrote recently that the old state system in the United States is dying before our very eyes: A system that insists on controlling others through increasing levels of systematic violence; that loots the many for the aggrandizement of the few; that regulates any expressions of human behavior that are not of service to the rulers; that presumes the power to wage wars against any nation of its choosing, a principle that got a number of men hanged at the Nuremberg trials; and finally, criminalizes those who would speak the truth to its victims, has no moral energy remaining with which to sustain itself.

### Low mobility created potential for the degeneration of the elite

It is pretty clear that the USA became a society where there is de facto royalty.
In the form of the strata which Roosevelt called "Economic royalists". Just look at the third generation of the Walton family or the Rockefeller family. Remember the degenerate Soviet Politburo, or, for a change, the unforgettable dyslexic President George W. Bush? The painful truth is that in the most unequal nations, including the UK and the US, the intergenerational transmission of income is very strong (in plain language, they have a heredity-based aristocracy). See "Let them eat cake". In more equal societies such as Denmark, the tendency of privilege to breed privilege is much lower, but it also exists and is on the rise. As Roosevelt observed in a similar situation in the 1930s: These economic royalists complain that we seek to overthrow the institutions of America. What they really complain of is that we seek to take away their power.

### High inequality undermines social cohesion

Neoliberalism and its ideology (Randism) undermined social cohesion, making society members more hostile to each other and as such less willing to defend the country in case of real danger. Betrayal of the country is no longer an unspeakable crime. The purpose of government should be to foster a "civil society". The slogan of the "oligarchic right" is "me first", or, as in Paul Ryan's adoration of Ayn Rand, greed is good. Objectivism became a kind of new civic religion, in which maximizing the wealth of a single individual at the expense of civil society is a virtue. And those new social norms (instilled by the MSM) allow the fat cats simply to steal from everybody else without fear of punishment. See an outburst from Stephen Schwarzman. If there are two societies inside the country with bridges burned, the bottom part is less willing to spill blood for the upper part.
And having a contractual army has its own set of dangers, as it spirals into a high level of militarism (being at war is the new normal for the USA during the last 30 years or so), which, while enriching part of the elite, bankrupts the country. The quality of the roads is a testament to this process. Countervailing mechanisms and forces are destroyed. Plutocrats can now shape the conversation by buying up newspapers and television channels as well as funding political campaigns. The mousetrap of high inequality became irreversible without external shocks. The more unequal our societies become, the more we all become prisoners of that inequality. The key question is: has our political system been so degraded by misinformation and disinformation that it can no longer function, because it has lost touch with reality? The stream of outright falsehoods that the MSM feed the lemmings (aka society members) is clearly politically motivated. But a side effect (externality) of all that brainwashing is that nobody, including players at the top of the government, now understands what's going on. Look at Obama and Joe Biden. As the growth of the manufacturing base slowed down and the return on capital dropped, the elite wanted less government social spending. It wants to end popular government programs such as Social Security, no matter how much such cuts would cause economic dislocation and strains in the current social safety net. The claims are that these programs are "waste" and could be cut without anyone but the "moochers" noticing the effects. They use the economic strain felt by many in the economy to promote these cuts. They promise that cuts to vital programs will leave more money in the pockets of the average person.
In reality, the increase in money will be marginal, but the effects on security and the loss of the "group purchasing power" economy of scale will make the cuts worse than worthless (Economist's View: Paul Krugman, Moment of Truthiness).

### Two party system makes the mousetrap complete

The US system of voting (winner take all) leads inexorably to a two party system. Third parties are only spoilers. Protest votes in the current system are COUNTERPRODUCTIVE (i.e. they help the evil, not the merely bad). Deliberate and grotesque gerrymandering further dilutes protest votes. Again, I would like to stress that rich consumers, few in number, get the gigantic slice of income and most of the consumption (that's why US consumption was so resilient during the last two financial crises). Then there are the rest, the "non-rich", accounting for surprisingly small bites of the national pie. The question arises: "Why should we care?" Most of the readers of this page are not in the bottom bracket anyway. Many are pretty high up. Here is one possible answer: But should we care? There are two reasons we might: process and outcome. • We might worry that the gains of the rich are ill-gotten: the result of the old-boy network, or fraud, or exploiting the largesse of the taxpayer. • Or we might worry that the results are noxious: misery and envy, or ill-health, or dysfunctional democracy, or slow growth as the rich sit on their cash, or excessive debt and thus financial instability.

### Creating a stratum of outcasts, aka the permanently unemployed

It is very difficult to understand the real situation with inequality in the USA today without experiencing long-term unemployment, or being forced into the job of a WalMart cashier or another low-paid employee; a job that does not provide a living wage. You need to watch the YouTube video Wealth Inequality in America to understand the reality. The video was posted anonymously by someone using the YouTube handle politizane.
It is pretty clear that the USA not only became a society where there is de facto royalty, economic royalty, but also a stratum of people who are completely deprived: an outcaste. And the royalty became reckless, as royalty does, promoting to the top the likes of the recovered alcoholic Bush II or the "private equity shark" Romney (and remember who Romney's father was).

### Education is no longer the answer to rising inequality

In the current circumstances education is no longer the answer to rising inequality. Instead of serving as a social lift it, at least in some cases, became more of a social trap. This is connected with the neoliberal transformation of education. With the collapse of the post-war publicly funded educational model and the privatization of university education, students face a pretty cruel world; a world in which they are cows to milk. Universities became institutions very similar to McDonalds (or, in less politically correct terms, Bordellos of Higher Learning). Like McDonalds, they need to price their services so as to receive a nice profit, and to make themselves more attractive to industry they intentionally feed students an overspecialized curriculum instead of concentrating on fundamentals and developing the ability to understand the world, which was the hallmark of university education in the past. Since the 1970s the neoliberal university model has replaced the publicly funded university model (the Dewey model). It is now collapsing, as there are not that many students who are able (and now, with lower job prospects and tales of graduates working as bartenders, willing) to pay inflated tuition fees. That means that higher education again by and large became a privilege of the rich and upper middle class. Lower student enrollment first hit the expensive private colleges minted during the dot-com boom, which now hunt for people with government support (such as former members of the Armed Forces). It remains viable only in elite universities, which traditionally serve the top 1% and rich foreigners.
As David Schultz wrote in his article (Logos, 2012): Yet the Dewey model began to collapse in middle of the 1970s. Perhaps it was the retrenchment of the SUNY and CUNY systems in New York under Governor Hugh Carey in 1976 that began the end of the democratic university. What caused its retrenchment was the fiscal crisis of the 1970s. The fiscal crisis of the 1970s was born of numerous problems. Inflationary pressures caused by Vietnam and the energy embargoes of the 1970s, and recessionary forces from relative declines in American economic productivity produced significant economic shocks, including to the public sector where many state and local governments edged toward bankruptcy. Efforts to relieve declining corporate profits and productivity initiated efforts to restructure the economy, including cutting back on government services. The response, first in England under Margaret Thatcher and then in the United States under Ronald Reagan, was an effort to retrench the state by a package that included decreases in government expenditures for social welfare programs, cutbacks on business regulations, resistance to labor rights, and tax cuts. Collectively these proposals are referred to as Neo-liberalism and their aim was to restore profitability and autonomy to free markets with the belief that unfettered by the government that would restore productivity. Neo-liberalism had a major impact on higher education. First beginning under President Carter and then more so under Ronald Reagan, the federal and state governments cut taxes and public expenditures. The combination of the two meant a halt to the Dewey business model as support for public institutions decreased and federal money dried up. From a high in the 1960s and early 70s when states and the federal government provided generous funding to expand their public systems to educate the Baby Boomers, state universities now receive only a small percentage of their money from the government. 
As I pointed out in my 2005 Logos article "The Corporate University in American Society": in 1991, 74% of the funding for public universities came from states; in 2004 it was down to 64%, with state systems in Illinois, Michigan and Virginia down to 25%, 18%, and 8% respectively. Since then, the percentages have shrunk even more, rendering state universities public institutions more in name than in funding. Higher education under Neo-liberalism needed a new business model and it found it in the corporate university. The corporate university is one where colleges increasingly use corporate structures and management styles to run the university. This includes abandoning the American Association of University Professors (AAUP) shared governance model where faculty had an equal voice in the running of the school, including over curriculum, selection of department chairs, deans, and presidents, and determination of many of the other policies affecting the academy. The corporate university replaced the shared governance model with one more typical of a business corporation. For the corporate university, many decisions, including increasingly those affecting curriculum, are determined by a top-down pyramid style of authority. University administration, often composed not of typical academics but of those with business or corporate backgrounds, has pre-empted many of the decisions faculty used to make. Under a corporate model, the trustees, increasingly composed of more business leaders than before, select, often with minimal input from the faculty, the president who, in turn, again with minimal or no faculty voice, selects the deans, department heads, and other administrative personnel.

### University presidents became way too greedy

Neoliberalism professes the idea that personal greed can serve positive societal goals, which is reflected in the famous neoliberal slogan "greed is good". And university presidents listened.
Now presidents of neoliberal universities do not want a $100K per year salary; they want the one, or better several, million dollar salary of the CEO of a major corporation (Student Debt Grows Faster at Universities With Highest-Paid Leaders, Study Finds - NYTimes.com)
At the 25 public universities with the highest-paid presidents, both student debt and the use of part-time adjunct faculty grew far faster than at the average state university from 2005 to 2012, according to a new study by the Institute for Policy Studies, a left-leaning Washington research group.
The study, “The One Percent at State U: How University Presidents Profit from Rising Student Debt and Low-Wage Faculty Labor,” examined the relationship between executive pay, student debt and low-wage faculty labor at the 25 top-paying public universities.
The co-authors, Andrew Erwin and Marjorie Wood, found that administrative expenditures at the highest-paying universities outpaced spending on scholarships by more than two to one. And while adjunct faculty members became more numerous at the 25 universities, the share of permanent faculty declined drastically.
“The high executive pay obviously isn’t the direct cause of higher student debt, or cuts in labor spending,” Ms. Wood said. “But if you think about it in terms of the allocation of resources, it does seem to be the tip of a very large iceberg, with universities that have top-heavy executive spending also having more adjuncts, more tuition increases and more administrative spending.”
... ... ...
The Chronicle of Higher Education’s annual survey of public university presidents’ compensation, also released Sunday, found that nine chief executives earned more than $1 million in total compensation in 2012-13, up from four the previous year, and three in 2010-11. The median total compensation of the 256 presidents in the survey was $478,896, a 5 percent increase over the previous year.
... ... ...
As in several past years, the highest-compensated president, at $6,057,615 in this period, was E. Gordon Gee, who resigned from Ohio State last summer amid trustee complaints about frequent gaffes. He has since become the president of West Virginia University. This trick requires a dramatic rise in tuition costs. The university bureaucracy also got a taste for better salaries, and all those deans, etc., want to be remunerated like vice presidents. So raising tuition costs became the key existential idea of the neoliberal university. Not the quality of education, but tuition revenue is now the key criterion of success. And if you can charge students $40K per semester, it is very, very good. It does not matter that most of the population earns less than $20 an hour. The same is true for professors, who proved to be no less corruptible. And some of them, such as those in economics departments, simply serve as prostitutes for the financial oligarchy. So they were corrupted even before this rat race for profit. Of course there are exceptions, but they only prove the rule. As a result, university tuition inflation outpaced general inflation by leaps and bounds. At some point the amount that you pay (and the level of debt after graduation) becomes an important factor in choosing the university. So children of "haves" and "have nots" get into different educational institutions and do not meet each other. In a way, aristocracy returned via the back door. The neoliberal university professes "deep specialization" to create "ready for the market" graduates. And that creates another problem: education became more like a stock market game, and that makes it more difficult for you to change your specialization late in the education cycle. But an early choice entails a typical stock market problem: you might miss the peak of the market or, worse, get into a prolonged slump, as graduates in finance learned all too well in 2008. That's why it is important not to accumulate too much debt: it is a kind of "all in" play in poker.
You essentially bet that in a particular specialty there will be open positions with high salaries when you graduate. If you lose this bet, you are done. As a result of this "reaction to market trends" by neoliberal universities, when universities became appendages of the HR departments of large corporations, students need to be more aware of the real university machinery than students in the 50s or 60s of the last century. And first of all, assume that it is functioning not to their benefit. One problem for a student is that there are now way too many variables that you do not control. Among them: • Will there be a sizable market for graduates in the given specialty four years from now? (Late specialization and an attempt to get a job after a bachelor's degree might help here; in this case the selection of the master's degree specialization can become more realistic.) • What will be the general health of the national economy at the moment of graduation? Remember the students who graduated in 2008. • The total price of education (and by extension the size of the debt you carry on the day of graduation from the college). • Whether you become a victim of rip-offs sponsored by the university administration. Although this is a slight exaggeration, the working hypothesis for a student at a modern neoliberal university should probably be "This is a hostile environment -- beware financial rip-offs". That might involve gently pushing you into obtaining a worthless specialty, or other intricate ways to screw you based on your lack of life experience, poor understanding of the academic environment and natural youthful maximalism. Loading you with loans to the max is another dirty trick. In private universities this is a new art that is polished to perfection and widely practiced on unsuspecting lemmings. On a deep level the neoliberal university is not interested in helping you find a specialization and a place in life where you can unleash your talents.
You are just a paying customer, much like in McDonalds, and university interests are such that they might try to push you in the wrong direction or load you with too much debt. If there is a deep mismatch, as there was with computer science graduates after the crash of the dot-com boom, or simply a bad job market due to economic stagnation, and you can't find a job in your new specialty (or if you got a "junk" specialty with an inherently high level of unemployment among its professionals) and you have substantial education debt, then waiting tables or holding some other McJob is a real disaster for you, as with such salaries you simply can't pay the debt back. So controlling the level of debt is very important, and in this sense parents' financial help is now a necessity. In other words, education became more and more a "rich kids' game". That does not mean that university education should be avoided by those from families of modest means. On the contrary, it provides a unique experience and helps a person to mature in multiple ways difficult to achieve without it. It is still one of the best ways to achieve vertical mobility. But unless your parents can support you, you need to try to find the most economical way to obtain it without acquiring too much debt. This is your first university exam. And if you fail it, you are in trouble. For example, computer science education is a great way to learn quite a few things necessary for modern life. But the price does matter, and the prestige of the institution that you attend is just one of the factors you should consider in your evaluation. It should not be the major factor ("vanity fair") unless your parents are rich and can support you. If you are good, you can later get a master's degree at a prestigious university after graduation from a regular college. Or even a Ph.D. County colleges are greatly underappreciated and generally provide a pretty high standard of education, giving students the ability to save money for the first two years before transferring to a four year college.
They also smooth the transition, as finding yourself among people who are your equals or superiors (and have access to financial resources that you don't have) is a huge stress. The proverb that says it is better to be first in the village than last in the town has some truth to it. Prestigious universities might provide a career boost (high-flying companies usually accept resumes only from Ivy League graduates), but they cost so much that you need to be the son or daughter of well-to-do parents to feel comfortable in them. Or extremely talented. Also, the amount of career boost that elite universities provide depends on who your parents are and what connections they have. It does not depend solely on you and the university. Again, I would like to stress that you should resist the "vanity fair" approach to your education: a much better way is to obtain a BS at a regular university, then try to obtain an MS and then, if you are good, a PhD, at a prestigious university. Here is a fragment of an interesting discussion that covers this topic (Low Mobility Is Not a Social Tragedy?, Feb 13, 2013; I recommend you read the whole discussion): kievite: I would like to defend Greg Clark. I think that Greg Clark's point is that the number of gifted children is limited and that exceptionally gifted children have some chance of an upward move in almost all, even the most hierarchical, societies (the story of Alexander Hamilton was really fascinating for me; the story of Mikhail Lomonosov http://en.wikipedia.org/wiki/Mikhail_Lomonosov was another one -- he went from the very bottom to the top of the Russian aristocracy just on the strength of his abilities as a scientist). In no way does the ability to "hold its own" (typical for rich families' kids), against which many here expressed some resentment, represent social mobility. But the number of kids who went down is low -- that actually proves Greg Clark's point: (1) Studies of social mobility using surnames suggest two things.
Social mobility rates are much lower than conventionally estimated. And social mobility rates estimated in this way vary little across societies and time periods. Sweden is no more mobile than contemporary England and the USA, or even than medieval England. Social mobility rates seem to be independent of social institutions (see the other studies on China, India, Japan and the USA now linked here). Francisco Ferreira rejects this interpretation, and restates the idea that there is a strong link between social mobility rates and inequality in his interesting post. What is wrong with the data Ferreira cites? Conventional estimates of social mobility, which look at just single aspects of social status such as income, are contaminated by noise. If we measure mobility on one aspect of status such as income, it will seem rapid. But this is because income is a very noisy measure of the underlying status of families. The status of families is a combination of their education, occupation, income, wealth, health, and residence. They will often trade off income for some other aspect of status such as occupation. A child can be as socially successful as a low-paid philosophy professor or as a high-paid car salesman. Thus if we measure just one aspect of status such as income we are going to confuse the random fluctuations of income across generations, influenced by such things as career choices between business and philosophy, with true generalised social mobility. If these estimates of social mobility were anywhere near correct as indicating true underlying rates of social mobility, then we would not find that the aristocrats of 1700 in Sweden are still overrepresented in all elite occupations of Sweden. Further, the more equal income is in a society, the less signal income will give of the true social status of families.
In a society such as Sweden, where the difference in income between bus drivers and philosophy professors is modest, income tells us little about the social status of families. It is contaminated much more by random noise. Thus, if we measure social status just by income, it will appear that mobility is much greater in Sweden than in the USA, because in the USA income is a much better indicator of the true overall status of families. The last two paragraphs of the Greg Clark article cited by Mark Thoma are badly written and actually somewhat disconnected from his line of thinking as I understand it, as well as from the general line of argumentation of the paper. Again, I would like to stress that low intergenerational mobility includes the ability of kids with a silver spoon in their mouth to keep a status close to their parents'. The fact that they have a different starting point than kids from the lower strata of society does not change that. I think that the key argument that needs testing is that the number of challengers from the lower strata of society is always pretty low and is to a large extent accommodated by the societies we know (of course some societies are better than others). Actually it would be interesting to look at the social mobility data of the USSR from this point of view. But Mark Thoma was in no way a regular kid, although circumstances for vertical mobility at that time were definitely better than now. He did possess some qualities which made his upward move possible, although his choice of economics was probably a mistake ;-). Whether those qualities would have been enough in more restrictive environments we simply don't know, but circumstances for him were difficult enough as they were. EC -> kievite... "the number of gifted children is limited" I stopped reading after that.
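Clark's attenuation argument above -- that income is only a noisy readout of an underlying, strongly inherited family status, so income-based mobility estimates look misleadingly high -- can be illustrated with a quick simulation. This is a hypothetical sketch with made-up parameters, not data from Clark's studies: latent status is inherited with correlation 0.75, and income equals status plus independent noise of equal variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Latent "social status" is strongly inherited: parent-child correlation ~0.75.
rho = 0.75
status_parent = rng.standard_normal(n)
status_child = rho * status_parent + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Income is a noisy one-dimensional readout of status (families trade income
# for occupation, prestige, etc.); noise variance equals status variance here.
income_parent = status_parent + rng.standard_normal(n)
income_child = status_child + rng.standard_normal(n)

r_status = np.corrcoef(status_parent, status_child)[0, 1]
r_income = np.corrcoef(income_parent, income_child)[0, 1]

print(f"latent status correlation:  {r_status:.2f}")
print(f"measured income correlation: {r_income:.2f}")
```

Analytically, the measured income correlation is rho / (1 + noise variance) = 0.75 / 2 ≈ 0.375: income mobility looks twice as fast as the mobility of underlying status, exactly the "contamination by noise" Clark describes.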
I teach at a high school in a town with a real mix of highly elite families, working class families, and poor families, and I can tell you that the children of affluent parents are not obviously more gifted than the children of poor families. They do, however, have a lot more social capital, and they have vastly more success. But the limitations on being "gifted" are irrelevant. According to an extensive study (Turkheimer et al., 2003) of 50,000 pregnant women and the children they went on to have (including enough sets of twins to be able to study the role of innate genetic differences), variation in IQ among the affluent seems to be largely genetic. Among the poor, however, IQ has very little to do with genes -- probably because the genetic differences are swamped and suppressed by the environmental differences, as few poor kids are able to develop as fully as they would in less constrained circumstances. kievite -> EC... All you said is true. I completely agree that "...few poor kids are able to develop as fully as they would in less constrained circumstances." So there are losses here and we should openly talk about them. Also it goes without saying that social capital is extremely important for a child. That's why downward mobility of children from upper classes is suppressed, despite the fact that some of them are plain vanilla stupid. But how this disproves the point made that "exceptionally gifted children have some chance for upper move in almost all, even the most hierarchical societies"? I think you just jumped the gun... mrrunangun: The early boomers benefitted from the happy confluence of the postwar boom, LBJ's Great Society efforts toward financial assistance for those seeking to advance their educations, and the 1964 Civil Rights Act which opened opportunities for marginalized social groups in institutions largely closed to them under the prewar social customs in the US. 
The US Supreme Court is made up of only Jews and Catholics as of this writing, a circumstance inconceivable in prewar America. Catholics were largely relegated to separate and unequal institutions. Jews' opportunities were limited by quotas, and they had a separate set of institutions of their own where their numbers could support them. Where their numbers were not sufficient, they were often relegated to second-rate institutions. Jewish doctors frequently became the leading men in the Catholic hospitals in Midwestern industrial towns where they were unwelcome in the towns' main hospitals. Schools, clubs, hospitals, professional and commercial organizations often had quota or exclusionary policies. Meritocracy has its drawbacks, but we've seen worse in living memory. College textbook publishing became a racket with the growth of neoliberalism; that means at least since 1980. And it is a pretty dirty racket, with willing accomplices in the form of so-called professors like Greg Mankiw. For instance, you can find a used 5th edition of Mankiw's introductory Microeconomics for under $4.00, while a new 7th edition costs over $200. An interesting discussion of this problem can be found at Thoughts on High-Priced Textbooks.

### New generation of robber barons: US oligarchy never was so audacious

As Jesse aptly noted in his blog post Echoes of the Past In The Economist - The Return of the Übermenschen, the US oligarchy never was so audacious. And it is as isolated as the aristocracies of bygone days, an isolation reinforced by the newly minted royalty's withdrawal into gated estates, Ivy League universities, and private planes. They are not openly suggesting that no child should rise above the status of its parents, presumably in terms of wealth, education, and opportunity. But their policies are directed toward this goal.
If you are born to poor parents in the USA, all bets are off -- your success is highly unlikely, and your servile status, if not poverty, is supposedly pre-destined by the poor genetic material that you got. This is of course not because the children of the elite inherit from their parents the talent, energy, drive, and resilience to overcome the many obstacles they will face in life. Whatever abilities they have (and regression to the mean applies to royalty's children too), they are greatly supplemented, of course, by easy opportunities, valuable connections, and access to power. That's why SAT results in the USA are so strongly correlated with the wealth of parents. And virtual freedom from prosecution does not hurt either, in case they have inherited a penchant for sociopathy, or something worse, along with their many gifts. The view that the children of the poor will not do well because they are genetically inferior became a kind of hidden agenda. The pesky 99% just deserve to be cheated and robbed by the elite, because of the inherent superiority of the top one percent. There is no fraud in the system, only good and bad breeding, natural predators and prey. This line of thinking rests on the assumption "I succeed, therefore I am". And if you do not, well, so be it. You will be a low-paid office slave, or a waiter in McDonalds with a college diploma, because it is necessary for the maximization of the profits of the elite. There is no space at the top for everybody. Enjoy the ride... Here is a typical expression of such views: "Many commentators automatically assume that low intergenerational mobility rates represent a social tragedy. I do not understand this reflexive wailing and beating of breasts in response to the finding of slow mobility rates.
The fact that the social competence of children is highly predictable once we know the status of their parents, grandparents and great-grandparents is not a threat to the American Way of Life and the ideals of the open society. The children of earlier elites will not succeed because they are born with a silver spoon in their mouth, and an automatic ticket to the Ivy League. They will succeed because they have inherited the talent, energy, drive, and resilience to overcome the many obstacles they will face in life. Life is still a struggle for all who hope to have economic and social success. It is just that we can predict who will be likely to possess the necessary characteristics from their ancestry." Greg Clark, The Economist, 13 Feb. 2013. Mr. Clark is now a professor of economics and was the department chair until 2013 at the University of California, Davis. His areas of research are long term economic growth, the wealth of nations, and the economic history of England and India. And another one: "During this time, a growing professional class believed that scientific progress could be used to cure all social ills, and many educated people accepted that humans, like all animals, were subject to natural selection. Darwinian evolution viewed humans as a flawed species that required pruning to maintain its health. Therefore negative eugenics seemed to offer a rational solution to certain age-old social problems." David Micklos, Elof Carlson, Engineering American Society: The Lesson of Eugenics. If we compare this line of thinking with the thinking of the nineteenth century, we see that progress is really limited: “With savages, the weak in body or mind are soon eliminated; and those that survive commonly exhibit a vigorous state of health.
We civilized men, on the other hand, do our utmost to check the process of elimination; we build asylums for the imbecile, the maimed, and the sick; we institute poor-laws; and our medical men exert their utmost skill to save the life of every one to the last moment. There is reason to believe that vaccination has preserved thousands, who from a weak constitution would formerly have succumbed to small-pox. Thus the weak members of civilised societies propagate their kind. No one who has attended to the breeding of domestic animals will doubt that this must be highly injurious to the race of man. It is surprising how soon a want of care, or care wrongly directed, leads to the degeneration of a domestic race; but excepting in the case of man himself, hardly any one is so ignorant as to allow his worst animals to breed. The aid which we feel impelled to give to the helpless is mainly an incidental result of the instinct of sympathy, which was originally acquired as part of the social instincts, but subsequently rendered, in the manner previously indicated, more tender and more widely diffused. Nor could we check our sympathy, if so urged by hard reason, without deterioration in the noblest part of our nature. The surgeon may harden himself whilst performing an operation, for he knows that he is acting for the good of his patient; but if we were intentionally to neglect the weak and helpless, it could only be for a contingent benefit, with a certain and great present evil. 
Hence we must bear without complaining the undoubtedly bad effects of the weak surviving and propagating their kind; but there appears to be at least one check in steady action, namely the weaker and inferior members of society not marrying so freely as the sound; and this check might be indefinitely increased, though this is more to be hoped for than expected, by the weak in body or mind refraining from marriage.” Charles Darwin, The Descent of Man. So all the screams of the MSM about dropping consumer spending are just a smoke screen. In the oligarchic republic which the USA represents, consumption is heavily shifted to the top 20% and as such is much less dependent on the conditions of the economy. And the top 20% can afford $8 per gallon gas (the European price) without any problems.
John Barkley Rosser, Jr., with Marina V. Rosser and Ehsan Ahmed, argued for a two-way positive link between income inequality (economic inequality) and the size of the underground economy in a nation (Rosser, Rosser, and Ahmed, 2000).
Globally in 2005, the top fifth (20%) of the world accounted for 76.6% of total private consumption (the 20:80 Pareto rule); the poorest fifth, just 1.5%. I do not think the USA differs that much from the rest of the world.
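These shares explain the earlier point about the resilience of US consumption: when spending is this concentrated at the top, even a sharp cut by everyone else barely moves the aggregate. A back-of-the-envelope sketch, using the 76.6% figure quoted above and an assumed (hypothetical) 15% spending cut by the bottom four-fifths:

```python
# Back-of-the-envelope: if the top fifth's consumption holds steady in a
# downturn, how much does aggregate consumption fall when everyone else cuts?
top_share = 0.766           # top fifth's share of private consumption (quoted above)
bottom_share = 1 - top_share  # ~23.4% for the remaining four-fifths

cut = 0.15                  # assumed: bottom 80% cut their spending by 15%
aggregate_drop = bottom_share * cut

print(f"aggregate consumption falls by only {aggregate_drop:.1%}")
```

With these numbers the aggregate falls by roughly 3.5%, which is why headline consumer spending can look stable even while most households are squeezed.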
### Citigroup Plutonomy Research reports
There were two famous Citigroup Plutonomy research reports (2005 and 2006), featured in Capitalism: A Love Story. Here is how Yves Smith summarized the findings (in her post High Income Disparity Leads to Low Savings Rates):
On the one hand, the authors, Ajay Kapur, Niall Macleod, and Narendra Singh, get some credit for addressing a topic surprisingly ignored by mainstream economists. There have been some noteworthy efforts to measure the increase in concentration of income and wealth in the US, most notably by Thomas Piketty and Emmanuel Saez. But while there have been some efforts to dispute their findings (that the rich, particularly the top 1%, have gotten relatively MUCH richer in the last 20 years), for the most part discussions of what to make of it (at least in the US) have rapidly descended into theological debates. One camp laments the fall in economic mobility (a predictable side effect), the corrosive impact of perceived unfairness, and the public health costs (even the richest in high income disparity countries suffer from shortened life spans). The other camp tends to focus on the Darwinian aspects, that rising income disparity is the result of a vibrant, open economy, and that the higher growth rates that allegedly result will lift all workers.
Yet as far as I can tell, there has been virtually no discussion of the macroeconomic effects of rising income and wealth disparities, or of what the implications for investment strategies might be. One interesting effect is that with rising inequality the share of "guard labor" grows very quickly, and that puts an upper limit on the further growth of inequality (half of the citizens cannot be guards protecting a few billionaires from the other half).
Now the fact that the Citi team asked a worthwhile question does not mean they came up with a sound answer. In fact, the reports are almost ludicrously funny in the way they attempt to depict what they call plutonomy not merely as a tradeable trend (as in leading to some useful investment ideas), but as a Brave New Economy development. I haven't seen such Panglossian prose since the most delirious days of the dot-com bubble:
We will posit that:
1) the world is dividing into two blocs – the plutonomies, where economic growth is powered by and largely consumed by the wealthy few, and the rest. Plutonomies have occurred before in sixteenth century Spain, in seventeenth century Holland, the Gilded Age and the Roaring Twenties in the U.S.
What are the common drivers of Plutonomy? Disruptive technology-driven productivity gains, creative financial innovation, capitalist-friendly cooperative governments, an international dimension of immigrants and overseas conquests invigorating wealth creation, the rule of law, and patenting inventions. Often these wealth waves involve great complexity, exploited best by the rich and educated of the time…..Most “Global Imbalances” (high current account deficits and low savings rates, high consumer debt levels in the Anglo-Saxon world, etc) that continue to (unprofitably) preoccupy the world’s intelligentsia look a lot less threatening when examined through the prism of plutonomy. The risk premium on equities that might derive from the dyspeptic “global imbalance” school is unwarranted – the earth is not going to be shaken off its axis, and sucked into the cosmos by these “imbalances”. The earth is being held up by the muscular arms of its entrepreneur-plutocrats, like it, or not..
Yves here. Translation: plutonomy is such a great thing that the entire stock market would be valued higher if everyone understood it. And the hoops the reports go through to defend it are impressive. The plutonomy countries (the notorious Anglo-Saxon model: the US, UK, Canada and Australia) even have unusually risk-seeking populations (and that is a Good Thing):
…a new, rather out-of-the box hypothesis suggests that dopamine differentials can explain differences in risk-taking between societies. John Mauldin, the author of “Bulls-Eye Investing” in an email last month cited this work. The thesis: Dopamine, a pleasure-inducing brain chemical, is linked with curiosity, adventure, entrepreneurship, and helps drive results in uncertain environments. Populations generally have about 2% of their members with high enough dopamine levels with the curiosity to emigrate. Ergo, immigrant nations like the U.S. and Canada, and increasingly the UK, have high dopamine-intensity populations.
Yves here. What happened to “Give me your tired, your poor/Your huddled masses yearning to breathe free/The wretched refuse of your teeming shore”? Were the Puritans a high dopamine population? Doubtful. How about the Irish emigration to the US, which peaked during its great famine?
Despite a good deal of romanticization standing in for analysis, the report does have one intriguing, and well documented, finding: that the plutonomies have low savings rates. Consider a fictional pep rally chant:
We’re from Greenwich
We’re invincible
Living off our income
Never touch the principal
Think about that. If you are rich, you can afford to spend all your income. You don’t need to save, because your existing wealth provides you with a more than sufficient cushion.
The ramifications when you have a high wealth concentration are profound. From the October 2005 report:
In a plutonomy, the rich drop their savings rate, consume a larger fraction of their bloated, very large share of the economy. This behavior overshadows the decisions of everybody else. The behavior of the exceptionally rich drives the national numbers – the “appallingly low” overall savings rates, the “over-extended consumer”, and the “unsustainable” current accounts that accompany this phenomenon….
Feeling wealthier, the rich decide to consume a part of their capital gains right away. In other words, they save less from their income, the well-known wealth effect. The key point though is that this new lower savings rate is applied to their newer massive income. Remember they got a much bigger chunk of the economy, that’s how it became a plutonomy. The consequent decline in absolute savings for them (and the country) is huge when this happens. They just account for too large a part of the national economy; even a small fall in their savings rate overwhelms the decisions of all the rest.
Yves here. This account rather cheerily dismisses the notion that there might be overextended consumers on the other end of the food chain. Unprecedented credit card delinquencies and mortgage defaults suggest otherwise. But behaviors on both ends of the income spectrum no doubt played into the low-savings dynamic: wealthy who spend heavily, and struggling average consumers who increasingly came to rely on borrowings to improve or merely maintain their lifestyle. And let us not forget: ordinary consumers were encouraged to monetize their home equity, so they actually aped the behavior of their betters, treating appreciated assets as savings. Before you chide people who did that as profligate (naive might be a better characterization), recall that none other than Ben Bernanke was untroubled by rising consumer debt levels because they also showed rising asset levels. Bernanke ignored the fact that debt needs to be serviced out of incomes, and households for the most part were not borrowing to acquire income-producing assets. So unless the rising tide of consumer debt was matched by rising incomes, this process was bound to come to an ugly end.
Also, under Bush the country definitely moved from oligarchy to plutocracy. Bush openly joked that the "haves and have-mores" were his base. The top 1% of earners have captured four-fifths of all new income.
An interesting question is whether an extremely unequal income distribution like the one we have now makes the broader society unstable, or whether the plebs are satisfied with "bread and circuses" (aka house, SUV, boat, Daytona 500 and 500 channels on cable) as long as loot from the other parts of the world keeps coming...
### What is the upper limit of inequality?
Martin Bento in his response to Risk Pollution, Market Failure & Social Justice — Crooked Timber made the following point:
Donald made a point I was going to. I would go a bit further though. It’s not clear to me that economic inequality is not desired for its own sake by some of the elite. After all, studies suggest that once you get past the level of income needed for a reasonably comfortable life – about $40K for a single person in the US – the quest for money is mostly about status. Meeting your needs is not necessarily zero sum, but status is: my status can only be higher than yours to the extent that yours is lower than mine. The more inequality there is, the more status differentiation there is. Of course, there are other sources of status than money, but I’m talking specifically about people who value money for the status it confers. This is in addition to the “Donner Party Conservatism” calls to make sure the incentives to work are as strong as possible (to be fair, I think tolerating some inequality for the sake of incentives is worthwhile, but we seem to be well beyond that).

For example, the USA is currently No. 3 in Gini-measured inequality (Yahoo, Oct 16, 2009), but the society is still reasonably stable:

Gini score: 40.8
GDP 2007 (US$ billions): 13,751.4
Share of income or expenditure (%):

• Poorest 10%: 1.9
• Richest 10%: 29.9
• Ratio of income or expenditure, share of top 10% to lowest 10%: 15.9
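For readers unfamiliar with how a Gini score like the one above is derived, it can be computed from a vector of incomes via the mean absolute difference. A minimal sketch for illustration only (the income vectors here are made up, and published national scores use survey-weighted methodologies; note the quoted 40.8 is on a 0–100 scale, i.e. a coefficient of 0.408):

```python
def gini(incomes):
    """Gini coefficient: 0 = perfect equality, approaching 1 = one person has everything."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Equivalent to summing all pairwise |x_i - x_j| differences,
    # but done in O(n log n) using the sorted order.
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

print(gini([1, 1, 1, 1]))      # perfectly equal society -> 0.0
print(gini([0, 0, 0, 100]))    # one person takes all income -> 0.75
```

The closer the coefficient gets to 1 (100 on the percentage scale), the more top-heavy the distribution.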
What is really surprising is how low the average American salary is: just $26,352, or ~$2,200 a month. This is approximately equal to $13 an hour. At the same time:

• There are roughly 150,000 households in the United States with a net worth of at least $20 million.
• In 2004, the last time the Fed provided data, there were 649,000 American households worth $10 million or more, a nearly 300 percent jump since 1992.
• More than 49,000 Americans are said to have more than $50 million.
• 125,000 more households in the $25 million to $50 million range.
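The salary conversions quoted above are easy to verify with simple arithmetic, assuming the conventional 2,080-hour work year (52 weeks × 40 hours); only the annual figure comes from the text:

```python
annual_salary = 26_352  # average American salary quoted above, USD/year

monthly = annual_salary / 12          # -> ~2,196, i.e. "about $2,200 a month"
hourly = annual_salary / (52 * 40)    # 2,080 work hours/year -> ~12.67, i.e. "~$13/hour"

print(f"monthly: ${monthly:,.0f}")
print(f"hourly: ${hourly:.2f}")
```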
Some interesting facts about the upper class (the top 1% of the US population). First of all, this is a pretty self-isolated group (a nation within a nation). They associate almost exclusively with members of their own social and economic standing; few members of the bottom 90% of Americans have ever even personally met a member of the upper class.
• In 1985 there were 13 US billionaires; today there are more than 1,000.
• The richest 1% of Americans (the upper class) have incomes over $380K and control more wealth than the bottom 90% of the U.S. population combined (they own 34.6% of all privately held wealth). The next 19% own 50%, which means that 20% of the people control 85% of total wealth, leaving only 15% to the bottom 80% (wage and salary workers).
• The earners in the top 1 percent of the income distribution make 20% of the money.
• Private jet travel is the fastest growing luxury market segment. Over 15% of all flights in the U.S. are by private jet. There are more than 1,000 daily private jet flights in key markets such as South Florida, New York and Los Angeles. (Elite Traveler Magazine)

Now about the top 400:

• 2007 was the first year that making Forbes magazine's list of the 400 richest Americans required more than $1 billion. (The cutoff was $1.3 billion.)
• In 2005, the top 400 earners in America collectively piled up $214 billion, more than the GDP of 149 nations.
Here are some interesting hypotheses about the effects of inequality on society:
• It's not the ratio of the poorest 10% to the richest 10% that matters; it's the ratio of the top 10% to the middle class. Without a strong middle class, democracy is not sustainable. The weakness of this hypothesis is that it does not specify what kind of democracy is unsustainable without a strong middle class. Also, Asian states have successfully demonstrated that societal progress is possible without, or more correctly with very little, democracy.
• As economic inequality grows beyond a certain level, nations might become increasingly politically unstable. The level of instability probably depends on the level of mass communication available in the society: ancient societies were much more unequal than any modern society and still were pretty stable. Supporting evidence includes the USSR: as mass communications dramatically increased due to personal computers and the Internet, the society lost internal stability despite the fact that the economic difficulties it experienced were far less prominent than in the past.
At some point the anger creates destructive tendencies in society that are self-sustaining no matter what police force is available to the state (like the nationalistic forces that blew up the USSR). In the meantime society experiences apathy and decline in all societal dimensions (mass alcoholism and hidden opposition to any productivity-raising initiatives in the USSR). At the same time the ruling elite becomes less and less intellectually astute (dominated by gerontocrats in the USSR) and at some point becomes pretty detached from reality ("let them eat cake").
• After inequality reaches a certain level (critical mass), society becomes permanently stratified and rigid, and that negatively affects the quality of the elite. Supporting evidence includes the USSR with its deteriorating "nomenklatura". Recent research drawing on a series of studies from Europe, the United States and Australia has concluded that among comparable countries, the United States has an unusually rigid social system and limited possibilities for social mobility.
• In a very unequal society, the people at the top have to spend a lot of time and energy keeping the lower classes obedient and productive. High inequality also changes the composition of labor in a very negative way, because it leads to an excess of what Samuel Bowles calls "guard labor." In a 2007 paper on the subject, Bowles and co-author Arjun Jayadev, an assistant professor at the University of Massachusetts, made an eye-opening finding: roughly 1 in 10 Americans is employed to keep fellow citizens in line and protect private wealth from would-be Robin Hoods.
Higher inequality is somewhat connected with imperial outreach. As Kevin de Bruxelles noted in a comment on What collapsing empire looks like (Glenn Greenwald, Salon.com):
I’m surprised a thoughtful guy like Glenn Greenwald would make such an unsubstantiated link between collapsing public services for American peasants and a collapse of America’s global (indirect) imperial realm. Is there really a historic link between the quality of a nation’s services to its citizens and its global power? If so the Scandinavian countries would have been ruling the world for the past fifty years. If anything there is probably a reverse correlation. None of the great historic imperial powers, such as the British, Roman, Spanish, Russian, Ottoman, Mongolian, Chinese, Islamic, or Persian, were associated with egalitarian living conditions for anyone outside of the elite. So from a historic point of view, the ability to divert resources away from the peasants and towards the national security state is a sign of elite power and should be seen as a sign of increased American imperial potential.
Now if America’s global power was still based on economic production then an argument could be made that closing libraries and cancelling the 12th grade would lower America’s power potential. But as we all know that is no longer the case and now America’s power is as the global consumer of excess production. Will a dumber peasantry consume even more? I think there is a good chance that the answer is yes.
Now a limit could be reached to how far the elite can lower their peasant’s standard of living if these changes actually resulted in civil disorder that demanded much energy for American elites to quell. But so far that is far from the case. Even a facile gesture such as voting for any other political party except the ruling Republicrats seems like a bridge too far for 95% of the peasants to attempt. No, the sad truth is that American elites, thanks to their exceptional ability to deliver an ever increasing amount of diverting bread and circuses, have plenty of room to further cut standards of living and are nowhere near reaching any limits.
What the reductions in economic and educational options will result in are higher quality volunteers for America’s security machinery, which again obviously raises America’s global power potential. This, along with an increasingly ruthless elite, should assure that into the medium term America’s powerful position will remain unchallenged. If one colors in blue on a world map all the countries under de facto indirect US control then one will start to realize the extent of US power. The only major countries outside of US control are Iran, North Korea, Syria, Cuba, and Venezuela. Iraq and Afghanistan are recent converts to the blue column but it is far from certain whether they will stay that way. American elites will resist to the bitter end any country falling from the blue category. But this colored world map is the best metric for judging US global power.
In the end it’s just wishful thinking to link the declining of the American peasant’s standard of living with a declining of the American elite’s global power. I wouldn’t be surprised to see this proven in an attack on Iran in the near future.
### High inequality and organized crime
Higher pay inequality feeds organized crime (and here we assume that banksters are different from organized crime, which is probably a very weak hypothesis ;-). That's why Peter Drucker was probably right: he thought that top execs shouldn't get more than 25 times the average salary in the company (which would cap it at around $2 million). I would suggest a metric based on a multiple of the average of the lower 50% of full-time jobs for a particular firm (for Wal-Mart that would be cashiers and cleaners, people who are living in Latin American-style poverty if they are single mothers, as many are). One of the particular strengths of a maximum wage based on the average of the lower 50% of salaries is that if senior managers want to increase their own pay, they have to increase that of the lower-paid employees too.

In a way, the financial industry itself became an organized crime. The notion of exorbitant wages prevalent in the financial industry (and, before it, pioneered in high-tech companies during the dot-com boom via stock options) is based on the idea that some people are at least a hundred times more productive than others. In some professions, like programming, this is true and such people do exist. But any sufficiently large company is about teamwork. No matter what job a person does and no matter how many hours they work, there is no possible way that a single individual will create a whole product. It's a team effort. That means that neither skill nor expertise nor intelligence can justify the payment of 200, 300 or even 400 times the wages of the lowest-paid 20% of workers in any large organization. This is especially questionable for financial professionals, because by and large they are engaged in non-productive, often harmful-to-society redistribution activities, the same activities that organized crime performs.
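The pay-cap idea described above (a maximum executive wage tied to a multiple of the average pay of the lower-paid half of the workforce) is easy to state in code. A minimal sketch; the salary list and the 25× multiple are purely illustrative figures, not data from any real firm:

```python
def executive_pay_cap(salaries, multiple=25):
    """Cap = multiple * average salary of the lower-paid 50% of employees.

    Tying the cap to the bottom half means executives can only raise
    their own ceiling by raising the pay of the lower-paid employees,
    which is exactly the incentive the text describes.
    """
    xs = sorted(salaries)
    lower_half = xs[: max(1, len(xs) // 2)]
    return multiple * sum(lower_half) / len(lower_half)

# Hypothetical firm: the bottom-half jobs average $22,000/year,
# so the executive pay cap comes out at $550,000.
staff = [18_000, 20_000, 24_000, 26_000, 60_000, 90_000, 150_000, 400_000]
print(executive_pay_cap(staff))  # 550000.0
```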
Moreover, modern traders actually play a tremendously destructive role, as the subprime crisis (and before it the savings and loan debacle) aptly demonstrated, which makes them indistinguishable in this societal role from cocaine pushers on the streets. Drucker's views on the subject are probably worth revisiting. Rick Wartzman wrote in his Business Week article "Put a Cap on CEO Pay" about "those who understand that what comes with their authority is the weight of responsibility, not 'the mantle of privilege,'" as writer and editor Thomas Stewart described Drucker's view. It's their job "to do what is right for the enterprise—not for shareholders alone, and certainly not for themselves alone."

Large pay also attracts sociopathic personalities. Sociopathic personalities at the top of modern organizations are another important but rarely discussed danger. "I'm not talking about the bitter feelings of the people on the plant floor," Drucker told a reporter in 2004. "They're convinced that their bosses are crooks anyway. It's the mid-level management that is incredibly disillusioned" by CEO compensation that seems to have no bounds. This is especially true, Drucker explained in an earlier interview, when CEOs pocket huge sums while laying off workers. That kind of action, he said, is "morally unforgivable." There can be exceptions, but they should be in middle management, not in top management ranks.

Put it all together, and the picture becomes really discouraging. We have an ill-informed or misinformed electorate, politicians who gleefully add to the misinformation, watchdogs who are afraid to bark, and guards on each and every corner. The mousetrap is complete.

### Recommended Books

#### Winner-Take-All Politics: How Washington Made the Rich Richer -- and Turned Its Back on the Middle Class, by Jacob S. Hacker and Paul Pierson

Henry J. Farrell, "Transforming American politics," September 16, 2010

This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover)

This is a transformative book. It's the best book on American politics that I've read since Rick Perlstein's Before the Storm. Not all of it is original (the authors seek to synthesize others' work as well as present their own, but provide due credit where credit is due). Not all of its arguments are fully supported (the authors provide a strong circumstantial case to support their argument, but don't have smoking-gun evidence on many of the relevant causal relations). But it should transform the ways in which we think about and debate the political economy of the US.

The underlying argument is straightforward. The sources of American economic inequality are largely political - the result of deliberate political decisions to shape markets in ways that benefit the already-privileged at the expense of a more-or-less unaware public. The authors weave a historical narrative which Kevin Drum (who says the same things that I am saying about the book's importance) summarizes cogently. This is not necessarily original - a lot of leftwing and left-of-center writers have been making similar claims for a long time. What is new is both the specific evidence that the authors use, and their conscious and deliberate effort to reframe what is important about American politics.

First - the evidence. Hacker and Pierson draw on work by economists like Piketty and Saez on the substantial growth in US inequality (and on comparisons between the US and other countries), but argue that many of the explanations preferred by economists (the effects of technological change on demand for skills) simply don't explain what is going on.
First, they do not explain why inequality is so top-heavy - that is, why so many of the economic benefits go to a tiny, tiny minority of individuals among those with apparently similar skills. Second, they do not explain cross-national variation - why the differences in the level of inequality among advanced industrialized countries, all of which have gone through more-or-less similar technological shocks, are so stark. While Hacker and Pierson agree that technological change is part of the story, they suggest that the ways in which this is channeled in different national contexts is crucial. And it is here that politics plays a key role.

Many economists are skeptical that politics explains the outcome, suggesting that conventional forms of political intervention are not big enough to have such dramatic consequences. Hacker and Pierson's reply implicitly points to a blind spot of many economists - they argue that markets are not 'natural,' but instead are constituted by government policy and political institutions. If institutions are designed one way, they result in one form of market activity, whereas if they are designed another way, they will result in very different outcomes. Hence, results that appear like 'natural' market operations to a neo-classical economist may in fact be the result of political decisions, or indeed of deliberate political inaction. Hacker and Pierson cite e.g. the decision of the Clinton administration not to police derivatives as an example of how political coalitions may block reforms in ways that have dramatic economic consequences.

Hence, Hacker and Pierson turn to the lessons of ongoing political science research. This is both a strength and a weakness. I'll talk about the weakness below - but I found the account of the current research convincing, readable and accurate. It builds on both Hacker and Pierson's own work and the work of others (e.g. the revisionist account of American party structures from Zaller et al.
and the work of Bartels). This original body of work is not written in ways that make it easily accessible to non-professionals - while Bartels' book was both excellent and influential, it was not an easy read. Winner-Take-All Politics pulls off the tricky task of both presenting the key arguments of the underlying work without distorting them and integrating them into a highly readable narrative.

As noted above, the book sets out (in my view quite successfully) to reframe how we should think about American politics. It downplays the importance of electoral politics, without dismissing it, in favor of a focus on policy-setting, institutions, and organization.

• First and most important - policy-setting. Hacker and Pierson argue that too many books on US politics focus on the electoral circus. Instead, they should be focusing on the politics of policy-setting. Government is important, after all, because it makes policy decisions which affect people's lives. While elections clearly play an important role in determining who can set policy, they are not the only moment of policy choice, nor necessarily the most important. The actual processes through which policy gets made are poorly understood by the public, in part because the media is not interested in them (in Hacker and Pierson's words, "[f]or the media, governing often seems like something that happens in the off-season").

• And to understand the actual processes of policy-making, we need to understand institutions. Institutions make it more or less easy to get policy through the system, by shaping veto points. If one wants to explain why inequality happens, one needs to look not only at the decisions which are made, but the decisions which are not made, because they are successfully opposed by parties or interest groups. Institutional rules provide actors with opportunities both to try and get policies that they want through the system and to stymie policies that they do not want to see enacted.
Most obviously in the current administration, the existence of the filibuster supermajority requirement, and the willingness of the Republican party to use it for every significant piece of legislation that it can be applied to, means that we are seeing policy change through "drift." Over time, policies become increasingly disconnected from their original purposes, or actors find loopholes or ambiguities through which they can subvert the intention of a policy (for example, the favorable tax regime under which hedge fund managers are able to treat their income at a low tax rate). If it is impossible to rectify policies to deal with these problems, then drift leads to policy change - Hacker and Pierson suggest that it is one of the most important forms of such change in the US.

• Finally - the role of organizations. Hacker and Pierson suggest that organizations play a key role in pushing through policy change (and a very important role in elections too). They typically trump voters (who lack information, are myopic, are not focused on the long term) in shaping policy decisions. Here, it is important that the organizational landscape of the US is dramatically skewed. There are many very influential organizations pushing the interests of business and of the rich. Politicians on both sides tend to pay a lot of attention to them, because of the resources that they have. There are far fewer - and weaker - organizations on the other side of the fight, especially given the continuing decline of unions (which has been hastened by policy decisions taken and not taken by Republicans and conservative Democrats).

In Hacker and Pierson's account, these three together account for the systematic political bias towards greater inequality. In simplified form: organizations - and battles between organizations over policy as well as elections - are the structuring conflicts of American politics.
The interests of the rich are represented by far more powerful organizations than the interests of the poor and middle class. The institutions of the US provide these organizations and their political allies with a variety of tools to promote new policies that reshape markets in their interests. This account is in some ways neo-Galbraithian (Hacker and Pierson refer in passing to the notion of 'countervailing powers'). But while it lacks Galbraith's magisterial and mellifluous prose style, it is much better than he was on the details.

Even so (and here begin the criticisms) - it is not detailed enough. The authors set the book up as a whodunit: who or what is responsible for the gross inequalities of American economic life? They show that the other major suspects have decent alibis (they may inadvertently have helped the culprit, but they did not carry out the crime itself). They show that their preferred culprit had the motive and, apparently, the means. They find good circumstantial evidence that he did it. But they do not find a smoking gun. For me, the culprit (the American political system) is like OJ. As matters stand, I'm pretty sure that he committed the crime. But I'm not sure that he could be convicted in a court of law, and I could be convinced that I was wrong, if major new exculpatory evidence was uncovered.

The lack of any smoking gun (or, alternatively, good evidence against a smoking gun) is the direct result of a major failure of American intellectual life. As the authors observe elsewhere, there is no field of American political economy. Economists have typically treated the economy as non-political. Political scientists have typically not concerned themselves with the American economy. There are recent efforts to change this, coming from economists like Paul Krugman and political scientists like Larry Bartels, but they are still in their infancy.
We do not have the kinds of detailed and systematic accounts of the relationship between political institutions and economic order for the US that we have e.g. for most mainland European countries. We will need a decade or more of research to build the foundations of one. Hence, while Hacker and Pierson show that political science can get us a large part of the way, it cannot get us as far as they would like us to go, for the simple reason that political science is not well developed enough yet. We can identify the causal mechanisms intervening between some specific political decisions and non-decisions and observed outcomes in the economy. We cannot yet provide a really satisfactory account of how these particular mechanisms work across a wider variety of settings and hence produce the general forms of inequality that they point to. Nor do we yet have a really good account of the precise interactions between these mechanisms and other mechanisms.

None of this is to discount the importance of this book. If it has the impact it deserves, it will transform American public arguments about politics and policymaking. I cannot see how someone who was fair-minded could come away from reading this book and not be convinced that politics plays a key role in the enormous economic inequality that we see. And even if it is aimed at a general audience, it also challenges academics and researchers in economics, political science and economic sociology both to re-examine their assumptions about how economics and politics work, and to figure out ways to better engage with the key political debates of our time as Hacker and Pierson have done. If you can, buy it.
Great Faulkner's Ghost (Washington, DC)

This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover)

Many people have observed that American politics and the American economy reached some kind of turning point around 1980, which conveniently marks the election of Ronald Reagan. Some also pointed to other factors such as the deregulation of stock brokerage commissions in 1975 and the high inflation of the 1970s. Other analysts have put the turning point back in 1968, when Richard Nixon became President on the back of a wave of white, middle-class resentment against the 1960s. Hacker and Pierson, however, point the finger at the 1970s. As they describe in Chapter 4, the Nixon presidency saw the high-water mark of the regulatory state; the demise of traditional liberalism occurred during the Carter administration, despite Democratic control of Washington, when highly organized business interests were able to torpedo the Democratic agenda and begin the era of cutting taxes for the rich that apparently has not yet ended today.

Why then? Not, as popular commentary would have it, because public opinion shifted. Hacker and Pierson cite studies showing that public opinion on issues such as inequality has not shifted over the past thirty years; most people still think society is too unequal and that taxes should be used to reduce inequality. What has shifted is that Congressmen are now much more receptive to the opinions of the rich, and there is actually a negative correlation between their positions and the preferences of their poor constituents (p. 111). Citing Martin Gilens, they write, "When well-off people strongly supported a policy change, it had almost three times the chance of becoming law as when they strongly opposed it. When median-income people strongly supported a policy change, it had hardly any greater chance of becoming law than when they strongly opposed it" (p. 112).
In other words, it isn't public opinion, or the median voter, that matters; it's what the rich want. That shift occurred in the 1970s because businesses and the super-rich began a process of political organization in the early 1970s that enabled them to pool their wealth and contacts to achieve dominant political influence (described in Chapter 5). To take one of the many statistics they provide, the number of companies with registered lobbyists in Washington grew from 175 in 1971 to nearly 2,500 in 1982 (p. 118). Money pouring into lobbying firms, political campaigns, and ideological think tanks created the organizational muscle that gave the Republicans a formidable institutional advantage by the 1980s. The Democrats have only reduced that advantage in the past two decades by becoming more like Republicans-more business-friendly, more anti-tax, and more dependent on money from the super-rich. And that dependency has severely limited both their ability and their desire to fight back on behalf of the middle class (let alone the poor), which has few defenders in Washington. At a high level, the lesson of Winner-Take-All Politics is similar to that of 13 Bankers: when looking at economic phenomena, be they the financial crisis or the vast increase in inequality of the past thirty years, it's politics that matters, not just abstract economic forces. One of the singular victories of the rich has been convincing the rest of us that their disproportionate success has been due to abstract economic forces beyond anyone's control (technology, globalization, etc.), not old-fashioned power politics. Hopefully the financial crisis and the recession that has ended only on paper (if that) will provide the opportunity to teach people that there is no such thing as abstract economic forces; instead, there are different groups using the political system to fight for larger shares of society's wealth. And one group has been winning for over thirty years. 
Citizen John (USA) In Winner-Take-All Politics, two political science professors explain what caused the Middle Class to become vulnerable. Understanding this phenomenon is the Holy Grail of contemporary economics in the U.S. Some may feel this book is just as polarizing as the current state of politics and media in America. The decades-long decline in income taxes of wealthy individuals is cited in detail. Wage earners are usually subjected to the FICA taxes against all their ordinary income (all or almost their entire total income). But the top wealthy Americans may have only a small percentage (or none) of their income subjected to FICA taxes. Thus Warren Buffett announced that he pays a lower tax rate than his secretary. Buffett has blamed income inequality for "poisoning democracy." When you search the Net for Buffett quotes on inequality, you get a lot of results showing how controversial he became for stating the obvious. Drawing attention to the inequity of the tax regime won him powerful enemies. Those same people are not going to like the authors for writing Winner-Take-All. They say these political science people are condescending because they presume to tell people their political interests. Many studies of poverty show how economic and political policies generally favor the rich throughout the world, some of which are cited in this book. Military spending and financial bailouts in particular favor the wealthy. Authors Jacob Hacker and Paul Pierson document a long U.S. policy trend favoring wealthy Americans. This trend resulted in diminished middle class access to quality healthcare and education, making it harder to keep up with the wealthy in relative terms. Further, once people have lost basic foundations of security, they are less willing and able to take on more risk in terms of investing or starting a business. The rise of special interests has been at the expense of the middle class, according to the authors.
Former President Carter talked about this and was ridiculed. Since then government has grown further from most of us. Even federal employees are not like most of us anymore. In its August 10, 2010 issue, USA Today discussed government salaries: "At a time when workers' pay and benefits have stagnated, federal employees' average compensation has grown to more than double what private sector workers earn, a USA TODAY analysis finds." An excellent documentary showing how difficult it is to address income inequality is One Percent, by Jamie Johnson of the Johnson & Johnson family. Collapse: How Societies Choose to Fail or Succeed, by Pulitzer Prize-winner Jared Diamond, shows examples of what can happen when a society disregards a coming disaster until too late. I hope that Winner-Take-All will prompt people to demand more of elected officials and to arrest the growing income gap for the sake of our democracy. Michael Emmett Brady "mandmbrady" (Bellflower, California, United States) 4.5 stars-Wall Street speculators control both parties, September 19, 2010 This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover) This book basically argues that Wall Street controls both political parties through the use of massive campaign contributions and lobbyists who buy off both the Republicans and Democrats in the White House, Senate and House. This is essentially correct but obvious. Anyone can go back to the 1976 Jimmy Carter campaign and simply verify that the majority of his campaign funds and advisors came from Wall Street. This identical conclusion also holds with respect to Ronald Reagan, George H W Bush, Bill Clinton, George W Bush and Barack Obama. The only Presidents/Presidential candidates not dominated by Wall Street since 1976 were Gerald Ford, Walter Mondale, Ross Perot, Ralph Nader and Pat Buchanan.
For instance, it is common knowledge to anyone who carefully checks to see where the money is coming from that Wall Street financiers, hedge funds, private equity firms and giant commercial banks are calling the shots. For example, one could simply read the July 9, 2007 issue of FORTUNE magazine to discover who the major backers of John McCain, Hillary Clinton and Barack Obama were. One could also have read Business Week (2-25-2008) or the Los Angeles Times of 3-21-2008. Through February, 2008 the major donors to the McCain campaign were 1) Merrill Lynch, 2) Citigroup, 3) Goldman Sachs, 4) J P Morgan Chase and 5) Credit Suisse. The major donors to the Hillary Clinton campaign were 1) Goldman Sachs, 2) Morgan Stanley, 3) Citigroup, 4) Lehman Brothers and 5) J P Morgan Chase. Guess who were the major donors to the Obama campaign? If you guessed 1) Goldman Sachs, 2) UBS Ag, 3) J P Morgan Chase, 4) Lehman Brothers and 5) Citigroup, then you are correct. It didn't matter who became President - Hillary Clinton, Barack Obama or John McCain. All three had been thoroughly vetted by Wall Street. The campaign staffs of all three candidates, especially their economic and finance advisors, were all Wall Street connected. Wall Street would have been bailed out regardless of which party won the 2008 election. Obama is not going to change anything substantially in the financial markets. Neither is Rep. Barney Frank, Sen. Chris Dodd, Sen. Kerry or Sen. Schumer, etc. Nor is any Republican candidate going to make any changes, simply because the Republican Party is dominated even more so by Wall Street (100%) than the Democratic Party (80%). The logical solution would be to support a Third Party candidate, for example, Ross Perot. One aspect of the book is deficient.
True conservatives like Ross Perot, Pat Buchanan and Lou Dobbs have been warning about the grave dangers of hollowing out and downsizing the American Manufacturing-Industrial sector, with the consequent offshoring and/or loss of many millions of American jobs, for about 20 years at the same time that the "financial services" sector has exploded from 3% of the total service sector in 1972 to just under 40% by 2007. This is what is causing the great shrinkage in the middle class in America. Matt Milholland (California) An Important Book, October 9, 2010 This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover) This is a phenomenal book and everyone interested in how American politics works (or more accurately, doesn't work) should pick it up. It's both really smart and really accessible to a lay audience, which is rare for a political science book. Extreme economic inequality and the near paralysis of our governing institutions has led to a status quo that is almost entirely indifferent to the needs of working families. Hacker & Pierson chronicle the rise of this corrupt system and the dual, yet distinct, roles the Republican and Democratic Parties have played in abetting it. Seriously, it's top-notch. Read this book. Loyd E. Eskildson "Pragmatist" (Phoenix, AZ) 4.0 out of 5 stars Interesting and Timely, but Also Off-Base in Some Regards, September 15, 2010 This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover) The thirty-eight biggest Wall Street companies earned $140 billion in 2009, a record that all taxpayers who contributed to their bailouts can be proud of. Among those, Goldman Sachs paid its employees an average $600,000, also a record, and at least partially attributable to our bailout of AIG, which promptly gave much of the money to Goldman.
Prior to that, the top 25 hedge fund managers earned an average of $892 million in 2007. "Winner-Take-All Politics" is framed as a detective story about how we got to inequality levels where the top 300,000 (0.1%) receive over 20% of national income, vs. 13.5% for the bottom 180 million (60% of the population).
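The per-person gap implied by those two shares is worth making explicit. A quick back-of-the-envelope calculation, using only the group sizes and income shares quoted in the review above:

```python
# Back-of-the-envelope check of the shares quoted above: the top 0.1% of
# the population takes 20% of national income, the bottom 60% takes 13.5%.
# Dividing each group's income share by its population share gives the
# ratio of average incomes implied by those two figures.

top_share, top_pop = 0.20, 0.001        # 20% of income, 0.1% of people
bottom_share, bottom_pop = 0.135, 0.60  # 13.5% of income, 60% of people

top_avg = top_share / top_pop           # income share per unit of population
bottom_avg = bottom_share / bottom_pop

ratio = top_avg / bottom_avg
print(f"average income in the top 0.1% is ~{ratio:.0f}x the bottom-60% average")
```

So the statistic amounts to saying the average member of the top 0.1% receives roughly 900 times the income of the average member of the bottom 60%.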
Between 1947 and 1973, real family median income essentially doubled, and the growth percentage was virtually the same for all income levels. In the mid-1970s, however, economic inequality began to increase sharply and middle incomes lagged. Increased female workforce participation rates and more overtime helped cushion the stagnation or decline for many (they also increased the risk of layoffs per family), then growing credit card debt shielded many families from reality. Unfortunately, expectations of stable full-time employment also began shrinking; part-time, temporary, and economic risk-bearing work (e.g. taxi drivers leasing vehicles and paying the fuel costs; deliverymen 'buying' routes and trucks) increased, workers covered by employer-sponsored health insurance fell from 69% in 1979 to 56% in 2004, and retirement coverage was either dropped entirely or mostly converted to much less valuable fixed-contribution plans for private sector employees. Some exceptions have occurred that benefit the middle and lower-income segments - the Earned Income Tax Credit (EITC), Medicaid, and Medicare were initiated or expanded - but these have not blunted the overall trend. Conversely, welfare reform, incarceration rates rising 6X between 1970 and 2000, bankruptcy reform, and increased tax audits for EITC recipients have added to their burden; Social Security is being challenged again (despite stock market declines, enormous transition costs, and vastly increased overhead costs and fraud opportunity), and 2009's universal health care reform will be aggressively challenged both in the courts and in Washington.
Authors Hacker and Pierson contend that growing inequality is not the 'natural' product of market rewards, but mostly the artificial result of deliberate government policies, strongly influenced by industry lobbyists and donations, new and expanded conservative 'think tanks,' and inadequate media coverage that focused more on the 'horse race' aspects of various initiatives than their content and impact. First came the capital gains tax cuts under President Carter, then deregulation of the financial industry under Clinton, the Bush tax cuts of 2001 and 2003, and the financial bailouts in 2008-09. The authors contend that if the 1970 tax structure remained today, the top gains would be considerably less.
But what about the fact that in 1965 CEOs of large corporations only earned about 24X the average worker, compared to 300+X now? Hacker and Pierson largely ignore the role of board-room politics and malfeasance that have mostly allowed managers to serve themselves with payment without regard to performance and out of proportion to other nations. In 2006, the 20 highest-paid European managers made an average $12.5 million, only one-third as much as the 20 highest-earning U.S. executives. Yet, the Europeans led larger firms - $65.5 billion in sales vs. $46.5 billion for the U.S. Asian CEOs commonly make only 10X-15X what their base level employees make. Jiang Jianqing, Chairman of the Industrial and Commercial Bank of China (world's largest), made $234,700 in 2008, less than 2% of the $19.6 million awarded Jamie Dimon, CEO of the world's fourth-largest bank, JPMorgan Chase. "Winner-Take-All Politics" also provides readers with the composition of 2004 taxpayers in the top 0.1% of earners (including capital gains). Non-finance executives comprised 41% of the group, finance professionals 18.4%, lawyers 6%, real estate personages 5%, physicians 4%, entrepreneurs 4%, and arts and sports stars 3%. The authors assert that this shows education and skills levels are not the great dividers most everyone credits them to be - the vast majority of Americans losing ground to the super-rich includes many well-educated individuals, while the super-rich includes many without a college education (Sheldon Adelson, Paul Allen, Edgar Bronfman, Jack Kent Cook, Michael Dell, Walt Disney, Larry Ellison, Bill Gates, Wayne Huizenga, Steve Jobs, Rush Limbaugh, Steve Wozniak, and Mark Zuckerberg).
Authors Hacker and Pierson are political science professors and it is understandable that they emphasize political causes (PACs, greater recruitment of evangelical voters, lobbying - e.g. $500 million on health care lobbying in 2009, filibusters that allow senators representing just 10% of the population to stop legislation and make the other side look incompetent, etc.) for today's income inequality. However, their claim that foreign trade is "largely innocent" as a cause is neither substantiated nor logical. Foreign trade as practiced today pads corporate profits and executive bonuses while destroying/threatening millions of American jobs and lowering/holding down the incomes of those affected. Worse yet, the authors don't even mention the impact of millions of illegal aliens depressing wage rates while taking jobs from Americans, nor do they address the canard that tax cuts for and spending by the super-wealthy are essential to our economic success (refuted by Moody's Analytics and Austan Goolsbee, Business Week - 9/13/2010). They're also annoyingly biased towards unions, ignoring their constant strikes and abuses in the 1960s and 1970s, major contributions to G.M., Chrysler, and legacy airline bankruptcies, and current school district, local, and state financial difficulties.
Bottom-Line: It is a sad commentary on the American political system that growing and record levels of inequality are being met by populist backlash against income redistribution and expanding trust in government, currently evidenced by those supporting extending tax cuts for the rich and railing against reforming health care to reduce expenditures from 17.3+% of GDP to more internationally competitive levels (4-6%) while improving patient outcomes. "Winner-Take-All Politics" is interesting reading, provides some essential data, and points out some evidence of the inadequacy of many voters. However, the authors miss the 'elephant in the room' - American-style democracy is not viable when at most 10% of citizens are 'proficient' per functional literacy tests ([...]), and only a small proportion of them have the time and access required to sift through the flood of half-truths, lies, and irrelevancies to objectively evaluate 2,000+ page bills and other political activity. (Ideology-dominated economic professionals and short-term thinking human rights advocates are two others.)
Brian Kodi
"Americans live in Russia, but they think they live in Sweden." - Chrystia Freeland, March 26, 2011 See all my reviews
This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover)
No one should doubt the rising income inequality in America, which the authors trace back to the late 1970s, the latter part of Carter's presidency, in what they call the "30 Year War". Zachary Roth, in a March 4th Time magazine article, stated "A slew of conservative economists of unimpeachable academic credentials--including Martin Feldstein of Harvard, Glenn Hubbard, who was President Bush's top economic adviser, and Federal Reserve chair Ben Bernanke--have all acknowledged that inequality is on the rise."
And why should we care that most of the after tax income growth since 30 years ago has gone the way of the richest Americans in a "winner-take-all" economy? Because as Supreme Court justice biographer Melvin Urofsky stated, "in a democratic society the existence of large centers of private power is dangerous to the continuing vitality of a free people." (p. 81) Because if unchecked, a new economic aristocracy may replace the old hereditary aristocracy America's Founders fought to defeat (p. 298). Because unequal societies are unhappy societies, and inequality can foster individual resentment that may lead to a pervasive decline in civility and erosion of culture.
And why should we be concerned that this trend in rising inequality may not experience the period of renewal the authors are optimistic about? Because unlike the shock of the 1930s' Great Depression that served as the impetus for the politics of middle class democracy, the potential shockwaves of the 2008 Great Recession were tempered by massive government stimulus, resulting in no meaningful financial reform, and an extension of the tax cuts for the wealthy. And because of the lottery mentality of a large swath of the population which opposes tax increases on the rich. One day, they or their children too can share in the American dream. According to an October 2000 Time-CNN poll, 19 percent of Americans were convinced they belonged to the richest 1 percent. Another 20 percent thought they'd make the rank of the top 1 percent at some point in their lives. That's quite a turnover in the top 1 percent category to accommodate 20 percent of the population passing through.
Mr. Hacker and Mr. Pierson have put together powerful arguments on the root causes of income inequality in the U.S., its political and economic ramifications, and to a lesser extent, a roadmap to returning democracy to the masses. This is an eye opening and disturbing, yet informative book, even for readers who may disagree with their opinions.
J. Strauss (NYC)
3.0 out of 5 stars great history of big money influence on policy but needs more analysis of the ways policy affects the winner-take-all economy, September 21, 2011
This review is from: Winner-Take-All Politics: How Washington Made the Rich Richer--and Turned Its Back on the Middle Class (Hardcover)
Writing:
A bit hokey and repetitive for the first couple chapters. Much better after that. Stick with it if you're interested in the subject.
Content:
This book does a very good job explaining how and why certain special interest groups (notably those that represent the wealthiest .1%) have come to have such a stranglehold on government, particularly Congress. I come away with a clear understanding of how the wealthiest citizens are able to exert their influence over legislative policy and enforcement at the federal level.
What I would have liked more of are better explanations of the mechanisms through which government policies exacerbate the winner-take-all economy. Tax policy (rates and loopholes) is the most obvious answer, and the book provides plenty of stats on the regression of tax policy over the past 30 years.
But complicated, interesting, and largely missing from public discourse is why PRE-TAX incomes have become so much more radically skewed during that time. This is certainly touched on - the authors are deliberate in saying it's not JUST tax policy that's contributing to increased inequality - but I would've liked much more analysis of the other policy-driven factors. "Deregulation" is too general an explanation to paint a clear picture.
The authors make it clear that they believe the increasing divide in pre-tax incomes (the winner-take-all economy) is not the inevitable result of technological changes and of differences in education ("the usual suspects"), but of policy decisions made at the state and, especially, federal levels. Personally, I wasn't fully convinced that technological change has little or nothing to do with the skew (though I agree that while education goes a long way toward explaining the gap between poor and middle class, it doesn't explain much of the gap between middle class and super rich). But I do believe, as they do, that public policy plays a large role in influencing the extent of inequality in pre-tax incomes, even beyond more obvious market-impacting factors like union influence, and mandates including the minimum wage, restrictions on pollution, workplace safety and fairness laws, etc.
Off the top of my head, here are some regulatory issues that affect market outcomes and can influence the extent of winner-take-all effects in the marketplace (a few of these may have been mentioned in the book, but none were discussed in detail):
• the enforcement of antitrust laws and other means of encouraging pro-consumer competition in the marketplace, such as cracking down on explicit or implicit price-fixing and collusion schemes [concentration of market share and/or collusion will certainly contribute to winner-take-all effects at the expense of consumers, small businesses and the dynamics of the economy as a whole.]
• regulations that seek to minimize conflicts of interest in the corporate world, particularly those with far-reaching effects [i.e. some policy makers and regulators are in a position to decide whether it makes sense for bond ratings agencies with the authority they have over so many investment decisions to be paid, in negotiable fashion, by the companies whose bonds they rate. i'd wager the status quo exacerbates winner-take-all and not in a way that rewards the right things - but i'd be glad to hear an intellectually honest counter-argument]
• net neutrality [should internet service providers be allowed to favor their corporate partners' websites to the point that eventually you'll no longer be able to publish a blog and expect that anyone will be able to access it expediently?]
• insurance regulation [should we rely on reputation threat alone to discourage insurers from stiffing their policyholders' legitimate claims? status quo we don't, but there are those who argue against regulation of insurers]
• broad macroeconomic goals, such as relative balance between imports and exports, or attempts to encourage educational institutions to help align workforce skills with projected job opportunities for instance - enforced preferably through various incentives rather than mandates [the U.S. isn't big on this at the moment but many other rich countries are, in varying forms]
• preferential treatment of small businesses to help them compete with "the big boys", thereby increasing competition in the market and job-creation
• preferential treatment of businesses who do various things deemed to be in the public interest
• securities law, including bans on insider trading, front-running, etc
• food safety and labeling laws
• allocation and extent of government-sponsored R&D in industries deemed important or potentially beneficial to the public
• restrictions on what can be bought and sold [almost no one would argue judge's decisions should be for sale to the highest bidder. how about cigarette sales to kids, should that be allowed? heroin to anyone? spots in the class of a competitive public university?]
And many more. I know regulatory issues like that play huge roles in the distribution of pre-tax "market" incomes, but I'd like to have a better understanding of how, and also to be better able to articulate how in response to those who seem to believe taxes (and perhaps obvious restrictions, such as on pollution or the minimum wage) are the only significant means through which governments influence wealth disparities.
There wasn't a whole lot of discussion of these or similar regulatory issues in the book. I would like to see another edition, or perhaps another book entirely, that does. Please let me know if you have any recommendations.
## Old News ;-)
Another day older and deeper in debt
Saint Peter don'tcha call me 'cause I can't go…
I owe my soul to the company store
-- "Sixteen Tons"
#### [Feb 15, 2019] Losing a job in your 50s is especially tough. Here are 3 steps to take when layoffs happen by Peter Dunn
##### "... Grab your bank statement, a marker, and a calculator. As much as you want to pretend its business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom. ..."
###### Feb 15, 2019 | finance.yahoo.com
... ... ...
Losing a job in your 50s is a devastating moment, especially if the job is connected to a long career ripe with upward mobility. As a frequent observer of this phenomenon, it's as scary and troublesome as unchecked credit card debt or an expensive chronic health condition. This is one of the many reasons why I believe our 50s can be the most challenging decade of our lives.
Assuming you can clear the mental challenges, the financial and administrative obstacles can leave you feeling like a Rube Goldberg machine.
Income, health insurance, life insurance, disability insurance, bills, expenses, short-term savings and retirement savings are all immediately important in the face of a job loss. Never mind your Parent PLUS loans, financially-dependent aging parents, and boomerang children (adult kids who live at home), which might all be lurking as well.
From the shocking moment a person learns their job is no longer their job, the word "triage" must flash in bright lights like an obnoxiously large sign in Times Square. This is more challenging than you might think. Like a pickpocket bumping into you right before he grabs your wallet, the distraction is the problem that takes your focus away from the real problem.
This is hard to do because of the emotion that arrives with the dirty deed. The mind immediately begins to race to sources of money and relief. And unfortunately that relief is often found in the wrong place.
The first thing you should do is identify the exact day your job income stops arriving. That's how much time you have to defuse the bomb. Your fuse may come in the form of a severance package, or work you've performed but haven't been paid for yet.
When do benefits kick in?
Next, and by next I mean five minutes later, explore your eligibility for unemployment benefits, and then file for them if you're able. However, in some states severance pay affects your immediate eligibility for unemployment benefits. In other words, you can't file for unemployment until your severance payments go away.
Assuming you can't just retire at this moment, which you likely can't, you must secure fresh employment income quickly. But quickly is relative to the length of your fuse. I've witnessed way too many people miscalculate the length and importance of their fuse. If you're able to get back to work quickly, the initial job loss plus severance ends up enhancing your financial life. If you take too much time, by your choice or that of the cosmos, boom.
The next move is much more hands-on, and must also be performed the day you find yourself without a job.
What nonessentials do I cut?
Grab your bank statement, a marker, and a calculator. As much as you want to pretend it's business as usual, you shouldn't. Identify expenses that don't make sense if you don't have a job. Circle them. Add them up. Resolve to eliminate them for the time being, and possibly permanently. While this won't necessarily lengthen your fuse, it could lessen the severity of a potential boom.
The idea of diving into your spending habits on the day you lose your job is no fun. But when else will you have such a powerful reason to do so? You won't. It's better than dipping into your assets to fund your current lifestyle. And that's where we'll pick it up the next time.
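The circle-and-add exercise the column describes is simple enough to sketch in a few lines. Everything below - the category names, the sample transactions, the amounts - is an illustrative assumption, not from the column; the point is only the mechanics of flagging job-dependent expenses and totaling them:

```python
# A minimal sketch of the expense triage described above: flag the
# expenses that don't make sense without a job, then add them up.
# The categories and the sample statement are made-up examples.

NONESSENTIAL = {"streaming", "dining out", "gym"}

def cuttable_total(transactions):
    """Sum the amounts whose category is flagged as nonessential.

    transactions: iterable of (description, category, amount) tuples.
    """
    return sum(amount for _, category, amount in transactions
               if category in NONESSENTIAL)

statement = [
    ("NETFLIX.COM",  "streaming",  15.99),
    ("KROGER",       "groceries",  82.40),
    ("CHEZ PIERRE",  "dining out", 120.00),
    ("GOLD'S GYM",   "gym",        45.00),
]

print(f"potential monthly cut: ${cuttable_total(statement):.2f}")
```

The output is the number the column asks you to compute with a marker: how much shorter your monthly burn gets if the circled items go away.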
We've covered day one. In my next column we will tackle day two and beyond.
Peter Dunn is an author, speaker and radio host, and he has a free podcast: "Million Dollar Plan." Have a question for Pete the Planner? Email him at AskPete@petetheplanner.com. The views and opinions expressed in this column are the author's and do not necessarily reflect those of USA TODAY.
#### [Feb 13, 2019] Microsoft patches 0-day vulnerabilities in IE and Exchange
##### It is unclear how long this vulnerability has existed, but this is pretty serious stuff that shows how Hillary's server could be hacked via Abedin's account. As Abedin's technical level was lower than zero, hacking into her home laptop would have been trivial.
###### Feb 13, 2019 | arstechnica.com
Microsoft also patched Exchange against a vulnerability that allowed remote attackers with little more than an unprivileged mailbox account to gain administrative control over the server. Dubbed PrivExchange, CVE-2019-0686 was publicly disclosed last month, along with proof-of-concept code that exploited it. In Tuesday's advisory, Microsoft officials said they haven't seen active exploits yet but that they were "likely."
#### [Feb 13, 2019] Death of the Public University Uncertain Futures for Higher Education in the Knowledge Economy
##### "... This assault on academic freedom by neoliberalism justifies itself by calling for "transparency" and "accountability" to the taxpayer and the public. But it operates used utter perversion of those terms. In the Neoliberal context, they mean "total surveillance" and "rampant rent-seeking. ..."
###### Feb 11, 2019 | www.amazon.com
skeptic, February 11, 2019
An eye-opening book, very important for any student or educator
This book is a collection of more than a dozen essays by various authors, but even the Introduction (Privatizing the Public University: Key Trends, Countertrends, and Alternatives) is worth the price of the book.
Trends in the neo-liberalization of university education are not new, but recently they have taken a more dangerous turn. And they are not easy to decipher, despite the fact that they greatly affect the life of each student and educator. In this sense this is really an eye-opening book.
In Europe, higher education was previously accessible for free or almost free, but for talented students only. Admission criteria were strict and checked via written and oral entrance exams on key subjects. Now the trend is to view the university as a business that gets customers, charges them exorbitant fees, and hands those customers a diploma at the end for their money, like hamburgers at McDonald's. Whether those degrees are worth the money charged, or are suitable for the particular student (many are "fake" degrees with little or no chance of leading to employment), is not the university's business. On the contrary, marketing is used to attract as many students as possible, and many of those students now remain in debt for a large part of their adult life.
In other words, the neoliberalization of the university in the USA creates new, now dominant trend -- the conversion of the university into for-profit diploma mills, which are essentially a new type of rent-seeking (and they even attract speculative financial capital and open scamsters, like was in case of "Trump University" ). Even old universities with more than a century history more and more resemble diploma mills.
This assault on academic freedom by neoliberalism justifies itself by calling for "transparency" and "accountability" to the taxpayer and the public. But it operates using an utter perversion of those terms. In the neoliberal context, they mean "total surveillance" and "rampant rent-seeking."
Neoliberalism has converted education from a public good into a personal investment in the future, a future conceived in terms of earning capacity. Since this is about your future earning potential, it seems only logical that you should take out a loan for a chance to increase it.
Significantly, in the same period per capita spending on prisons increased by 126 percent (Newfield 2008: 266). Between the 1970s and 1990s there was a 400 percent increase in tuition, room, and board charges at U.S. universities, and tuition costs have grown at about ten times the rate of family income (ibid.). What these instances highlight is not just the state's retreat from direct funding of higher education but also a calculated initiative to enable private companies to capture and profit from tax-funded student loans.
The other tendency is also alarming. Funds are now allocated to the institutions that perform best in what has become a fetishistic quest for ever-higher ratings. That creates a 'rankings arms race.' It has little or nothing to do with the quality of teaching at a particular university. On the contrary, curriculums have been "streamlined," and "ideologically charged" courses such as neoclassical economics are now required for graduation even in STEM specialties.

In the neoliberal university, professors are now under the iron heel of management, and various metrics have been invented to measure the "quality of teaching." Most of them are perverse, or can easily be perverted: when a measurement becomes a target, teachers start to focus their resources and activities primarily on what 'counts' rather than on their wider competencies, professional ethics and societal goals (see Kohn and Shore, this volume).

Administrative bloat and academic decline is another prominent feature of the neoliberal university. University presidents now view themselves as CEOs and want similar salaries. The same is true for the growing staff of university administrators. The recruitment of administrators has far outpaced the growth in the number of faculty – or even students. Meanwhile, universities claim to be struggling with budget crises that force them to cut permanent academic posts, and they make wide use of underpaid and overworked adjunct staff – the 'precariat', paid just a couple of thousand dollars per course and often existing on the edge of poverty, or in real poverty.

Money is now the key objective, and the mission has changed from a cultural one to a "for profit" business, including vast expenditure on advancing the prestige and competitiveness of the university as an end in itself. The ability to get grants is now an important criterion for getting tenure.
#### [Feb 12, 2019] Older Workers Need a Different Kind of Layoff A 60-year-old whose position is eliminated might be unable to find another job, but could retire if allowed early access to Medicare
##### "... One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers. ..."
###### Feb 12, 2019 | www.bloomberg.com
The proposed merger between SunTrust and BB&T makes sense for both firms -- which is why Wall Street sent both stocks higher on Thursday after the announcement. But employees of the two banks, especially older workers who are not yet retirement age, are understandably less enthused at the prospect of downsizing. In a nation with almost 37 million workers over the age of 55, the quandary of the SunTrust-BB&T workforce will become increasingly familiar across the U.S. economy.
But what's good for the firms isn't good for all of the workers. Older workers often struggle to get rehired as easily as younger workers. Age discrimination is a well-known problem in corporate America. What's a 60-year-old back office worker supposed to do if downsized in a merger? The BB&T-SunTrust prospect highlights the need for a new type of unemployment insurance for some of the workforce.
One policy might be treating unemployed older workers differently than younger workers. Giving them unemployment benefits for a longer period of time than younger workers would be one idea, as well as accelerating the age of Medicare eligibility for downsized employees over the age of 55. The latter idea would help younger workers as well, by encouraging older workers to accept buyout packages -- freeing up career opportunities for younger workers.
The economy can be callous toward older workers, but policy makers don't have to be. We should think about ways of dealing with this shift in the labor market before it happens.
#### [Feb 12, 2019] The Neoliberal University
##### "... The position of financial and credit institutions as the financiers of America's productive infrastructure has far-reaching consequences for social institutions like universities with the potential to absorb surplus capital in the form of credit or produce the 21st-century 'information' workforce. Students, and faculty at universities like Northeastern will struggle against market pressures on universities to attract outside investors while downsizing education for as long as the U.S. economy is dominated by finance. ..."
Last month at Northeastern University, the adjunct union reached a tentative agreement with the university administration to avert a planned walkout after more than a year of unsuccessful negotiations. Those familiar with the adjunct campaign know that adjunct professors are contingent workers who comprise more than half of the teaching staff at Northeastern and are paid a couple thousand dollars for each class that they teach.[1] From a budgetary standpoint, contingent workers are economical because they are easily replaced and therefore can be paid less. Still, at a school like Northeastern University with an operating budget of more than $2.2 billion, it is hard to argue that more than half of all professors need to earn poverty wages for the school to remain profitable.[2]

In today's neoliberal landscape -- a term which refers to the coordinated effort by capital and financial interests after the 1980s to privatize public institutions and deregulate markets -- Northeastern is not unusual in its treatment of adjunct professors. The neoliberal university model of high tuitions, bloated administrative departments, and upscale student facilities -- along with assaults on the job security and pay of professors -- is the new norm. It is the image of a thoroughly financialized economy that has transformed the relationship between universities and the state.

From the 19th century through the 1970s, the relationship between universities and the state remained constant. There was an informal arrangement of mutual independence: Academics operated autonomously with state funding on the understanding that they were willing to pursue research in which the state had an interest, such as medicine or space exploration.[3] Underlying this arrangement was the assumption that as a social good, education should drive public research and development.
The story of how universities became neoliberalized begins with the economic crisis of the 1970s and the subsequent free-market discourse that invoked capitalism's insatiable need for economic growth in order to equate the interests of working people with the interests of financiers. In the three decades after World War II, the U.S. established economic hegemony over the global capitalist world. The Fordist compromise between strong manufacturers and a strong, suburbanizing working class yielded unprecedented wage growth.[4] However, the Fordist model could not last forever. As a general rule, whenever compound economic growth falls below three percent, people begin to get scared. In order to sustain three percent compound growth, there must be no barriers to the continuous expansion and reinvestment of capital. The suburbanization of postwar America did sustain high demand for American-made automobiles and home products, but reinvestment in manufacturing eventually became difficult for capital because a widely-unionized and militant working class created a labor shortage (i.e. near-full employment) which drove up wages and hurt profitability.[5][6] To the extent that productivity could be improved by technological innovations, organized labor insisted on "productivity agreements" that ensured that machines would not be used to undermine wages or benefits. To make matters worse for U.S. manufacturers, monopolies like the Big Three auto companies were broken by foreign imports from a newly rebuilt Europe and Japan.[7]

In The Grundrisse, Karl Marx remarked that "every limit [to capital accumulation] appears as a barrier to be overcome."[8] For Marx, sustained capital accumulation requires an "industrial reserve army" to keep the cost of labor (i.e. wages) from impeding profitability. To restore profits, American capital had to discipline labor by drawing from the global working population. The Immigration and Nationality Act of 1965 addressed U.S. labor scarcity by abolishing immigration quotas based on nationality so that cheap labor would flood the market and drive down wages.[9] However, it proved more effective for manufacturing capital to simply relocate to countries with cheaper labor, and throughout the 1970s and 1980s capital did just that -- first to South Korea and Thailand, and then to China as wages in those countries became too high.[10] "Globalization" entailed removing barriers to international capital relocation such as tariffs and quotas in order to construct a global market where liquid money capital could flow internationally to wherever it yielded the most profits.

Of course, wage suppression eventually lowers consumer demand. The neoliberal solution was for financial institutions to sustain middle-class purchasing power through credit. In The Enigma of Capital, David Harvey writes that "the demand problem was temporarily bridged with respect to housing by debt-financing the developers as well as the buyers. The financial institutions collectively controlled both the supply of, and demand for, housing!"[11] The point of this history, though, is that the financialization of the American economy, through which financial markets came to dominate other forms of industrial and agricultural capital, served as the backdrop for the transformation of higher education into what it is today.

Neoliberal ideology reframed the social value of higher education as a tool for building the next workforce to serve the new "information economy" -- a term that emerged in the midst of globalization to describe the role of U.S. suburban professionals in the global economy. Simultaneously, finance capital repurposed universities as points of capital accumulation and investment. The discourse around the information economy sought to rationalize the offshoring of manufacturing from the U.S. The idea was that due to globalization, America had reached a stage of development where its participation in the global economy is as a white-collar workforce, specializing in technology and the spread of information.[12] In this telling, there is nothing to critique about the deindustrialization of the American economy because it was inevitable. It was then simple to realign the social goals of universities with the economic goals of Wall Street because the state repression of radical civil rights movements on the Left and the emergent free-market discourse of the Right formed a widespread perception of the state as inherently problematic. State research and development at universities was easily dismissed as inefficient, which cleared space for a neoliberal redefinition of higher education.

Neoliberalism has transformed education from a social good into a production process where the final product is a reserve army of workers for the information economy. What David Harvey calls the "state-finance nexus" pushes universities to play the part by withholding state funds until they expand their enrollment and increase the number of college graduates entering the workforce.[13] In 2012, the Obama Administration identified increasing the number of undergraduate STEM degrees by one million over the next decade as a 'Cross-Agency Priority Goal' on the recommendation of the President's Council of Advisors on Science and Technology (PCAST).

At the same time that neoliberalism transforms education into a production process for high-tech workers, it transforms the university itself into a site for surplus capital absorption through the construction of new labs, facilities, and houses to draw wealthy students and faculty capable of attracting federal grants. In December 2015, Northeastern filed a letter of intent with the Boston Redevelopment Authority to propose building a residence hall for approximately 800 students. The Boston Globe reported that the project is currently under review by American Campus Communities, the largest developer of private student housing in the U.S. To an economizing university administrator, private developers are very appealing because they assume the debt generated by construction projects. The circular process whereby a large university endowment comprised of financial assets is used to contract a debt-financed independent developer reveals how neoliberalism integrates universities into the circulatory system of capital as circuits of accumulation and investment.[14]

The present relationship between the university and the state flows from the dynamics of financialization. As financialization transforms the role of the United States in the global economy, it appropriates higher education to suit the needs of finance capital. Compared to the ever-expanding administrative apparatus responsible for managing contracts and investments, programs outside of STEM and business fields are considered superfluous. Humanities programs are often downsized and tenure tracks closed to push professors into permanent part-time employment arrangements.[15] Meanwhile, schools like Northeastern and MIT are surrounded by high-tech and business firms that rely on students and research facilities for cheap labor and productive capital. The position of financial and credit institutions as the financiers of America's productive infrastructure has far-reaching consequences for social institutions like universities with the potential to absorb surplus capital in the form of credit or produce the 21st-century 'information' workforce. Students and faculty at universities like Northeastern will struggle against market pressures on universities to attract outside investors while downsizing education for as long as the U.S. economy is dominated by finance.
#### [Feb 12, 2019] The neoliberal university is making us sick: Who's to blame? by Jodie-Lee Trembath

###### Feb 12, 2019 | thefamiliarstrange.com

June 14, 2018

Trigger warning: This post contains the discussion of depression and other mental health issues, and suicide. If you or anyone you know needs help or support for a mental health concern, please don't suffer in silence. Many countries have confidential phone helplines (in Australia you can call Lifeline on 13 11 14, for example); this organisation provides worldwide support, while this website compiles a number of helpline sites from around the world.

I am writing today from a place of anger; from a rage that sits, simmering on the surface of a deep well of sadness. I didn't know Dr. Malcolm Anderson, the senior accountancy lecturer from Cardiff University whose death, after falling from the roof of his university building, was last week ruled a suicide. I obviously have no way to know the complexity of his feelings or what sequence of events led up to his decision to end his own life. However, according to the results of an inquest, we can know what Dr. Anderson wanted his university to understand about his death – that it was, at least in part, because of the pressures of his academic work.

The media reports that Dr. Anderson had recently been appointed Deputy Head of his department, significantly increasing his administrative load. Nonetheless, he was still teaching 418 students and needed to mark their work within a 20-day turnaround. To meet that deadline, he would have needed to work approximately 9 hours a day without food or toilet breaks, for 20 days straight, and not do ANY other kind of work during that time (such as the admin that comes with being a Deputy Head). Practically impossible, given he was also a human being, with a home life, and physical needs like food, in addition to work responsibilities.

His wife, Diane, has been quoted saying that Dr. Anderson worked very long hours and often took marking to family events. She has said that although he was a passionate educator who won teaching awards every year, he had been showing signs of stress and had spoken to his managers about his difficulty meeting deadlines. A colleague told the inquest that he was given the same response each time he asked for help, and staffing cuts had continued.

A Marked Problem

... ... ...

And look, I get it. To someone outside the academy, I'm sure the perception remains that academics sit in leather armchairs, gazing out the gilded windows of our ivory towers, thinking all day. That has not been my experience, nor that of anyone I know. My colleagues and peers have, however, experienced levels of anxiety and depression that are six times higher than experienced in the general population (Evans et al. 2018). They report higher levels of workaholism, the kind that has a negative and unwanted effect on relationships with loved ones (Torp et al. 2018). The picture is often even bleaker for women, people of colour, and other non-White, non-middle-class, non-males. So whether you think academics are 'delicate woeful souls' or not, it's difficult to deny that there is a real problem to be tackled here.

Obviously, marking load is only one issue amongst many faced in universities the world over. But it's not bad as an illustration, partly because it's quantifiable. It's somewhat ironic that the neoliberal metrics that we rail against, the audit culture that causes these kinds of examples to happen, could also help us describe to others why they are a problem for us. So quantifiability brings us to neoliberalism. How did neoliberalism become so pervasive that it's almost impossible to imagine how the world could look different?
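Since the marking load is quantifiable, the ~9-hours-a-day figure is easy to sanity-check. A minimal sketch: the 418 students and 20-day turnaround come from the article, while the 25-minute per-script marking time is an assumption of mine (any plausible value near it gives a similar total):

```python
# Back-of-the-envelope check of the marking workload described above.
STUDENTS = 418           # scripts to mark (from the article)
TURNAROUND_DAYS = 20     # marking deadline (from the article)
MINUTES_PER_SCRIPT = 25  # assumed average marking time per script

total_hours = STUDENTS * MINUTES_PER_SCRIPT / 60
hours_per_day = total_hours / TURNAROUND_DAYS

print(f"Total marking time: {total_hours:.0f} hours")
print(f"Required pace: {hours_per_day:.1f} hours per day, every day, with no other duties")
```

Under that assumption the pace works out to roughly 8.7 hours of uninterrupted marking per day for 20 consecutive days, consistent with the article's "approximately 9 hours a day" claim.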
Neoliberalism, then and now

These last two weeks I've been working out of the Stockholm Centre for Organisational Research in Sweden, which, by coincidence, is where Professor Cris Shore, anthropologist of policy and the guest on our next podcast episode, is currently based. I was chatting to him the other day about the interview we recorded last December, which centres around many of the ideas I'm discussing in this blog post. I had to admit, I hadn't realised until we did that interview how angry many people still feel towards the Thatcher government for introducing neoliberal ideologies and practices into the public sector. Despite doing a Ph.D. about modern university life, it hadn't fully registered for me that events of the past, specifically the histories of politics and economics in 'the West', were such active players in the theatre of higher education's present.

To understand today's neoliberal universities, let's explore a little history in the UK and the US, two of the biggest influencers in the global higher education sector today. In 1979, Margaret Thatcher rose to power on a platform of reviving the stagnant British economy by introducing market-style competition into the public sector. This way, she claimed, she was ensuring that "the state's power [was] reduced and the power of the people, enhanced" (Edwards, 2017). For universities, this meant increased "accountability" and quality assurance measures that would drag universities out of their complacency. Meanwhile, in the US, Ronald Reagan was also arriving at neoliberalism via a different path. Americans historically don't trust central government (Roberts, 2007), so in 1981, Reagan introduced tax cuts (especially for the rich) for the first time in American history, thereby "protecting" the American people from the rapacious spending habits of the state (Prasad, 2012).
In American universities, this manifested over the next 30 years in reduced public spending on higher education, transferring the costs for tuition to student-consumers, and encouraging partnerships with industry and endorsements from philanthropists (often with agendas) to cover research costs (Shumway, 2017).

Then in the 90s, there was a moral panic about the public sector caused by scandals such as "the collapse of Barings Bank in 1995, the failures of the medical profession revealed by investigations into the serial murders by Dr Harold Shipman, and the numerous cases of child abuse that have plagued the Catholic Church" (Shore, 2008). Frankly, it seems pretty understandable that people were looking for greater transparency, a bit of accountability, and a whole lot less of, "leave it to the professionals, they seem like alright blokes, don't they?" from their public sector. However, an ideology that had originally looked so promising to the public began, over time, to create a new set of problems. As Cris Shore points out in his seminal 2008 article, 'Audit culture and Illiberal governance: Universities and the politics of accountability':

The official rationale for [neoliberal ideologies and actions] appears benign and incontestable: to improve efficiency and transparency and to make these institutions more accountable to the taxpayer and public (and no reasonable person could seriously challenge such commonsensical and progressive objectives). The problem, however, is that audit confuses 'accountability' with 'accountancy' so that 'being answerable to the public' is recast in terms of measures of productivity, 'economic efficiency' and delivering 'value for money' (VFM).

The trouble with neoliberalism and its offshoot, New Public Management, is that much like the Newspeak of Orwell's 1984, the words that were used to sell it – quality, accountability, transparency etc. – in practice, mean the opposite of what they appear to mean.
For example, as Chris Lorenz (2012) points out in an article that convincingly compares New Public Management in universities to the outcomes of a Communist regime, there has been no evidence, statistical or otherwise, that increasing 'quality control measures' in universities has actually improved quality in universities by any objective criteria – and often just the opposite. What has "improved" in universities because of neoliberal practices is efficiency, often through measures like restructures and reviews. Again, taking steps to save money and time sounds like a positive. However, the problem with 'efficiency' is that, unlike its counterpart 'effectiveness' (the ability to bring about a specific effect), 'efficiency' has no end point – it is a goal unto itself. As Lorenz phrases it, "efficient, therefore, is never efficient enough" (2012, p. 607).

Bringing this back, then, to issues of mental health and increasing workloads on campus. Liz Morrish of Academic Irregularities pointed out last week that when tragedies such as the death of Malcolm Anderson occur in universities, the most common response is for said university to announce a review. As anticipated, two days after the results of Dr. Anderson's inquest were first reported in the media, Cardiff University announced that they would be reviewing the 'support, information, advice and specialist counselling' available to all staff, but also urged any academic "who has any concerns regarding workload, to raise them with their line manager, in the first instance, so all available advice and support can be offered."

This platitude has been taken by many online as exactly that – a platitude. Several commenters on Twitter have pointed out that providing more mental health support doesn't actually reduce workload, while others have noted that there has been no discussion by Cardiff U of attempting to fix the underlying cause. I agree with them, and it's part of the reason I'm so angry.
Malcolm Anderson could easily be any one of us. Yet, I have to admit, I'd also hate to be part of the executive team at Cardiff University right now. Can you imagine the anguish of knowing that someone had taken their life, and held you directly responsible? You'd have to feel so helpless, so powerless in the shadow of neoliberal forces that permeate every last aspect of the global higher education sector. I don't know, I haven't been a Vice Chancellor, maybe you wouldn't have to feel that way. But it's easy to imagine how one could.

The path to neoliberal hell is paved with good intentions

So, what's the answer? I wish I knew. What I do know is that anthropological thinking has a lot to offer in the exploration of big immutable mobiles 2 like neoliberalism. As Sherry Ortner asks in her 2016 article "Dark anthropology and its others: Theory since the eighties", who better to question the power structures inherent in 'dark' topics such as neoliberalisation or colonialism than anthropologists? Yet, she urges an approach that also acknowledges the possibility of goodness in the world, quoting from the opening to Michael Lambek's Ordinary Ethics as rationale:

Ethnographers commonly find that the people they encounter are trying to do what they consider right or good, are being evaluated according to criteria of what is right and good, or are in some debate about what constitutes the human good. Yet anthropological theory tends to overlook all this in favor of analyses that emphasize structure, power, and interest. (Lambek, 2010, p. 1)

And this is where I have to deviate from the majority of the neoliberal university critiques I've read. In these pieces, it's all too common to read criticisms of academic managers, or administrators, or university 'service providers' as if they are The Reason that neoliberal ideologies get enacted in university contexts. But usually, they're just human beings too, also subject to KPIs and managerial demands and neoliberal ideologies.
Having worked at different times as an educator, a researcher, and a communications manager in various universities for more than 10 years, and now having conducted fieldwork at a university for my PhD, I have had the chance to observe and conduct research on at least nine different university campuses, in at least five countries. Based on those experiences, I am in complete agreement with Lambek: the majority 3 of non-academics that I have encountered, in every type of department, and at every level of universities from Level 1 administrative officers to Presidents and Vice Chancellors, "are trying to do what they consider right or good" (2010, p. 1). They demonstrate, both through words and their actions, their beliefs that education is valuable, and that students are important as human beings, not just as cash cows. They are often working long hours themselves, trying to keep up with the demands that neoliberal university life is placing on them. I just can't get on board with the idea that they are, universally, the villains of the neoliberal horror story.

It seems much more likely, to me, that neoliberal ideologies continue to get enacted and reinforced by academic managers because these practices have become the norm. Throughout and because of the historical growth pattern neoliberalism has experienced, these ideologies have put down roots, and these roots have become so entangled with other aspects of university life as to be inseparable. For many working-aged people, neoliberalism is the water we were born swimming in. Even presented with its inadequacies, it's difficult to imagine an alternative.

What I can agree with the critics about, however, is that non-academics often don't understand or appreciate – or perhaps remember (if they had worked in that capacity in the past) – the demands of being an academic, just like academics don't tend to understand or appreciate the demands that non-academics within the university are facing.
In their recently published book Death of the Public University (2017), Susan Wright and Cris Shore refer to the idea of 'Faculty Land' – a place synonymous with 'La La Land', where non-academic employees of universities think academics live. This really resonates with what I saw on fieldwork at an international university in Vietnam, but not only from administrators – academics too. As I've said in a previous post, all the actors in universities are trying to abrogate responsibility sideways or upwards until they can only blame 'the neoliberal agenda', and once they get there, all they can see is a towering, monolithic idea, and it becomes like trying to have a fist fight with a cloud. Most people don't ever get to that point though, because the world feels more controllable if we believe that there is another human to blame. The thing is though, blaming others almost never works. It doesn't make things better, it just creates a greater divide between groups, encourages isolationism and othering, and decreases the likelihood that either side will ever want to work together to fix the problem.

Dr Anderson's tragic death, and the similarly tragic statistics that tell us that the collective mental health of our academics is in crisis, should be a wake-up call to all of us who work or study in universities, in any capacity. Whether it will be remains to be seen.

Again: If you or anyone you know needs help or support for a mental health concern, please don't suffer in silence. Sometimes talking about things with an objective outsider can help.

• If you work in a university with a counselling service, consider seeking them out. Many have emergency sessions set aside each day.
• Many countries have confidential phone helplines (in Australia you can call Lifeline on 13 11 14, for example).
• Befrienders.org provides worldwide support.
• The International Bipolar Foundation has compiled a number of helpline sites from around the world.
• If you (or your department) have the financial means, psychologists who specialise in working with HDR students and academics, such as Dr Shari Walsh of Growth Psychology, sometimes offer Skype appointments. (I have had Skype sessions with Shari myself; she's lovely. PS. I don't get anything out of plugging her services, I just think what she offers is valuable.)

Yes, I know, this is a structural problem and we shouldn't have to take care of it as individuals (see Grace Krause's moving poem about this here). But in the meantime, while we work on that, please seek help if you need it.

#### [Feb 12, 2019] Death of the Public University: Uncertain Futures for Higher Education in the Knowledge Economy (Higher Education in Critical Perspective)
###### Feb 12, 2019 | www.amazon.com

Higher Education in Critical Perspective: Practices and Policies. Series editors: Susan Wright, Aarhus University; Penny Welch, Wolverhampton University

INTRODUCTION: Privatizing the Public University: Key Trends, Countertrends and Alternatives

CRIS SHORE AND SUSAN WRIGHT

Since the 1980s, public universities have undergone a seemingly unending series of reforms designed to make them more responsive both to markets and to government priorities. Initially, the aim behind these reforms was to render universities more economic, efficient and effective. However, by the 1990s, prompted by the Organization for Economic Cooperation and Development (OECD 1998) and other international agencies, many national governments adopted the idea that the future lay in a 'global knowledge economy'. To these ends, they implemented policies to repurpose higher education as the engine for producing the knowledge, skills and graduates to generate the intellectual property and innovative products that would make their countries more globally competitive. These reforms were premised on neoliberal ideas about turning universities into autonomous and entrepreneurial 'knowledge organizations' by promoting competition, opening them up to private investors, making educational services contribute to economic competitiveness, and enabling individuals to maximize their skills in global labour markets. These policy narratives position universities as static entities within an all-encompassing market economy, but alternatively, the university can be seen as a dynamic and fluid set of relations within a wider 'ecology' of diverse interests and organizations (Hansen this volume; Wright 2016). The boundaries of the university are constantly being renegotiated as its core values and distinctive purpose rub up against predatory market forces, or what Slaughter and Leslie (1997) term 'academic capitalism'.
Under pressure to produce 'excellence', quality research and innovative teaching, improve world rankings, forge business links and attract elite, fee-paying students, many universities struggle to maintain their traditional mandate to be 'inclusive', foster social cohesion, improve social mobility and challenge received wisdom – let alone improve their poor records on gender, diversity and equality. This book examines how public universities engage with these dilemmas and the implications for the future of the public university as an ideal and a set of institutional practices. The book has arisen from a four-year programme of knowledge exchange between three research groups in Europe and the Asia Pacific, which focused on the future of public universities in contexts of globalization and regionalization. 1 The groups were based in the U.K. and Denmark, chosen as European countries whose public universities have quite different histories and current reform policies, and New Zealand, as a country at the forefront of developing 'entrepreneurial' public universities, and with networks to other university researchers in Australia and Asia. Through a series of six workshops, four conferences and over thirty individual exchange visits, the project developed an extended discussion between the three groups of researchers. This enabled us to generate a new approach and methodology for analysing the challenges facing public universities. As a result, this book asks:

• How are higher education institutions being reconfigured as 'entrepreneurial' and as 'knowledge' organizations, and with what effects?
• In what ways are new management systems and governance regimes transforming the culture of academia?
• How are universities responding to these often contradictory policy agendas?
• How are national and international reforms impacting on the social purposes of the university and its relationship to society?
• What possibilities are there for challenging current trends and developing alternative university futures?

Mapping the Major Trends

Nowhere are the above trends more evident than in English-speaking universities, particularly in the U.K., Australia and New Zealand. These countries have been a laboratory for testing out a new model of the neoliberal entrepreneurial university. At least seven key features characterize these reforms.

1. State Disinvestment in Universities – or Risk-free Profits for Private Providers?

The first feature is a progressive withdrawal of government support for higher education. In the U.K., for example, the Dearing Report (1997) showed that during the previous twenty years, a period of massive university expansion, state funding per student had declined by 40 percent. While Tony Blair's New Labour government of 1997 proclaimed 'education, education, education' as its key priority, it did so by introducing cost-sharing, in the form of student tuition fees, as a way to reduce the annual deficit in the funding of university teaching. In 2010, the British Conservative–Liberal Democrat coalition government under David Cameron went even further by removing all state funding for teaching except in the STEM subjects (science, technology, engineering and mathematics). Instead, students were now to pay fees of £9,000 per annum (a three-fold increase), for which state-funded loans were made available. From the government's perspective, the genius of this shifting of state funding from teaching to loans was that private for-profit education providers could now access taxpayers' money – and this transfer of funds was further justified ideologically as providing competition and creating a 'level playing field' between public and private education providers. Other countries have also decided to withdraw state funding for higher education.
For example, in September 2015, Japan's education minister Hakubun Shimomura wrote to all of the country's eighty-six national universities calling on them to 'take active steps to abolish [social science and humanities] organizations or to convert them to serve areas that better meet society's needs' (Grove 2015b). These measures echo the wider global trend set by advocates of Milton Friedman and the Chicago School's brand of neoliberal economics. In the 1980s, the 'Chicago boys' carried out their most radical experiments in Chile, removing the state's direct grants to universities, funding teaching only through students' tuition fees, and making government loans available to students so that they could pay those fees (Bekhradnia 2015). In the United States, the same policies have been adopted. For example, in California between 1984 and 2004, state spending per capita on higher education declined by 12 percent. Significantly, in the same period per capita spending on prisons increased by 126 percent (Newfield 2008: 266). Between the 1970s and 1990s there was a 400 percent increase in charges for tuition, room and board in U.S. universities, and tuition costs have grown at about ten times the rate of family income (ibid.). What these instances highlight is not just the state's retreat from direct funding of higher education but also a calculated initiative to enable private companies to capture and profit from tax-funded student loans.

2. New Regimes for Promoting Competitiveness

A second major trend that has reshaped higher education has been the creation of funding and assessment regimes designed to increase productivity and competition between universities, both nationally and globally. What began in the 1980s as an exercise to assure the 'quality' of research in British universities had morphed, by the end of the 1990s, into ever-more invasive systems for ranking institutions, disciplines, departments, and even individuals.
The results were used to allocate funds to those institutions that performed best in what has become a fetishistic quest for ever-higher ratings and 'world class' status, or what Hazelkorn (2008: 209) has termed the 'rankings arms-race'. Where some rankings are focused on research performance (such as the U.K.'s Research Excellence Framework, the Excellence in Research for Australia, and New Zealand's Performance Based Research Framework), others rank whole institutions (the Shanghai Jiao Tong Index, the QS and THE World University Rankings). Significantly, these ranking systems have especially negative impacts on minority groups and women (see Blackmore, Curtis, Grant and Lucas, this volume). This obsession with auditing and measuring performance also includes systems for evaluating teaching quality, surveying student satisfaction and measuring student engagement. 2 Even though vice chancellors and university managers ridicule ranking methodologies, they have learned to their cost to take them extremely seriously, as the financial viability of a university increasingly hinges on the reputational effects of these measures of performance (Sauder and Espeland 2009; Wright 2012).

3. Rise of Audit Culture: Performance and Output Measures

Third, running alongside the growth of these ranking systems has been the proliferation of performance and output measurements and indicators designed to foster transparency, efficiency and 'value for money'. This is part of a wider phenomenon, 'audit culture', whose presence has been growing throughout the public and private sectors, including higher education (Shore and Wright 2015; Strathern 2000).
Driven by financial imperatives and the rhetoric of 'value for money' – and justified by a political discourse about the virtues of transparency and accountability – these technologies have been particularly instrumental in promoting the logics of risk management, financialization and managerialism (see Dale, and Lewis and Shore, this volume). In Denmark, time has become a key metric and instrument for the efficient throughput of students and the accountability of institutions, but as Nielsen and Sarauw (this volume) show, these measures affect the very nature of education. Audits do not simply or passively measure performance; they actively reshape the institutions into which they are introduced (Power 1997; Shore and Wright 2015). When a measurement becomes a target, institutional environments are restructured so that they focus their resources and activities primarily on what 'counts' to funders and governors rather than on their wider professional ethics and societal goals (see Kohn and Shore, this volume).

4. Administrative Bloat, Academic Decline

The fourth key development during this period has been the extraordinary growth in the number and status of university managers and administrators. For the first time in history, as figures from the U.K.'s Higher Education Statistics Agency (HESA) show, support staff now outnumber academic staff at 71 percent of higher education institutions (Jump 2015). In Denmark, there has been an equally large increase in the number of administrators: the growth in annual expenditure on administrators over just five years was equivalent to the cost of 746 new lectureships (Wright and Boden 2010). The figures from the U.S. are even more dramatic.
Federal figures for the period 1987 to 2011/2012 show that the number of college and university administrators and professional employees has more than doubled in the last twenty-five years; an increase of 517,636 people – or an average of eighty-seven new administrators every working day (Marcus 2014). The recruitment of administrators has far outpaced the growth in the number of faculty – or even students. Meanwhile, universities claim to be struggling with budget crises that force them to reduce permanent academic posts, while the number of temporarily employed teaching assistants – the 'precariat' – has increased massively. This astonishing increase in management and administration is partly due to the pressures universities now face to produce data and statistics for harvesting by the ranking industries. Universities themselves often attribute the growth of their administrative and technical units to the enormous rise in government regulations. As the President of the American Association of University Administrators recently explained, there are 'thousands' of regulations governing the distribution of financial aid alone, and every accredited university probably has at least one person dedicated to that. However, the proliferation of administrators and managers has also been fuelled by the universities themselves, as they have taken on new functions and pursued new income streams. This is particularly evident in the U.S.: since 1987, universities have also started or expanded departments devoted to marketing, diversity, disability, sustainability, security, environmental health, recruiting, technology and fundraising, and added new majors and graduate and athletics programs, satellite campuses, and conference centers (Marcus 2014). These trends are captured with exceptional clarity in Benjamin Ginsberg's book, The Fall of the Faculty (2011a).
Ginsberg's thesis is that the new professional managers 'make administration their life's work', to the detriment of the universities' core functions. They have little or no faculty experience and promoting teaching and research is less important than expanding their own administrative domains: 'under their supervision, the means have become the end' (ibid.: 2). Every year, writes Ginsberg: hosts of administrators and staffers are added to college and university payrolls, even as schools claim to be battling budget crises that are forcing them to reduce the size of their full-time faculties. As a result, universities are filled with armies of functionaries -- vice presidents, associate vice presidents, assistant vice presidents, provosts, associate provosts, vice provosts, assistant provosts, deans, deanlets, deanlings, each commanding staffers and assistants -- who, more and more, direct the operations of every school. Backed by their administrative legions, university presidents and other senior administrators have been able, at most schools, to dispense with faculty involvement in campus management and, thereby to reduce the faculty's influence in university affairs (Ginsberg 2011a: 2). One of the weaknesses in these statistics is that they fail to distinguish between administrative staff who support the teaching and research and those who do not. Support staff are crucial to enabling academics to carry out effective research, teaching and scholarship – the traditional mission of the university. Likewise, universities need managers who support academics in fulfilling these key functions of the university, but the statistics are rarely sufficiently refined to make these distinctions. Interestingly, many universities have dropped the term 'support staff' in favour of terms like 'senior administrators' and 'professional staff'. 
This move reflects the way that many university managers now see their role – which is no longer to provide support for academics but, rather, to manage them as 'human capital' and a resource. From the perspective of many university managers and human resources (HR) departments, academics are increasingly portrayed as a reluctant, unruly and undisciplined workforce that needs to be incentivized or cajoled to meet management's targeted outputs and performance indicators.

5. Institutional Capture: the Power of the 'Administeriat'

The budgetary reallocation from academic to administrative salaries is linked to a fifth major trend: the rise of the 'administeriat' as a new governing class and the corresponding shift in power relations within the university. Whereas in the past the main cleavage in universities was between the arts and the sciences, or what C.P. Snow (1956) famously termed 'the two cultures', today the main division is between academics and managers. Collini (2013) attributes this shift in power to the way all university activities are now reduced to a common managerial metric. As he puts it, the 'terms that suit [managers'] activities are the terms that have triumphed'. Scholars now spend increasing amounts of their working day accounting for their activities in the 'misleading' and 'alienating' language and categories of managers. This 'squeezing out' of the true use-value of scholarly labour accounts for the 'pervasive sense of malaise, stress and disenchantment within British universities' (Collini 2013). Professor of Critical Management Studies Rebecca Boden compares the way that university managers expand their increasingly onerous regulations to the way that 'cuckoos lay their eggs in the nests of other birds, and how the young cuckoos then evict the nest-builders' offspring' (cited in Havergal 2015).
This cuckoo-in-the-nest metaphor might seem somewhat overblown, but it highlights the important fact that managers and administrators have usurped power in what were formerly more collegial, self-governing institutions. Yet many of these managers would not succeed as professionals in industry. Levin and Greenwood (2016) argue that, if universities were indeed business corporations, they would soon collapse, as their work organization currently violates nearly every one of the practices that characterize successful and dynamic high-tech areas and service industries. It is a short step from here to managers' appropriation of the identity of the university, with managers increasingly claiming not only to speak for the university but to be the university (Ørberg 2007; Readings 1996; Shore and Taitz 2010). Today, rather than being treated as core members of a professional community, academics are constantly being told by managers and senior administrators what 'the university' expects of them, as if they were somehow peripheral or subordinate to 'the university'.

6. New Income Streams and the Rise of the 'Entrepreneurial University'

Faced with diminishing state funding and year-on-year cuts to national budgets for higher education, universities have been compelled to seek alternative income streams. This has entailed fostering more lucrative and entrepreneurial partnerships with industry; conducting commissioned research for businesses and government; partnering up with venture capitalists; commercializing the university's intellectual property through patents and licences; developing campus spin-out (and spin-in) companies; engaging proactively in city development programmes; and maximizing university assets including real estate, halls of residence, conference facilities and industrial parks. Equally important has been the raising of student tuition fees and the relentless drive to recruit more high-fee-paying international students.
This project has given rise to the moniker 'export education', a sector of the economy and foreign-currency earner of growing importance to many countries. For example, in Canada, the expenditures of international students (tuition, accommodation, living costs and so on) infused CAN$6.5 billion into the Canadian economy, surpassing exports of coniferous lumber (CAN$5.1 billion) and coal (CAN$6.1 billion), and gave employment to 83,000 Canadians (Roslyn Kunin and Associates, Inc 2009). Similarly, 'educational services' has become one of Australia's leading export industries such that, by 2008, it had become Australia's third-largest generator of export earnings, with over AU$12.6 billion (Olds 2008). Along with Australia and Canada, the U.S.A., U.K. and New Zealand dominate the trade in international students (OECD 2011: chart 3.3), and the global demand for international student places is estimated to rise to 5.8 million by 2020 (Bohm et al. 2004). The relentless pursuit of these new income streams has had a transformative effect on universities. Almost two decades ago Marginson and Considine (2000) coined the term the 'enterprise university' to describe a model in which: the economic and academic dimensions are both subordinated to something else. Money is a key objective, but it is also the means to a more fundamental mission: to advance the prestige and competitiveness of the university as an end in itself (ibid. 2000: 5). However, it would be misleading to suggest that all these changes are simply a consequence of the pressures that governments have placed on universities to refashion themselves as pseudo-business corporations. Some of the more entrepreneurially hawkish university rectors, vice chancellors and presidents have enthusiastically welcomed these changes.
Many have benefitted from the enormous executive salaries that have become the norm for university 'CEOs', and they undoubtedly enjoy their vaunted status and the opportunities this provides to mingle with world leaders at prestigious summits and receptions, airport VIP lounges and gala fundraising events. For example, the Times Higher Education annual review of vice chancellors' pay shows that the average salary and benefits for university vice chancellors in the U.K. rose by £8,397, to £240,794, in 2013–2014. This constituted a 3.6 percent rise, whereas in the same period other university staff received an increase of only 1 percent (Grove 2015a). A study by economists Bachan and Reilly (2015), from Brighton Business School, found that in the past two decades vice chancellors have seen their salaries soar by an eye-watering 59 percent (Henry 2015), but concluded that these increases could not be justified in terms of their universities' performance criteria, such as widening participation or bringing in income such as grants for teaching and research and capital funding. Rather, the study found that the presence of other high-paid administrative staff was pushing up vice chancellors' pay. Both the U.K. House of Commons' Public Accounts Committee and the former Minister for Business and Employment, Vince Cable, have condemned this 'substantial upward drift' of salaries among vice chancellors. However, this annual ritual of chastisement has had little perceivable impact.

7. Higher Education as Private Investment Versus Public Good

The seventh major trend is the recasting of university education as a private and positional investment rather than a public good. The idea that gained prominence in the post-war era was that higher education was a public investment that benefits the economy and society as well as contributing to personal growth and social mobility (Morgan this volume).
In the 1990s, this idea – and the Keynesian model that sustained it – was displaced by the Chicago School's economic doctrine and the notion that individuals, not the state, should take responsibility for repeatedly investing in their education and skills in order to sustain and improve their position in a fast-changing, competitive and global labour market. This is what the OECD termed 'new human capital theory' (Henry et al. 2001), an idea that came to dominate government thinking about growth and investment. However, several recent studies challenge the premises upon which this model is based (Ashton, Lauder and Brown 2011; Wright and Ørberg this volume). Arising from this new way of conceptualizing higher education as a private individual good, and from the reduction of government funding for the sector, has been the replacement of student grants with loans. This has been coupled with a massive hike in student fees – or what ministers and World Bank experts euphemistically call 'cost-sharing'. There are several bizarre paradoxes in this way of financing higher education. First, as McGettigan (2013) shows, government funding of student loans to pay fees is likely to cost the taxpayer more than the previous system of funding universities directly for their teaching. Second, as Vernon (2010) points out, most students and their families can only afford to pay for the costs of their higher education through the kinds of debt-financing that governments across the world now condemn as reckless and inappropriate for themselves. Third, whereas the scale of national debt in many countries has become so severe that it has required emergency austerity measures to combat, the level of household debt is even more perilously high, peaking at 110 percent of GDP in 2009 in the U.K. (Jones 2013). This was before the government transferred even more of the costs of higher education to families and tripled university fees.
These policies are justified on the grounds that degree-holders gain a lifetime premium in earnings: hence the catchphrase 'learn to earn'. In New Zealand, however, which has the seventh-highest university fees among developed countries, the OECD survey found that the value of a university degree in terms of earning power is the lowest in the world. The net value of a New Zealand tertiary education for a man is just $63,000 over his working life (compared with $395,000 in the U.S.). For a woman, it is even lower: $38,000 over her working life (Edmunds 2012). As Brown and Hesketh (2004) also show for the U.S., graduates' imagined future incomes are largely illusory. Yet students and parents are encouraged to take out what is effectively a 'subprime loan', in the gamble that it will eventually pay off by enhancing their future job prospects and earning power: it is a 'hedge against their future security' (Vernon 2008). In other words, higher education is now being modelled on the same types of financial speculation that produced the 2007–2008 global financial crisis.
The Death of the Public University?
Do the seven trends outlined above spell the end of the public university? From the earliest beginnings of these developments, there has been an extensive literature foretelling the demise of the university. According to historians Sheldon Rothblatt and Björn Wittrock (1993: 1), the university has the second-longest unbroken institutional history in Western civilization, after the Catholic Church. Today, however, the university – or what John Henry Newman termed the 'idea of a university' – does indeed look broken. Or is this an unduly pessimistic conclusion? Jean-François Lyotard set the agenda with his provocative book The Postmodern Condition: A Report on Knowledge. Noting the collapse of the university's traditional authority in producing legitimate knowledge, he wrote:
The question (overt or implied) now asked by the professionalist student, the State, or institutions of higher education is no longer 'Is it true?' but 'What use is it?' In the context of the mercantilization of knowledge, more often than not this question is equivalent to: 'Is it saleable?' And in the context of power-growth: 'Is it efficient?' (Lyotard 1994: 51).
The complaint often voiced by academics is that universities – like hospitals, libraries and other local community services – are undergoing a process of 'death by a thousand cuts'. But chronic underfunding of public institutions also reflects a wider and arguably more purposeful political agenda that aims to fundamentally transform the public sector. One of the greatest threats to the university today lies in the 'unbundling' of its various research, teaching and degree-awarding functions into separate, profit-making activities that can then be outsourced and privatized.
This agenda is articulated clearly in the recent report entitled 'An Avalanche is Coming: Higher Education and the Revolution Ahead' (Barber et al. 2013), published by the London-based think tank, the Institute for Public Policy Research. Its principal authors are Sir Michael Barber, Chief Education Advisor for Pearson PLC (a British-owned multinational education provider and publisher) and two of Pearson's executive directors. The report's central argument, captured in its 'avalanche' metaphor, is that the current system of higher education is untenable and will be swept away unless bold and radical steps are taken:
The next 50 years could see a golden age for higher education, but only if all the players in the system, from students to governments, seize the initiative and act ambitiously. If not, an avalanche of change will sweep the system away. Deep, radical and urgent transformation is required in higher education. The biggest risk is that as a result of complacency, caution or anxiety the pace of change is too slow and the nature of change is too incremental. The models of higher education that marched triumphantly across the globe in the second half of the 20th century are broken (Barber, Donnelly and Rizvi 2013: 5).
A series of forces that lie 'under the surface' threatens to transform the landscape of higher education. These include: a changing world economy in which the centre of gravity is shifting towards the Asia-Pacific region; a global economy still struggling to recover from the trauma of the global financial crash of 2007–2008; and the escalating costs of higher education, which are vastly outstripping inflation and household income. These are coupled with the declining value of a degree and a technological shift that makes information ubiquitous. Universities no longer hold a monopoly over knowledge production and distribution and face growing competition from the emergence of new universities and from 'entirely new models of university' that Pearson itself has been spearheading to exploit the new environment of globalization and the digital revolution (ibid. 2013: 9–21).
The Barber report is part of a growing literature which seeks to 'remake the university' as an altogether different kind of institution (see Bokor 2012). Epochal and prophetic in tone and often claiming to be diagnostic and neutral, this literature proposes solutions that are anything but impartial or disinterested. Pearson, for example, makes no secret of its ambition to acquire a larger share of the higher education market and the rents that can be captured from its various activities. In 2015, Pearson sold off its major publishing interests to restructure the company around for-profit educational provision both in England and worldwide. Pearson also has a primary listing on the London Stock Exchange and a secondary listing on the New York Stock Exchange. Writing in the preface to the Barber report, former president of Harvard University Lawrence Summers underscores its central ambition when he writes that in this new 'phase of competitive intensity', all of the university's core functions can be 'unbundled and increasingly supplied, perhaps better, by providers that are not universities at all' (Barber 2013: 1). As John Morgan (this volume) shows, higher education has long been – and continues to be – a site of ideological struggle between competing interests and their vision of society.
Towards the Privatization of English Universities
In England, these processes have been taken to an extreme. Events since the Conservative–Liberal coalition took office in 2010 suggest a tipping point may have been reached in the transformation of the public university. Research by the legal firm Eversheds (2009) revealed that no legislation was needed for public universities to be transferred to the private for-profit sector, either by a management buyout or by outside interests buying-in (Wright 2015). London Metropolitan University was an early contender. It advertised a tender worth £74 million over five years for a partner who would create a for-profit 'special services vehicle' to deliver all the university's functions and services – everything except academic teaching and the Vice Chancellor's powers. Such 'special services vehicles' are a way for private investors to buy into the university's activities. This plan was only stymied because civil servants found major administrative failings, and the resulting fines and repayments pushed the university close to bankruptcy. But this 'special services vehicle' model has been implemented by other universities, including Falmouth and Exeter, where a private company runs not only catering, estate maintenance and services on the two campuses, but also its entire academic support services (libraries, IT, academic skills and disability support services) (University and College Union 2013).
London Metropolitan's near-bankruptcy opened the possibility of a second method of privatization: a 'fire sale' of a university and its prized degree-awarding powers to one of the many U.S. for-profit education providers that had been seeking entry into the market (Wright 2015). Privatization was only avoided thanks to the successful actions of its new Vice Chancellor. However, one university with a charter and degree-awarding powers has been transferred to the for-profit sector. In 2006, the Department of Business, Innovation and Science rushed through approval to give the College of Law in London degree-awarding powers and university status. This was just in time for its sale to finance company Montagu Private Equity. To maintain that university's charitable (tax-favourable) status and provide bursaries for students, the institution divided itself into a for-profit company with all the education and training activities, and an educational foundation. Montagu Private Equity made a leveraged buyout of the university: £177 million of the £200 million purchase price was borrowed and then put on the university's balance sheet, making it responsible for paying the debt and interest from its cash flow. A few years later, Montagu announced it was selling the university's buildings, in what was a clear case of asset stripping. The legal firm Eversheds recommended that other public universities follow this model and either sell stakes in their institution or be sold outright to financiers. As the University of Law example shows, such investors' prime interest is the short-term extraction of profit and liquidization of assets, rather than the long-term future of higher education. Indeed, in June 2015, Montagu sold the University of Law to Aaron Etingen, founder and chief executive officer of Global University Systems (GUS), which owns a network of for-profit colleges worldwide (Morgan 2015).
#### [Feb 11, 2019] The current diploma mills are the result of the consecutive waves of university reforms since the 1990s to ground knowledge production on market principles. If university employees behave like ruthless rent-seekers, it is because they are forced to do so by the incentive structures that have been imposed on them, by Johan Söderberg
##### "... Thirty years of neoliberal politics have created the conditions under which categories such as "human capital" and "rent-seeking" start to make good sense... ..."
###### Feb 11, 2019 | lse.ac.uk
From: 'A response to Steve Fuller: The differences between social democracy and neoliberalism' by Johan Söderberg
... ... ...
The counterargument that I will elaborate here is that neoliberalism and social democracy should be treated as two distinct and internally consistent thought and value systems. The integrity of the two ideologies must neither be reduced to practices/policies, which occasionally may overlap, nor to individual representatives, who, over the course of a lifetime, can move from one pole to the other.
Neoliberalism and the university system
Fuller's argument pivots on the mixed legacy of Lionel Robbins. On the one hand, Robbins' credentials as a neoliberal are firmly established by his decision to recruit Friedrich Hayek to the LSE. On the other hand, Robbins authored the government report whereby many regional universities in the UK were founded, in keeping with a classic social democratic agenda of enrolling more students from the working class. This encourages Fuller to draw an arc from the 1963 Robbins Report to university reforms of a more recent date (and with a more distinct, neoliberal flavour).
The common denominator of all the reforms, Fuller says, is the ambition to enhance human capital. Alas, the enhancement of human capital is blocked on all sides by incumbent traditions and rent-seeking monopolies. From this problem description – which Fuller attributes to the neoliberals, but which is also his own – follows the solution: to increase the competition between knowledge providers. Just as the monopoly that Oxbridge held over higher education was offset by the creation of regional universities in the 1960s, so is the current university system's monopoly over knowledge acquisition sidelined by reforms to multiply and diversify the paths to learning.
Underpinning this analysis is a bleak diagnosis of what purpose the university system and its employees serve. It is a diagnosis that Fuller, by his own admission, has gleaned from the Virginia-style neoliberal Gordon Tullock.
The task assigned to the university, i.e. to certify bodies of trustworthy knowledge, is not called for by any intrinsic property of that knowledge (it being true, safe etc.), but is rather a form of rent-seeking. The rent is extracted from the university's state-induced monopoly over the access rights to future employment opportunities. Rent-seeking is the raison d'être of the university's claim to be the royal road to knowledge.
In this acid bath of cynicism, the notions of truth and falsehood are dissolved into the basic element that Tullock's world is made up of – self-interest. This reasoning lines up with a 19th-century free-market epistemology, according to which the evolutionary process will sift out the propositions that swim from those that sink. With a theory of knowledge like that, university-certified experts have no rationale for being. Their knowledge claims are just so many excuses for drawing a salary at the taxpayers' expense. It bears stressing that this argument can easily be given a leftist spin, by emphasising the pluralism of this epistemology. This resonates with statements that Steve Fuller has made elsewhere, concerning the claimants of alternative facts.
Granted, the cynical reading of the university system as a rent-seeking diploma-mill has a ring of truth to it when we, for instance, think of how students are asked to pay higher and higher tuition fees, while the curriculum is successively being hollowed-out. However, as was pointed out to Fuller by many in the audience in Lancaster, this is the result of the consecutive waves of university reforms since the 1990s to ground knowledge production on market principles. If university employees behave like self-interested rent-seekers, it is because they are forced to do so by the incentive structures that have been imposed on them.
Thirty years of neoliberal politics have created the conditions under which categories such as "human capital" and "rent-seeking" start to make good sense...
... ... ...
The author would like to thank Adam Netzén, Karolina Enquist Källgren and Eric Deibel for feedback given on early drafts of this blog post, and especially Steve Fuller, for having invited a response to his argument.
#### [Feb 11, 2019] Universities in the neoliberal age by Rafael Winkler
##### "... Neoliberalism has converted education from a public good to a personal investment in the future, a future conceived in terms of earning capacity. ..."
###### Sep 14, 2018 | mg.co.za
Many of the students I have taught in Britain and South Africa see higher education as a place where they "invest" in themselves in the financial sense of the word. "Going to university," one student said, was a way of "increasing" his "value" or employability in the labour market.
This perception of the university has not arisen by chance.
Capitalism entered a new phase with the Thatcher and Reagan governments in Britain and the United States during the 1980s. The managerial practices used to run businesses were applied to the public sector, in particular to education and healthcare.
This reform of the public sector (called "new public management") introduced a new way of thinking about the university.
Higher education was being made to conform to the norms of efficiency, value for money, customer service, audit and performance targets. One of the consequences of this was the replacement of the authority of the academic, which is based on his or her professional knowledge of the discipline, by the authority of the line manager.
Since then, everything has come to depend on audits and metric standards of so-called quality assessment (student satisfaction, pass rates, league tables, et cetera). Academics have little, if any, say on whether departments should continue to exist, what degrees and courses should be on offer and even what kind of assessment methods should be used.
I don't think that there has been a more sinister assault on academic freedom than this colonisation of higher education by neoliberalism. It justifies itself by calling for "transparency" and "accountability" to the taxpayer and the public. But it operates with a perverted sense of these words (since what it really means is "discipline and surveillance" and "value for money").
Its effect, if not its aim, has been to commodify higher education and produce a new kind of social identity. This is the identity of the self as entrepreneur.
Let me explain. One of the central aspects of neoliberalism is the disappearance of the distinction between the worker and the capitalist. In the neoliberal setting, the worker is not a partner of exchange with the capitalist. She does not sell her labour-power for a wage.
The labourer's ability to work, her skill, is an income stream. It is an investment on which she receives a return in the form of wages. The worker is capital for herself. She is a source of future earnings. In the neoliberal market, as Michel Foucault remarks, everyone is a capitalist.
Neoliberalism has converted education from a public good to a personal investment in the future, a future conceived in terms of earning capacity.
How did we get to this situation?
The modern university came into existence at the start of the 19th century as an extension of the state. The aim of the state during the colonial and imperial age was to constitute the identity of the national subject. As a public institution, the university was designed to teach students to see their life in a specific way. They would learn to see that it is only as members of a national community and culture that their individual life has a meaning and worth. This was the aim of the educational programme that German philosophers such as Wilhelm von Humboldt and Johann Gottlieb Fichte envisaged for the University of Berlin. For them, science was in the service of the moral and intellectual education of the nation.
Established in 1810, the University of Berlin was the first modern university. It was founded on the principles of academic freedom, the unity of research and teaching, and the primacy of research over vocational training. It functioned as the prototype for universities in both the United States and Europe during the second half of the 19th century.
Once transnational corporations started to control more capital than nation-states in the 1980s, the university ceased to be one of the state's principal organs. It lost its ideological mission and entered the market as a corporation. It started to encourage students to think of themselves as customers rather than as members of a nation. This history shows that the university is today the site of two competing social identities.
• On the one hand, because of globalisation, the student who enters university sees herself as someone who is there to increase her human capital, as an enterprise to invest in.
It must be remarked that, for the entrepreneur (taken as a social figure) who invests in herself, differences of class, religion, ethnicity or race are phantasms of a bygone age. The differences in the name of which wars were waged and social movements organised in the past have no more meaning in her eyes than cheap advertising.
There is, for her, something improper or inauthentic about them, as Giorgio Agamben says of the new petty bourgeoisie in The Coming Community. Like Britain's former prime minister, David Cameron, she is sceptical of multiculturalism.
• On the other hand, the university has not ceased to draw on its modern role as a producer, protector and inculcator of national identity and culture. Much of what is going on today in South African universities under the name of decolonisation and Africanisation draws on this heritage and understanding of the modern university, even if tacitly. That is why students will politicise themselves by identifying with an ethnicity or nationality.
Nationalism was an emancipatory political project during the anti-colonial struggles of the second half of the 20th century. It was not tribalist or communalist.
According to Eric Hobsbawm in Nations and Nationalism since 1780, its aim was to extend the size of the social, cultural and political group. It was not to restrict it or to separate it from others. Nationalism was a political programme divorced from ethnicity.
Is this political nationalism a viable way of resisting neoliberalism today? Can it gainsay the primacy of economic rationality and the culture of narcissist consumerism, and restore meaning to the political question concerning the common good? Or has nationalism irreversibly become an ethnic, separatist project? It is not easy to say. So far, we have witnessed one kind of response to the social insecurities generated by the global spread of neoliberalism. This is a return to ethnicity and religion as havens of safety and security.
When society fails us owing to job insecurity, and, concomitantly, with regard to housing and healthcare, one tends to fall back on one's ethnicity or religious identity as an ultimate guarantee.
Moreover, nationalism as a political programme depends on the idea of the state. It holds that a group defined as a "nation" has the right to form a territorial state and exercise sovereign power over it. But given the decline of the state, there are reasons to think that political nationalism has withdrawn as a real possibility.
By the "decline of the state" I do not mean that it no longer exists. The state has never been more present in the private life of individuals. It regulates the relations between men and women. It regulates their birth and death, the rearing of children, the health of individuals and so forth. The state is, today, ubiquitous.
What some people mean by the "decline of the state" is that, with the existence of transnational corporations, it is no longer the most important site of the reproduction of capital. The state has become managerial. Its function is to manage obstacles to liberalisation and free trade.
Perhaps that is one of the challenges of the 21st century. How is a "nation" possible, a "national community" that is not defined by ethnicity, on the one hand, and, on the other, that forsakes the desire to exercise sovereign power in general and, in particular, over a territorial state?
The university is perhaps the place where such a community can begin to be thought.
Rafael Winkler is an associate professor in the philosophy department at the University of Johannesburg
#### [Feb 05, 2019] Capitalists need their options regulated and their markets ripped from their control by the state. Profits must be put to a social purpose or heavily taxed; dividends, executive comp and interest payments included
###### Feb 05, 2019 | economistsview.typepad.com
Mr. Bill -> Mr. Bill... , January 31, 2019 at 08:22 PM
Is anyone else tired of the longest, least productive waste of war in American history ? What have we achieved, where are we going with this ? More war.
Mr. Bill -> Mr. Bill... , January 31, 2019 at 08:31 PM
We are being fed a fairy tale of war about what men, long dead, did. And the reason they did it. America is being strangled by the burden of belief that now is like then.
Mr. Bill -> Mr. Bill... , January 31, 2019 at 08:46 PM
The patrician men and women administrators, posturing as soldiers like the WW2 army, lie for self-profit. Why does anyone believe them? Korea, Vietnam, Iraq: each an economic decision, rather than a security issue.
Mr. Bill -> Mr. Bill... , January 31, 2019 at 08:48 PM
America is dying on the same sword as Rome, for the same reason.
Plp -> JF... , January 31, 2019 at 07:28 AM
Capitalists need their options regulated and their markets ripped from their control by the state. Profits must be put to a social purpose or heavily taxed; dividends, executive comp and interest payments included.
Julio -> mulp ... , January 31, 2019 at 08:58 AM
Well done! Much clearer than your usual. There are several distinct motivations for taxes. We have been far enough from fairness to workers, for so long, that we need to use the tax system to redistribute the accumulated wealth of the plutocrats.
So I would say high marginal rates are a priority, which matches both objectives. Wealth tax is needed until we reverse the massive inequality supported by the policies of the last 40 years.
Carbon tax and the like are a different thing, use of the tax code to promote a particular policy and reduce damage to the commons.
Gerald -> Julio ... , January 31, 2019 at 04:14 PM
"...we need to use the tax system to redistribute the accumulated wealth of the plutocrats. So I would say high marginal rates are a priority..."
Forgive me, but high marginal rates (which I hugely favor) don't "redistribute the accumulated wealth" of the plutocrats. If such high marginal rates are ever enacted, they'll apply only to the current income of such plutocrats.
Julio -> Gerald... , January 31, 2019 at 06:22 PM
You merged paragraphs, and elided the next one. The way I see it, high rates are a prerequisite to prevent the reaccumulation of obscene wealth, and its diversion into financial gambling.
But yes that would be a very slow way to redistribute what has already accumulated.
Gerald -> Julio ... , February 01, 2019 at 04:48 AM
Didn't mean to misinterpret what you were saying, sorry. High rates are not only "a prerequisite to prevent the reaccumulation of obscene wealth," they are also a reimposition of fair taxation on current income (if it ever happens, of course).
Global Groundhog -> Julio ... , February 02, 2019 at 01:39 PM
"Wealth tax is needed until we reverse the massive inequality supported by the policies of the last 40 years. Carbon tax and the like are a different thing, use of the tax code to promote a particular policy and reduce damage to the commons."
more wisdom as usual!
Although a wealth tax is unlikely, it could be a stopgap; it could also be a guideline for other taxes. For example, Elizabeth Warren points out that billionaires pay about 3% of their net worth into their annual tax bill whereas workers pay about 7% of their net worth into their annual tax bill. Do you see how that works?
It doesn't? This Warren argument gives us a guideline. It shows us where other taxes should be adjusted to even out this percentage of net worth that people are taxed for. During the last meltdown 10 years or so ago, we were collecting more tax from the payroll than we were from the income tax. This phenomenon was a heavy burden on those of low net worth. All this needs to be re-sorted; we've got to sort this out.
And the carbon tax? It may never be, but it indicates what needs to be done to make this country more efficient. For example, some folks are spending half a million dollars on a Maybach automobile, about the same amount on a Ferrari or an Alfa Romeo Giulia Quadrifoglio, but the roads are built for a mere 40 miles an hour and full of potholes.
What good is it to own a fast car like that when you can't drive more than 40-50 miles an hour, stuck in traffic jams? Something is wrong with taxation incentives. We need a better grid-work of roads that will get people there faster.

Meanwhile most of those sports cars just sit in the garage. We need a comprehensive, integrated grid-work of one-way streets, roads, highways, and interstates with no traffic lights and no stop signs; merely free-flow ramp-off overpass interchanges.
thanks, Julio! thanks again!
JF -> Global Groundhog... , February 04, 2019 at 05:42 AM
Wonderful to see the discussion about public finance shifting to use net worth proportions as the focus and metric.
Wonderful. Let us see if press/media stories and opinion pieces use this same way of talking about the financing of self-government.
Mr. Bill -> anne... , February 03, 2019 at 08:15 PM
Jesus Christ said, in so many words, that a man's worth will be judged by his generosity and his avarice.

"24 And the disciples were amazed at His words. But Jesus said to them again, 'Children, how hard it is to enter the kingdom of God! 25 It is easier for a camel to pass through the eye of a needle than for a rich man to enter the kingdom of God.' 26 They were even more astonished and said to one another, 'Who then can be saved?'"
#### [Jan 31, 2019] Linus Torvalds and others on Linux's systemd by Steven J. Vaughan-Nichols
##### "... As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions. ..."
###### Sep 19, 2014 | www.zdnet.com
So what do Linux's leaders think of all this? I asked them and this is what they told me.
Linus Torvalds said:

"I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues."
Theodore "Ted" Ts'o, a leading Linux kernel developer and a Google engineer, sees systemd as potentially being more of a problem. "The bottom line is that they are trying to solve some real problems that matter in some use cases. And, [that] sometimes that will break assumptions made in other parts of the system."
Another concern that Ts'o made -- which I've heard from many other developers -- is that the systemd move was made too quickly: "The problem is sometimes what they break are in other parts of the software stack, and so long as it works for GNOME, they don't necessarily consider it their responsibility to fix the rest of the Linux ecosystem."
This, as Ts'o sees it, feeds into another problem:

"Systemd problems might not have mattered that much, except that GNOME has a similar attitude; they only care for a small subset of the Linux desktop users, and they have historically abandoned some ways of interacting with the Desktop in the interest of supporting touchscreen devices and to try to attract less technically sophisticated users.

If you don't fall in the demographic of what GNOME supports, you're sadly out of luck. (Or you become a second-class citizen, being told that you have to rely on GNOME extensions that may break on every single new version of GNOME.)"
Ts'o has an excellent point. GNOME 3.x has alienated both users and developers. He continued,

"As a result, many traditional GNOME users have moved over to Cinnamon, XFCE, KDE, etc. But as systemd starts subsuming new functions, components like network-manager will only work on systemd or other components that are forced to be used due to a network of interlocking dependencies; and it may simply not be possible for these alternate desktops to continue to function, because there is [no] viable alternative to systemd supported by more and more distributions."
Of course, Ts'o continued, "None of these nightmare scenarios have happened yet. The people who are most stridently objecting to systemd are people who are convinced that the nightmare scenario is inevitable so long as we continue on the same course and altitude."
Ts'o is "not entirely certain" it's going to happen, but he's afraid it will.
What I find puzzling about all this is that even though everyone admits that sysvinit needed replacing and many people dislike systemd, the distributions keep adopting it. Only a few distributions, including Slackware, Gentoo, PCLinuxOS, and Chrome OS, haven't adopted it.

It's not like there aren't alternatives. These include Upstart, runit, and OpenRC.
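For readers who want to check which init system a given machine is actually running, a rough shell heuristic follows. The paths it probes are common conventions rather than guarantees, and layouts vary by distribution, so treat this as a sketch rather than a definitive test.

```shell
# Guess the running init system from well-known paths and PID 1.
# These checks are heuristics; distribution layouts differ.
if [ -d /run/systemd/system ]; then
    # systemd creates this directory early in boot
    init_name="systemd"
elif [ -x /sbin/openrc ]; then
    init_name="openrc"
else
    # Fall back to the command name of PID 1 (e.g. "init", "runit")
    init_name=$(ps -p 1 -o comm= 2>/dev/null || echo "unknown")
fi
echo "detected init: $init_name"
```

The `/run/systemd/system` check is the same one systemd itself documents for detecting whether the system was booted with systemd, which is why it is preferred here over inspecting the name of PID 1 (which may simply report "init" via a symlink).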
If systemd really does turn out to be as bad as some developers fear, there are plenty of replacements waiting in the wings. Indeed, rather than hear so much about how awful systemd is, I'd rather see developers spending their time working on an alternative.
#### [Jan 29, 2019] The Language of Neoliberal Education by Henry Giroux
##### "... As a movement, it produces and legitimates massive economic inequality and suffering, privatizes public goods, dismantles essential government agencies, and individualizes all social problems. In addition, it transforms the political state into the corporate state, and uses the tools of surveillance, militarization, and law and order to discredit the critical press and media, undermine civil liberties while ridiculing and censoring critics. ..."
###### Dec 25, 2018 | www.counterpunch.org
This interview with Henry Giroux was conducted by Mitja Sardoč, of the Educational Research Institute, in the Faculty of the Social Sciences, at University of Ljubljana, Slovenia.
Mitja Sardoč: For several decades now, neoliberalism has been at the forefront of discussions not only in the economy and finance but has infiltrated our vocabulary in a number of areas as diverse as governance studies, criminology, health care, jurisprudence, education etc. What has triggered the use and application of this 'economistic' ideology associated with the promotion of effectiveness and efficiency?
Henry Giroux: Neoliberalism has become the dominant ideology of the times and has established itself as a central feature of politics. Not only does it define itself as a political and economic system whose aim was to consolidate power in the hands of a corporate and financial elite, it also wages a war over ideas. In this instance, it has defined itself as a form of commonsense and functions as a mode of public pedagogy that produces a template for structuring not just markets but all of social life.
In this sense, it has functioned, and continues to function, not only through public and higher education to produce and distribute market-based values, identities, and modes of agency, but also in wider cultural apparatuses and platforms to privatize, deregulate, economize, and subject all of the commanding institutions and relations of everyday life to the dictates of privatization, efficiency, deregulation, and commodification.
Since the 1970s as more and more of the commanding institutions of society come under the control of neoliberal ideology, its notions of common sense – an unchecked individualism, harsh competition, an aggressive attack on the welfare state, the evisceration of public goods, and its attack on all models of sociality at odds with market values – have become the reigning hegemony of capitalist societies.
What many on the left have failed to realize is that neoliberalism is about more than economic structures, it is also is a powerful pedagogical force – especially in the era of social media – that engages in full-spectrum dominance at every level of civil society. Its reach extends not only into education but also among an array of digital platforms as well as in the broader sphere of popular culture. Under neoliberal modes of governance, regardless of the institution, every social relation is reduced to an act of commerce.
Neoliberalism's promotion of effectiveness and efficiency gives credence to its willingness and success in making education central to politics. It also offers a warning to progressives: as Pierre Bourdieu has insisted, the left has underestimated the symbolic and pedagogical dimensions of struggle and has not always forged appropriate weapons to fight on this front.
Mitja Sardoč: According to the advocates of neoliberalism, education represents one of the main indicators of future economic growth and individual well-being. How – and why – did education become one of the central elements of the 'neoliberal revolution'?
Henry Giroux: Advocates of neoliberalism have always recognized that education is a site of struggle over which there are very high stakes regarding how young people are educated, who is to be educated, and what vision of the present and future should be most valued and privileged. Higher education in the sixties went through a revolutionary period in the United States and many other countries as students sought both to redefine education as a democratic public sphere and to open it up to a variety of groups that up to that point had been excluded. Conservatives were extremely frightened by this shift and did everything they could to counter it. Evidence of this is clear in the production of the Powell Memo published in 1971 and later in The Trilateral Commission's book-length report, namely The Crisis of Democracy, published in 1975. From the 1960s on, conservatives, especially the neoliberal right, have waged a war on education in order to rid it of its potential role as a democratic public sphere. At the same time, they sought aggressively to restructure its modes of governance, undercut the power of faculty, privilege knowledge that was instrumental to the market, define students mainly as clients and consumers, and reduce the function of higher education largely to training students for the global workforce.
At the core of the neoliberal investment in education is a desire to undermine the university's commitment to the truth, critical thinking, and its obligation to stand for justice and assume responsibility for safeguarding the interests of the young as they enter a world marked by massive inequalities, exclusion, and violence at home and abroad. Higher education may be one of the few institutions left in neoliberal societies that offers a protective space to question, challenge, and think against the grain.
Neoliberalism considers such a space to be dangerous, and its advocates have done everything possible to eliminate higher education as a space where students can realize themselves as critical citizens, faculty can participate in the governing structure, and education can define itself as a right rather than a privilege.
Mitja Sardoč: Almost by definition, reforms and other initiatives aimed at improving educational practice have been one of the pivotal mechanisms through which the neoliberal agenda of effectiveness and efficiency has infiltrated education. What aspect of neoliberalism and its educational agenda do you find most problematic? Why?
Henry Giroux: Increasingly aligned with market forces, higher education is mostly primed for teaching business principles and corporate values, while university administrators are prized as CEOs or bureaucrats in a neoliberal-based audit culture. Many colleges and universities have been McDonalds-ized as knowledge is increasingly viewed as a commodity, resulting in curricula that resemble a fast-food menu. In addition, faculty are increasingly subjected to a Wal-Mart model of labor relations designed, as Noam Chomsky points out, "to reduce labor costs and to increase labor servility". In the age of precarity and flexibility, the majority of faculty have been reduced to part-time positions, subjected to low wages, have lost control over the conditions of their labor, suffered reduced benefits, and are frightened to address social issues critically in their classrooms for fear of losing their jobs.
The latter may be the central issue curbing free speech and academic freedom in the academy. Moreover, many of these faculty are barely able to make ends meet because of their impoverished salaries, and some are on food stamps. If faculty are treated like service workers, students fare no better and are now relegated to the status of customers and clients.
Moreover, they are not only inundated with the competitive, privatized, and market-driven values of neoliberalism, they are also punished by those values in the form of exorbitant tuition rates, astronomical debts owed to banks and other financial institutions, and in too many cases a lack of meaningful employment. As a project and movement, neoliberalism undermines the ability of educators and others to create the conditions that give students the opportunity to acquire the knowledge and the civic courage necessary to make desolation and cynicism unconvincing and hope practical.
As an ideology, neoliberalism is at odds with any viable notion of democracy, which it sees as the enemy of the market. Yet democracy cannot work if citizens are not autonomous, self-judging, curious, reflective, and independent – qualities that are indispensable for students if they are going to make vital judgments and choices about participating in and shaping decisions that affect everyday life, institutional reform, and governmental policy.
Mitja Sardoč: Why are large-scale assessments and quantitative data in general a central part of the 'neo-liberal toolkit' in educational research?
Henry Giroux: These are the tools of accountants and have nothing to do with larger visions or questions about what matters as part of a university education. The overreliance on metrics and measurement has become a tool used to remove questions of responsibility, morality, and justice from the language and policies of education. I believe the neoliberal toolkit, as you put it, is part of the discourse of civic illiteracy that now runs rampant in higher educational research, a kind of mind-numbing investment in a metric-based culture that kills the imagination and wages an assault on what it means to be critical, thoughtful, daring, and willing to take risks. Metrics in the service of an audit culture have become the new face of a culture of positivism, a kind of empirical-based panopticon that turns ideas into numbers and the creative impulse into ashes. Large-scale assessments and quantitative data are the driving mechanisms by which everything is absorbed into the culture of business.
The distinction between information and knowledge has become irrelevant in this model and anything that cannot be captured by numbers is treated with disdain. In this new audit panopticon, the only knowledge that matters is that which can be measured. What is missed here, of course, is that measurable utility is a curse as a universal principle because it ignores any form of knowledge based on the assumption that individuals need to know more than how things work or what their practical utility might be.
This is a language that cannot answer the question of what the responsibility of the university and educators might be in a time of tyranny, in the face of the unspeakable, and the current widespread attack on immigrants, Muslims, and others considered disposable. This is a language that is both afraid and unwilling to imagine what alternative worlds inspired by the search for equality and justice might be possible in an age beset by the increasing dark forces of authoritarianism.
Mitja Sardoč: While the analysis of the neoliberal agenda in education is well documented, the analysis of the language of neoliberal education is at the fringes of scholarly interest. In particular, the expansion of the neoliberal vocabulary with egalitarian ideas such as fairness, justice, equality of opportunity, well-being etc. has received at best only limited attention. What factors have contributed to this shift of emphasis?
Henry Giroux: Neoliberalism has upended how language is used in both education and the wider society. It works to appropriate discourses associated with liberal democracy that have become normalized in order to both limit their meanings and use them to mean the opposite of what they have meant traditionally, especially with respect to human rights, justice, informed judgment, critical agency, and democracy itself. It is waging a war not just over economic structures but over memory, words, meaning, and politics. Neoliberalism takes words like freedom and limits them to the freedom to consume, spew out hate, and celebrate notions of self-interest and a rabid individualism as the new common sense.
Equality of opportunity means engaging in ruthless forms of competition, a war of all against all ethos, and a survival of the fittest mode of behavior.
The vocabulary of neoliberalism operates in the service of violence in that it reduces the capacity for human fulfillment in the collective sense, diminishes a broad understanding of freedom as fundamental to expanding the capacity for human agency, and diminishes the ethical imagination by reducing it to the interest of the market and the accumulation of capital. Words, memory, language and meaning are weaponized under neoliberalism.
Certainly, neither the media nor progressives have given enough attention to how neoliberalism colonizes language because neither group has given enough attention to viewing the crisis of neoliberalism as not only an economic crisis but also a crisis of ideas. Education is not viewed as a force central to politics and as such the intersection of language, power, and politics in the neoliberal paradigm has been largely ignored. Moreover, at a time when civic culture is being eradicated, public spheres are vanishing, and notions of shared citizenship appear obsolete, words that speak to the truth, reveal injustices and provide informed critical analysis also begin to disappear.
This makes it all the more difficult to engage critically with neoliberalism's colonization of language. In the United States, Trump's prodigious tweets signify not only a time in which governments engage in the pathology of endless fabrications, but also how they function to reinforce a pedagogy of infantilism designed to animate his base in a glut of shock while reinforcing a culture of war, fear, divisiveness, and greed in ways that disempower his critics.
Mitja Sardoč: You have written extensively on neoliberalism's exclusively instrumental view of education, its reductionist understanding of effectiveness and its distorted image of fairness. In what way should radical pedagogy fight back against neoliberalism and its educational agenda?
Henry Giroux: First, higher education needs to reassert its mission as a public good in order to reclaim its egalitarian and democratic impulses. Educators need to initiate and expand a national conversation in which higher education can be defended as a democratic public sphere and the classroom as a site of deliberative inquiry, dialogue, and critical thinking, a site that makes a claim on the radical imagination and a sense of civic courage. At the same time, the discourse on defining higher education as a democratic public sphere can provide the platform for a more expressive commitment in developing a social movement in defense of public goods and against neoliberalism as a threat to democracy. This also means rethinking how education can be funded as a public good and what it might mean to fight for policies that both stop the defunding of education and fight to relocate funds from the death dealing military and incarceration budgets to those supporting education at all levels of society. The challenge here is for higher education not to abandon its commitment to democracy and to recognize that neoliberalism operates in the service of the forces of economic domination and ideological repression.
Second, educators need to acknowledge and make good on the claim that a critically literate citizen is indispensable to a democracy, especially at a time when higher education is being privatized and subject to neoliberal restructuring efforts. This suggests placing ethics, civic literacy, social responsibility, and compassion at the forefront of learning so as to combine knowledge, teaching, and research with the rudiments of what might be called the grammar of an ethical and social imagination. This would imply taking seriously those values, traditions, histories, and pedagogies that would promote a sense of dignity, self-reflection, and compassion at the heart of a real democracy. Third, higher education needs to be viewed as a right, as it is in many countries such as Germany, France, Norway, Finland, and Brazil, rather than a privilege for a limited few, as it is in the United States, Canada, and the United Kingdom. Fourth, in a world driven by data, metrics, and the replacement of knowledge by the overabundance of information, educators need to enable students to engage in multiple literacies extending from print and visual culture to digital culture. They need to become border crossers who can think dialectically, and learn not only how to consume culture but also to produce it. Fifth, faculty must reclaim their right to control over the nature of their labor, shape policies of governance, and be given tenure track lines with the guarantee of secure employment and protection for academic freedom and free speech.
Mitja Sardoč: Why is it important to analyze the relationship between neoliberalism and civic literacy particularly as an educational project?
Henry Giroux: The ascendancy of neoliberalism in American politics has made visible a plague of deep-seated civic illiteracy, a corrupt political system and a contempt for reason that has been decades in the making.
It also points to the withering of civic attachments, the undoing of civic culture, the decline of public life and the erosion of any sense of shared citizenship. As market mentalities and moralities tighten their grip on all aspects of society, democratic institutions and public spheres are being downsized, if not altogether disappearing.
As these institutions vanish – from public schools and alternative media to health care centers – there is also a serious erosion of the discourse of community, justice, equality, public values, and the common good. At the same time reason and truth are not simply contested, or the subject of informed arguments as they should be, but wrongly vilified – banished to Trump's poisonous world of fake news. For instance, under the Trump administration, language has been pillaged, truth and reason disparaged, and words and phrases emptied of any substance or turned into their opposite, all via the endless production of Trump's Twitter storms and the ongoing clown spectacle of Fox News. This grim reality points to a failure in the power of the civic imagination, political will, and open democracy. It is also part of a politics that strips the social of any democratic ideals and undermines any understanding of education as a public good. What we are witnessing under neoliberalism is not simply a political project to consolidate power in the hands of the corporate and financial elite but also a reworking of the very meaning of literacy and education as crucial to what it means to create an informed citizenry and democratic society. In an age when literacy and thinking become dangerous to the anti-democratic forces governing all the commanding economic and cultural institutions of the United States, truth is viewed as a liability, ignorance becomes a virtue, and informed judgments and critical thinking are demeaned and turned into rubble and ashes. Under the reign of this normalized architecture of alleged common sense, literacy is regarded with disdain, words are reduced to data, and science is confused with pseudo-science. Traces of critical thought appear more and more at the margins of the culture as ignorance becomes the primary organizing principle of American society.
Under the forty-year reign of neoliberalism, language has been militarized, handed over to advertisers, game show idiocy, and a political and culturally embarrassing anti-intellectualism sanctioned by the White House. Couple this with a celebrity culture that produces an ecosystem of babble, shock, and tawdry entertainment. Add on the cruel and clownish anti-public intellectuals such as Jordan Peterson who defend inequality, infantile forms of masculinity, and define ignorance and a warrior mentality as part of the natural order, all the while dethroning any viable sense of agency and the political.
The culture of manufactured illiteracy is also reproduced through a media apparatus that trades in illusions and the spectacle of violence. Under these circumstances, illiteracy becomes the norm and education becomes central to a version of neoliberal zombie politics that functions largely to remove democratic values, social relations, and compassion from the ideology, policies and commanding institutions that now control American society. In the age of manufactured illiteracy, there is more at work than simply an absence of learning, ideas or knowledge. Nor can the reign of manufactured illiteracy be solely attributed to the rise of the new social media, a culture of immediacy, and a society that thrives on instant gratification. On the contrary, manufactured illiteracy is a political and educational project central to a right-wing corporatist ideology and set of policies that work aggressively to depoliticize people and make them complicitous with the neoliberal and racist political and economic forces that impose misery and suffering upon their lives. There is more at work here than what Ariel Dorfman calls a "felonious stupidity"; there is also the workings of a deeply malicious form of 21st-century neoliberal fascism and a culture of cruelty in which language is forced into the service of violence while waging a relentless attack on the ethical imagination and the notion of the common good. In the current historical moment, illiteracy and ignorance offer the pretense of a community, and in doing so have undermined the importance of civic literacy both in higher education and the larger society.
Mitja Sardoč: Is there any shortcoming in the analysis of such a complex (and controversial) social phenomenon as neoliberalism and its educational agenda? Put differently: is there any aspect of the neoliberal educational agenda that its critics have failed to address?
Henry Giroux: Any analysis of an ideology such as neoliberalism will always be incomplete. And the literature on neoliberalism in its different forms and diverse contexts is quite abundant. What is often underplayed in my mind are three things.
First, too little is said about how neoliberalism functions not simply as an economic model for finance capital but as a public pedagogy that operates through a diverse number of sites and platforms.
Second, not enough has been written about its war on a democratic notion of sociality and the concept of the social.
Third, at a time in which echoes of a past fascism are on the rise not enough is being said about the relationship between neoliberalism and fascism, or what I call neoliberal fascism, especially the relationship between the widespread suffering and misery caused by neoliberalism and the rise of white supremacy.
I define neoliberal fascism as both a project and a movement, which functions as an enabling force that weakens, if not destroys, the commanding institutions of a democracy while undermining its most valuable principles.
Consequently, it provides a fertile ground for the unleashing of the ideological architecture, poisonous values, and racist social relations sanctioned and produced under fascism. Neoliberalism and fascism conjoin and advance in a comfortable and mutually compatible project and movement that connects the worst excesses of capitalism with fascist ideals – the veneration of war, a hatred of reason and truth; a populist celebration of ultra-nationalism and racial purity; the suppression of freedom and dissent; a culture which promotes lies, spectacles, a demonization of the other, a discourse of decline, brutal violence, and ultimately state violence in heterogeneous forms. As a project, it destroys all the commanding institutions of democracy and consolidates power in the hands of a financial elite.
As a movement, it produces and legitimates massive economic inequality and suffering, privatizes public goods, dismantles essential government agencies, and individualizes all social problems. In addition, it transforms the political state into the corporate state, and uses the tools of surveillance, militarization, and law and order to discredit the critical press and media, undermine civil liberties while ridiculing and censoring critics.
What critics need to address is that neoliberalism is the face of a new fascism and as such it speaks to the need to repudiate the notion that capitalism and democracy are the same thing, renew faith in the promises of a democratic socialism, create new political formations around an alliance of diverse social movements, and take seriously the need to make education central to politics itself.
#### [Jan 29, 2019] Bilderberg 2015: where criminals mingle with ministers by Charlie Skelton
##### "... That one group of almost-certainly-criminals meets another group of almost-certainly-criminals is hardly surprising. That the whole shebang is protected by the host's police force is even less so ..."
Convicted criminals. Such as disgraced former CIA boss, David Petraeus, who's just been handed a $100,000 (£64,000) fine and two years' probation for leaking classified information. Petraeus now works for the vulturous private equity firm KKR, run by Henry Kravis, who does arguably Bilderberg's best impression of Gordon Gekko out of Wall Street. Which he cleverly combines with a pretty good impression of an actual gecko.

... ... ...

"Can I go now?" Another no. So I continued my list of criminals. I moved on to someone closer to home: René Benko, the Austrian real estate baron, who had a conviction for bribery upheld recently by the supreme court. Which didn't stop him making the cut for this year's conference. "You know Benko?" The cop nodded. It wasn't easy to see in the glare of the searchlight, but he looked a little ashamed.

... ... ...

I decided to reward their vigilance with a chat about HSBC. The chairman of the troubled banking giant, Douglas Flint, is a regular attendee at Bilderberg, and he's heading here again this year, along with a member of the bank's board of directors, Rona Fairhead. Perhaps most tellingly, Flint is finding room in his Mercedes for the bank's busiest employee: its chief legal officer, Stuart Levey. A Guardian editorial this week branded HSBC "a bank beyond shame" after it announced plans to cut 8,000 jobs in the UK, while at the same time threatening to shift its headquarters to Hong Kong. And having just been forced to pay £28m in fines to Swiss regulators investigating money-laundering claims. The big question, of course, is how will the chancellor of the exchequer, George Osborne, respond to all this? Easy – he'll go along to a luxury Austrian hotel and hole up with three senior members of HSBC in private. For three days. High up on this year's conference agenda is "current economic issues", and without a doubt, one of the biggest economic issues for Osborne at the moment is the future and finances of Europe's largest bank.
Luckily, the chancellor will have plenty of time at Bilderberg to chat all this through with Flint, Levey and Fairhead. And the senior Swiss financial affairs official, Pierre Maudet, a member of the Geneva state council in charge of the department of security and the economy. It's all so incredibly convenient.

... ... ...

consumersunite -> MickGJ 12 Jun 2015 15:23

Let's see, maybe because we have read over their leaked documents from the 1950s in which they discussed currency manipulation and GATT. Everything they have discussed in their meetings over the past decades has almost come to fruition. There are elected officials meeting with criminals such as HSBC. Did you even read the article? If you did, and you are not het up or whatever you call it, then you are of a peasant mentality, and there is no use talking to you. The Bilderberg set call people like you either their "dogs" (if you are in politics or the military) or the "dead." I won't be looking for your response because you have confirmed that you do not matter.

Carpasia -> MickGJ 12 Jun 2015 10:52

Thank you for your comment, my good man. Hatred is human, and helps us all to avoid pain, for pain, especially unnecessary pain, is allowed to be hated by the agreement of all, if nothing else is. I would hate to be beaten by Nazis. Thus, I would avoid going to a place where that could occur. That is how hatred works for me. It is the only way it can work, and not be pernicious to the self and others. I distrust the international order as it is the means, harnessed by money, whether corporate or state or individual or monarchical, by which this world is being destroyed. Could things have been better? Jesus is on one end of the spectrum, and Lord Acton on the other, of the spectrums of viewpoints from which that could be properly assessed.

If the corruption at the heart of the international order is not regulated properly, this world will come to an end, not the end of the world itself, but the end of the world as we know it. This is happening now. The world is finite. I am not a xenophobe. In my experience, the people that are most likely to hurt me, and thus deserve fear, are those closest. Perhaps that is a cynical way of describing it, but anyone who thinks honestly about it would accede to the notion that it is the people who "love" us that hurt us the most, for we agree to be vulnerable to them. It is the matrix of love. As for Austria and Bavaria, I have visited both places and they were, both, the cleanest locales I have ever seen, with Switzerland having to be mentioned in the same breath, of course. I take a certain liberty in writing. I am not damning the human race, or strangers to me. If I did not entertain, but caused offence, I apologize to you. I do not possess omniscience, and my words will have to speak for themselves. Thank you, again.

DemonicWarlordSlayer 12 Jun 2015 08:02

"How Geo Bush's Grandfather Helped Hitler's Rise to Power" in the UK Guardian > "Did Geo H W Bush Coordinate a JFK Hit Team" at Veterans Today > "9/11 Conspiracy Solved, Names, Connections, Details" on youtube....dot-to-dot of the Demonic Warlord's Crimes Against Humanity....end feudalism.

Carpasia 12 Jun 2015 07:09

Excellent article. I visited Austria once, and I know of what he speaks. It was the one place I have ever visited that I thought I would be jailed if I littered. I was wandering at the time, but I tentatively had a meal of chicken and departed henceforth. Austrians are an interesting lot, to be sure. That they are perfect goes without saying. Their main virtue is that they do not travel, and that strangers, which we call tourists these days, are not welcomed. If only we were all like that, the world would be a far better place. Austrians do everything well, including crime.

Some of the greatest crimes in the world have been committed by Austrians, but their crimes did not include not having their papers. During World War 2, and I pass over Hitler, the German machine of death had an unusually high proportion of Austrians in commanding roles assisting it. It cannot be explained away by saying they were some kind of faux Germans, and so it matters not. Indeed, if anything, Germans are faux Austrians, looked at in the broad brush of history. Men of many nations joined the Germans and adorned themselves with the Death's Head, but many Austrians might as well have tattooed it onto their foreheads. I know of what I speak, for I read on it, and will justify if questioned. Reinhard Heydrich is an epitome of this, in the true sense of the word. Kurt Waldheim was another, too young to rise too far before the Ragnarok of May of 1945, but government of the world was not out of his reach, a man who had materially assisted the transportation of the Jews of Thessaloniki to the gas chambers of Auschwitz and, when challenged, was unrepentant, not as a racist, but as something worse even, as a man whose great virtue was that he followed orders. It is order that the Austrians value over everything. Even crime is ordered. In the common-law west we think criminals are disordered beasts to be locked up. We do not give them papers. They are registered only to warn us of their existence, and we do not like to let them travel, as much as we could benefit by their absence, because we think they flee to license, and we think it wrong to inflict them upon innocents abroad. In Austria, the criminal is the man with no papers. If he has papers, all is well, and he is no criminal, whatever he has done.

colingorton 12 Jun 2015 03:19

What do you mean "where criminals mingle with ministers"? That is assuming that ministers are not criminals. Considering that there will be ministers from the USA, Canada, France, Germany, Italy, Japan and the UK, I'd suggest that there is a near 100% certainty that some, if not all, the ministers there are criminals. That one group of almost-certainly-criminals meets another group of almost-certainly-criminals is hardly surprising. That the whole shebang is protected by the host's police force is even less so. How far can all this mutual back scratching go? It seems that the only alternative left is far too drastic, but there really seems to be no place for a legal alternative, does there?

#### [Jan 29, 2019] 7th Circuit Rules Age Discrimination Law Does Not Include Job Applicants

##### Notable quotes:

##### "... By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans. ..."

##### "... Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had been out of work and job hunting for three years" when he applied for the CareFusion job. ..."

##### "... Unfortunately, the seventh circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim. ..."

##### "... hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. ..."

##### "... The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. ..."

##### "... I forbade my kids to study programming. ..."

##### "... I'm re-reading the classic of sociology Ain't No Makin' It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determined that even then there was no stable private sector employment and your best option is a government job or to have an excellent "network" which is understandably hard for most people to achieve. ..."

##### "... I think the trick is to study something and programming, so the programming becomes a tool rather than an end. ..."

##### "... the problem is it is almost impossible to exit the programming business and join another domain. Anyone can enter it. (evidence – all the people with "engineering" degrees from India) Also my wages are now 50% of what I made 10 years ago (nominal). Also I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (i.e., preparing for the next interview). ..."

##### "... I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. ..."

##### "... Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year. ..."

###### Jan 29, 2019 | www.nakedcapitalism.com

By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader.

The US Court of Appeals for the Seventh Circuit decided in Kleber v. CareFusion Corporation last Wednesday that disparate impact liability under the Age Discrimination in Employment Act (ADEA) applies only to current employees and does not include job applicants. The case was brought by Dale Kleber, an attorney, who applied for a senior position in CareFusion's legal department. The job description required applicants to have "3 to 7 years (no more than 7 years) of relevant legal experience."
Kleber was 58 at the time he applied and had more than seven years of pertinent experience. CareFusion hired a 29-year-old applicant who met but did not exceed the experience requirement. Kleber filed suit, pursuing claims for both disparate treatment and disparate impact under the ADEA. The Chicago Tribune notes in Hinsdale man loses appeal in age discrimination case that challenged experience caps in job ads that "Kleber had out of work and job hunting for three years" when he applied for the CareFusion job. Some Basics Let's start with some basics, as the US Equal Employment Opportunity Commission (EEOC) set out in a brief primer on basic US age discrimination law entitled Questions and Answers on EEOC Final Rule on Disparate Impact and "Reasonable Factors Other Than Age" Under the Age Discrimination in Employment Act of 1967 . The EEOC began with a brief description of the purpose of the ADEA: The purpose of the ADEA is to prohibit employment discrimination against people who are 40 years of age or older. Congress enacted the ADEA in 1967 because of its concern that older workers were disadvantaged in retaining and regaining employment. The ADEA also addressed concerns that older workers were barred from employment by some common employment practices that were not intended to exclude older workers, but that had the effect of doing so and were unrelated to job performance. It was with these concerns in mind that Congress created a system that included liability for both disparate treatment and disparate impact. What's the difference between these two concepts? According to the EEOC: [The ADEA] prohibits discrimination against workers because of their older age with respect to any aspect of employment. 
In addition to prohibiting intentional discrimination against older workers (known as "disparate treatment"), the ADEA prohibits practices that, although facially neutral with regard to age, have the effect of harming older workers more than younger workers (known as "disparate impact"), unless the employer can show that the practice is based on a [Reasonable Factor Other Than Age (RFOA)].

The crux: it's much easier for a plaintiff to prove disparate impact, because s/he needn't show that the employer intended to discriminate. Of course, many if not most employers are savvy enough not to be explicit about their intentions to discriminate against older people, as they don't wish to get sued.

District, Panel, and Full Seventh Circuit Decisions

The district court dismissed Kleber's disparate impact claim, on the grounds that the text of the statute (§ 4(a)(2)) did not extend to outside job applicants. Kleber then voluntarily dismissed his separate claim for disparate treatment liability to appeal the dismissal of his disparate impact claim. No doubt he was aware -- either because he was an attorney, or because of the legal advice he received -- that it is much more difficult to prevail on a disparate treatment claim, which would require that he establish CareFusion's intent to discriminate. Or at least that was true before this decision was rendered.

Unfortunately, the Seventh Circuit has now held that the disparate impact section of the ADEA does not extend to job applicants. Judge Michael Scudder, a Trump appointee, wrote the majority 8-4 opinion, which reverses an earlier 2-1 panel ruling last April in Kleber's favor that had initially overruled the district court's dismissal of Kleber's disparate impact claim. The majority ruled:

By its terms, § 4(a)(2) proscribes certain conduct by employers and limits its protection to employees. The prohibited conduct entails an employer acting in any way to limit, segregate, or classify its employees based on age.
The language of § 4(a)(2) then goes on to make clear that its proscriptions apply only if an employer's actions have a particular impact -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee." This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee." Put most simply, the reach of § 4(a)(2) does not extend to applicants for employment, as common dictionary definitions confirm that an applicant has no "status as an employee." (citation omitted) [opinion, pp. 3-4]

By contrast, in the disparate treatment part of the statute (§ 4(a)(1)):

Congress made it unlawful for an employer "to fail or refuse to hire or to discharge any individual or otherwise discriminate against any individual with respect to his compensation, terms, conditions, or privileges of employment, because of such individual's age." [opinion, p. 6]

The court compared the disparate treatment section -- § 4(a)(1) -- directly with the disparate impact section -- § 4(a)(2):

Yet a side-by-side comparison of § 4(a)(1) with § 4(a)(2) shows that the language in the former plainly covering applicants is conspicuously absent from the latter. Section 4(a)(2) says nothing about an employer's decision "to fail or refuse to hire any individual" and instead speaks only in terms of an employer's actions that "adversely affect his status as an employee." We cannot conclude this difference means nothing: "when 'Congress includes particular language in one section of a statute but omits it in another' -- let alone in the very next provision -- the Court presumes that Congress intended a difference in meaning." (citations omitted) [opinion, pp.
6-7]

The majority's conclusion:

In the end, the plain language of § 4(a)(2) leaves room for only one interpretation: Congress authorized only employees to bring disparate impact claims. [opinion, p. 8]

Greying of the Workforce

Older people account for a growing percentage of the workforce, as Reuters reports in Age bias law does not cover job applicants: U.S. appeals court:

People 55 or older comprised 22.4 percent of U.S. workers in 2016, up from 11.9 percent in 1996, and may account for close to one-fourth of the labor force by 2022, according to the Bureau of Labor Statistics.

The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune. Yet:

numerous hiring practices are under fire for negatively impacting older applicants. In addition to experience caps, lawsuits have challenged the exclusive use of on-campus recruiting to fill positions and algorithms that target job ads to show only in certain people's social media feeds.

Unless Congress amends the ADEA to include job applicants, older people will continue to face barriers to getting jobs. The Chicago Tribune reports:

The [EEOC], which receives about 20,000 age discrimination charges every year, issued a report in June citing surveys that found 3 in 4 older workers believe their age is an obstacle in getting a job. Yet hiring discrimination is difficult to prove and often goes unreported. Only 3 percent have made a formal complaint. Allowing older applicants to challenge policies that have an unintentionally discriminatory impact would offer another tool for fighting age discrimination, Ray Peeler, associate legal counsel at the EEOC, has said.

How will these disparate impact claims now fare?
The Bottom Line

FordHarrison, a firm specialising in human relations law, noted in Seventh Circuit Limits Job Applicants' Age Discrimination Claims:

The decision narrowly applies to disparate impact claims of age discrimination under the ADEA. It is important to remember that job applicants are protected under the disparate treatment portion of the statute. There is no split among the federal appeals courts on this issue, making it an unlikely candidate for Supreme Court review, but the four judges in dissent read the statute as being vague and susceptible to an interpretation that includes job applicants.

Their conclusion: "a decision finding disparate impact liability for job applicants under the ADEA is unlikely in the near future."

Alas, for reasons of space, I will not consider the extensive dissent. My purpose in writing this post is to discuss the majority decision, not to opine on which side made the better arguments.

antidlc, January 27, 2019 at 3:28 pm

8-4 opinion. Which judges ruled for the majority? Which judges ruled for the minority opinion? Sorry, don't have time to research right now. It says a Trump appointee wrote the majority opinion. Who were the other 7?

grayslady, January 27, 2019 at 6:09 pm

There were 3 judges who dissented in whole and one who dissented in part. Of the three full dissensions, two were Clinton appointees (including the Chief Judge, who was one of the dissenters) and one was a Reagan appointee. The partial dissenter was also a Reagan appointee.

run75441, January 27, 2019 at 11:25 pm

ant: Not your law clerk, read the opinion. Easterbrook and Wood dissented. Find the other two and you can figure out who agreed.

YankeeFrank, January 27, 2019 at 3:58 pm

"depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee." -- This language plainly demonstrates that the requisite impact must befall an individual with "status as an employee."
So they totally ignore the first part of the sentence -- "depriv[ing] or tend[ing] to deprive any individual of employment opportunities" -- "employment opportunities" clearly applies to applicants. It's as if these judges cannot make sense of the English language. Hopefully the judges on appeal will display better command of the language.

Alfred, January 27, 2019 at 5:56 pm

I agree. "Employment opportunities," in the "plain language" so meticulously respected by the 7th Circuit, must surely refer at minimum to 'the chance to apply for a job and to have one's application fairly considered'. It seems on the other hand a stretch to interpret the phrase to mean only 'the chance to keep a job one already has'. Both are important, however; to split them would challenge even Solomonic wisdom, as I suppose the curious decision discussed here demonstrates.

I am less convinced that the facts as presented here establish a clear case of age discrimination. True, they point in that direction. But a hypothetical 58-year-old who only earned a law degree in his or her early 50s, perhaps after an earlier career in paralegal work, could have legitimately applied for a position requiring 3 to 7 years of "relevant legal experience." That last phrase is, of course, quite weasel-y: what counts as "relevant" and what counts as "legal" experience would under any circumstances be subject to (discriminatory) interpretation. The limitation of years of experience in the job announcement strikes me as a means to keep the salary within a certain budgetary range as prescribed either by law or collective bargaining.

Almost like the willful misunderstanding of "A well regulated militia being necessary to the security of a free State"? Of course, that militia also meant slave patrols and the occasional posse to put down the native "savages," but still.

> "depriv[ing] or tend[ing] to deprive any individual of employment opportunities or otherwise adversely affect[ing] his status as an employee."
Says "or." Not "and."

Magic Sam, January 27, 2019 at 5:53 pm

They are failing to find what they don't want to find.

Magic Sam, January 27, 2019 at 5:58 pm

Being pro-Labor will not get you Federalist Society approval to be nominated to the bench by Trump. This decision came down via the ideological makeup of the court, not the letter of the law. Their stated pretext is obviously b.s. It contradicts itself.

Mattie, January 27, 2019 at 6:05 pm

Yep. That is when their Utah et al property mgt teams began breaking into homes, tossing contents – including pets – outside & changing locks. Even when borrowers were in approved HAMP, etc. pipelines. PLUG: If you haven't yet – See "The Florida Project"

nothing but the truth, January 27, 2019 at 7:18 pm

as an aging "stem" (cough coder) worker who typically has to look for a new "gig" every few years, i am trembling at this. Luckily, i bought a small business when I had a few saved up, so I won't starve. Health insurance is another matter. I forbade my kids to study programming. Plumbing. Electrical work. Permaculture. Get those kids Jackpot-ready!

Joe Well, January 28, 2019 at 11:40 am

I'm re-reading the classic of Sociology Ain't No Makin It by Jay MacLeod, in which he studies the employment prospects of youths in the 1980s and determined that even then there was no stable private sector employment and your best option is a government job or to have an excellent "network," which is understandably hard for most people to achieve. So I'm genuinely interested in what possible options there are for anyone entering the job market today or, God help you, re-entering. I am guessing the barriers to entry to those trades are quite high but would love to be corrected. what is the point of being jackpot ready if you can't even support yourself today? To fantasize about collapse while sleeping in a rented closet and driving for Uber?
In that case one's personal collapse has already happened, which will matter a lot more to an individual than any potential jackpot. Plumbers and electricians can make money now of course (although yea, barriers to entry do seem high; don't you kind of have to know people to get in those industries?). But permaculture?

Ford Prefect, January 28, 2019 at 1:00 pm

I think the trick is to study something and programming, so the programming becomes a tool rather than an end. A couple of my kids used to ride horses. One of the instructors and stable owners said that a lot of people went to school for equine studies and ended up shoveling horse poop for a living. She said the thing to do was to study business and do the equestrian stuff as a hobby/minor. That way you came out prepared to run a business and hire the equine studies people to clean the stalls.

Do you actually see that many jobs requiring something and programming though? I haven't really. There seems no easy transition out of software work which that would make possible either. Might as well just study the "something".

Programming is a means to an end, not the end itself. If all you do is program, then you are essentially a machine lathe operator, not somebody creating the products the lathe operators turn out. Understanding what needs to be done helps with structured programs and better input/output design. In turn, structured programming is a good tool to understand the basics of how to manage tasks. At the higher level, Fred Brooks' book "The Mythical Man-Month" has a lot of useful project management information that can be re-applied for non computer program development. We are doing a lot of work with mobile computing and data collection to assist in our regular work. The people doing this are mainly non-computer scientists that have learned enough programming to get by.
The engineering programs that we use are typically written more by engineers than by programmers, as the entire point behind the program is to apply the theory into a numerical computation and presentation system. Programmers with a graphic design background can assist in creating much better user interfaces. If you have some sort of information theory background (GIS, statistics, etc.) then big data actually means something.

nothing but the truth, January 28, 2019 at 7:02 pm

the problem is it is almost impossible to exit the programming business and join another domain. Anyone can enter it. (evidence – all the people with "engineering" degrees from India) Also my wages are now 50% of what i made 10 years ago (nominal). Also I notice that almost no one is doing sincere work. Most are just coasting, pretending to work with the latest toy (ie, preparing for the next interview). Now almost every "interview" requires writing a coding exam. Which other profession will make you write an exam for 25-30 year veterans? Can you write your high school exam again today? What if your profession requires you to write it a couple of times almost every year?

Hepativore, January 28, 2019 at 2:56 pm

I am an "aging" former STEM worker (histology researcher) as well. Much like the IT landscape, you are considered "over-the-hill" at 35, which I turn on the 31st. While I do not have children and never intend to get married, many biotech companies consider this the age at which a worker is getting long in the tooth. This is because there is the underlying assumption that this is when people start having familial obligations. Most of the positions in science and engineering fields now are basically "gig" positions, lasting a few months to a year.
A lot of people my age are finding how much harder it is to find any position at all in these areas as there is a massive pool of people to choose from, even for permatemp work, simply because serfs in their mid-30s might get uppity about benefits like family health plans or 401ks.

I am 59 and do not mind having employers discriminate against me due to age. (I also need a job) I had my own business and over the years got quite damaged. I was a contractor specializing in older (historical) work. I was always the lead worker with many friends and others working with me. At 52 I was given a choice of very involved neck surgery or quit. (no small businesses have disability insurance!) I shut down everything and helped my friends who worked for me take some of the work or find something else. I was also a nationally published computer consultant a long time ago and graphic artist. Reality is I can still do many things but I do nothing as well as I did when I was younger and the cost to employers for me is far higher than a younger person. I had my chance and I chose poorly. Younger people, if that makes them a better fit, deserve a chance now more than I do.

Joe Well, January 27, 2019 at 7:49 pm

I'm sorry for your predicament. Do you mean you chose poorly when you chose not to get neck surgery? What was the choice you regret?

My career choices. Choosing to close my business to possibly avoid the surgery was actually a good choice.

Joe Well, January 28, 2019 at 11:47 am

I'm sorry for your challenges but I don't think there were many good careers you could have chosen and it would have required a crystal ball to know which were the good ones. Americans your age entered the job market just after the very end of the Golden Age of labor conditions and have been weathering the decline your entire working lives. At least I entered the job market when everyone knew for years things were falling apart. It's not your fault. You were cheated plain and simple.
> I had my chance and I chose poorly.

I don't see how it's possible to predict the labor market years in advance. Why blame yourself for poor choices when so much chance is involved? With a Jobs Guarantee, such questions would not arise. I also don't think it's only a question of doing, but a question of sharing ("experience, strength, and hope," as AA -- a very successful organization! -- puts it, in a way of thinking that has wide application).

Dianne Shatin, January 27, 2019 at 7:46 pm

Unelected plutocrat and his international syndicate funded by former IBM artificial intelligence developer and social darwinian. Data manipulation electronic platforms and social media are at the levels of power in the USA. Anti justice, anti enlightenment, etc. Since the installation of GW Bush by the Supreme Court, almost 20 yrs. ago, they have tunneled deeply, speaking through propaganda machines such as Rush Limbaugh, gaining traction, making it over the finish line with KGB and Russian oligarch backing. The net effect on us? The loss of all built on the foundation of the enlightenment and an exceptional nation: no king, a nation of, for and by the people, and the rule of law. There is nothing Judeo-Christian about social darwinism but it is eerily similar to National Socialism (Nazis). The ruling against the plaintiff by the 7th circuit in the U.S. and their success in creating chaos in Great Britain vis a vis "Brexit" by fascist Lafarge Inc. are indicators of how easy their ascent was, and shows how powerful they have become.

anon y'mouse, January 27, 2019 at 9:19 pm

They had better get ready to lower the SSI retirement age to 55, then. Or I predict blood in the streets. I wish it was so. They just expect the older crowd to die quietly.

How is it legal, January 27, 2019 at 10:04 pm

Where are the Bipartisan Presidential Candidates and Legislators on oral and verbal condemnation of Age Discrimination, along with putting teeth into Age Discrimination Laws, and Tax Policy.
– nowhere to be seen, or heard, that I've noticed; particularly in Blue ™ California, which is famed for Age Discrimination of those as young as 36 years of age, since Mark Zuckerberg proclaimed anyone over 35 over the hill in the early 2000's, and never got crushed for it by the media, or the Politicians, as he should have (particularly in Silicon Valley). I know those Republicans are venal, but I dare anyone to show me a meaningful Age Discrimination Policy Proposal, pushed by Blue Obama, Hillary, even Sanders and Jill Stein. Certainly none of California's Nationally known (many well over retirement age) Gubernatorial and Legislative Democratic Politicians: Jerry Brown, Gavin Newsom, Dianne Feinstein, Barbara Boxer, Nancy Pelosi, Kamala Harris, and Ro Khanna (or the lesser known California Federal, State and Local Democratic Politicians) have ever addressed it; despite the fact that homelessness deaths of those near 'retirement age' have been frighteningly increasing in California's obscenely wealthy homelessness 'hotspots,' such as Silicon Valley.

Such a tragic issue, which has occurred while over the last decade Mainstream News and Online Pundits have Proclaimed 50 to be the new 30. Sadistic. I have no doubt this is linked to the ever increasing Deaths of Despair and attempted and successful suicides of those under, and just over, retirement age – while the US has an average Senate age of 65, and a President and 2020 Presidential contenders over 70 (I am not at all saying older persons shouldn't be elected, nor that younger persons shouldn't be elected, I'm pointing out the imbalance, insanity, and cruelty of it).
Further, age discrimination has been particularly brutal to single, divorced, and widowed females, who have most assuredly made far, far less on the dollar than males (if they could even get hired for the position, or leave the kids alone, and housekeeping undone, to get a job):

Patrick Button, an assistant economics professor at Tulane University, was part of a research project last year that looked at callback rates from resumes in various entry-level jobs. He said women seeking the positions appeared to be most affected. "Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Jacquelyn James, co-director of the Center on Aging and Work at Boston College, said age discrimination in employment is a crucial issue in part because of societal changes that are forcing people to delay retirement. Moves away from defined-benefit pension plans to less assured forms of retirement savings are part of the reason.

> "Based on over 40,000 job applications, we find robust evidence of age discrimination in hiring against older women, especially those near retirement age, but considerably less evidence of age discrimination against men," according to an abstract of the study.

Well, these aren't real women, obviously. If they were, the Democrats would already be taking care of them.

From the article:

The greying of the workforce is "thanks to better health in older age and insufficient savings that require people to keep working longer," according to the Chicago Tribune.

Get on the clue train, Chicago Tribune, because you're like W and Trump not knowing how a supermarket works, that's how dense you are.
Even if one saved, and even if one won the luck lottery in terms of job stability and adequate income to save from, healthcare alone is a reason to work, either to get employer provided if lucky, or to work without it and put most of one's money toward an ACA plan or the like if not lucky. Yes, the cost of almost all other necessities has also increased greatly, but even parts of the country without a high cost of living have unaffordable healthcare.

Enquiring Mind, January 27, 2019 at 11:07 pm

Benefits may be 23-30% or so of payroll and represent another expense management opportunity for the diligent executive. One piece of low-hanging fruit is the age-related healthcare cost. If you hire young people, who under-consume healthcare relative to older cohorts, you save money, ceteris paribus. They have lower premiums, lower loss experience and they rebound more quickly, so you hit a triple at your first at-bat swinging at that fruit. Yes, metaphors are fungible along with every line on the income statement. If your company still has the vestiges of a pension or similar blandishment, you may even back-load contributions more aggressively, of course to the extent allowable. That added expense diligence will pay off when those superannuated employees leave before hitting the more expensive funding years. NB, the above reflects what I saw and heard at a Fortune 500 company.

Another good reason for a Canadian-style single payer system. That turns a deciding factor into a non-factor.

Jack Hayes, January 28, 2019 at 8:15 am

A reason why the court system is overburdened is lack of clarity in laws and regulations. Fix the disparity between the two sections of the law so that courts don't have to decide which section rules. Polarization has made tweaks and repairs of laws impossible.

Jeff N, January 28, 2019 at 10:17 am

Yep.
Many police departments *legally* refuse to hire anyone over 35 years old (exceptions for prior police experience or certain military service).

Joe Well, January 28, 2019 at 12:36 pm

It amazes me how often the government will give itself exemptions to its own laws and principles, and also how often "progressive" nonprofits and political groups will also give themselves such exemptions, for instance, regarding health insurance, paid overtime, paid training, etc. that they are legally required to provide.

Ford Prefect, January 28, 2019 at 2:27 pm

There are specific physical demands in things like policing. So it doesn't make much sense to hire 55-year-old rookie policemen when many policemen are retiring at that age.

Arthur Dent, January 28, 2019 at 2:59 pm

It's an interesting quandary. We have older staff that went back to school and changed careers. They do a good job and get paid at a rate similar to the younger staff with similar job-related experience. However, they will be retiring at about the same time as the much more experienced staff, so they will not be future succession replacements for the senior staff. So we also have to hire people in their 20s and 30s because that will be the future when people like me retire in a few years. That could very well be the reason for the specific wording of the job opening (I haven't read the opinion). I know of current hiring for a position where the firm is primarily looking for somebody in their 20s or early 30s for precisely that reason. The staff currently doing the work are in their 40s and 50s and need to start bringing up the next generation. If somebody went back to school late and was in their 40s or 50s (so would be at a lower billing rate due to lack of job-related experience), they would be seriously considered. But the firm would still be left with the challenge of having to hire another person at the younger age within a couple of years to build the succession.
Once people make it past 5 years at the firm, they tend to stay for a long time with senior staff generally having been at the firm for 20 years or more, so hiring somebody really is a long-term investment.

#### [Jan 25, 2019] Davos Elites Love to Advocate for Equality - So Long As Nothing Gets Done

###### Jan 25, 2019 | promarket.org

The return to the industrial relations and tax policies of the early 19th century has been spearheaded by people who speak the language of equality, respect, participation, and transparency.

You will find me eager to help you, but slow to take any step. -- Euripides, Hecuba

Thousands of people with a combined wealth of several hundred billion dollars, perhaps even close to a trillion, are gathering this week in Davos. Never in world history, quite possibly, has the amount of wealth per square foot been so high. This year, for the seventh or eighth consecutive time, one of the principal topics addressed by these captains of industry, billionaires, employers of thousands of people across the four corners of the globe, is inequality. Even the new "hot" topics of the day -- trade wars and populism -- are in turn related, or even caused by inequality of income, wealth, or political power.
Only in passing, and probably on the margins of the official program, will the global elites gathered in Davos get into a discussion of the tremendous monopoly and monopsony power their companies have. Neither will they publicly mention companies' ability to play one jurisdiction against another in order to avoid taxes, ban organized labor within their ranks, use government ambulance services to carry workers who have fainted from the heat (to save expenses on air conditioning), make their workforce complement its wages through private charity donations, or perhaps pay an average tax rate of between 0 and 12 percent. Some participants, if they are from the emerging market economies, can also exchange experiences on how to delay payments of wages for several months while investing these funds at high interest rates, save on labor protection standards, or buy privatized companies for a song and then set up shell companies in the Caribbean or Channel Islands.

It is just that somehow, the "masters of the universe" who gather annually in Davos never managed to find enough money, or time, or perhaps willing lobbyists to help with the policies many will agree, during the official sessions, should be adopted. For example: increasing taxes on the top 1 percent and on large estates, providing decent wages or not impounding wages, reducing gaps between CEO compensation and average pay, spending more money on public education, making access to financial assets more attractive to the middle and working class, equalizing taxes on capital and labor, reducing corruption in government contracts and privatizations.

Actually, when policies that are supposed to make some headway in counteracting rising inequality are finally proposed, such as the one made by Rep. Alexandria Ocasio-Cortez (D-NY) of a 70 percent marginal tax on extra high incomes (above $10 million annually), they are quick to argue that such policies will do more harm than good.
One is left, to put it mildly, puzzled: If they are against a most obvious and rather modest proposal, what policies do they have in mind to fight inequality? In reality they have none, except for the vacuous talk of "social inclusion," "prosperity for all" and "trickle-down economics."
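The mechanics of the Ocasio-Cortez proposal are easy to misread: a 70 percent *marginal* rate would apply only to the dollars earned above the $10 million threshold, not to the whole income. A minimal sketch of that arithmetic follows; the 70 percent bracket above $10 million is taken from the proposal as reported here, while the single flat rate below the threshold is an invented simplification for illustration, not the actual US tax schedule.

```python
def tax_owed(income, brackets=((0, 0.37), (10_000_000, 0.70))):
    """Compute tax from (threshold, rate) pairs in ascending order.

    Each rate applies only to the slice of income between its own
    threshold and the next one, which is what "marginal" means.
    """
    owed = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            owed += (min(income, hi) - lo) * rate
    return owed

# Someone earning $11M pays 70% only on the final $1M:
# $10M * 0.37 + $1M * 0.70 = $4.4M, an effective rate of 40%,
# not 70% of the whole $11M.
print(round(tax_owed(11_000_000)))                    # 4400000
print(round(tax_owed(11_000_000) / 11_000_000, 2))    # 0.4
```

The point the sketch makes is that even for an income of $11 million, the overall (effective) rate under such a schedule stays far below the headline 70 percent figure.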
Not surprisingly, nothing has been done since the Global Financial Crisis to address inequality. Rather, the opposite has happened. Donald Trump has, as promised, passed a historic tax cut for the wealthy; Emmanuel Macron has discovered the attraction of latter-day Thatcherism; the Chinese government has slashed taxes on the rich and imprisoned the left-wing students at Peking University who supported striking workers. In Brazil, Jair Bolsonaro seems to consider praise for torture and the rising stock market as the ideal mélange of modern capitalism.
Bizarrely, this return to the industrial relations and tax policies of the early 19th century has been spearheaded by people who speak the language of equality, respect, participation, and transparency. The annual gathering in Davos, in that regard, is not just a display of the elites' financial superiority. It is also supposed to showcase their moral superiority. This is in line with a longstanding trend: Over the past fifty years, the language of equality has been harnessed in the pursuit of the most structurally inegalitarian policies. It is much easier (and profitable), apparently, to call journalists and tell them about nebulous schemes whereby 90 percent of wealth will be -- over an unknown number of years and under unknowable accounting practices -- given away as charity than to pay suppliers and workers reasonable rates or stop selling user data. They are loath to pay a living wage, but they will fund a philharmonic orchestra. They will ban unions, but they will organize a workshop on transparency in government.
And next year, as inequality continues to rise and the state of Western middle classes continues to deteriorate, the same elites will be back in Davos, talking about inequality and populism in grave tones. A new record in dollar wealth per square foot may be reached, but the topics of discussion within the conference halls, and on the margins, will remain the same.
Branko Milanovic is the author of Global Inequality: A New Approach for the Age of Globalization and of the forthcoming Capitalism, Alone, both published by Harvard University Press. He is senior scholar at the Stone Center on Socio-Economic Inequality at the Graduate Center, City University of New York. An earlier version of this post appeared on Milanovic's blog.
#### [Jan 20, 2019] Note on student debt peonage
RW: Well, at this point I think it really depends on what indexes you're looking at. The biggest thing that's kept this economy going in the last few years should make everybody tremble. It's called debt. Let me give you just a couple of examples. Ten years ago, at the height of the crash, the total debt carried by students in the United States was in the neighborhood of $700 billion, an enormous sum. What is it today? Over twice that: one-and-a-half trillion dollars. The reason part of our economy hasn't collapsed is that students have taken on an enormous amount of debt that they cannot afford, in order to get degrees which will let them get jobs whose incomes will not allow them to pay back the debts. And forget about getting married, forget about having a family. We have paid an enormous price in hobbling the generation of people who would have otherwise lifted this economy and made us more productive. It is a disastrous mistake historically, and if you face that, and if you add to it the increased debt of our businesses, and the increased debt of our government, you see an economy that is held up by a monstrous increase in debt, not in underlying productivity, not in more jobs that really produce anything, but in debt. That should frighten us, because it was the debt bubble that burst in 2008 and brought us the crash. It is as if we cannot learn in our system to do other than we've always done, and that's taking us into another crash coming now.

LC: Yeah. This is the land of the free, but it seems like most of us are chained down by debt peonage.

#### [Jan 20, 2019] Degeneration of the US neoliberal elite can be partially attributed to the conversion of neoliberal universities into indoctrination mechanisms, rather than institutions for fostering critical thinking

##### Notable quotes:
##### "... An excellent piece. I would add only that the so-called elites mentioned by Mr Bacevich are largely the products of the uppermost stratum of colleges and universities, at least in the USA, and that for a generation or more now, those institutions have indoctrinated rather than educated. ..."
##### "... As their more recent alumni move into government, media and cultural production, the primitiveness of their views and their inability to think -- to say nothing of their fundamental ignorance about our civilization other than that it is bad and evil -- begin to have real effect. ..."
###### Jan 20, 2019 | www.theamericanconservative.com

Paul Reidinger, January 17, 2019 at 2:03 pm

An excellent piece. I would add only that the so-called elites mentioned by Mr Bacevich are largely the products of the uppermost stratum of colleges and universities, at least in the USA, and that for a generation or more now, those institutions have indoctrinated rather than educated. As their more recent alumni move into government, media and cultural production, the primitiveness of their views and their inability to think -- to say nothing of their fundamental ignorance about our civilization other than that it is bad and evil -- begin to have real effect.

The new dark age is no longer imminent. It is here, and it is them. I see no way to rectify the damage. When minds are ruined young, they remain ruined.

#### [Jan 17, 2019] Elizabeth Warren is demanding that Wells Fargo be kicked off college campuses, a market the bank has said is among its fastest-growing

##### Notable quotes:
##### "... The inquiry follows a Consumer Financial Protection Bureau report that said Wells Fargo charged students the highest fees of 573 banks examined. ..."
##### "... "When granted the privilege of providing financial services to students through colleges, Wells Fargo used this access to charge struggling college students exorbitant fees," Warren said in a statement. "These high fees, which are an outlier within the industry, demonstrate conclusively that Wells Fargo does not belong on college campuses." ..."
###### Jan 17, 2019 | www.bloomberg.com

Elizabeth Warren is demanding that Wells Fargo & Co. be kicked off college campuses, a market the bank has said is among its fastest-growing.

The Democratic senator from Massachusetts and likely presidential candidate said Thursday that she requested more information from Wells Fargo Chief Executive Officer Tim Sloan and from 31 colleges where the bank does business. The inquiry follows a Consumer Financial Protection Bureau report that said Wells Fargo charged students the highest fees of 573 banks examined.

"When granted the privilege of providing financial services to students through colleges, Wells Fargo used this access to charge struggling college students exorbitant fees," Warren said in a statement. "These high fees, which are an outlier within the industry, demonstrate conclusively that Wells Fargo does not belong on college campuses."

Warren has been a vocal critic of Wells Fargo -- including repeatedly calling for Sloan's ouster -- since a series of consumer issues at the company erupted more than two years ago with a phony-accounts scandal.

Wells Fargo is "continually working to improve how we serve our customers," a bank spokesman said in an emailed statement Thursday. "Before and since the CFPB's review on this topic, we have been pursuing customer-friendly actions that support students," including waiving service fees on some checking accounts offered to them.

A reputation for overcharging students could further harm Wells Fargo's consumer-banking strategy. The San Francisco-based bank has identified college-age consumers as a growth opportunity, and John Rasmussen, head of personal lending, said last year that Wells Fargo may expand into the refinancing of federal student loans.
#### [Jan 17, 2019] The financial struggles of unplanned retirement

##### People who are kicked out of their IT jobs around 55 now have difficulty finding even full-time McJobs... Only part-time jobs are available. With the current round of layoffs and job freezes, neoliberalism in the USA is entering its terminal phase, I think.
###### Jan 17, 2019 | finance.yahoo.com

A survey by Transamerica Center for Retirement Studies found on average Americans are retiring at age 63, with more than half indicating they retired sooner than they had planned. Among them, most retired for health or employment-related reasons.

... ... ...

On April 3, 2018, Linda LaBarbera received the phone call that changed her life forever. "We are outsourcing your work to India and your services are no longer needed, effective today," the voice on the other end of the phone line said.

... ... ...

"It's not like we are starving or don't have a home or anything like that," she says. "But we did have other plans for before we retired and setting ourselves up a little better while we both still had jobs."

... ... ...

Linda hasn't needed to dip into her 401(k) yet. She plans to start collecting Social Security when she turns 70, which will give her the maximum benefit. To earn money and keep busy, Linda has taken short-term contract editing jobs. She says she will only withdraw money from her savings if something catastrophic happens. Her husband's salary is their main source of income.

"I am used to going out and spending money on other people," she says. "We are very generous with our family and friends who are not as well off as we are. So we take care of a lot of people. We can't do that anymore. I can't go out and be frivolous anymore. I do have to look at what we spend - what I spend."

Vogelbacher says cutting costs is essential when living in retirement, especially for those on a fixed income. He suggests moving to a tax-friendly location if possible. Kiplinger ranks Alaska, Wyoming, South Dakota, Mississippi, and Florida as the top five tax-friendly states for retirees. If their health allows, Vogelbacher recommends getting a part-time job. For those who own a home, he says paying off the mortgage is a smart financial move.

... ... ...

Monica is one of the 44 percent of unmarried persons who rely on Social Security for 90 percent or more of their income. At the beginning of 2019, Monica and more than 62 million Americans received a 2.8 percent cost-of-living adjustment from Social Security. The increase is the largest since 2012. With the Social Security hike, Monica's monthly check climbed $33. Unfortunately, the new year also brought her a slight increase in what she pays for Medicare, along with a $500 property tax bill and the usual laundry list of monthly expenses.

"If you don't have much, the (Social Security) raise doesn't represent anything," she says with a dry laugh. "But it's good to get it."

#### [Jan 14, 2019] Beware of billionaires and bankers bearing gifts: In education, philanthropy means Billionaires buying the policies they want

##### that's how neoliberalism was installed in the USA
##### Notable quotes:
##### "... quelle surprise ..."
###### Jan 14, 2019 | www.nakedcapitalism.com

Paradox of Privilege

"Winners Take All" is one of several recently published books raising difficult questions about how the world's biggest donors approach their giving. As someone who studies, teaches and believes in philanthropy, I believe these writers have started an important debate that could potentially lead future donors to make a bigger difference with their giving.

Giridharadas to a degree echoes Ford Foundation President Darren Walker, who has made a stir by denouncing a "paradox of privilege" that "shields (wealthy people) from fully experiencing or acknowledging inequality, even while giving us more power to do something about it."
Like Walker, Giridharadas finds it hard to shake the words of Martin Luther King Jr., who spoke of "the circumstances of economic injustice which make philanthropy necessary."

To avoid changes that might endanger their privileges, mega-donors typically seek what they call win-win solutions. But however impressive the quantifiable results of those efforts may seem, according to this argument, those outcomes will always fall short. Fixes that don't threaten the powers that be leave underlying issues intact.

Avoiding Win-Lose Solutions

In Giridharadas's view, efforts by big funders, such as The Bill and Melinda Gates Foundation and the Walton Family Foundation, to strengthen public K-12 education systems by funding charter schools look past the primary reason why not all students learn at the same pace: inequality. As long as school systems are funded locally, based on property values, students in wealthy communities will have advantages over those residing in poorer ones. However, creating a more equal system to pay for schools would take tax dollars and advantages away from the rich. The wealthy would lose, and the disadvantaged would win. So it's possible to see the nearly $500 million billionaires and other rich people have pumped into charter schools and other education reform efforts over the past dozen years as a way to dodge this problem.
Charters have surely made a difference for some kids, such as those in rural Oregon whose schools might otherwise have closed. But since the bid to expand charters doesn't address childhood poverty or challenge the status quo – aside from diluting the power of teacher unions and raising the stakes in school board elections – this approach seems unlikely to help all schoolchildren.
Indeed, years into the quest to fix this problem without overhauling school ...

Paying for Tuition
Bloomberg's big donation raises a similar question.
He aims to make a Johns Hopkins education more accessible for promising low-income students. When so many Hopkins alumni have enjoyed success in a wide range of careers, what can be wrong with that?
Well, paying tuition challenges millions of Americans, not just the thousands who might attend Hopkins. Tuition, fees, room and board at the top-ranked school cost about $65,000 a year. Only 5 percent of colleges and universities were affordable, according to the Institute for Higher Education Policy, a nonpartisan global research and policy center, for students from families earning $69,000 a year or less.
Like Giridharadas, the institute argues paying for college is "largely a problem of inequity."
Bloomberg's gift will certainly help some people earn a Hopkins degree. But it does nothing about the bigger challenge of making college more affordable for all in a country where student debt has surpassed $1.5 trillion. One alternative would be to finance advocacy for legislative remedies to address affordability and inequity. For affluent donors, Giridharadas argues, this could prove to be a nonstarter. Like most of what he calls "win-lose solutions," taking that route would lead to higher taxes for the wealthy.

Subsidies for Gifts from the Rich

Similarly, who could quibble with Bezos spending $2 billion to fund preschools and homeless shelters? Although he has not yet made clear what results he's after, I have no doubt they will make a difference for countless Americans.
No matter how he goes about it, the gesture still raises questions. As Stanford University philanthropy scholar Rob Reich explains in his new book "Just Giving," the tax break rich Americans get when they make charitable contributions subsidizes their favorite causes.
Or, to phrase it another way, the federal government gives initiatives supported by Bezos and other wealthy donors like him preferential treatment. Does that make sense in a democracy? Reich says that it doesn't.
The elected representatives in democracies should decide how best to solve problems with tax dollars, not billionaires who are taken with one cause or another, the Stanford professor asserts.
That's why I think it's so important to ask the critical questions that Giridharadas and Reich are raising, and why the students taking my philanthropy classes this semester will be reading "Winners Take All" and "Just Giving."
Editor's note: Johns Hopkins University Press provides funding as a member of The Conversation US, which also has a grant from the Walton Family Foundation. The Gates Foundation is a funder of The Conversation Media Group.
tongorad, January 11, 2019 at 10:35 am

In education, philanthropy means Billionaires buying the policies they want. Re Bill Gates, Eli Broad, DeVos, etc.
Adam Eran, January 11, 2019 at 12:47 pm
None of the common tactics of the "reformers" have scientific backing. So (union-busting) charter schools, merit pay (because teachers are motivated by money), and testing kids until their eyeballs bleed are all bogus, and do not have an impact on educational outcomes.
The plutocrats have even funded a propaganda film called "Waiting for Superman" in which Michelle Rhee applies "tough love" to reform failing Washington D.C. schools, firing lots of teachers because their students' test scores didn't make the cut, etc.
Waiting for Superman touts the Finnish schools as the ones to emulate and they are very good ones, too. Omitted from their account is the fact that Finnish teachers are tenured, unionized, respected and quite well paid.
So what does correlate with educational outcomes? Childhood poverty. In Finland, only 2% of their children are poor. In the U.S. it's 23%.
The problem is systemic, not the teachers, or the types of schools.
In some sense this is nothing new. Back when Pittsburgh was a network of steel mills and mine tailings, Carnegie funded museums, libraries, arboretums, and strike-breakers who shot workers that complained. He was public about the need to "give back" and made a point of demanding that the places be open on Sundays, because he forced his workers to do 12-hour days six days a week.
No doubt he may have felt he was helping, and no doubt the institutions have been and still are a positive benefit, but they also did nothing to attack the root cause of the suffering nor did they make any fundamental change in society. That would upset his apple cart. By the same token the fact that private donors needed to fund public institutions was based upon the simple fact that they had all the money.
It is also notable that some of the more recent endeavors, such as Gates' tech-driven charter schools, or Facebook's donation to the same, or for that matter Apple's donation of iPads to LAUSD, have a direct commercial component. The initial gift may be free, but in the end it is market-making, as much of the cash routes back to the company. They may genuinely believe in the solution, but the financial connection is also clear.
More interesting, though, is Pierre Omidyar, who combined his business and "philanthropy" more directly by putting money into a foundation that then invests in startups he runs which "do social good" or which sell technology to those that do so.
Ultimately Bill Gates and Jeff Bezos may have more to play with than Carnegie ever dreamed of but at the end of the day much of what they are doing is the same, starving necessary institutions of funds, smoothing out the rough edges of their PR (especially when, like Bezos, they are in the crosshairs), and then peddling "solutions" that look good but only reinforce the conditions that make them rich.
JerryDenim, January 11, 2019 at 12:42 pm

"... have a direct commercial component. The initial gift may be free but in the end it is market-making as much of the cash routes back to the company."
How true, but you might not even be cynical enough. Back in 2012 (I believe) there was reportage about large banks quietly lobbying Bloomberg to make big cuts to New York City's funding of local charities and non-profits. Several million dollars were cut as a result of the austerity lobbying by the banks. The same week, the food pantry where I volunteered, which lost $40,000 of City funding if memory serves me correctly, received a "generous" gift of a folding table from Citibank.

My wife, who at the time worked at a large non-profit dedicated to community issues in the South Bronx, had to attend a presentation by a Citibank employee with a name like "How the Nonprofit Community Has Failed the Community". Her attendance was a courtesy demanded in exchange for a several-thousand-dollar donation from Citibank to her nonprofit. Her non-profit lost much more in funding from the City due to the banks' lobbying efforts, and surprise surprise, what was the main thrust of the Citibank presentation? How micro-finance lending can help historically marginalized communities, of course! My wife's organization was engaged in several programs aimed at encouraging and aiding entrepreneurship and financial literacy. Citibank saw local non-profits that were helping the community keep their collective heads above the water as competition. Their programmatic work was harmful to the bank's business model of luring people into odious debt by promulgating an environment of despair and desperation.

Beware of billionaires and bankers bearing gifts. Their vast fortunes should be trimmed down to size with taxation/force and distributed democratically according to the needs of the community, not the whims of the market or the misguided opinions of non-expert, know-it-all billionaires who have never lived nor worked in the communities they claim to care about.

Montanamaven, January 11, 2019 at 1:31 pm

Charity makes people supplicants, which is a form of servitude. "Thank you kindly, sir, for your gracious gift." That is not a "free" society. We should have a society where no one needs some good folks' trickle-downs. A basic guaranteed income might work better than the system we have now, especially with an affordable health care system. It would eliminate food banks and homeless shelters and jobs involving making lists and forms and graphs for the Medical Insurance Business. And it would eliminate a lot of other stupid and bullsh*t jobs. Yes, I've been rereading David Graeber's "Bullsh*t Jobs."

chuck roast, January 11, 2019 at 4:36 pm

Several years ago I collected signatures for Move to Amend, an organization which advocates for an anti-corporate-personhood amendment to the US Constitution. I learned two things: 1. ordinary citizens 'get it' about corporations running the show, and they are enthusiastic about bringing them to heel, and 2. ordinary citizens who work in 501(c)3 non-profits are far less enthusiastic about the possible withering away of their cozy corporate dole. So, while the giant vampire squids of the world drift lazily along on a fine current of their own making, keep in mind that there are huge schools of pilot fish that depend on their leavings for survival. All of these small fish will surely resist any effort to tenderize this calamari.

drHampartzunk, January 11, 2019 at 4:43 pm

No one said it better than William Jewett Tucker, a contemporary critic of Carnegie: "I can conceive of no greater mistake, more disastrous in the end to religion if not to society, than of trying to make charity do the work of justice."

David in Santa Cruz, January 11, 2019 at 8:28 pm

This was a terrific post on a very important issue. Even in my insignificant little burg we have experienced this problem first-hand.
A local Charter School was doing a very good job of "keeping out the brown people" and publishing a "walk of shame" of all who made "voluntary" contributions to their coffers, thus "outing" those who didn't (the California constitution forbids schools that spend public money from requiring fees). They even went so far as to hire a Head of School from one of the last Mississippi Segregation Academies, just in case their "mission" wasn't clear. Admission was by lottery ("because lotteries are fair!"), unless you happened to be on their massively bloated and self-appointed Board (including influential local officials, quelle surprise!). Those with learning differences or languages other than English were "strongly discouraged" from even applying.

The Charter covered their operating budget with all those "voluntary" contributions, and had sequestered all the cash squeezed out of the local public schools in order to buy an office building (because kids just love preparing for the world of work by going to school in office buildings!). A local billionaire whose name rhymes with "Netflix" bailed them out with a $10M donation for the building when it appeared that some in authority might look askance at who would be the beneficiaries of this insider real estate deal using skimmed-off public monies.

Scratch a Charter School and 9 times out of 10 there's a real estate deal underlying it ("Because, the children!"). Billionaires should have no more influence than any other individual voter in making public policy.
orange cats, January 13, 2019 at 9:45 am

Grrrrr, Charter Schools are making me angry. The real estate deal(s) you mention are absolutely true. Here's another sweet scheme in Arizona:

"The Arizona Republic has reported that Rep. Eddie Farnsworth stands to make about $30 million from selling three charter schools he built with taxpayer money. The toothless Arizona State Board for Charter Schools approved the transfer of his for-profit charter school to a new, non-profit company. He might collect up to $30 million -- and maybe even continue running the operation in addition to retaining a $3.8 million share in the new for-profit company.

The Benjamin Franklin charter schools operate in wealthy neighborhoods. The 3,000 students have good scores and the schools have a B rating. But that's not surprising, since most of the parents have high incomes and college educations. If the schools are like most charters in the state, they're more racially segregated than the campuses in the surrounding school districts. The state pays the charter schools $2,000 per student more than it pays traditional school districts like Payson -- which is supposedly to make up for the charter's inability to issue bonds and such.

However, converting the charters to a non-profit company will enable the schools to avoid property taxes and qualify for federal education funds. Taxpayers will essentially end up paying for the same schools twice, since taxpayers have footed the bills for the lease payments to the tune of about $5 million annually. Now, the new owners will use taxpayer money to finance the purchase of buildings already paid for by taxpayers."

drHampartzunk, January 11, 2019 at 9:08 pm

Stevenson school in Mountain View, CA, a public school with PACT (parents and children together), has a lottery. Its students are 70% white. Across the street, Theuerkauf, which does not have PACT, is 30% white, with no lottery. And a huge difference in the two schools' test scores. Smells illegal. Also, Google took the building of the former PACT-hosting school, which resulted in this grotesque distortion of the supposed public service the school district provides.

Michael Fiorillo, January 12, 2019 at 9:09 am

As a former NYC public school teacher who fought against the billionaire-funded hostile takeover of public education for two decades, I'm gratified to see the beginnings of a harsher critique of so-called philanthropy, in education and everywhere else.

But the next hurdle is to overcome the tic of always qualifying critique and pushback with talk of the "good intentions" of these Overclass gorgons. Their intentions are not "good" in the way most human beings construe that word, and are the same as they've always been: accumulation, and establishing the political wherewithal to maintain/facilitate the same. This hustle does the added trick of getting the public to subsidize its own impoverishment and loss of political power (as in Overclass ed reformers funding efforts to eliminate local school boards).
When there is near-total congruence between your financial/political interests and the policies driven by your "philanthropy," the credibility of your "good intentions" transacts at an extremely high discount, no matter how much you try to dress it up with vacuous and insipid social justice cliches. For a case in point, just spend five minutes researching the behavior and rhetoric of Teach For America.

Malanthropy (n): the systemic use of non-profit, tax-exempt entities to facilitate the economic and political interests of their wealthy endowers, to the detriment of society at large. See also, Villainthropy.

Mattski, January 12, 2019 at 11:07 am

The critical thing, I have found, is to see "philanthropy" and charitable endeavor as a cornerstone of capitalism, without which the system would -- without any doubt -- fail. Engels and others documented, and contemporary scholars have continued to document, the way that the wives of the first factory owners established almshouses and lying-in hospitals where the deserving poor were separated from the undeserving, dunned with religion and political cant, and channelled into various forms of work, including reproductive labor.

A very big piece of the neoliberal puzzle involves the rise of the NGO during the Clinton/Blair period, and its integration with the works of the likes of the IMF and USAID -- the increasing sophistication of this enterprise, which has at times also included union-busting (see Grenada in the aftermath of the US invasion) and worse. As a State Department function, the Peace Corps integrates the best of charity, grassroots capitalism, and good old Protestant cant.

Spring Texan, January 12, 2019 at 5:54 pm

I've read the Winners Take All book and it's terrific! Even if you understand the general outlines, the author will make you see things differently because of his intimate knowledge of how this ecosystem works. Highly recommended! Also recommend his twitter account, @AnandWrites. He's really good on "pinkerizing" too, and "Thought Leaders" and how they comfort the comfortable.

#### [Jan 12, 2019] Tucker Carlson: Mitt Romney supports the status quo. But for everyone else, it's infuriating (Fox News)

##### Highly recommended!
##### Notable quotes:
##### "... Adapted from Tucker Carlson's monologue from "Tucker Carlson Tonight" on January 2, 2019. ..."
###### Jan 02, 2019 | www.foxnews.com

Tucker: America's goal is happiness, but leaders show no obligation to voters. Voters around the world revolt against leaders who won't improve their lives.

Newly-elected Utah senator Mitt Romney kicked off 2019 with an op-ed in the Washington Post that savaged Donald Trump's character and leadership. Romney's attack and Trump's response Wednesday morning on Twitter are the latest salvos in a longstanding personal feud between the two men. It's even possible that Romney is planning to challenge Trump for the Republican nomination in 2020. We'll see. But for now, Romney's piece is fascinating on its own terms. It's well worth reading. It's a window into how the people in charge, in both parties, see our country.

Romney's main complaint in the piece is that Donald Trump is a mercurial and divisive leader. That's true, of course. But beneath the personal slights, Romney has a policy critique of Trump. He seems genuinely angry that Trump might pull American troops out of the Syrian civil war. Romney doesn't explain how staying in Syria would benefit America. He doesn't appear to consider that a relevant question. More policing in the Middle East is always better. We know that. Virtually everyone in Washington agrees.

Corporate tax cuts are also popular in Washington, and Romney is strongly on board with those, too. His piece throws a rare compliment to Trump for cutting the corporate rate a year ago. That's not surprising. Romney spent the bulk of his business career at a firm called Bain Capital.
Bain Capital all but invented what is now a familiar business strategy: Take over an existing company for a short period of time, cut costs by firing employees, run up the debt, extract the wealth, and move on, sometimes leaving retirees without their earned pensions. Romney became fantastically rich doing this. Meanwhile, a remarkable number of the companies are now bankrupt or extinct. This is the private equity model. Our ruling class sees nothing wrong with it. It's how they run the country. Mitt Romney refers to unwavering support for a finance-based economy and an internationalist foreign policy as the "mainstream Republican" view. And he's right about that. For generations, Republicans have considered it their duty to make the world safe for banking, while simultaneously prosecuting ever more foreign wars. Modern Democrats generally support those goals enthusiastically. There are signs, however, that most people do not support this, and not just in America. In countries around the world -- France, Brazil, Sweden, the Philippines, Germany, and many others -- voters are suddenly backing candidates and ideas that would have been unimaginable just a decade ago. These are not isolated events. What you're watching is entire populations revolting against leaders who refuse to improve their lives. Something like this has been in happening in our country for three years. Donald Trump rode a surge of popular discontent all the way to the White House. Does he understand the political revolution that he harnessed? Can he reverse the economic and cultural trends that are destroying America? Those are open questions. But they're less relevant than we think. At some point, Donald Trump will be gone. The rest of us will be gone, too. The country will remain. What kind of country will be it be then? How do we want our grandchildren to live? These are the only questions that matter. The answer used to be obvious. 
The overriding goal for America is more prosperity, meaning cheaper consumer goods. But is that still true? Does anyone still believe that cheaper iPhones, or more Amazon deliveries of plastic garbage from China are going to make us happy? They haven't so far. A lot of Americans are drowning in stuff. And yet drug addiction and suicide are depopulating large parts of the country. Anyone who thinks the health of a nation can be summed up in GDP is an idiot. The goal for America is both simpler and more elusive than mere prosperity. It's happiness. There are a lot of ingredients in being happy: Dignity. Purpose. Self-control. Independence. Above all, deep relationships with other people. Those are the things that you want for your children. They're what our leaders should want for us, and would want if they cared. But our leaders don't care. We are ruled by mercenaries who feel no long-term obligation to the people they rule. They're day traders. Substitute teachers. They're just passing through. They have no skin in this game, and it shows. They can't solve our problems. They don't even bother to understand our problems. One of the biggest lies our leaders tell us that you can separate economics from everything else that matters. Economics is a topic for public debate. Family and faith and culture, meanwhile, those are personal matters. Both parties believe this. Members of our educated upper-middle-classes are now the backbone of the Democratic Party who usually describe themselves as fiscally responsible and socially moderate. In other words, functionally libertarian. They don't care how you live, as long as the bills are paid and the markets function. Somehow, they don't see a connection between people's personal lives and the health of our economy, or for that matter, the country's ability to pay its bills. As far as they're concerned, these are two totally separate categories. 
Social conservatives, meanwhile, come to the debate from the opposite perspective, and yet reach a strikingly similar conclusion. The real problem, you'll hear them say, is that the American family is collapsing. Nothing can be fixed before we fix that. Yet, like the libertarians they claim to oppose, many social conservatives also consider markets sacrosanct. The idea that families are being crushed by market forces seems never to occur to them. They refuse to consider it. Questioning markets feels like apostasy. Both sides miss the obvious point: Culture and economics are inseparably intertwined. Certain economic systems allow families to thrive. Thriving families make market economies possible. You can't separate the two. It used to be possible to deny this. Not anymore. The evidence is now overwhelming. How do we know? Consider the inner cities. Thirty years ago, conservatives looked at Detroit or Newark and many other places and were horrified by what they saw. Conventional families had all but disappeared in poor neighborhoods. The majority of children were born out of wedlock. Single mothers were the rule. Crime and drugs and disorder became universal. What caused this nightmare? Liberals didn't even want to acknowledge the question. They were benefiting from the disaster, in the form of reliable votes. Conservatives, though, had a ready explanation for inner-city dysfunction and it made sense: big government. Decades of badly-designed social programs had driven fathers from the home and created what conservatives called a "culture of poverty" that trapped people in generational decline. There was truth in this. But it wasn't the whole story. How do we know? Because virtually the same thing has happened decades later to an entirely different population. In many ways, rural America now looks a lot like Detroit. This is striking because rural Americans wouldn't seem to have much in common with anyone from the inner city. 
These groups have different cultures, different traditions and political beliefs. Usually they have different skin colors. Rural people are white conservatives, mostly. Yet, the pathologies of modern rural America are familiar to anyone who visited downtown Baltimore in the 1980s: Stunning out-of-wedlock birth rates. High male unemployment. A terrifying drug epidemic. Two different worlds. Similar outcomes. How did this happen? You'd think our ruling class would be interested in knowing the answer. But mostly they're not. They don't have to be interested. It's easier to import foreign labor to take the place of native-born Americans who are slipping behind. But Republicans now represent rural voters. They ought to be interested. Here's a big part of the answer: male wages declined. Manufacturing, a male-dominated industry, all but disappeared over the course of a generation. All that remained in many places were the schools and the hospitals, both traditional employers of women. In many places, women suddenly made more than men. Now, before you applaud this as a victory for feminism, consider the effects. Study after study has shown that when men make less than women, women generally don't want to marry them. Maybe they should want to marry them, but they don't. Over big populations, this causes a drop in marriage, a spike in out-of-wedlock births, and all the familiar disasters that inevitably follow -- more drug and alcohol abuse, higher incarceration rates, fewer families formed in the next generation. This isn't speculation. This is not propaganda from the evangelicals. It's social science. We know it's true. Rich people know it best of all. That's why they get married before they have kids. That model works. But increasingly, marriage is a luxury only the affluent in America can afford. 
And yet, and here's the bewildering and infuriating part, those very same affluent married people, the ones making virtually all the decisions in our society, are doing pretty much nothing to help the people below them get and stay married. Rich people are happy to fight malaria in Congo. But working to raise men's wages in Dayton or Detroit? That's crazy. This is negligence on a massive scale. Both parties ignore the crisis in marriage. Our mindless cultural leaders act like it's still 1961, and the biggest problem American families face is that sexism is preventing millions of housewives from becoming investment bankers or Facebook executives. For our ruling class, more investment banking is always the answer. They teach us it's more virtuous to devote your life to some soulless corporation than it is to raise your own kids. Sheryl Sandberg of Facebook wrote an entire book about this. Sandberg explained that our first duty is to shareholders, above our own children. No surprise there. Sandberg herself is one of America's biggest shareholders. Propaganda like this has made her rich. We are ruled by mercenaries who feel no long-term obligation to the people they rule. They're day traders. Substitute teachers. They're just passing through. They have no skin in this game, and it shows. What's remarkable is how the rest of us responded to it. We didn't question why Sandberg was saying this. We didn't laugh in her face at the pure absurdity of it. Our corporate media celebrated Sandberg as the leader of a liberation movement. Her book became a bestseller: "Lean In." As if putting a corporation first is empowerment. It is not. It is bondage. Republicans should say so. They should also speak out against the ugliest parts of our financial system. Not all commerce is good. Why is it defensible to loan people money they can't possibly repay? Or charge them interest that impoverishes them? Payday loan outlets in poor neighborhoods collect 400 percent annual interest. 
We're OK with that? We shouldn't be. Libertarians tell us that's how markets work -- consenting adults making voluntary decisions about how to live their lives. OK. But it's also disgusting. If you care about America, you ought to oppose the exploitation of Americans, whether it's happening in the inner city or on Wall Street. And by the way, if you really loved your fellow Americans, as our leaders should, it would break your heart to see them high all the time. Which they are. A huge number of our kids, especially our boys, are smoking weed constantly. You may not realize that, because new technology has made it odorless. But it's everywhere. And that's not an accident. Once our leaders understood they could get rich from marijuana, marijuana became ubiquitous. In many places, tax-hungry politicians have legalized or decriminalized it. Former Speaker of the House John Boehner now lobbies for the marijuana industry. His fellow Republicans seem fine with that. "Oh, but it's better for you than alcohol," they tell us. Maybe. Who cares? Talk about missing the point. Try having dinner with a 19-year-old who's been smoking weed. The life is gone. Passive, flat, trapped in their own heads. Do you want that for your kids? Of course not. Then why are our leaders pushing it on us? You know the reason. Because they don't care about us. When you care about people, you do your best to treat them fairly. Our leaders don't even try. They hand out jobs and contracts and scholarships and slots at prestigious universities based purely on how we look. There's nothing less fair than that, though our tax code comes close. Under our current system, an American who works for a salary pays about twice the tax rate of someone who's living off inherited money and doesn't work at all. We tax capital at half of what we tax labor. It's a sweet deal if you work in finance, as many of our rich people do. In 2010, for example, Mitt Romney made about $22 million in investment income. 
He paid an effective federal tax rate of 14 percent. For normal upper-middle-class wage earners, the federal tax rate is nearly 40 percent. No wonder Mitt Romney supports the status quo. But for everyone else, it's infuriating.
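The gap between the two effective rates can be made concrete with a quick illustrative calculation; the 14 percent and roughly 40 percent rates and the $22 million figure are the monologue's own round numbers, not an actual tax computation:

```python
# Rough illustration of the effective-rate gap described above.
# The 14% and ~40% rates and the ~$22M income are the monologue's
# round figures, not a real tax calculation.
investment_income = 22_000_000  # approximate 2010 investment income cited

capital_rate = 0.14  # effective rate cited for investment income
wage_rate = 0.40     # effective rate cited for upper-middle-class wages

tax_at_capital_rate = investment_income * capital_rate
tax_at_wage_rate = investment_income * wage_rate

print(f"${tax_at_capital_rate:,.0f}")  # $3,080,000
print(f"${tax_at_wage_rate:,.0f}")     # $8,800,000
print(f"gap: ${tax_at_wage_rate - tax_at_capital_rate:,.0f}")  # gap: $5,720,000
```

On these round numbers, the same income taxed as wages would owe nearly three times as much, which is the asymmetry the monologue is pointing at.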
Our leaders rarely mention any of this. They tell us our multi-tiered tax code is based on the principles of the free market. Please. It's based on laws that the Congress passed, laws that companies lobbied for in order to increase their economic advantage. It worked well for those people. They did increase their economic advantage. But for everyone else, it came at a big cost. Unfairness is profoundly divisive. When you favor one child over another, your kids don't hate you. They hate each other.
That happens in countries, too. It's happening in ours, probably by design. Divided countries are easier to rule. And nothing divides us like the perception that some people are getting special treatment. In our country, some people definitely are getting special treatment. Republicans should oppose that with everything they have.
What kind of country do you want to live in? A fair country. A decent country. A cohesive country. A country whose leaders don't accelerate the forces of change purely for their own profit and amusement. A country you might recognize when you're old.
A country that listens to young people who don't live in Brooklyn. A country where you can make a solid living outside of the big cities. A country where Lewiston, Maine seems almost as important as the west side of Los Angeles. A country where environmentalism means getting outside and picking up the trash. A clean, orderly, stable country that respects itself. And above all, a country where normal people with an average education who grew up in no place special can get married, and have happy kids, and repeat unto the generations. A country that actually cares about families, the building block of everything.
What will it take to get a country like that? Leaders who want it. For now, those leaders will have to be Republicans. There's no option at this point.
But first, Republican leaders will have to acknowledge that market capitalism is not a religion. Market capitalism is a tool, like a staple gun or a toaster. You'd have to be a fool to worship it. Our system was created by human beings for the benefit of human beings. We do not exist to serve markets. Just the opposite. Any economic system that weakens and destroys families is not worth having. A system like that is the enemy of a healthy society.
Internalizing all this will not be easy for Republican leaders. They'll have to unlearn decades of bumper sticker-talking points and corporate propaganda. They'll likely lose donors in the process. They'll be criticized. Libertarians are sure to call any deviation from market fundamentalism a form of socialism.
That's a lie. Socialism is a disaster. It doesn't work. It's what we should be working desperately to avoid. But socialism is exactly what we're going to get, and very soon unless a group of responsible people in our political system reforms the American economy in a way that protects normal people.
If you want to put America first, you've got to put its families first.
Adapted from Tucker Carlson's monologue from "Tucker Carlson Tonight" on January 2, 2019.
#### [Jan 12, 2019] Tucker Carlson has sparked the most interesting debate in conservative politics by Jane Coaston
##### "... Carlson told me that beyond changing our tax code, he has no major policies in mind. "I'm not even making the case for an economic system in particular," he told me. "All I'm saying is don't act like the way things are is somehow ordained by God or a function of raw nature." ..."
###### Jan 10, 2019 | www.vox.com
"All I'm saying is don't act like the way things are is somehow ordained by God."
Last Wednesday, the conservative talk show host Tucker Carlson started a fire on the right after airing a prolonged monologue on his show that was, in essence, an indictment of American capitalism.
America's "ruling class," Carlson says, are the "mercenaries" behind the failures of the middle class -- including sinking marriage rates -- and "the ugliest parts of our financial system." He went on: "Any economic system that weakens and destroys families is not worth having. A system like that is the enemy of a healthy society."
He concluded with a demand for "a fair country. A decent country. A cohesive country. A country whose leaders don't accelerate the forces of change purely for their own profit and amusement."
The monologue was stunning in itself, an incredible moment in which a Fox News host stated that for generations, "Republicans have considered it their duty to make the world safe for banking, while simultaneously prosecuting ever more foreign wars." More broadly, though, Carlson's position and the ensuing controversy reveal an ongoing and nearly unsolvable tension in conservative politics about the meaning of populism, a political ideology that Trump campaigned on but Carlson argues he may not truly understand.
Moreover, in Carlson's words: "At some point, Donald Trump will be gone. The rest of us will be gone too. The country will remain. What kind of country will it be then?"
The monologue and its sweeping anti-elitism drove a wedge between conservative writers. The American Conservative's Rod Dreher wrote of Carlson's monologue, "A man or woman who can talk like that with conviction could become president. Voting for a conservative candidate like that would be the first affirmative vote I've ever cast for president." Other conservative commentators scoffed. Ben Shapiro wrote in National Review that Carlson's monologue sounded far more like Sens. Bernie Sanders or Elizabeth Warren than, say, Ronald Reagan.
I spoke with Carlson by phone this week to discuss his monologue and its economic -- and cultural -- meaning. He agreed that his monologue was reminiscent of Warren, referencing her 2003 book The Two-Income Trap: Why Middle-Class Parents Are Going Broke. "There were parts of the book that I disagree with, of course," he told me. "But there are parts of it that are really important and true. And nobody wanted to have that conversation."
Carlson wanted to be clear: He's just asking questions. "I'm not an economic adviser or a politician. I'm not a think tank fellow. I'm just a talk show host," he said, telling me that all he wants is to ask "the basic questions you would ask about any policy." But he wants to ask those questions about what he calls the "religious faith" of market capitalism, one he believes elites -- "mercenaries who feel no long-term obligation to the people they rule" -- have put ahead of "normal people."
But whether or not he likes it, Carlson is an important voice in conservative politics. His show is among the most-watched television programs in America. And his raising questions about market capitalism and the free market matters.
"What does [free market capitalism] get us?" he said in our call. "What kind of country do you want to live in? If you put these policies into effect, what will you have in 10 years?"
Populism on the right is gaining, again
Carlson is hardly the first right-leaning figure to make a pitch for populism, even tangentially, in the third year of Donald Trump, whose populist-lite presidential candidacy and presidency Carlson told me he views as "the smoke alarm ... telling you the building is on fire, and unless you figure out how to put the flames out, it will consume it."
Populism is a rhetorical approach that separates "the people" from elites. In the words of Cas Mudde, a professor at the University of Georgia, it divides the country into "two homogenous and antagonistic groups: the pure people on the one end and the corrupt elite on the other." Populist rhetoric has a long history in American politics, serving as the focal point of numerous presidential campaigns and powering William Jennings Bryan to the Democratic nomination for president in 1896. Trump borrowed some of that approach for his 2016 campaign but in office has governed as a fairly orthodox economic conservative, thus demonstrating the demand for populism on the right without really providing the supply and creating conditions for further ferment.
When right-leaning pundit Ann Coulter spoke with Breitbart Radio about Trump's Tuesday evening Oval Office address to the nation regarding border wall funding, she said she wanted to hear him say something like, "You know, you say a lot of wild things on the campaign trail. I'm speaking to big rallies. But I want to talk to America about a serious problem that is affecting the least among us, the working-class blue-collar workers":
Coulter urged Trump to bring up overdose deaths from heroin in order to speak to the "working class" and to blame the fact that working-class wages have stalled, if not fallen, in the last 20 years on immigration. She encouraged Trump to declare, "This is a national emergency for the people who don't have lobbyists in Washington."
Ocasio-Cortez wants a 70-80% income tax on the rich. I agree! Start with the Koch Bros. -- and also make it WEALTH tax.
-- Ann Coulter (@AnnCoulter) January 4, 2019
These sentiments have even pitted popular Fox News hosts against each other.
Sean Hannity warned his audience that New York Rep. Alexandria Ocasio-Cortez's economic policies would mean that "the rich people won't be buying boats that they like recreationally, they're not going to be taking expensive vacations anymore." But Carlson agreed when I said his monologue was somewhat reminiscent of Ocasio-Cortez's past comments on the economy, and how even a strong economy was still leaving working-class Americans behind.
"I'm just saying as a matter of fact," he told me, "a country where a shrinking percentage of the population is taking home an ever-expanding proportion of the money is not a recipe for a stable society. It's not."
Carlson told me he wanted to be clear: He is not a populist. But he believes some version of populism is necessary to prevent a full-scale political revolt or the onset of socialism. Using Theodore Roosevelt as an example of a president who recognized that labor needs economic power, he told me, "Unless you want something really extreme to happen, you need to take this seriously and figure out how to protect average people from these remarkably powerful forces that have been unleashed."
"I think populism is potentially really disruptive. What I'm saying is that populism is a symptom of something being wrong," he told me. "Again, populism is a smoke alarm; do not ignore it."
But Carlson's brand of populism, and the populist sentiments sweeping the American right, aren't just focused on the current state of income inequality in America. Carlson tackled a bigger idea: that market capitalism and the "elites" whom he argues are its major drivers aren't working. The free market isn't working for families, or individuals, or kids. In his monologue, Carlson railed against libertarian economics and even payday loans, saying, "If you care about America, you ought to oppose the exploitation of Americans, whether it's happening in the inner city or on Wall Street" -- sounding very much like Sanders or Warren on the left.
Carlson's argument that "market capitalism is not a religion" is of course old hat on the left, but it's also been bubbling on the right for years now. When National Review writer Kevin Williamson wrote a 2016 op-ed about how rural whites "failed themselves," he faced a massive backlash in the Trumpier quarters of the right. And these sentiments are becoming increasingly potent at a time when Americans can see both a booming stock market and perhaps their own family members struggling to get by.
Capitalism/liberalism destroys the extended family by requiring people to move apart for work and destroying any sense of unchosen obligations one might have towards one's kin.
-- Jeremy McLellan (@JeremyMcLellan) January 8, 2019
At the Federalist, writer Kirk Jing wrote of Carlson's monologue, and a response to it by National Review columnist David French:
Our society is less French's America, the idea, and more Frantz Fanon's "Wretched of the Earth" (involving a very different French). The lowest are stripped of even social dignity and deemed unworthy of life. In Real America, wages are stagnant, life expectancy is crashing, people are fleeing the workforce, families are crumbling, and trust in the institutions on top are at all-time lows. To French, holding any leaders of those institutions responsible for their errors is "victimhood populism" ... The Right must do better if it seeks to govern a real America that exists outside of its fantasies.
J.D. Vance, author of Hillbilly Elegy , wrote that the [neoliberal] economy's victories -- and praise for those wins from conservatives -- were largely meaningless to white working-class Americans living in Ohio and Kentucky: "Yes, they live in a country with a higher GDP than a generation ago, and they're undoubtedly able to buy cheaper consumer goods, but to paraphrase Reagan: Are they better off than they were 20 years ago? Many would say, unequivocally, 'no.'"
Carlson's populism holds, in his view, bipartisan possibilities. In a follow-up email, I asked him why his monologue was aimed at Republicans when many Democrats had long espoused the same criticisms of free market economics. "Fair question," he responded. "I hope it's not just Republicans. But any response to the country's systemic problems will have to give priority to the concerns of American citizens over the concerns of everyone else, just as you'd protect your own kids before the neighbor's kids."
Who is "they"?
And that's the point where Carlson and a host of others on the right who have begun to challenge the conservative movement's orthodoxy on free markets -- people ranging from occasionally mendacious bomb-throwers like Coulter to writers like Michael Brendan Dougherty -- separate themselves from many of those making those exact same arguments on the left.
When Carlson talks about the "normal people" he wants to save from nefarious elites, he is talking, usually, about a specific group of "normal people" -- white working-class Americans who are the "real" victims of capitalism, or marijuana legalization, or immigration policies.
In this telling, white working-class Americans who once relied on a manufacturing economy that doesn't look the way it did in 1955 are the unwilling pawns of elites. It's not their fault that, in Carlson's view, marriage is inaccessible to them, or that marijuana legalization means more teens are smoking weed (this probably isn't true). Someone, or something, did this to them. In Carlson's view, it's the responsibility of politicians: Our economic situation, and the plight of the white working class, is "the product of a series of conscious decisions that the Congress made."
The criticism of Carlson's monologue has largely focused on how he deviates from the free market capitalism that conservatives believe is the solution to poverty, not the creator of poverty. To orthodox conservatives, poverty is the result of poor decision making or a lack of virtue that can't be solved by government programs or an anti-elite political platform -- and they say Carlson's argument that elites are in some way responsible for dwindling marriage rates doesn't make sense.
But in French's response to Carlson, he goes deeper, writing that to embrace Carlson's brand of populism is to support "victimhood populism," one that makes white working-class Americans into the victims of an undefined "they":
Carlson is advancing a form of victim-politics populism that takes a series of tectonic cultural changes -- civil rights, women's rights, a technological revolution as significant as the industrial revolution, the mass-scale loss of religious faith, the sexual revolution, etc. -- and turns the negative or challenging aspects of those changes into an angry tale of what they are doing to you.
And that was my biggest question about Carlson's monologue, and the flurry of responses to it, and support for it: When other groups (say, black Americans) have pointed to systemic inequities within the economic system that have resulted in poverty and family dysfunction, the response from many on the right has been, shall we say, less than enthusiastic.
Really, it comes down to when black people have problems, it's personal responsibility, but when white people have the same problems, the system is messed up. Funny how that works!!
-- Judah Maccabeets (@AdamSerwer) January 9, 2019
Yet white working-class poverty receives, from Carlson and others, far more sympathy. And conservatives are far more likely to identify with a criticism of "elites" when they believe those elites are responsible for the expansion of trans rights or creeping secularism than the wealthy and powerful people who are investing in private prisons or an expansion of the militarization of police. Carlson's network, Fox News, and Carlson himself have frequently blasted leftist critics of market capitalism and efforts to fight inequality.
I asked Carlson about this, as his show is frequently centered on the turmoils caused by "demographic change." He said that for decades, "conservatives just wrote [black economic struggles] off as a culture of poverty," a line he includes in his monologue.
He added that regarding black poverty, "it's pretty easy when you've got 12 percent of the population going through something to feel like, 'Well, there must be ... there's something wrong with that culture.' Which is actually a tricky thing to say because it's in part true, but what you're missing, what I missed, what I think a lot of people missed, was that the economic system you're living under affects your culture."
Carlson said that growing up in Washington, DC, and spending time in rural Maine, he didn't realize until recently that the same poverty and decay he observed in the Washington of the 1980s was also taking place in rural (and majority-white) Maine. "I was thinking, 'Wait a second ... maybe when the jobs go away the culture changes,'" he told me, "And the reason I didn't think of it before was because I was so blinded by this libertarian economic propaganda that I couldn't get past my own assumptions about economics." (For the record, libertarians have critiqued Carlson's monologue as well.)
Carlson told me that beyond changing our tax code, he has no major policies in mind. "I'm not even making the case for an economic system in particular," he told me. "All I'm saying is don't act like the way things are is somehow ordained by God or a function of raw nature."
And clearly, our market economy isn't driven by God or nature, as the stock market soars and unemployment dips and yet even those on the right are noticing lengthy periods of wage stagnation and dying little towns across the country. But what to do about those dying little towns, and which dying towns we care about and which we don't, and, most importantly, whose fault it is that those towns are dying in the first place -- those are all questions Carlson leaves to the viewer to answer.
#### [Jan 06, 2019] Neocons in US Universities: Everything Madeleine Albright Doesn't Like is Fascism
##### "... "political science" is not a science but pseudo-academic field for losers who do not want to study real history or take courses which actually develop intellect and provide fundamental knowledge. ..."
###### Jan 06, 2019 | www.unz.com
Early on in her book, Albright says:
My students remarked that the Fascist chiefs we remember best were charismatic
Marked in bold is the most terrifying thing about Albright's book, and I am not even going to read her pseudo-intellectual excrement. The fact that this obviously deranged fanatic hack has students is testimony to the sewer level of the US "elite-producing" machine and to the pathetic sight the contemporary US "elite" represents.
This is apart from the fact that "political science" is not a science but pseudo-academic field for losers who do not want to study real history or take courses which actually develop intellect and provide fundamental knowledge.
#### [Jan 04, 2019] A whopping 84 percent of all stocks owned by Americans belong to the wealthiest 10 percent of households. And that includes everyone's stakes in pension plans, 401(k)'s and individual retirement accounts, as well as trust funds, mutual funds and college savings programs like 529 plans.
###### Jan 04, 2019 | economistsview.typepad.com
anne -> anne... , January 01, 2019 at 12:58 PM
February 8, 2018
We All Have a Stake in the Stock Market, Right? Guess Again
By PATRICIA COHEN
Take a deep breath and relax.
The riotous market swings that have whipped up frothy peaks of anxiety over the last week -- bringing the major indexes down more than 10 percent from their high -- have virtually no impact on the income or wealth of most families. The reason: They own little or no stock.
A whopping 84 percent of all stocks owned by Americans belong to the wealthiest 10 percent of households. And that includes everyone's stakes in pension plans, 401(k)'s and individual retirement accounts, as well as trust funds, mutual funds and college savings programs like 529 plans.
"For the vast majority of Americans, fluctuations in the stock market have relatively little effect on their wealth, or well-being, for that matter," said Edward N. Wolff, an economist at New York University who recently published new research * on the topic....
Tom aka Rusty said in reply to anne... , January 02, 2019 at 12:13 PM
I am skeptical of the 84% if only because 401(k) plans have gotten so large.
Darrell in Phoenix said in reply to Tom aka Rusty... , January 03, 2019 at 01:50 PM
What I could find says 401(k)s have $5.6T, IRAs have $2.5T, and when you add in pensions, the total is $29 trillion. Not sure when those numbers are from. Hard to know what part of that is stocks vs. bonds. As of last April, US stock markets had $34 trillion and the rest of the world $44 trillion equiv.

So, if IRA, 401(k) and retirement plans have almost as much wealth as the total of US stocks, and that is 16% of all stocks... does that mean that 1) Americans own a lot more foreign stocks than foreigners own American stocks, or 2) 84% of retirement assets are bonds? There is, what, $50 trillion in US debt, much of it backed by bonds.

So, $30 trillion in retirement assets, $24.5T bonds and $5.5 trillion stocks... such that $5.5T is 16% of $34T? That doesn't "smell right" to me.

point , January 01, 2019 at 12:37 PM

Meh. "And it certainly made most Americans poorer. While 2/3 of the corporate tax cut may have gone to U.S. residents, 84 percent of stocks are held by the wealthiest 10 percent of the population. Everyone else will see hardly any benefit." Wildly unsubstantiated first sentence, though the rest seems likely true. Whether the bulk went to tax cuts for domestic or foreign nationals or into the furnace, there was indeed some sliver that actually went to the rest of us.

anne -> point... , January 01, 2019 at 01:05 PM

Wildly unsubstantiated... [ Correct and documented, as always. ]

Plp -> anne... , January 01, 2019 at 01:41 PM

"And it certainly made most Americans poorer" ... "everyone else will see hardly any benefit." Well, which is it: poorer, or a very little benefit? Sloppy righteousness.

Plp -> Plp... , January 01, 2019 at 01:55 PM

Here's the PK finesse: "since the tax cut isn't paying for itself it will eventually have to be paid for some other way." Nonsense! "either by raising other taxes or by cutting spending on programs people value." This pretends the federal government is a household, not a self-determining sovereign economy.

Plp -> Plp... , January 01, 2019 at 02:01 PM

Sovereign debt in the sovereign's own currency has no intrinsic real value. Example: the burden of that debt on society can become zero once the rate of interest on the whole stock of debt is cycled into a zero real rate status. The Fed could start that process at any time. Once it's zero real, it can stay zero real forever.

EMichael -> Plp... , January 02, 2019 at 04:38 AM

It's about efficiency, not just the printing press. And even the MMT people realize there are limits.

RC AKA Darryl, Ron said in reply to EMichael... , January 02, 2019 at 06:29 AM

Efficiency of what, I might ask?
Efficiency of shipping goods halfway around the world from where people work for less in less safe environments is really the efficiency of theft by capitalists, not the efficiency of production. Taking from the land and sea and dumping waste into the land, sea, and air is the efficiency of theft by capitalists too, not the efficiency of resource use. We are very efficient at making billionaires from externalized costs. We continue to cheaply sell ourselves out because the price is right. Ask Paine what lies hidden in the price? EMichael -> RC AKA Darryl, Ron... , January 02, 2019 at 06:47 AM Yeah, I got that business and government can both be inefficient in many ways. My point is that when you reduce the cost of doing business, or reduce the credit worthiness of a borrower, you will see greater inefficiency. Digging holes and filling them in is one way to spend money. Building a road or a building is another. Which would you prefer? RC AKA Darryl, Ron said in reply to EMichael... , January 02, 2019 at 07:21 AM I would prefer unhiding externalized costs and allocating domestic labor to pay those costs, not with taxes, but with production of domestic goods and the elimination of pollutants and managed use of limited resources. That's just me and entirely off the subject when it comes to macroeconomics. In any case, I am also for Paine's KLV full employment macroeconomics. If anything KLV macro is more accessible both politically and intellectually than the kinds of price movements that would be required to place environmentally sustainable caps on carbon emissions or the commercial menhaden catch. A nominal interest rate for interbank lending that was maintained by the Fed to persist at just the rate of inflation except for lower when necessary to recover from a recession is not a terrible thing. The consequence of braking the economy just to avoid hitting some inflation target is reckless driving. As we know the crash victims are always labor. 
EMichael -> RC AKA Darryl, Ron... , January 02, 2019 at 07:41 AM I'd prefer all of that, and a pony. You need to separate Paine's economics from his politics. He believes a peoples' party can deliver that. It cannot. It will not. As efficiency goes out the door when a small, unregulated group controls everything. Not to say our version of capitalism has anywhere near the government regulation I think it needs to reach your(and my) goals. But it is light years ahead of Paine's dreams. RC AKA Darryl, Ron said in reply to EMichael... , January 02, 2019 at 07:51 AM Paine's economics are insightful and useful. Paine's politics are bifurcated. Paine is as much for a progressive liberal democrat as he is for an enlightened communist dictator. Which do you think has a greater chance of actually ever existing in this century? EMichael -> RC AKA Darryl, Ron... , January 02, 2019 at 08:03 AM I'm all in on Paine's economics, but I believe his politics make him an opponent to ever coming up with progressive liberal democrats running the country. All or nothing with him, and that makes it beyond hard to move towards that goal. Many in here like that. I admire them for going through their life without once ever settling for anything but perfect. I never had that opportunity. A bunch of small steps are necessary, as the Founders insured that. Raging against those facts are immense negatives. And it is why Reps win elections. Christopher H. said in reply to EMichael... , January 02, 2019 at 09:21 AM lol the Founders F!@#ed up. They gave us the Senate and electoral college. RC AKA Darryl, Ron said in reply to EMichael... , January 02, 2019 at 09:33 AM I am largely in concurrence with you, but I do have some specific caveats. At least in my part of the country Paine's far left politics are not representative of anything that we come into contact with in public life. Your politics are bit left of us here. I am the far left in these parts. 
Paine's more populous left side is barely represented by any group in my reality. So, for me, Paine is a unique curiosity reminiscent of my socialist friends from the 60's and early 70's for which I have seen no analog since the introduction of Disco and double-knit leisure suits. The EV crowd in general is a microcosm of nerdiness rather than a microcosm of well informed constituencies of the US unrepresentative "democracy." There is nothing unsettling about it. This crowd is as normal as the characters of "Big Bang Theory." Republicans win elections because they get the most votes. The VA voter turnout for 2018 was almost 60%, well above 2014 and 2010 midterms which were just above 40%. Most people think that Trump is the most politically divisive POTUS in history, but I think nothing in my life has done more to unify the Democratic Party given they can curb their enthusiasm about beating Trump in 2020 enough to not rip the party apart over who gets the spoils. Turnout for POTUS election in VA has been above and sometimes well above 70% for every POTUS election since 1975 except for 2000. Turnout for VA gubernatorial elections has been between 40% and 50% for each election from 1997 up through 2017, but ran much higher before motor voter stopped the purging of voter registration rolls. VA elects state legislators in off years for statewide elections with just over 30% of voters showing up. https://www.elections.virginia.gov/resultsreports/registration-statistics/registrationturnout-statistics/index.html Tom aka Rusty said in reply to EMichael... , January 02, 2019 at 12:12 PM Common sense can still be applied to politics. Going all flaming leftist is a recipe for losing elections. We need to elect more Democrats. EMichael -> Tom aka Rusty... , January 02, 2019 at 04:39 PM Understand. But flaming leftist will help the working class. RC AKA Darryl, Ron said in reply to EMichael... 
, January 02, 2019 at 07:44 AM "...My point is that when you reduce the cost of doing business, or reduce the credit worthiness of a borrower, you will see greater inefficiency. Digging holes and filling them in is one way to spend money. Building a road or a building is another. Which would you prefer?" [While I would prefer bridges to digging holes and filling them, my hesitation in answering this question was with the assumption that lower interest rates generate more wasteful investment, despite that I know it to be true in some contexts. Speculation is the problem more than real projects by far. Diversity among investments can be a very good thing. Failure in this context is just a consequence of innovation by trial and error, one of the more efficient means. Besides, for private investment the risk spread limits useless excursions, while the state needs conscious limits on pork perhaps, but pork is also a useful medium of political exchange. Uncle's discretionary spending is a very small pot of gold.] EMichael -> RC AKA Darryl, Ron... , January 02, 2019 at 08:05 AM Lower interest make business plans much easier. In doing so, risks are taken that should not be taken, thus increasing inefficiency. This is especially true when the planners carry absolutely no financial risk themselves on a project. Christopher H. said in reply to EMichael... , January 02, 2019 at 09:17 AM " Many in here like that. I admire them for going through their life without once ever settling for anything but perfect. I never had that opportunity. A bunch of small steps are necessary, as the Founders insured that. Raging against those facts are immense negatives. And it is why Reps win elections." The New Deal. The Great Society. Social Security. Medicare. Medicaid. EMichael would have argued against all of them as overreaching. His excuse for the Democrats was that past Presidents had large majorities in Congress. He would say the country is too conservative and racist. 
But they like those programs now. Christopher H. said in reply to EMichael... , January 02, 2019 at 09:19 AM During the golden age of social democracy during the post War period, when entrepreneurs failed they had a safety net and could try again. EMichael has this weird puritanical streak. Just like mulp, another crank on the Interent. He wants his failed red state family member to wallow in bitterness. RC AKA Darryl, Ron said in reply to EMichael... , January 02, 2019 at 09:48 AM "Lower interest make business plans much easier. In doing so, risks are taken that should not be taken, thus increasing inefficiency. This is especially true when the planners carry absolutely no financial risk themselves on a project." [I understood what you were going for and do not doubt that you have specific instances for which you are sure that is true. For a few years prior to 2008 then I am sure that was true, but those "animal spirits" were drunk on more than just low interest rates. There was a specific sequence of events that played out over a long period of time bringing the US economy to the precipice of financial system euphoria over the infallibility of markets. Lenders and borrowers and especially middlemen stared down into the abyss and then kept on truckin'. Then we all heard a big splat! Now is not then. Some future now may be then again if we forget about then, but it takes a lot of stupid to get there, not just low interest rates. Taking a bit more risk, but without the stupid is how we learn from failure to achieve greater success. RC AKA Darryl, Ron said in reply to RC AKA Darryl, Ron... , January 02, 2019 at 09:52 AM If either the dot.com splat or the mortgage splat were not clearly visible at least three or four years before the splat then either you need a new prescription for your eye glasses or you need to step out of that fog that you were living in. Darrell in Phoenix said in reply to RC AKA Darryl, Ron... 
, January 02, 2019 at 10:27 AM "success through failure" has become a norm of American business, with the PotUS as the perfect example. He never got into the casino, steak, wine, water, university, etc. businesses with intent on making money in those businesses. Heck, he barely breaks even on the condo and golf businesses. He creates the towers and golf resorts to promote the name, and promotes the name to be able to lease it to doomed businesses which he starts with the intent of losing money on the leasing of his name. I suspect the most profitable thing he's ever done was "realty tv" host and having a book ghost-written in his name. And yes, low interest rates DO create easy money, and much of it does find its way into "success through failure" investments. Why would you loan money to a business that you know was a scam just created to accumulate debt then go bust? Because you can securitize the debt and sell it off to Main Street suckers to eat the loss. Why else "success through failure". Well, I've worked for a company that dumped a lot of money into a venture it knew was doomed long-term. Why? Because it intended to go IPO, and it needed the (unprofitable) revenue from the doomed venture to pump its price in the IPO. I think we'd all agree that "success through failure" is terrible and wish it would go away. Problem is, it works. RC AKA Darryl, Ron said in reply to Darrell in Phoenix... , January 02, 2019 at 12:18 PM Regarding "Success through failure" I was thinking in terms of the dot.com boom from which sprang the broadband Internet and Amazon. Out there in Phoenix AZ where you and EMichael live things must be really crazy. Back in 70's Phoenix was the yuppy Mecca. What happened? Darrell in Phoenix said in reply to RC AKA Darryl, Ron... , January 02, 2019 at 01:18 PM True, not all of the dot.com was bad investment. Just most. We got a lot of housing built during the housing boom too. 
Too bad most of it was 2000-3000 sqft McMansions on golf courses, 50 miles from any jobs. "Out there in Phoenix AZ where you and EMichael live things must be really crazy." 1970 Phoenix metro had 1 million people. Today we're at 4.75 million. Politics are a mess. Big money is pushing to constantly lower taxes, but now people are pushing back wanting more funding for schools. Surprisingly, we've passed a phased-in $12 minimum wage and medical marijuana (recreational failed by less than 1%), and now have split representation at the federal level, indicating a move in the liberal direction.
And yet, we're still very Republican in the state house and go highly conservative on many other issues such as animal rights. A recent "green energy initiative" failed ugly.
So, to sum it up... Pretty Liberal, but Very CONservative, with a HUGE swing vote that goes this-way-and-that in random directions and on different issues...
...but in general want low taxes and hate big government...
...except on things like Social Security, Medicare, Medicaid, Defense, education, transportation, police, fire, courts, justice system, border security, anti-terrorism, and the rest of the stuff government actually spends almost all of its money on...
... but are all for getting rid of all the wasteful government that practically doesn't really exist...
... and we definitely want religious freedom, as long as that religion is Christianity and the freedom is to force their views onto others, and not allow other religions to have a place in society.
Hope that clarifies what happened.
RC AKA Darryl, Ron said in reply to Darrell in Phoenix... , January 03, 2019 at 07:57 AM
"...1970 Phoenix metro had 1 million people. Today we're at 4.75 million...
...Hope that clarifies what happened."
EMichael -> RC AKA Darryl, Ron... , January 02, 2019 at 04:42 PM
Adequate regulation would have stopped that.
No one notices that the biggest factor in the housing bubble was Bush ordering the OCC to take regulation of national banks out of the hands of the states.
The bubble would have been much, much less.
RC AKA Darryl, Ron said in reply to EMichael... , January 03, 2019 at 07:58 AM
Oh, but for the winged frog...
Darrell in Phoenix said in reply to EMichael... , January 03, 2019 at 08:56 AM
"Adequate regulation would have stopped that."
The population increase? People would have to be somewhere, and unlike coastal California with those stupid oceans, bays and mountains... Phoenix has plenty of open space.
2000-3000 sqft McMansions 50 miles from jobs? Probably true. Without the housing bubble we would have hit the wall on housing and caused a massive rent spike a decade ago instead of a few years ago. With that massive rent increase then instead of now, a decade ago we would have seen the in-building of small apartments and condos that we are only now getting.
Net, we probably would have been better off with more in-building of smaller, multi-family units instead of massive sprawl of McMansions.
RC AKA Darryl, Ron said in reply to Darrell in Phoenix... , January 03, 2019 at 09:41 AM
Don't complain too much. The "massive sprawl of McMansions" is a sure sign of widespread prosperity. Here in eastern Henrico County VA we have the massive sprawl of McCracker boxes instead although not just crackers live in them. McMansions are usually on at least 1/2 acre lots, while McCracker boxes are built so close together that most of the time there was not room left for a driveway and people park on the street except that some of those streets are actually the highways to the neighboring cracker box town. On street parking is just one sign of poverty. There are also drug related shootings just like in the big city.
In eastern Henrico there are only a few small McMansion developments in prime real estate overlooking the flood plain of the James River where there is any such high ground in eastern Henrico near the river. Chesterfield County across the James River has the advantage of very high ground near the James River at River's Bend, a.k.a, Meadowville, where there is plenty room for a golf course and marina as well as loads of McMansions and high-end apartment buildings. High and dry western Henrico County is where they build the McMansions along with all the exclusive high end shopping. The "Sad-eyed Lady of the Lowlands" was probably sad because her basement flooded whenever it rained:<)
Darrell in Phoenix said in reply to RC AKA Darryl, Ron... , January 03, 2019 at 10:50 AM
"Don't complain too much."
I wasn't complaining.
I was adding a tad to the "inefficiencies" discussion caused by disconnecting loan origination from loss risk.
I got my piece of the giant federal government giveaway needed to clean up the mess. In 2011 I bought a 1000 sqft condo for $48K that I now have leased out for a nice cash-flow positive $600+ a month and a true after-tax profit of about the same $600 a month (add $100 of the payment that is principal reduction, then subtract 22% income tax on $500 a month ($700 profit - $200 depreciation)). If you notice the purchase price doesn't match the depreciation, yeah, I've done over $20K in additional capital improvements that increase the base, including a new roof, new HVAC, replacing all aluminum windows and doors with high-E, and gutting and replacing the kitchen and both baths. The summer cooling bill was cut by more than half, from ~$300 to ~$125, by the new windows and doors and more efficient HVAC, increasing the monthly rent accordingly.
I've only been spending about $400 of that $600 profit, letting the rest accumulate for maintenance, repairs, and upgrades.
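The parenthetical arithmetic in the comment above works out as stated. A minimal sketch using only the commenter's own figures:

```python
# Reproducing the rental arithmetic stated above (the commenter's figures):
# $600 cash flow + $100 principal paydown = $700 economic profit per month;
# a $200/month depreciation deduction shelters part of it from the 22% bracket.
cash_flow = 600       # $/month cash flow after expenses
principal = 100       # $/month of the mortgage payment that reduces the loan
depreciation = 200    # $/month depreciation deduction
tax_rate = 0.22

profit = cash_flow + principal       # economic profit before tax
taxable = profit - depreciation      # taxable income after depreciation
tax = tax_rate * taxable             # income tax owed
after_tax = profit - tax             # "about the same $600 a month"

print(f"Pre-tax profit: ${profit}/mo, tax: ${tax:.0f}/mo, after-tax: ${after_tax:.0f}/mo")
```

The after-tax figure lands just under $600 a month, matching the comment's "about the same $600" claim.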
Oh, I also save about $250 a month on the mortgage of my primary by locking in 3% interest rate. Not big deals in the grand scheme, but the boom->crash->rent squeeze worked out okay for me personally.... for now. Darrell in Phoenix said in reply to Darrell in Phoenix... , January 03, 2019 at 11:10 AM As for the cracker houses, we got a lot of those in the 80's and 90's before the big McMansion boom. Like these 1990s beauties with almost, but not quite enough room in the driveway to park a car without blocking the sidewalk. https://www.zillow.com/homes/for_sale/globalrelevanceex_sort/33.540639,-112.146931,33.538696,-112.149814_rect/18_zm/ To be perfectly honest, it is exactly those kinds of houses that the Phoenix market needs a lot more of. Switching from those to McMansions, then hardly any construction at all for 6 or 7 years, is why there is such a crunch on housing, and skyrocketing rents and house prices now. Even now they aren't building many of those small single family homes. They are building redevelopment/in-fill condos in downtown/near ASU in Tempe and apartments in the middle-burbs. anne -> anne... , January 01, 2019 at 01:43 PM https://www.nytimes.com/2018/11/14/opinion/the-tax-cut-and-the-balance-of-payments-wonkish.html November 14, 2018 The Tax Cut and the Balance of Payments (Wonkish) Lots of financial maneuvering, signifying nothing By Paul Krugman What tax cuts were supposed to do A tax cut for corporations looks, on its face, like a big giveaway to stockholders, mainly bypassing ordinary families: of stocks held by Americans, 84 percent are held by the wealthiest 10 percent; * 35 percent of U.S. stocks are held by foreigners. ** The claim by tax cut advocates was, however, that the tax cut would be passed through to workers, because we live in an integrated global capital market. There were multiple reasons not to believe this argument in practice, but it's still worth working through its implications.... anne -> anne... 
, January 01, 2019 at 01:52 PM https://www.nytimes.com/2019/01/01/opinion/the-trump-tax-cut-even-worse-than-youve-heard.html The key point to realize is that in today's globalized corporate system, a lot of any country's corporate sector, our own very much included, is actually owned by foreigners, either directly because corporations here are foreign subsidiaries, or indirectly because foreigners own American stocks. Indeed, roughly a third of U.S. corporate profits basically flow to foreign nationals – which means that a third of the tax cut flowed abroad, rather than staying at home. This probably outweighs any positive effect on GDP growth. So the tax cut probably made America poorer, not richer. And it certainly made most Americans poorer. While 2/3 of the corporate tax cut may have gone to U.S. residents, 84 percent of stocks are held by the wealthiest 10 percent of the population. Everyone else will see hardly any benefit.... -- Paul Krugman Tom aka Rusty said in reply to anne... , January 02, 2019 at 12:10 PM It will not make them poorer, but will not make many better off; there is a difference. Tom aka Rusty said in reply to point... , January 02, 2019 at 12:08 PM As my first tax professor said, "the best first answer to most tax questions is IT DEPENDS." In the pro formas I have done not everyone in the middle class is getting a tax cut. Some a slight tax increase, most not too much impact at all. We will know a lot more by April. anne , January 01, 2019 at 12:50 PM http://cepr.net/blogs/beat-the-press/steven-rattner-s-charts-in-the-nyt-don-t-show-he-says-they-show December 31, 2018 Steven Rattner's Charts in the New York Times Don't Show What He Says They Show By Dean Baker Steven Rattner used his New York Times column * to present a number of charts to show Donald Trump's failures as president. While some, like the drop in enrollments in the health care exchanges, do in fact show failure, others do not really make his case.
For example, he has a chart with a headline "paltry raise for the middle class." What his chart actually shows is that middle class wages, adjusted for inflation, fell sharply in the recession, but have been rising roughly 1.0 percent a year since 2014. They recovered their pre-recession levels in 2017 and now are almost a percentage point above the 2008 level. This is not a great story, but the picture under Trump is certainly better than under Obama. (This wasn't entirely Obama's fault, since he inherited an economy that was failing.) The chart shows more rapid growth at the bottom of the pay ladder and a modest downturn under Trump for those at the top. By recent standards, this is not a bad picture, even if Trump does not especially deserve credit for it. (He came in with an unemployment rate that was low and falling.) Rattner also presents as a bad sign projections for fewer Federal Reserve rate hikes. While one basis for projecting fewer rate hikes is that the economy now looks weaker for 2019 than had been thought earlier in the year (but still stronger than had been projected in 2016), another reason is that inflation is lower than expected. Economists have consistently over-estimated the impact that low unemployment would have on the inflation rate. With inflation coming in lower than projected, there is less reason for the Fed to raise rates. Contrary to what Rattner is implying, this is a good development. It means that the unemployment rate can continue to fall and workers at the middle and the bottom of the pay ladder can continue to see real wage gains. Rattner also shows us how growth projections for the U.S. and the world have been lowered since June of 2018. It's not clear how much Trump can be held responsible for growth in the EU (try blaming the European Commission's austerity drive) and the rest of the world, but his argument about the U.S. is pretty weak. 
The 2.4 percent growth projection from December 2018 is actually up 0.1 percentage point from the June projection. More importantly, it is up from a projection of 1.7 percent from January of 2017, the month Trump took office. Then we have the chart showing the rise in the debt relative to GDP. While Rattner is right that the tax cuts to the rich were a waste of resources, the higher debt to GDP ratio is basically meaningless. (Japan's debt to GDP ratio is almost 250 percent and the current interest rate on its long-term bonds is 0.00 percent.) If anyone is seriously concerned about the debt that the government is passing on to future generations then it is also necessary to include the rents associated with patent and copyright monopolies. These monopolies are alternative mechanisms to direct funding that the government uses to pay for services (i.e. research and creative work). To take the most important case, suppose the government were to replace the $70 billion (0.35 percent of GDP) in patent monopoly supported research that the pharmaceutical industry conducts each year with direct funding of $70 billion. All research findings could then be placed in the public domain and new drugs would sell at generic prices. Rattner and his crew would count the $70 billion in additional spending as an addition to the debt and deficit. However, when the industry is able to charge the public an extra $360 billion ** (1.8 percent of GDP) a year in higher drug prices due to patent monopolies and related protections, Rattner and company choose to ignore the burden. This sort of groundless debt fear mongering deserves only ridicule; it is not serious economic analysis. Trump has done many awful things as president and threatens to do many more. But this is not a reason to adopt Trumpian tactics; the data provide plenty of grounds to attack his performance without playing games with it. anne -> anne...
, January 01, 2019 at 02:41 PM https://fred.stlouisfed.org/graph/?g=mv7B January 15, 2018 Real Median Weekly Earnings, * 1992-2018 * All full time wage and salary workers (Percent change) January 15, 2018 Real Median Weekly Earnings, * 1992-2018 * All full time wage and salary workers (Indexed to 1992) anne -> anne... , January 01, 2019 at 02:41 PM https://fred.stlouisfed.org/graph/?g=mm0s January 15, 2018 Real Median Weekly Earnings for men and women, * 1992-2018 * All full time wage and salary workers (Percent change) January 15, 2018 Real Median Weekly Earnings for men and women, * 1992-2018 * All full time wage and salary workers (Indexed to 1992) anne , January 01, 2019 at 12:50 PM http://cepr.net/blogs/beat-the-press/e-j-dionne-provides-classic-example-of-liberals-missing-the-boat December 31, 2018 E.J. Dionne Provides Classic Example of Liberals Missing the Boat By Dean Baker I often rail against liberals who wring their hands over the unfortunate folks who have been left behind by globalization and technology. E.J. Dionne gave us a classic example * of such hand-wringing in his piece today on the need to help the left behinds to keep them from becoming flaming reactionaries. For some reason, it is difficult for many liberals to grasp the idea that the bad plight of tens of millions of middle class workers did not just happen, but rather was deliberately engineered. Longer and stronger patent and copyright protection did not just happen, it was deliberate policy. Subjecting manufacturing workers to global competition, while largely protecting doctors, dentists, and other highly paid professionals, was also a policy decision. Saving the Wall Street banks from the consequences of their own greed and incompetence was also conscious policy. I know it's difficult for intellectuals to grasp new ideas, but if we want to talk seriously about rising inequality, then it will be necessary for them to try.
(Yeah, I'm advertising my - free - book "Rigged: How Globalization and the Rules of the Modern Economy Were Structured to Make the Rich Richer" ** again.) Anyhow, let's hope that in 2019 we can actually talk about the policies that were put in place to redistribute income upward and not just pretend that Bill Gates and his ilk getting all the money was a natural process. Plp -> anne... , January 01, 2019 at 01:27 PM The way forward is not taking the path that got us here in reverse till it's, say, 1976 again Because once there where do we go next Where do we go from there that doesn't by twist and turn lead back here in another post-2008 Quagmired earth Christopher H. said in reply to Plp... , January 01, 2019 at 01:27 PM The Nordic countries have gone further than 1976 - and it works! But even they have been backsliding. The key is rising living standards for everyone. That means eradicating poverty & financial precariousness and rising incomes up the income ladder. End the Dems' fascination with means testing. Make big programs everyone supports. The Republican party needs to be destroyed, as Jane Curtin said on CNN.
#### [Jan 03, 2019] Piketty's World Inequality Review- A Critical Analysis - naked capitalism
###### Jan 03, 2019 | www.nakedcapitalism.com Yves here. It's surprising to see Piketty and even more so, one of his co-authors, Gabriel Zucman, make such strong claims for tax data as a way to measure income inequality.
The rich and super rich engage in tax avoidance and evasion, to the degree that Zucman has estimated that 6% of the world's wealth is hidden. First, that wealth was hidden to avoid paying taxes on it and/or to hide its criminal origins (such as looting governments). Second, the income on hidden wealth is also by nature hidden. By James K. Galbraith, Lloyd M. Bentsen Jr. Chair in Government and Business Relations, University of Texas at Austin. Originally published at the Institute of New Economic Thinking website Thomas Piketty and his colleagues [1] have produced a new exposition of their empirical work, entitled the World Inequality Report 2018 (hereafter: WIR). Their purpose is to showcase the exploration of income and wealth inequalities begun with the World Top Incomes Database (Atkinson and Piketty 2010) and theorized in Piketty's epic Capital in the XXI Century (2014) . In particular the WIR concentrates on the presentation of measures and evidence; the stated goal is to inform a "deliberative process" with "more rigorous and transparent information on income and wealth" than has been available to date. In a review article published on-line and open access in Development and Change on December 24, 2018, I initiate this "deliberative process" by examining the WIR data and the claims made for it. The ground-breaking, systematic and transparent methodology on which the WIR rests is largely the use of tax records–specifically income tax records–mined to show the income shares of tranches of the income-earning population: top one percent, top ten percent, next forty percent, and bottom fifty percent are the usual divisions. These Piketty and his colleagues argue are more complete, comprehensive, and comparable across countries and through time than the generally-used alternative, which is household or person-based surveys. 
The WIR authors write disparagingly of the "Gini index" -- the inequality measure most prevalent in such surveys -- which they find too "technical" and not sufficiently intuitive. But they also object to survey methods: "The main problem with household surveys, however, is that they usually rely entirely on self-reported information about income and wealth. As a consequence, they misrepresent top income and wealth levels, and therefore overall inequality." (p. 29) This sweeping critique carries on for several pages, brushing aside a body of research comprising thousands of papers and millions of survey observations, including the work of the Luxembourg Income Studies, the World Bank, Eurostat, the Economic Commission for Latin America, and the United States Census Bureau among scores of national data-collection agencies. It is a repudiation of what almost every previous researcher has done in this field over fifty years. But are tax data really better? Where survey and tax measures both exist, and report different results, should one systematically prefer a measure based on taxes? The answer depends in part on the quality of the survey measures. But it also must depend in part on the quality, consistency, length and continuity of the national tax record, and in particular of the income tax. The WIR authors acknowledge that tax data have limits, in particular they cannot cover income and wealth hidden from tax authorities in tax havens. But the question of the quality of tax records goes much further than this. My new essay examines the question from three points of view: the coverage provided by tax data in the world economy, the consistency of tax data with other sources of information on income inequality, and the peculiarities of tax-based measurement of inequality in the United States. 
It goes on to make a comparison with measures drawn from other forms of administrative data -- specifically payroll records, used by the University of Texas Inequality Project -- which are generally more consistent with records of inequality measured in household surveys than are the WIR's tax records. In brief summary, the review shows that by comparison with payroll and survey data, available records from tax files are relatively sparse, and biased toward wealthier countries and those that were once British colonies, which imposed income tax. It shows that tax data are far less consistent with survey and payroll records than are the latter two with each other. And it shows that even within the United States, a country with good tax records by world standards, changes in tax law distort the WIR's measures of changes in the top income shares, while a misunderstanding of the nature of low-income tax filers in the US leads to a dramatic but nonsensical claim that the earnings of the bottom 50 percent of Americans have "collapsed" in recent decades. Overall, the review casts doubt on claims by the authors of the World Inequality Report to have produced major advances in the study of world economic inequality, and documents that many of the findings touted in the Report as new and unprecedented have in fact been reported in the literature for years, even decades in some cases.
[1] The credited co-authors are Facundo Alvaredo, Lucas Chancel, Thomas Piketty, Emmanuel Saez and Gabriel Zucman.
Figure 1: Top One Percent Shares from the World Inequality Database, showing an unacknowledged data break due to the US Tax Reform Act of 1986. Adjusting the data for the change in the tax definition of income would show that the US top share tracks the UK and Canada very closely. Low numbers for France and Italy are likely due to inferior tax recording of high income persons, not an underlying condition of less inequality.
I do not see your and the essay's point about tax evasion impacting actual reported income more harshly than surveys. Even in cases where answers to surveys are required by law, the penalties and effort undertaken by enforcement agencies are going to be several orders of magnitude greater in cases of tax evasion compared to incorrect survey answers. Furthermore, income taxes are frequently automated, which makes correct or at least some reporting the default case, while the default state of surveys is no data at all. Only by taking action is any data generated. While it is theoretically possible that surveys are more accurate because the incentive of lower taxes is also stronger, the logical argument given here is not self-evident and actual empirical data is needed for proof. The criticism regarding changes in tax reporting over time is quite correct, although I am far from certain that survey methods and questions haven't changed over the last century. A feature not talked about at all is sample size, which always favours actual tax data over surveys, given that everyone with an income over a small threshold must pay taxes. The only empirical evidence provided in the essay is self-referential. If one proxy (survey data) is faulty, correlation between it and another proxy (payroll records) does not prove that the first proxy is correct, because it remains possible that both proxies have the same deficiencies and are therefore correlated – instead of both being a good approximation of reality. This is not to say that the essay must be wrong or Piketty et al's assertions must be right, but with only the information provided here a lack of evidence still exists in my opinion.
CanCyn , January 3, 2019 at 12:36 pm
this line towards the end: "nonsensical claim that the earnings of the bottom 50 percent of Americans have "collapsed" in recent decades." is nonsensical itself.
Anyone who doesn't believe that low and middle income earners' incomes have collapsed is living in a very opaque bubble, head firmly planted up *ss. I earned $12.00 per hour in retail with only a high school education in the early 1980s. I'm Canadian but I don't think the two countries are so different in this regard. In 2018 dollars, according to the Bank of Canada inflation calculator, that is $26.00 per hour!! Do you know anyone in retail with only a high school education earning $26.00 per hour today? My husband, again, no high school, was making twice that amount in a steel mill in the eighties. Know anyone earning almost $60 per hour in any kind of factory work today? I sure don't. That includes the people who work in that factory now; it is mostly precarious contract work, at much lower wages, the union having been busted long ago. Further, I went back to school in the late 80s and ended up with my Master's in Library Science; my first full-time job in the early 90s had me earning the 1990ish equivalent of that $12.00 per hour, things were already going south. It took me quite a few more years to outpace that 1980 retail wage. My husband and I are living proof that wages have collapsed. I don't care how anecdotal that is, I know the truth, just by looking around and talking to people.
It is beyond frustrating to have to argue against this stuff.
CanCyn , January 3, 2019 at 2:57 pm
one correction, that should be "only high school" not "no high school" with regard to my husband's education.
One question for the author. How do you account for the fact that payroll records, at least as I understand them, generally omit capital gains, which are where the upper-income groups generally get most of their real wealth? Stock buybacks and share disbursements are not generally considered "payroll" as I understand it.
The advantage of tax is that it should, at least in theory, show money received by individuals rather than just money sent out in one category.
There's a lot more to inequality than money...
#### [Dec 27, 2018] The Yoda of Silicon Valley by Siobhan Roberts
##### "... One good teacher makes all the difference in life. More than one is a rare blessing. ..."
###### Dec 17, 2018 | www.nytimes.com
With more than one million copies in print, "The Art of Computer Programming" is the Bible of its field. "Like an actual bible, it is long and comprehensive; no other book is as comprehensive," said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: "You should definitely send me a résumé if you can read the whole thing."
The volume opens with an excerpt from "McCall's Cookbook":
Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect.
Inside are algorithms, the recipes that feed the digital age -- although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field's most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text -- for instance, when you hit Command+F to search for a keyword in a document.
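For readers curious how the Knuth-Morris-Pratt search actually works, here is a minimal illustrative sketch in Python. This is an editor's rendering of the idea, not Knuth's own presentation (the books work in his MIX/MMIX machine language); the function name and structure are choices made for this example:

```python
def kmp_search(text, pattern):
    """Return the start index of every occurrence of `pattern` in `text`.

    Knuth-Morris-Pratt: precompute, for each prefix of the pattern, the
    length of its longest proper prefix that is also a suffix ("failure
    table"), so the scan over the text never has to back up.
    """
    if not pattern:
        return []

    # Build the failure table for the pattern.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]          # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    # Scan the text, reusing partial matches via the failure table.
    matches = []
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):        # full match ending at position i
            matches.append(i - len(pattern) + 1)
            k = fail[k - 1]          # continue searching for overlaps
    return matches
```

Because the scan never moves backward through the text, the whole search runs in time proportional to the combined length of the text and the pattern, which is what made the 1970 result notable.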
... ... ...
During summer vacations, Dr. Knuth made more money than professors earned in a year by writing compilers. A compiler is like a translator, converting a high-level programming language (resembling algebra) to a lower-level one (sometimes arcane binary) and, ideally, improving it in the process. In computer science, "optimization" is truly an art, and this is articulated in another Knuthian proverb: "Premature optimization is the root of all evil."
Eventually Dr. Knuth became a compiler himself, inadvertently founding a new field that he came to call the "analysis of algorithms." A publisher hired him to write a book about compilers, but it evolved into a book collecting everything he knew about how to write for computers -- a book about algorithms.
... ... ...
When Dr. Knuth started out, he intended to write a single work. Soon after, computer science underwent its Big Bang, so he reimagined and recast the project in seven volumes. Now he metes out sub-volumes, called fascicles. The next installment, "Volume 4, Fascicle 5," covering, among other things, "backtracking" and "dancing links," was meant to be published in time for Christmas. It is delayed until next April because he keeps finding more and more irresistible problems that he wants to present.
In order to optimize his chances of getting to the end, Dr. Knuth has long guarded his time. He retired at 55, restricted his public engagements and quit email (officially, at least). Andrei Broder recalled that time management was his professor's defining characteristic even in the early 1980s.
Dr. Knuth typically held student appointments on Friday mornings, until he started spending his nights in the lab of John McCarthy, a founder of artificial intelligence, to get access to the computers when they were free. Horrified by what his beloved book looked like on the page with the advent of digital publishing, Dr. Knuth had gone on a mission to create the TeX computer typesetting system, which remains the gold standard for all forms of scientific communication and publication. Some consider it Dr. Knuth's greatest contribution to the world, and the greatest contribution to typography since Gutenberg.
This decade-long detour took place back in the age when computers were shared among users and ran faster at night while most humans slept. So Dr. Knuth switched day into night, shifted his schedule by 12 hours and mapped his student appointments to Fridays from 8 p.m. to midnight. Dr. Broder recalled, "When I told my girlfriend that we can't do anything Friday night because Friday night at 10 I have to meet with my adviser, she thought, 'This is something that is so stupid it must be true.'"
... ... ...
Lucky, then, that Dr. Knuth keeps at it. He figures it will take another 25 years to finish "The Art of Computer Programming," although that time frame has been a constant since about 1980. Might the algorithm-writing algorithms get their own chapter, or maybe a page in the epilogue? "Definitely not," said Dr. Knuth.
"I am worried that algorithms are getting too prominent in the world," he added. "It started out that computer scientists were worried nobody was listening to us. Now I'm worried that too many people are listening."
Scott Kim Burlingame, CA Dec. 18
Thanks Siobhan for your vivid portrait of my friend and mentor. When I came to Stanford as an undergrad in 1973 I asked who in the math dept was interested in puzzles. They pointed me to the computer science dept, where I met Knuth and we hit it off immediately. Not only a great thinker and writer, but as you so well described, always present and warm in person. He was also one of the best teachers I've ever had -- clear, funny, and interested in every student (his elegant policy was that each student could speak only twice in class during a period, to give everyone a chance to participate, and he made a point of remembering everyone's names). Some thoughts from Knuth I carry with me: finding the right name for a project is half the work (not literally true, but he labored hard on finding the right names for TeX, Metafont, etc.), always do your best work, half of why the field of computer science exists is that it is a way for mathematically minded people who like to build things to meet each other, and the observation that when the computer science dept began at Stanford one of the standard interview questions was "what instrument do you play" -- there was a deep connection between music and computer science, and indeed the dept had multiple string quartets. But in recent decades that has changed entirely. If you do a book on Knuth (he deserves it), please be in touch.
IMiss America US Dec. 18
I remember when programming was art. I remember when programming was programming. These days, it is 'coding', which is more like 'code-spraying'. Throw code at a problem until it kind of works, then fix the bugs in the post-release, or the next update.
AI is a joke. None of the current 'AI' actually is. It is just another new buzz-word to throw around to people that do not understand it at all. We should be in a golden age of computing. Instead, we are cutting all corners to get something out as fast as possible. The technology exists to do far more. It is the human element that fails us.
Ronald Aaronson Armonk, NY Dec. 18
My particular field of interest has always been compiler writing, and I have long been awaiting Knuth's volume on that subject. I would just like to point out that among Knuth's many accomplishments is the invention of LR parsers, which are widely used for writing programming language compilers.
Edward Snowden Russia Dec. 18
Yes, \TeX, and its derivative, \LaTeX{}, contributed greatly to being able to create elegant documents. It is also available for the web in the form of MathJax, and it's about time the New York Times supported MathJax. Many times I want one of my New York Times comments to include math, but there's no way to do so! It comes up equivalent to: $e^{i\pi}+1$.
henry pick new york Dec. 18
I read it at the time, because what I really wanted to read was volume 7, Compilers. As I understood it at the time, Professor Knuth wrote it in order to make enough money to build an organ. That apparently happened by 3:Knuth, Searching and Sorting. The most impressive part is the mathematics in Semi-numerical (2:Knuth). A lot of those problems are research projects over the literature of the last 400 years of mathematics.
Steve Singer Chicago Dec. 18
I own the three volume "Art of Computer Programming", the hardbound boxed set. Luxurious. I don't look at it very often thanks to time constraints, given my workload. But your article motivated me to at least pick it up and carry it from my reserve library to a spot closer to my main desk so I can at least grab Volume 1 and try to read some of it when the mood strikes. I had forgotten just how heavy it is, intellectual content aside. It must weigh more than 25 pounds.
Terry Hayes Los Altos, CA Dec. 18
I too used my copies of The Art of Computer Programming to guide me in several projects in my career, across a variety of topic areas. Now that I'm living in Silicon Valley, I enjoy seeing Knuth at events at the Computer History Museum (where he was a 1998 Fellow Award winner), and at Stanford. Another facet of his teaching is the annual Christmas Lecture, in which he presents something of recent (or not-so-recent) interest. The 2018 lecture is available online - https://www.youtube.com/watch?v=_cR9zDlvP88
Chris Tong Kelseyville, California Dec. 17
One of the most special treats for first year Ph.D. students in the Stanford University Computer Science Department was to take the Computer Problem-Solving class with Don Knuth. It was small and intimate, and we sat around a table for our meetings. Knuth started the semester by giving us an extremely challenging, previously unsolved problem. We then formed teams of 2 or 3. Each week, each team would report progress (or lack thereof), and Knuth, in the most supportive way, would assess our problem-solving approach and make suggestions for how to improve it. To have a master thinker giving one feedback on how to think better was a rare and extraordinary experience, from which I am still benefiting! Knuth ended the semester (after we had all solved the problem) by having us over to his house for food, drink, and tales from his life. . . And for those like me with a musical interest, he let us play the magnificent pipe organ that was at the center of his music room. Thank you Professor Knuth, for giving me one of the most profound educational experiences I've ever had, with such encouragement and humor!
Been there Boulder, Colorado Dec. 17
I learned about Dr. Knuth as a graduate student in the early 70s from one of my professors and made the financial sacrifice (graduate student assistantships were not lucrative) to buy the first and then the second volume of the Art of Computer Programming. Later, at Bell Labs, when I was a bit richer, I bought the third volume. I have those books still and have used them for reference for years. Thank you Dr. Knuth. Art, indeed!
Gianni New York Dec. 18
@Trerra In the good old days, before Computer Science, anyone could take the Programming Aptitude Test. Pass it and companies would train you. Although there were many mathematicians and scientists, some of the best programmers turned out to be music majors. English, Social Sciences, and History majors were represented as well as scientists and mathematicians. It was a wonderful atmosphere to work in. When I started to look for a job as a programmer, I took Prudential Life Insurance's version of the Aptitude Test. After the test, the interviewer was all bent out of shape because my verbal score was higher than my math score; I was a physics major. Luckily they didn't hire me and I got a job with IBM.
M Martínez Miami Dec. 17
In summary, "May the force be with you" means: Did you read Donald Knuth's "The Art of Computer Programming"? Excellent, we loved this article. We will share it with many young developers we know.
mds USA Dec. 17
Dr. Knuth is a great Computer Scientist. Around 25 years ago, I met Dr. Knuth in a small gathering a day before he was awarded an honorary Doctorate in a university. This is my approximate recollection of a conversation. I said -- "Dr. Knuth, you have dedicated your book to a computer (one with which he had spent a lot of time, perhaps a predecessor to PDP-11). Isn't it unusual?". He said -- "Well, I love my wife as much as anyone." He then turned to his wife and said -- "Don't you think so?". It would be nice if scientists with the gift of such great minds tried to address some problems of ordinary people, e.g. a model of economy where everyone can get a job and health insurance, say, like Dr. Paul Krugman.
I was in a training program for women in computer systems at CUNY graduate center, and they used his obtuse book. It was one of the reasons I dropped out. He used a fantasy language to describe his algorithms in his book that one could not test on computers. I already had work experience as a programmer with algorithms and I know how valuable real languages are. I might as well have read Animal Farm. It might have been different if he was the instructor.
Doug McKenna Boulder Colorado Dec. 17
Don Knuth's work has been a curious thread weaving in and out of my life. I was first introduced to Knuth and his The Art of Computer Programming back in 1973, when I was tasked with understanding a section of the then-only-two-volume Book well enough to give a lecture explaining it to my college algorithms class. But when I first met him in 1981 at Stanford, he was all-in on thinking about typography and this new-fangled system of his called TeX. Skip a quarter century. One day in 2009, I foolishly decided kind of on a whim to rewrite TeX from scratch (in my copious spare time), as a simple C library, so that its typesetting algorithms could be put to use in other software such as electronic eBooks with high-quality math typesetting and interactive pictures. I asked Knuth for advice. He warned me: prepare yourself, it's going to consume five years of your life. I didn't believe him, so I set off and tried anyway. As usual, he was right.
Baddy Khan San Francisco Dec. 17
I have a signed copy of "Fundamental Algorithms" in my library, which I treasure. Knuth was a fine teacher, and is truly a brilliant and inspiring individual. He taught during the same period as Vint Cerf, another wonderful teacher with a great sense of humor who is truly a "father of the internet". One good teacher makes all the difference in life. More than one is a rare blessing.
Indisk Fringe Dec. 17
I am a biologist, specifically a geneticist. I became interested in LaTeX typesetting early in my career and have been either called pompous or vilified by people at all levels for wanting to use it. One of my PhD advisors famously told me to forget LaTeX because it was a thing of the past. I have now forgotten him completely. I still use LaTeX almost every day in my work even though I don't generally typeset with equations or algorithms. My students always get trained in using proper typesetting. Unfortunately, the publishing industry has largely given up on TeX. Very few journals in my field accept TeX manuscripts, and most of them convert to Word before feeding text to their publishing software. Whatever people might argue against TeX, the beauty and elegance of a properly typeset document is unparalleled. Long live LaTeX.
PaulSFO San Francisco Dec. 17
A few years ago Severo Ornstein (who, incidentally, did the hardware design for the first router, in 1969), and his wife Laura, hosted a concert in their home in the hills above Palo Alto. During a break a friend and I were chatting when a man came over and *asked* if he could chat with us (a high honor, indeed). His name was Don. After a few minutes I grew suspicious and asked "What's your last name?" Friendly, modest, brilliant; a nice addition to our little chat.
Tim Black Wilmington, NC Dec. 17
When I was a physics undergraduate (at Trinity in Hartford), I was hired to re-write professor's papers into TeX. Seeing the beauty of TeX, I wrote a program that re-wrote my lab reports (including graphs!) into TeX. My lab instructors were amazed! How did I do it? I never told them. But I just recognized that Knuth was a genius and rode his coat-tails, as I have continued to do for the last 30 years!
Jack512 Alexandria VA Dec. 17
A famous quote from Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it." Anyone who has ever programmed a computer will feel the truth of this in their bones.
#### [Dec 24, 2018] Income inequality happens by design. We cant fix it by tweaking capitalism
##### "... Correction: The average person in poverty in the U.S. does not live in the same abject, third world poverty as you might find in Honduras, Central African Republic, Cambodia, or the barrios of Sao Paulo. ..."
##### "... Since our poor don't live in abject poverty, I invite you to live as a family of four on less than $11,000 a year anywhere in the United States. If you qualify and can obtain subsidized housing you may have some of the accoutrements in your home that you seem to equate with living the high life. You know, running water, a fridge, a toilet, a stove. You would also likely have a phone (subsidized at that) so you might be able to participate (or attempt to participate) in the job market in an honest attempt to better your family's economic prospects and as is required to qualify for most assistance programs. ..."
##### "... So many dutiful neoliberals on here rushing to the defense of poor Capitalism. Clearly, these commentators are among those who are in the privileged position of reaping the true benefits of Capitalism - And, of course, there are many benefits to reap if you are lucky enough to be born into the right racial-socioeconomic context. ..."
##### "... Please walk us through how non-capitalist systems create wealth and allow their lowest class people to propel themselves to the top in one generation. You will note that most socialist systems derive their technology and advancements from the more capitalistic systems. Pharmaceuticals, software, and robotics are a great example of this. I shudder to think of what the welfare of the average citizen of the world would be like without the advancements made via the capitalist countries. ..."
###### Dec 05, 2015 | The Guardian
The poorest Americans have no realistic hope of achieving anything that approaches income equality. They still struggle for access to the basics ... ... ... The disparities in wealth that we term "income inequality" are no accident, and they can't be fixed by fiddling at the edges of our current economic system. These disparities happened by design, and the system structurally disadvantages those at the bottom.
The poorest Americans have no realistic hope of achieving anything that approaches income equality; even their very chances for access to the most basic tools of life are almost nil. ... ... ... Too often, the answer by those who have hoarded everything is that they will choose to "give back" in a manner of their choosing – just look at Mark Zuckerberg and his much-derided plan to "give away" 99% of his Facebook stock. He is unlikely to help change inequality or poverty any more than the "giving away" of $100m helped children in Newark schools.
Allowing any of the 100 richest Americans to choose how they fix "income inequality" will not make the country more equal or even guarantee more access to life. You can't take down the master's house with the master's tools, even when you're the master; but more to the point, who would tear down his own house to distribute the bricks among so very many others?
mkenney63 5 Dec 2015 20:37
Excellent article. The problems we face are structural and can only be solved by making fundamental changes. We must bring an end to "Citizens United", modern day "Jim Crow" and the military industrial complex in order to restore our democracy. Then maybe, just maybe, we can have an economic system that will treat all with fairness and respect. Crony capitalism has had its day, it has mutated into criminality.
Kencathedrus -> Marcedward 5 Dec 2015 20:23
In the pre-capitalist system people learnt crafts to keep themselves afloat. The Industrial Revolution changed all that. Now we have the church of Education promising a better life if we get into debt to buy (sorry, earn) degrees.
The whole system is messed up, and now we have millions of people on this planet who can't function, even those with degrees. Barbarians are howling at the gates of Europe. The USA is rotting from within. As Marx predicted, the Capitalists are merely paying their own grave diggers.
mkenney63 -> Bobishere 5 Dec 2015 20:17
I would suggest you read the economic and political history of the past 30 years. To help you in your study let me recommend a couple of recent books: "Winner Take all Politics" by Jacob Hacker and Paul Pierson and "The Age of Acquiescence" by Steve Fraser. It always amazes me that one can be so blind the facts of recent American history; it's not just "a statistical inequality", it's been a well thought-out strategy over time to rig the system, a strategy engaged in by politicians and capitalists. Shine some light on this issue by acquainting yourself with the facts.
Maharaja Brovinda -> Singh Jill Harrison 5 Dec 2015 19:42
We play out the prisoner's dilemma in life, in general, over and over in different circumstances, every day. And we always choose the dominant - rational - solution. But the best solution is not based on rationality, but rather on trust and faith in each other - rather ironically for our current, evidence based society!
Steven Palmer 5 Dec 2015 19:19
Like crack addicts the philanthropricks only seek to extend their individual glory, social image their primary goal, and yet given the context they will burn in history. Philanthroptits should at least offset the immeasurable damage they have done through their medieval wealth accumulation. Collaborative philanthropy for basic income is a good idea, but ye, masters tools.
BlairM -> Iconoclastick 5 Dec 2015 19:10
Well, to paraphrase Winston Churchill, capitalism is the worst possible economic system, except for all those other economic systems that have been tried from time to time.
I'd rather just have the freedom to earn money as I please, and if that means inequality, it's a small price to pay for not having some feudal lord or some party bureaucrat stomping on my humanity.
brusuz 5 Dec 2015 18:52
As long as wealth can be created by shuffling money from one place to another in the giant crap shoot we call our economy, nothing will change. Until something takes place to make it advantageous for the investor capitalists to put that money to work doing something that actually produces some benefit to the society as a whole, they will continue their extractive machinations. I see nothing on the horizon that is going to change any of that, and to cast this as some sort of a racial issue is quite superficial. We have all gotten the shaft, since there is no upward mobility available to anyone. Since the Bush crowd of neocons took power, we have all been shackled with "individual solutions to societal created problems."
Jimi Del Duca 5 Dec 2015 18:31
Friends, Capitalism is structural exploitation of ALL WORKERS. Thinking about it as solely a race issue is divisive. What we need is CLASS SOLIDARITY and ORGANIZATION. See iww.org. We are the fighting union with no use for capitalists!
slightlynumb -> AmyInNH 5 Dec 2015 18:04
You'd be better off reading Marx if you want to understand capitalism. I think you are ascribing the word to what you think it should be rather than what it is.
It is essentially a class structure rather than any defined economic system. Neoliberal is essentially laissez faire capitalism. It is designed to suborn nation states to corporate benefit.
AmyInNH -> tommydog
They make $40 a month. Working 7 days a week. At least 12 hour days. Who's fed you that "we're doing them a favor" BS? And I've news for you regarding "Those whose skills are less adaptable to doing so are seeing their earnings decline." We have many people who have 3 masters degrees making less than minimum wage. We have top notch STEM students shunned so corporations can hire captive/cheaper foreign labor, called H1-Bs, who then wait 10 years working for them waiting for their employment based green card. Or "visiting" students here on J1 visas, so the employers can get out of paying: social security, federal unemployment insurance, etc. Wake up and smell the coffee tommydog. They've more than a thumb on the scale.
seamanbodine
I am a socialist. I decided to read this piece to see if Mr. Thrasher could write about market savagery without propounding the fiction that whites are somehow exempt from the effects of it. No, he could not. I clicked on the link accompanying his assertion that whites who are high school dropouts earn more than blacks with college degrees, and I read the linked piece in full. The linked piece does not in fact compare income (i.e., yearly earnings) of white high school dropouts with those of black college graduates, but it does compare family wealth across racial cohorts (though not educational ones), and the gap there is indeed stark, with average white family wealth in the six figures (full disclosure, I am white, and my personal wealth is below zero, as I owe more in student loans than I own, so perhaps I am not really white, or I do not fully partake of "whiteness," or whatever), and average black family wealth in the four figures. The reason for this likely has a lot to do with home ownership disparities, which in turn are linked in significant part to racist redlining practices.
So white dropouts often live in homes their parents or grandparents bought, while many black college graduates whose parents were locked out of home ownership by institutional racism and, possibly, the withering of manufacturing jobs just as the northward migration was beginning to bear some economic fruit for black families, are still struggling to become homeowners. Thus, the higher average wealth for the dropout who lives in a family owned home. But this is not what Mr. Thrasher wrote. He specifically used the words "earn more," creating the impression that some white ignoramus is simply going to stumble his way into a higher salary than a cultivated, college educated black person. That is simply not the case, and the difference does matter. Why does it matter? Because I regularly see middle aged whites who are broken and homeless on the streets of the town where I live, and I know they are simply the tip of a growing mountain of privation. Yeah, go ahead, call it white tears if you want, but if you cannot see that millions (including, of course, not simply folks who are out and out homeless, but folks who are struggling to get enough to eat and routinely go without needed medication and medical care) of people who have "white privilege" are indeed oppressed by global capitalism then I would say that you are, at the end of the day, NO BETTER THAN THE WHITES YOU DISDAIN. If you have read this far, then you realize that I am in no way denying the reality of structural racism. But an account of economic savagery that entirely subsumes it into non-economic categories (race, gender, age), that refuses to acknowledge that blacks can be exploiters and whites can be exploited, is simply conservatism by other means. One gets the sense that if we have enough black millionaires and enough whites dying of things like a lack of medical care, then this might bring just a little bit of warmth to the hearts of people like Mr. Thrasher. 
Call it what you want, but don't call it progressive. Maybe it is historical karma. Which is understandable, as there is no reason why globally privileged blacks in places like the U.S. or Great Britain should bear the burden of being any more selfless or humane than globally privileged whites are or have been. The Steven Thrashers of humanity are certainly no worse than many of the whites they cannot seem to recognize as fully human are. But nor are they any better. JohnLG 5 Dec 2015 17:23 I agree that the term "income inequality" is so vague that it falls between useless and diversionary, but so too is most use of the word "capitalism", or so it seems to me. Typically missing is a penetrating analysis of where the problem lies, a comprehensibly supported remedy, or large-scale examples of anything except what's not working. "Income inequality" is pretty abstract until we look specifically at the consequences for individuals and society, and take a comprehensive look at all that is unequal. What does "capitalism" mean? Is capitalism the root of all this? Is capitalism any activity undertaken for profit, or substantial monopolization of markets and power? Power tends to corrupt. Money is a form of power, but there are others. The use of power to essentially cheat, oppress or kill others is corrupt, whether that power is in the form of a weapon, wealth, the powers of the state, or all of the above. Power is seductive and addictive. Even those with good intentions can be corrupted by an excess of power and insufficient accountability, while predators are drawn to power like sharks to blood. Democracy involves dispersion of power, ideally throughout a whole society. A constitutional democracy may offer protection even to minorities against a "tyranny of the majority" so long as a love of justice prevails.
Selective "liberty and justice" is not liberty and justice at all, but rather a tyranny of the many against the few, as in racism, or of the few against the many, as by despots. Both forms reinforce each other in the same society, both are corrupt, and any "ism" can be corrupted by narcissism. To what degree is any society a shining example of government of, for, and by the people, and to what degree can one discover empirical evidence of corruption? What do we do about it? AmyInNH -> CaptainGrey 5 Dec 2015 17:15 You're too funny. It's not "lifting billions out of poverty". It's moving malicious manufacturing practices to the other side of the planet. To the lands of no labor laws. To hide it from consumers. To hide profits. And it is dying. Legislatively they choke off their natural competition, which is an essential element of capitalism. Monopoly isn't capitalism. And when they bribe legislators, we don't have democracy any more either. Jeremiah2000 -> Teresa Trujillo 5 Dec 2015 16:53 Stocks have always been "a legal form of gambling". What is happening now however, is that a pair of treys can beat out your straight flush. Companies that have never turned a profit fetch huge prices on the stock market. The stock market suckered millions in before 2008 and then prices plummeted. Where did the money from grandpa's pension fund go? Gary Reber 5 Dec 2015 16:45 Abraham Lincoln said that the purpose of government is to do for people what they cannot do for themselves. Government also should serve to keep people from hurting themselves and to restrain man's greed, which otherwise cannot be self-controlled. Anyone who seeks to own productive power that they cannot or won't use for consumption are beggaring their neighbor––the equivalency of mass murder––the impact of concentrated capital ownership. The words "OWN" and "ASSETS" are the key descriptors of the definition of wealth. 
But these words are not well understood by the vast majority of Americans or for that matter, global citizens. They are limited to the vocabulary used by the wealthy ownership class and financial publications, which are not widely read, and not even taught in our colleges and universities. The wealthy ownership class did not become wealthy because they are "three times as smart." Still there is a valid argument that the vast majority of Americans do not pay particular attention to the financial world and educate themselves on wealth building within the current system's limited past-savings paradigm. Significantly, the wealthy OWNERSHIP class use their political power (power always follows property OWNERSHIP) to write the system rules to benefit and enhance their wealth. As such they have benefited from forging trade policy agreements which further concentrate OWNERSHIP on a global scale, military-industrial complex subsidies and government contracts, tax code provisions and loopholes and collective-bargaining rules – policy changes they've used their wealth to champion. Gary Reber 5 Dec 2015 16:44 Unfortunately, when it comes to recommendations for solutions to economic inequality, virtually every commentator, politician and economist is stuck in viewing the world in one factor terms – human labor, in spite of their implied understanding that the rich are rich because they OWN the non-human means of production – physical capital. The proposed variety of wealth-building programs, like "universal savings accounts that might be subsidized for low-income savers," are not practical solutions because they rely on savings (a denial of consumption which lessens demand in the economy), which the vast majority of Americans do not have, and for those who can save their savings are modest and insignificant. 
Though, millions of Americans own diluted stock value through the "stock market exchanges," purchased with their earnings as labor workers (savings), their stock holdings are relatively minuscule, as are their dividend payments compared to the top 10 percent of capital owners. Pew Research found that 53 percent of Americans own no stock at all, and out of the 47 percent who do, the richest 5 percent own two-thirds of that stock. And only 10 percent of Americans have pensions, so stock market gains or losses don't affect the incomes of most retirees. As for taxpayer-supported saving subsidies or other wage-boosting measures, those who have only their labor power and its precarious value held up by coercive rigging and who desperately need capital ownership to enable them to be capital workers (their productive assets applied in the economy) as well as labor workers to have a way to earn more income, cannot satisfy their unsatisfied needs and wants and sufficiently provide for themselves and their families. With only access to labor wages, the 99 percenters will continue, in desperation, to demand more and more pay for the same or less work, as their input is exponentially replaced by productive capital. As such, the vast majority of American consumers will continue to be strapped to mounting consumer debt bills, stagnant wages and inflationary price pressures. As their ONLY source of income is through wage employment, economic insecurity for the 99 percent majority of people means they cannot survive more than a week or two without a paycheck. Thus, the production side of the economy is under-nourished and hobbled as a result, because there are fewer and fewer "customers with money." We thus need to free economic growth from the slavery of past savings. 
I mentioned that political power follows property OWNERSHIP because with concentrated capital asset OWNERSHIP our elected representatives are far too often bought with the expectation that they protect and enhance the interests of the wealthiest Americans, the OWNERSHIP class they too overwhelmingly belong to. Many, including the author of this article, have concluded that with such a concentrated OWNERSHIP stronghold the wealthy have on our politics, "it's hard to see where this cycle ends." The ONLY way to reverse this cycle and broaden capital asset OWNERSHIP universally is a political revolution. (Bernie Sanders, are you listening?) The political revolution must address the problem of lack of demand. To create demand, the FUTURE economy must be financed in ways that create new capital OWNERS, who will benefit from the full earnings of the FUTURE productive capability of the American economy, and without taking from those who already OWN. This means significantly slowing the further concentration of capital asset wealth among those who are already wealthy and ensuring that the system is reformed to promote inclusive prosperity, inclusive opportunity, and inclusive economic justice. yamialwaysright 5 Dec 2015 16:13 I was interested and in agreement until I read about structured racism. Many black kids in the US grow up without a father in the house. They turn to anti-social behaviour and crime. Once you are poor it is hard to get out of being poor but Journalists are not doing justice to a critique of US Society if they ignore the fact that some people behave in a self-destructive way. I would imagine that if some black men in the US and the UK stuck with one woman and played a positive role in the life of their kids, those kids would have a better chance at life. People of different racial and ethnic origin do this also but there does seem to be a disproportionate problem with some black US men and some black UK men.
Poverty is one problem but growing up in poverty and without a father figure adds to the problem. What the author writes applies to other countries not just the US in relation to the super wealthy being a small proportion of the population yet having the same wealth as a high percentage of the population. This is not a black or latino issue but a wealth distribution issue that affects everyone irrespective of race or ethnic origin. The top 1%, 5% or 10% having most of the wealth is well-known in many countries. nuthermerican4u 5 Dec 2015 15:59 Capitalism, especially the current vulture capitalism, is dog eat dog. Always was, always will be. My advice is that if you are a capitalist that values your heirs, invest in getting off this soon-to-be slag heap and find other planets to pillage and rape. Either go all out for capitalism or rein in this beast before it kills all of us. soundofthesuburbs 5 Dec 2015 15:32 Our antiquated class structure demonstrates the trickle up of Capitalism and the need to counterbalance it with progressive taxation. In the 1960s/1970s we used high taxes on the wealthy to counterbalance the trickle up of Capitalism and achieved much greater equality. Today we have low taxes on the wealthy and Capitalism's trickle up is widening the inequality gap. We are cutting benefits for the disabled, poor and elderly so inequality can get wider and the idle rich can remain idle. They have issued enough propaganda to make people think it's those at the bottom that don't work. Every society since the dawn of civilization has had a Leisure Class at the top, in the UK we call them the Aristocracy and they have been doing nothing for centuries. The UK's aristocracy has seen social systems come and go, but they all provide a life of luxury and leisure and with someone else doing all the work.
Feudalism - exploit the masses through land ownership Capitalism - exploit the masses through wealth (Capital) Today this is done through the parasitic, rentier trickle up of Capitalism: a) Those with excess capital invest it and collect interest, dividends and rent. b) Those with insufficient capital borrow money and pay interest and rent. The system itself provides for the idle rich and always has done from the first civilisations right up to the 21st Century. The rich taking from the poor is always built into the system, taxes and benefits are the counterbalance that needs to be applied externally. Iconoclastick 5 Dec 2015 15:31 I often chuckle when I read some of the right wing comments on articles such as this. Firstly, I question if readers actually read the article references I've highlighted, before rushing to comment. Secondly, the comments are generated by cifers who probably haven't set the world alight, haven't made a difference in their local community, they'll have never created thousands of jobs in order to reward themselves with huge dividends and, as a consequence, enjoyed spectacular asset/investment growth; at best they'll be chugging along, just about keeping their shit together, and yet they support a system that's broken, other than for the one percent of the one percent. A new report from the Institute for Policy Studies issued this week analyzed the Forbes list of the 400 richest Americans and found that "the wealthiest 100 households now own about as much wealth as the entire African American population in the United States". That means that 100 families – most of whom are white – have as much wealth as the 41,000,000 black folks walking around the country (and the million or so locked up) combined. Similarly, the report also stated that "the wealthiest 186 members of the Forbes 400 own as much wealth as the entire Latino population" of the nation.
Here again, the breakdown into actual humans is stark: 186 overwhelmingly white folks have more money than an astounding 55,000,000 Latino people. "Family wealth" predicts outcomes for 10 to 15 generations. Those with extreme wealth owe it to events going back "300 to 450" years ago, according to research published by the New Republic – an era when it wasn't unusual for white Americans to benefit from an economy dependent upon widespread, unpaid black labor in the form of slavery. soundofthesuburbs -> soundofthesuburbs 5 Dec 2015 15:26 It is the 21st Century and most of the land in the UK is still owned by the descendants of feudal warlords that killed people and stole their land and wealth. When there is no land to build houses for generation rent, land ownership becomes an issue. David Cameron is married into the aristocracy and George Osborne is a member of the aristocracy, they must both be well acquainted with the Leisure Class. I can't find any hard work going on looking at the Wikipedia page for David Cameron's father-in-law. His family have been on their estate since the sixteenth century and judging by today's thinking, expect to be on it until the end of time. George Osborne's aristocratic pedigree goes back to the Tudor era: "he is an aristocrat with a pedigree stretching back to early in the Tudor era. His father, Sir Peter Osborne, is the 17th holder of a hereditary baronetcy that has been passed from father to son for 10 generations, and of which George is next in line." soundofthesuburbs 5 Dec 2015 15:24 The working and middle classes toil to keep the upper class in luxury and leisure. In the UK nothing has changed. We call our Leisure Class the Aristocracy. For the first time in five millennia of human civilisation some people at the bottom of society aren't working. We can't have that; idleness is only for the rich. It's the way it's always been and the way it must be again.
Did you think the upper, leisure-class social calendar disappeared in the 19th Century? No, it's alive and kicking in the 21st Century .... Peer into the lives of today's Leisure Class with Tatler. http://www.tatler.com/the-season If we have people at the bottom who are not working the whole of civilisation will be turned on its head. "The modern industrial society developed from the barbarian tribal society, which featured a leisure class supported by subordinated working classes employed in economically productive occupations. The leisure class is composed of people exempted from manual work and from practicing economically productive occupations, because they belong to the leisure class." The Theory of the Leisure Class: An Economic Study of Institutions, by Thorstein Veblen. It was written a long time ago but much of it is as true today as it was then. The Wikipedia entry gives a good insight. DBChas 5 Dec 2015 15:13 "income inequality" is best viewed as structural capitalism. It's not as if, did black and brown people and female people somehow (miraculously) attain the economic status of the lower-paid, white, male person, the problem would be solved--simply by adjusting pay scales. The problem is inherent to capitalism, which doesn't mean certain "types" of people aren't more disadvantaged for their "type." No one is saying that. For capitalists, it's easier to rationalize the obscene unfairness (only rich people say, "life's not fair") when their "type" is regarded as superior to a different "type," whether that be with respect to color or gender or both. Over time--a long time--the dominant party (white males since the Dark Ages, also the life-span of capitalism coincidentally enough) came to dominance by various means, too many to try to list, or even know of. Why white males? BTW, just because most in power and in money are white males does not mean ALL white males are in positions of power and wealth. Most are not, and these facts help to fog the issue.
Indeed, "income inequality," is not an accident, nor can it be fixed, as the author notes, by tweaking (presumably he means capitalism). And he's quite right too in saying, "You can't take down the master's house with the master's tools..." I take that ALSO to mean, the problem can't be fixed by way of what Hedges has called a collapsing liberal establishment with its various institutions, officially speaking. That is, it's not institutional racism that's collapsing, but that institution is not officially recognized as such. HOWEVER, it IS possible, even when burdened with an economics that is capitalism, to redistribute wealth, and I don't just mean Mark Zuckerberg's. I mean all wealth in whatever form can be redistributed if/when government decides it can. And THIS TIME, unlike the 1950s-60s, not only would taxes on the wealthy be the same as then but the wealth redistributed would be redistributed to ALL, not just to white families, and perhaps in particular to red families, the oft forgotten ones. This is a matter of political will. But, of course, if that means whites as the largest voting block insist on electing to office those without the political will, nothing will change. In that case, other means have to be considered, and just a reminder: If the government fails to serve the people, the Constitution gives to the people the right to depose that government. But again, if whites as the largest voting block AND as the largest sub-group in the nation (and women are the largest part of that block, often voting as their men vote--just the facts, please, however unpleasant) have little interest in seeing to making necessary changes at least in voting booths, then...what? Bolshevism or what? No one seems to know and it's practically taboo even to talk about possibilities. Americans did it once, but not inclusively and not even paid in many instances. When it happens again, it has to happen with and for the participation of ALL. 
And it's worth noting that it will have to happen again, because capitalism by its very nature cannot survive itself. That is, as Marx rightly noted, capitalism will eventually collapse by dint of its internal contradictions. mbidding -> Jeremiah2000 5 Dec 2015 15:08 Correction: The average person in poverty in the U.S. does not live in the same abject, third world poverty as you might find in Honduras, Central African Republic, Cambodia, or the barrios of Sao Paulo. Since our poor don't live in abject poverty, I invite you to live as a family of four on less than $11,000 a year anywhere in the United States. If you qualify and can obtain subsidized housing you may have some of the accoutrements in your home that you seem to equate with living the high life. You know, running water, a fridge, a toilet, a stove. You would also likely have a phone (subsidized at that) so you might be able to participate (or attempt to participate) in the job market in an honest attempt to better your family's economic prospects and as is required to qualify for most assistance programs.
Consider as well that you don't have transportation to get a job that would improve your circumstances. You earn too much to qualify for meaningful levels of food support programs and fall into the insurance gap for subsidies because you live in a state that for ideological reasons refuses to expand Medicaid coverage. Your local schools are a disgrace but you can't take advantage of so-called school choice programs (vouchers, charters, and the like) as you don't have transportation or the time (given your employer's refusal to set fixed working hours for minimum wage part time work) to get your kids to that fine choice school.
You may have a fridge and a stove, but you have no food to cook. You may have access to running water and electricity, but you can't afford to pay the bills for such on account of having to choose between putting food in that fridge or flushing that toilet. You can't be there reliably for your kids to help with school, etc, because you work constantly shifting hours for crap pay.
Get back to me after six months to a year after living in such circumstances and then tell me again how Americans don't really live in poverty simply because they have access to appliances.
Earl Shelton 5 Dec 2015 15:08
The Earned Income Tax Credit seems to me a good starting point for reform. It has been around since the 70s -- conceived by Nixon/Moynihan -- and signed by socialist (kidding) Gerald Ford -- it already *redistributes* income (don't choke on the term, O'Reilly) directly from tax revenue (which is still largely progressive) to the working poor, with kids.
That program should be massively expanded to tax the 1% -- and especially the top 1/10 of 1% (including a wealth tax) -- and distribute the money to the bottom half of society, mostly in the form of work training, child care and other things that help put them in and keep them in the middle class. It is a mechanism already in existence to correct the worst ravages of Capitalism. Use it to build shared prosperity.
oKWJNRo 5 Dec 2015 14:40
So many dutiful neoliberals on here rushing to the defense of poor Capitalism. Clearly, these commentators are among those who are in the privileged position of reaping the true benefits of Capitalism - And, of course, there are many benefits to reap if you are lucky enough to be born into the right racial-socioeconomic context.
We can probably all agree that Capitalism has brought about widespread improvements in healthcare, education, living conditions, for example, compared to the feudal system that preceded it... But it also disproportionately benefits the upper echelons of Capitalist societies and is wholly unequal by design.
Capitalism depends upon the existence of a large underclass that can be exploited. This is part of the process of how surplus value is created and wealth is extracted from labour. This much is indisputable. It is therefore obvious that capitalism isn't an ideal system for most of us living on this planet.
As for the improvements in healthcare, education, living conditions etc that Capitalism has fostered... Most of these were won through long struggles against the Capitalist hegemony by the masses. We would have certainly chosen to make these improvements to our landscape sooner if Capitalism hadn't made every effort to stop us. The problem today is that Capitalism and its powerful beneficiaries have successfully convinced us that there is no possible alternative. It won't give us the chance to try or even permit us to believe there could be another, better way.
Martin Joseph -> realdoge 5 Dec 2015 14:33
Please walk us through how non-capitalist systems create wealth and allow their lowest-class people to propel themselves to the top in one generation. You will note that most socialist systems derive their technology and advancements from the more capitalistic systems. Pharmaceuticals, software, and robotics are great examples of this.
I shudder to think of what the welfare of the average citizen of the world would be like without the advancements made via the capitalist countries.
VWFeature 5 Dec 2015 14:29
Markets, economies and tax systems are created by people, and based on rules they agree on. Those rules can favor general prosperity or concentration of wealth. Destruction and predation are easier than creation and cooperation, so our rules have to favor cooperation if we want to avoid predation and destructive conflicts.
In the 1930's the US changed many of those rules to favor general prosperity. Since then they've been gradually changed to favor wealth concentration and predation. They can be changed back.
The trick is creating a system that encourages innovation while putting a safety net under the population so failure doesn't end in starvation.
A large part of our current problems is the natural tendency for large companies to get larger and larger until their failure would adversely affect too many others, so they're not allowed to fail. Tax law, not antitrust law, has to work against this. If a company can reduce its tax rate by breaking into 20 smaller (still huge) companies, then competition is preserved and no one company can dominate and control markets.
Robert Goldschmidt -> Jake321 5 Dec 2015 14:27
Bernie Sanders has it right on -- we can only heal our system by first having millions rise up and demand an end to the corruption of the corporations controlling our elected representatives. Corporations are not people and money is not speech.
moonwrap02 5 Dec 2015 14:26
The effects of wealth distribution have far-reaching consequences. It is not just about money, but about creating a fair society - one that is co-operative and cohesive. The present system has allowed an ever-widening divide between the rich and poor, creating a two-tier society where never the twain shall meet. The rich and poor are almost different species on the planet and no longer belong to the same community. Commonality of interest is lost, and so it's difficult to form community and to have good, friendly relationships across class differences that are that large.
"If capitalism is to be seen to be fair, the same rules are to apply to the big guy as to the little guy,"
Jeremiah2000 -> bifess 5 Dec 2015 14:17
Sorry. I get it now. You actually think that because the Washington elite has repealed Glass-Steagall, we live in an unregulated capitalistic system.
This is so far from the truth that I wasn't comprehending that anyone could think that. You can see the graph of pages published in the Federal Register here. Unregulated capitalism? Wow.
Dodd-Frank was passed in 2010 (without a single Republican vote). Originally it was 2,300 pages. It is STILL being written by nameless bureaucrats and is over 20,000 pages. Unregulated capitalism? Really?
But the reality is that Goliath is conspiring with the government to regulate what size sling David can use and how many stones and how many ounces.
So we need more government regulations? They will disallow David from anything but spitwads and only two of those.
neuronmaker -> AmyInNH 5 Dec 2015 14:16
Do you understand the concept of corporations which are products of capitalism?
The legal institutions within capitalist corporations and nations are just that: they are capitalist and all about making profits.
The law is made by the rich capitalists and for the rich capitalists. Each piece of legislation is a link in the chain of economic slavery by capitalists.
Capitalism and the concept of money is a construction of the human mind, as it does not exist in the natural world. This construction is all about using other human beings like blood suckers to sustain a cruel and evil life style - with blood and brutality as the core ideology.
Marcedward -> MarjaE 5 Dec 2015 14:12
I would agree that our system of help for the less-well-off could be more accessible and more generous, but that doesn't negate the point that there is a lot of help out there - the most important help being the totally free educational system. Think about it: a free education, and to get the most out of it a student merely has to show up, obey the rules, do the homework and study for tests. It's all laid out there for the kids like a helicopter mom laying out her kids' clothes. How much easier can we make it? If people can't be bothered to show up and put in effort, how is their failure based on racism?
tommydog -> martinusher 5 Dec 2015 14:12
As you are referring to Carlos Slim, interestingly while he is Mexican by birth his parents were both Lebanese.
slightlynumb -> AmyInNH 5 Dec 2015 14:12
Why isn't that capitalism? It's raw capitalism on steroids.
Zara Von Fritz -> Toughspike 5 Dec 2015 14:12
It's an equal opportunity plantation now.
Robert Goldschmidt 5 Dec 2015 14:11
The key to repairing the system is to identify the causes of our problems.
Here is my list:
The information technology revolution which continues to destroy wages by enabling automation and outsourcing.
The reformation of monopolies which price gouge and block innovation.
Hitting ecological limits such as climate change, water shortages, unsustainable farming.
Then we can make meaningful changes such as regulation of the portion of corporate profit that are pay, enforcement of national and regional antitrust laws and an escalating carbon tax.
Zara Von Fritz -> PostCorbyn 5 Dec 2015 14:11
If you can believe these quality-of-life or happiness indexes they put out so often, the winners tend to be places that have nice environments and a higher socialist mix in their economy. Of course there are examples of poor countries that practice the same, but it's not clear that their choice is causal rather than reactive.
We created this mess and we can fix it.
Zara Von Fritz -> dig4victory 5 Dec 2015 14:03
Yes, Basic Income is possibly the mythical third way. It socialises wealth to a point but at the same time frees markets from their obligation to perpetually grow and create jobs for the sake of jobs, and also therefore reduces the subsequent need for governments to attempt to control them beyond maintaining their health.
Zara Von Fritz 5 Dec 2015 13:48
As I understand it, you don't just fiddle with capitalism, you counteract it, or counterweight it. A level of capitalism, or credit accumulation, and a level of socialism has always existed, including democracy which is a manifestation of socialism (1 vote each). So the project of capital accumulation seems to be out of control because larger accumulations become more powerful and meanwhile the power of labour in the marketplace has become less so due to forces driving unemployment. The danger is that capital's power to control the democratic system reaches a point of no return.
Jeremiah2000 -> bifess 5 Dec 2015 13:42
"I do not have the economic freedom to grow my own food because i do not have access to enough land to grow it and i do not have the economic clout to buy a piece of land."
Economic freedom does NOT mean you get money for free. It means that if you grow food for personal use, the federal government doesn't trash the Constitution by using the interstate commerce clause to say that it can regulate how much you grow on your own personal land.
Economic freedom means that if you have a widget, you can choose to set the price at $10 or $100, and that a buyer is free to buy it from you or not buy it from you. It does NOT mean that you are entitled to "free" widgets.
"If capitalism has not managed to eradicate poverty in rich first world countries then just what chance is there of capitalism eradicating poverty on a global scale?"
The average person in poverty in the U.S. doesn't live in poverty:
In fact, 80.9 percent of households below the poverty level have cell phones, and a healthy majority (58.2 percent) have computers.
Fully 96.1 percent of American households in "poverty" have a television to watch, and 83.2 percent of them have a video-recording device in case they cannot get home in time to watch the football game or their favorite television show and they want to record it for watching later.
Refrigerators (97.8 percent), gas or electric stoves (96.6 percent) and microwaves (93.2 percent) are standard equipment in the homes of Americans in "poverty."
More than 83 percent have air-conditioning.
Interestingly, the appliances surveyed by the Census Bureau that households in poverty are least likely to own are dish washers (44.9 percent) and food freezers (26.2 percent).
However, most Americans in "poverty" do not need to go to a laundromat. According to the Census Bureau, 68.7 percent of households in poverty have a clothes washer and 65.3 percent have a clothes dryer.
(Data from the U.S. census.)
#### [Dec 23, 2018] Trump proposes cutting food stamps for over 700,000 people just before Christmas by Matthew Rozsa
###### Dec 20, 2018 | www.salon.com
President Donald Trump is planning on using his executive powers to cut food stamps for more than 700,000 Americans.
The United States Department of Agriculture is proposing that states should only be allowed to waive a current food stamps requirement -- namely, that adults without dependents must work or participate in a job-training program for at least 20 hours each week if they wish to collect food stamps for more than three months in a three-year period -- on the condition that those adults live in areas where unemployment is above 7 percent, according to The Washington Post. Currently, USDA regulations permit states to waive that requirement if an adult lives in an area where the unemployment rate is at least 20 percent greater than the national rate. In effect, this means that roughly 755,000 Americans would potentially lose the waivers that permit them to receive food stamps.
The current unemployment rate is 3.7 percent.
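The two waiver rules can be compared directly. The sketch below is illustrative only (the function names and the sample local rates are made up; the 7 percent floor, the 20-percent-above-national rule, and the 3.7 percent national rate come from the article):

```python
def waiver_allowed_current(local_rate, national_rate):
    """Current rule: a state may waive the work requirement for an area
    whose unemployment rate is at least 20 percent greater than the
    national rate."""
    return local_rate >= 1.2 * national_rate

def waiver_allowed_proposed(local_rate):
    """Proposed rule: waivers only for areas where unemployment
    is above 7 percent."""
    return local_rate > 7.0

national = 3.7  # current national unemployment rate, in percent

# A hypothetical area at 5 percent unemployment qualifies today
# (5 >= 1.2 * 3.7 = 4.44) but would lose its waiver under the
# proposed 7 percent floor.
print(waiver_allowed_current(5.0, national))   # True
print(waiver_allowed_proposed(5.0))            # False
```

With the national rate at 3.7 percent, the current threshold works out to about 4.4 percent, so raising the bar to a flat 7 percent is what shrinks the pool of eligible areas.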
The Trump administration's decision to impose the stricter food stamp requirements through executive action constitutes an end-run around the legislative process. Although Trump is expected to sign an $870 billion farm bill later this week -- and because food stamps go through the Agriculture Department, it contains food stamp provisions -- the measure does not include the House stipulations restricting the waiver program and imposing new requirements on parents with children between the ages of six and 12. The Senate version ultimately removed those provisions, meaning that the version being signed into law does not impose the conservative food stamp policy that right-wing members of Congress were hoping for.

"Congress writes laws, and the administration is required to write rules based on the law," Sen. Debbie Stabenow, D-Mich., the top Democrat on the Senate's agriculture committee, told The New York Times. "Administrative changes should not be driven by ideology. I do not support unilateral and unjustified changes that would take food away from families."

Matthew Rozsa is a breaking news writer for Salon. He holds an MA in History from Rutgers University-Newark and is ABD in his PhD program in History at Lehigh University. His work has appeared in Mic, Quartz and MSNBC.

#### [Dec 17, 2018] Without the USSR as a countervailing force, the level of inequality in Western societies will always rise to the level at which riots start, and then fluctuate around that level

###### Dec 17, 2018 | discussion.theguardian.com

AmyInNH -> Riever , 23 Aug 2016 10:00

The swing between extremes, however, is consistent in US history: an economic, predatory dependence on free or ultra-cheap labor with no legal rights. The current instantiation is offshored, illegal and "temporary" immigrant labor. Note that neither party in the US is proposing "immigration reform" in which a green card comes with the hire.
Ds merely propose a green card for time served, for those over X number of years donated as captive/cheap labor. Those entitled to cheap/captive labor now want it in law, in national laws and trade agreements. All privilege, no responsibilities, including taxes. It doesn't scale: 1929 says so, 2008 says so.

CivilDiscussion , 23 Aug 2016 10:25

Liberals, the Left, Progressives -- whatever you want to call them -- suffer from a basic problem. They don't work together and have no common goals. As the article stated, they complain but offer no real solutions that they can agree on. Should we emphasize gay pride, or should we emphasize good-paying jobs with good social welfare benefits? Until they can agree at least on priorities, they will never reform the current corrupt system -- it is too entrenched. Even if the capitalist monstrosity we have now self-destructs, as the writer indicates, nothing good will replace it until the Left gets its act together.

AmyInNH -> Juillette , 23 Aug 2016 10:16

"Lesser of two evils" needs to go on the burn pile. The incumbent Congress needs a turnover. Not showing up to vote is not okay. If people can't think of someone they want to write in, "none of the above" is a protest vote. Not voting is silence, which equals consent. In local elections, beat back Koch/ALEC, hiding on ballots as "Libertarian". "Privatize everything" is their mantra, so they can further profit via inescapable taxes, while gutting "regulation" -- safety and market integrity -- with no accountability. Corporation 101: limited liability. Which means we are left holding the bag. As in bailout: $125 billion in 1990, up to $7.7 trillion in 2008.

Dave_P -> Isiodore , 23 Aug 2016 09:59

Anything the Economist presents as the overriding choice is probably best relegated to one factor among many. I respect Milanovic's work, but he's seeing things from where we are now.
Remember we've seen populist surges come and go, from the witch-burnings and religious panics of the 17th century to 1890s Bryanism and the 1930s far right, and each time they've yielded to a more articulate vision, though the last time it cost sixty million dead - not something we want to see repeated. This time it's hard because dissent still clings to a "post-ideological" delusion that those on top never succumbed to. But change will come as what I'd term "post-rational" alternatives fail to deliver. Let's hope it's sooner rather than later.

willpodmore , 23 Aug 2016 09:53

"Brexit, too, was primarily a working-class revolt." Thank you Martin, at least someone writing in the Guardian has got the point! We voted against the EU's unelected European Central Bank, its unelected European Commission, its European Court of Justice, its Common Agricultural Policy and its Common Fisheries Policy. We voted against the EU's treaty-enshrined 'austerity' (= depression) policies, which have impoverished Greece, Spain, Portugal and Italy. We voted against the EU/US Transatlantic Trade and Investment Partnership, which would privatise all our public services, which threatens all our rights, and which discriminates against the countries of Africa, Asia and Latin America. We voted against the EU's tariffs against African farmers' cheaper produce. We opposed the City of London Corporation, the Institute of Directors, the CBI, the IMF, Citigroup, Goldman Sachs, JP Morgan and Morgan Stanley, which all wanted us to stay in the EU. We voted against the EU's undemocratic trilogue procedure and its pro-austerity Semester programme. We voted to leave this undemocratic, privatisation-enforcing, austerity-enforcing body.

AmyInNH -> ciaofornow , 23 Aug 2016 10:39

The bailout happened because it was public savings, pensions, 401ks, etc. that the banks were playing with, and lost. The bailout is billing all of us for it.
Bad, letting the banks/financial "services" not only survive but continue the exact same practices. Bailout: $7.2 to $7.7 trillion. Current derivative holdings: $500 trillion.
Not just moral hazard but economic hazard, when capitalism's basic rule is broken: allow bad businesses to die of their own accord. The subversion is currently called "too big to fail", rather than telling the public "we lost all your savings, pensions, ...".
AmyInNH -> Dave_P , 23 Aug 2016 09:40
Relocating poverty from the East into the West isn't improvement.
Creating sweatshops in the East isn't raising their standard of living.
Creating economies so economically unstable that population declines isn't improvement.
Trying to bury that fact with immigration isn't improvement.
Configuring all of the above for record profit for the benefit of a tiny percentage of the population isn't improvement.
Gaming tax law to avoid paying into/for extensive business use of federal services and tax base isn't improvement.
Game over. Time for a reboot.
marxistelf -> Tobyrob , 23 Aug 2016 09:24
I am glad you finally concede a point on neo-liberalism. The moral hazard argument is extremely poor and typical, in this era of runaway CEO pay, of a tendency to substitute self-help fables (a la "The Monk Who Sold His Ferrari") and pop psychology (a la moral hazard) for credible economic analysis.
The economic crisis is rooted in the profit motive, just as capitalist economic growth is. Lowering of tariff barriers, outsourcing, changes in value capture (added value), and new financial instruments were attempts to restore the falling rate of profit. They did for a while, but, as always happens with capitalism, the seeds of the new crisis were in the solution to the old.
And all the while the state continues growing in an attempt to keep capitalism afloat. Neoliberalism (or should I say the "small state") failed, and here is the graph to prove it:
http://www.usgovernmentspending.com/include/usgs_chartSp03t.png
Homer32 , 23 Aug 2016 07:32
Interesting, and I believe accurate, analysis of the economic and political forces afoot. However, it is ludicrous to state that Donald Trump, who is a serial corporatist, outsourcer, tax avoider and scam artist, actually believes any of those populist principles that you ascribe so firmly to him. The best and safest outcome of our election, in my opinion, would be a Clinton administration tempered by influences from the populist wings of both parties.
Juillette , 23 Aug 2016 06:42
Great article; however, the elite globalists are in complete denial in the US. Our only choice is to vote them out of power, because they are owned by Wall Street. Both Bernie and Trump supporters should unite to vote the establishment out of Washington.
Dave_P -> ShaunNewman , 23 Aug 2016 06:38
The opiate of the masses. As the churches empty, the stadiums fill.
Dave_P -> ciaofornow , 23 Aug 2016 06:36
There were similar observations in the immediate aftermath of 2008, and doubtless before. Many of us thought the crisis would trigger a rethink of the whole direction of the previous three decades, but instead we got austerity and a further lurch to the right, or at best Obama-style stimulus and modest tweaks which were better than the former but still rather missed the point. I still find it flabbergasting and depressing, but on reflection the 1930s should have been a warning of not just the economic hazards but also the political fallout, at least in Europe. The difference was that this time left ideology had all but vacated the field in the 1980s and was in no position to lead a fightback: all we can hope for is better late than never.
idontreadtheguardian -> thisisafact , 23 Aug 2016 05:16
Yes it is, it's an extremely bad thing destroying the fabric of society. Social science has documented that even the better off are more happy, satisfied with life and feel safer in societies (i.e. the Scandinavian) where there is a relatively high degree of economic equality. Yes, economic inequality is a BAD thing in itself.
Oh, give me a break. Social science will document anything it can publish, no matter how spurious. If Scandinavia is so great, why are they such pissheads? There has always been inequality, including in workers' paradises like the Soviet Union and Communist China. Inequality is what got us where we are today, through natural selection. Phenotype is largely dependent on genotype, so why shouldn't we pass on material wealth as well as our genes? Surely it is a parent's right to afford their offspring advantages if they can do so?
SaulGe -> John Black , 23 Aug 2016 03:30
Have you got any numbers, or references for your allegations? I say the average or median wealth, opportunity, economic circumstances and health measures are substantially better than a generation (let's say 30 years) ago.

Here's this year's data. Note the top 25 or so are almost all liberal Western-type democracies with mixed economies: http://www.numbeo.com/cost-of-living/country_price_rankings?itemId=105

And here is the graph showing growth in wages, which, while it slowed for a variety of complex reasons, has been overall strong for 25 of the last 30 years: http://www.rba.gov.au/publications/bulletin/2015/jun/pdf/bu-0615-2.pdf

Again, I don't think our system is perfect. I don't deny that some in our societies struggle and don't benefit, particularly the poorly educated, disabled, mentally ill and drug-addicted. I actually agree that we could better target our social redistribution from those that have to those that need help. I disagree that we need higher taxes, protectionism, socialism, more public servants, more legislation. Indeed, I disagree with the proposition that other systems are better.
shastakath -> TimWorstall , 23 Aug 2016 03:17
George Orwell said, in the 30s, that the price of social justice would include a lowering of living standards for the working- & middle-classes, at least temporarily, so I follow your line of thought. However, the outrageous tilt toward the upper .1% has no "adjustment" fluff to shield it from the harsh despotism it represents. So, do put that in your statistical pipe and smoke it.
#### [Dec 16, 2018] Palace of Ashes China and the Decline of American Higher Education by Mark S. Ferrara
##### "... Educational institutions should not be seen as a profit making enterprise, education should be attainable to all without the fear of untenable costs. ..."
###### Dec 16, 2018 | www.amazon.com
A very scholarly and educational read, well researched and documented. It is very in-depth, perhaps not for the light-hearted, but I learned quite a bit about education philosophies worldwide, their origins, how they affect current thoughts and practices, etc. And how United States higher educational institutions have gotten to where they are today: money-printing machines with unsustainable growth, and costs pushed onto those just seeking to better themselves.

I see this in young people all around: 25-35 year olds saddled with $50-100k in debt defining every action and option they have (or don't!). Not everyone gets themselves into this bind, and people make poor decisions, but our higher educational institutions readily promote themselves without ample warning, and the result is what's rumored to be a $1 trillion student loan debt bubble. This isn't sustainable.

My years in overseas schools took place long ago; I can't testify to nor draw direct comparisons with the situation we face today. But I can say that, with three young kids approaching college age, we remain highly concerned, even terrified, about what the costs will mean for our kids' futures.
Educational institutions should not be seen as a profit making enterprise, education should be attainable to all without the fear of untenable costs.
This is a good read, recommended.
#### [Dec 14, 2018] 10 of the best pieces of IT advice I ever heard
###### Dec 14, 2018 | www.techrepublic.com
1. Learn to say "no"
If you're new to the career, chances are you'll be saying "yes" to everything. However, as you gain experience and put in your time, the word "no" needs to creep into your vocabulary. Otherwise, you'll be exploited.
Of course, you have to use this word with caution. Should the CTO approach and set a task before you, the "no" response might not be your best choice. But if you find end users-and friends-taking advantage of the word "yes," you'll wind up frustrated and exhausted at the end of the day.
2. Be done at the end of the day
I used to have a ritual at the end of every day. I would take off my watch and, at that point, I was done... no more work. That simple routine saved my sanity more often than not. I highly suggest you develop the means to inform yourself that, at some point, you are done for the day. Do not be that person who is willing to work through the evening and into the night... or you'll always be that person.
3. Don't beat yourself up over mistakes made
You are going to make mistakes. Some will be simple and can be quickly repaired. Others may lean toward the catastrophic. But when you finally call your IT career done, you will have made plenty of mistakes. Beating yourself up over them will prevent you from moving forward. Instead of berating yourself, learn from the mistakes so you don't repeat them.
4. Always have something nice to say
You work with others on a daily basis. Too many times I've watched IT pros become bitter, jaded people who rarely have anything nice or positive to say. Don't be that person. If you focus on the positive, people will be more inclined to enjoy working with you, companies will want to hire you, and the daily grind will be less "grindy."
5. Measure twice, cut once
How many times have you issued a command or clicked OK before you were absolutely sure you should? The old woodworking adage fits perfectly here. Considering this simple sentence -- before you click OK -- can save you quite a lot of headache. Rushing into a task is never the answer, even during an emergency. Always ask yourself: Is this the right solution?
6. At every turn, be honest
I've witnessed engineers lie to avoid the swift arm of justice. In the end, however, you must remember that log files don't lie. Too many times there is a trail that can lead to the truth. When the CTO or your department boss discovers this truth, one that points to you lying, the arm of justice will be that much more forceful. Even though you may feel like your job is in jeopardy, or the truth will cause you added hours of work, always opt for the truth. Always.
7. Make sure you're passionate about what you're doing
Ask yourself this question: Am I passionate about technology? If not, get out now; otherwise, that job will beat you down. A passion for technology, on the other hand, will continue to drive you forward. Just know this: The longer you are in the field, the more likely that passion is to falter. To prevent that from happening, learn something new.
8. Don't stop learning
Quick: how many operating systems have you gone through over the last decade? No career evolves faster than technology. The second you believe you have something perfected, it changes. If you decide you've learned enough, it's time to give up the keys to your kingdom. Not only will you find yourself behind the curve, but all those servers and desktops you manage could quickly wind up vulnerable to every new attack in the wild. Don't fall behind.
9. When you feel your back against a wall, take a breath and regroup
This will happen to you. You'll be tasked to upgrade a server farm and one of the upgrades will go south. The sweat will collect, your breathing will reach panic level, and you'll lock up like Windows Me. When this happens... stop, take a breath, and reformulate your plan. Strangely enough, it's that breath taken in the moment of panic that will help you survive the nightmare. If a single, deep breath doesn't help, step outside and take in some fresh air so that you are in a better place to change course.
10. Don't let clients see you Google a solution
This should be a no-brainer... but I've watched it happen far too many times. If you're in the middle of something and aren't sure how to fix an issue, don't sit in front of a client and Google the solution. If you have to, step away, tell the client you need to use the restroom and, once in the safety of a stall, use your phone to Google the answer. Clients don't want to know you're learning on their dime.
#### [Dec 14, 2018] You apply for a job. You hear nothing. Here's what to do next
###### Dec 14, 2018 | finance.yahoo.com
But the more common situation is that applicants are ghosted by companies. They apply for a job and never hear anything in response, not even a rejection. In the U.S., companies are generally not legally obligated to deliver bad news to job candidates, so many don't.
They also don't provide feedback, because it could open the company up to a legal risk if it shows that they decided against a candidate for discriminatory reasons protected by law such as race, gender or disability.
Hiring can be a lengthy process, and rejecting 99 candidates is much more work than accepting one. But a consistently poor hiring process that leaves applicants hanging can cause companies to lose out on the best talent and even damage perception of their brand.
Here's what companies can do differently to keep applicants in the loop, and how job seekers can know that it's time to cut their losses.
What companies can do differently
There are many ways that technology can make the hiring process easier for both HR professionals and applicants.
Only about half of all companies get back to the candidates they're not planning to interview, Natalia Baryshnikova, director of product management on the enterprise product team at SmartRecruiters, tells CNBC Make It .
"Technology has defaults, one change is in the default option," Baryshnikova says. She said that SmartRecruiters changed the default on its technology from "reject without a note" to "reject with a note," so that candidates will know they're no longer involved in the process.
Companies can also use technology as a reminder to prioritize rejections. For the company, rejections are less urgent than hiring. But for a candidate, they are a top priority. "There are companies out there that get back to 100 percent of candidates, but they are not yet common," Baryshnikova says.
How one company is trying to help
WayUp was founded to make the process of applying for a job simpler.
"The No. 1 complaint from candidates we've heard, from college students and recent grads especially, is that their application goes into a black hole," Liz Wessel, co-founder and CEO of WayUp, a platform that connects college students and recent graduates with employers, tells CNBC Make It .
WayUp attempts to increase transparency in hiring by helping companies source and screen applicants, and by giving applicants feedback based on soft skills. They also let applicants know if they have advanced to the next round of interviewing within 24 hours.
Wessel says that in addition to creating a better experience for applicants, WayUp's system helps companies address bias during the resume-screening processes. Resumes are assessed for hard skills up front, then each applicant participates in a phone screening before their application is passed to an employer. This ensures that no qualified candidate is passed over because their resume is different from the typical hire at an organization – something that can happen in a company that uses computers instead of people to scan resumes .
"The companies we work with see twice as many minorities getting to offer letter," Wessel said.
When you can safely assume that no news is bad news
First, if you do feel that you're being ghosted by a company after sending in a job application, don't despair. No news could be good news, so don't assume right off the bat that silence means you didn't get the job.
Hiring takes time, especially if you're applying for roles where multiple people could be hired, which is common in entry-level positions. It's possible that an HR team is working through hundreds or even thousands of resumes, and they might not have gotten to yours yet. It is not unheard of to hear back about next steps months after submitting an initial application.
If you don't like waiting, you have a few options. Some companies have application tracking in their HR systems, so you can always check to see if the job you've applied for has that and if there's been an update to the status of your application.
Otherwise, if you haven't heard anything, Wessel said that the only way to be sure that you aren't still in the running for the job is to determine if the position has started. Some companies will publish their calendar timelines for certain jobs and programs, so check that information to see if your resume could still be in review.
"If that's the case and the deadline has passed," Wessel says, it's safe to say you didn't get the job.
And finally, if you're still unclear on the status of your application, she says there's no problem with emailing a recruiter and asking outright.
#### [Dec 13, 2018] Why inequality matters?
##### "... Somewhat foolishly he deepened the cleavage between himself and ordinary people by both his patrician predilections and the love of lecturing ..."
###### Dec 13, 2018 | economistsview.typepad.com
anne , December 07, 2018 at 04:13 PM
https://glineq.blogspot.com/2018/12/why-inequality-matters.html
December 5, 2018
Why inequality matters?
This is the question that I am often asked and will be asked in two days. So I decided to write my answers down.
The argument why inequality should not matter is almost always couched in the following way: if everybody is getting better off, why should we care if somebody is becoming extremely rich? Perhaps he deserves to be rich -- or whatever the case, even if he does not, we need not worry about his wealth. If we do, that implies envy and other moral flaws. I have dealt with the misplaced issue of envy here* (in response to points made by Martin Feldstein) and here** (in response to Harry Frankfurt), and do not want to repeat it. So, let's leave envy out and focus on the reasons why we should be concerned about high inequality.
The reasons can be formally broken down into three groups: instrumental reasons having to do with economic growth, reasons of fairness, and reasons of politics.
The relationship between inequality and economic growth is one of the oldest relationships studied by economists. A very strong presumption was that without high profits there will be no growth, and high profits imply substantial inequality. We find this argument already in Ricardo where profit is the engine of economic growth. We find it also in Keynes and Schumpeter, and then in standard models of economic growth. We find it even in the Soviet industrialization debates. To invest you have to have profits (that is, surplus above subsistence); in a privately-owned economy it means that some people have to be wealthy enough to save and invest, and in a state-directed economy, it means that the state should take all the surplus.
But notice that, throughout, the argument is not one in favor of inequality as such. If it were, we would not be concerned about the use of the surplus. The argument is about a seemingly paradoxical behavior of the wealthy: they should be sufficiently rich but should not use that money to live well and consume, but to invest. This point is quite nicely, and famously, made by Keynes in the opening paragraphs of his "The Economic Consequences of the Peace". For us, it is sufficient to note that this is an argument in favor of inequality provided wealth is not used for private pleasure.
The empirical work conducted in the past twenty years has failed to uncover a positive relationship between inequality and growth. The data were not sufficiently good, especially regarding inequality, where the typical measure used was the Gini coefficient, which is too aggregate and inert to capture changes in the distribution; also, the relationship itself may vary as a function of other variables, or the level of development. This has led economists to a cul-de-sac and discouragement, so much so that since the late 1990s and early 2000s such empirical literature has almost ceased to be produced. It is reviewed in more detail in this paper. ***
More recently, with much better data on income distribution, the argument that inequality and growth are negatively correlated has gained ground. In a joint paper **** Roy van der Weide and I show this using forty years of US micro data. With better data and somewhat more sophisticated thinking about inequality, the argument becomes much more nuanced: inequality may be good for future incomes of the rich (that is, they become even richer) but bad for future incomes of the poor (that is, they fall further behind). In this dynamic framework, the growth rate itself is no longer homogeneous, as indeed it is not in real life. When we say that the American economy is growing at 3% per year, it simply means that overall income increased at that rate; it tells us nothing about how much better off, or worse off, individuals at different points of the income distribution are getting.
Why would inequality have a bad effect on the growth of the lower deciles of the distribution, as Roy and I find? Because it leads to low educational (and even health) achievements among the poor, who become excluded from meaningful jobs and from the meaningful contributions they could make to their own and society's improvement. Excluding a certain group of people from good education, be it because of their insufficient income or gender or race, can never be good for the economy, or at least it can never be preferable to their inclusion.
High inequality, which effectively debars some people from full participation, translates into an issue of fairness or justice. It does so because it affects inter-generational mobility. People who are relatively poor (which is what high inequality means) are not able, even if they are not poor in an absolute sense, to provide for their children a fraction of the benefits, from education and inheritance to social capital, that the rich provide to their offspring. This implies that inequality tends to persist across generations, which in turn means that opportunities are vastly different for those at the top of the pyramid and those at the bottom. We have two factors joining forces here: on the one hand, the negative effect of exclusion on growth that carries over generations (which is our instrumental reason for not liking high inequality), and on the other, lack of equality of opportunity (which is an issue of justice).
High inequality also has political effects. The rich have more political power, and they use that power to promote their own interests and to entrench their relative position in society. This means that all the negative effects due to exclusion and lack of equality of opportunity are reinforced and made permanent (at least until a big social earthquake destroys them). In order to fight off the advent of such an earthquake, the rich must make themselves safe and unassailable from "conquest". This leads to adversarial politics and destroys social cohesion. Ironically, the social instability which then results discourages investment by the rich; that is, it undermines the very action that was at the beginning adduced as the key reason why high wealth and inequality may be socially desirable.
We therefore reach the end point where the unfolding of actions that were at first supposed to produce a beneficent outcome destroys, by its own logic, the original rationale. We have to go back to the beginning and, instead of seeing high inequality as promoting investment and growth, we begin to see it, over time, as producing exactly the opposite effects: reducing investment and growth.
-- Branko Milanovic
Darrell in Phoenix said in reply to anne... , December 07, 2018 at 05:59 PM
"The argument is about a seemingly paradoxical behavior of the wealthy: they should be sufficiently rich but should not use that money to live well and consume, but to invest."
I disagree on this. I do not care if they use the high income to invest or to live well, as long as it is one or the other.
The one thing I do not want the rich to do is to become a drain of money out of active circulation. The paradox of thrift. Excess saving by one dooms others into excess debt to keep the economy liquid.
If you invent a new widget that everyone on earth simply must have, and is willing to give you $1 apiece to get it, such that you have $7 billion a year in income... good for you!
Now what do you deserve in return?
1) To consume $7 billion worth of other people's production? Or

2) To trap the rest of humanity in $7 billion a year worth of debt servitude, with your income ever increasing as interest is added to it, a debt servitude from which it will be mathematically impossible for them to escape since you hold the money that they must get in order to repay their debts?
I vote 1.
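The arithmetic behind Darrell's second option can be made concrete with a toy simulation (the function name and all numbers are hypothetical illustrations, not from the comment): if the lender holds the money stock, borrowers can recirculate only a fixed amount back each year, and whenever the interest added exceeds that amount, the balance grows without bound.

```python
def outstanding_debt(years, principal, interest_rate, annual_repayment_capacity):
    """Toy model: lenders hold the money stock, so borrowers can repay at most
    `annual_repayment_capacity` per year while interest keeps compounding."""
    balance = principal
    for _ in range(years):
        balance *= 1 + interest_rate            # interest is added to the debt
        balance = max(balance - annual_repayment_capacity, 0.0)  # repay all they can
    return balance
```

With a balance of 100 at 5% interest, yearly interest is 5: if borrowers can only recirculate 4 per year the debt grows forever, while a capacity of 6 lets it shrink to zero. That threshold is exactly the "mathematically impossible to escape" point in the comment.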
Paine -> Darrell in Phoenix... , December 08, 2018 at 05:33 AM
Yes it's corporate capitalist actions that matter
The choice of capitalists to buy paper not products
Wealthy households are obscene but not macro drags
when they buy luxury products and personal services.
When they buy existing stocks of land, paintings and the like, of course this is as bad as buying paper. But at least that portfolio shifting
can co-exist with product purchases, so long as each type of spending remains close to a stable ratio.
Darrell in Phoenix said in reply to Paine... , December 08, 2018 at 07:07 AM
In my "ideal" tax regimen, steeply progressive income taxes would be avoided by real property spending or capital investment to get deductions.
This, of course, would lead to over-investment in land, buildings, houses, etc. WHICH is why my regimen also includes a real property tax (in addition to state and local real estate taxes). The income tax would not be "avoided" by real property purchases so much as "delayed".
To avoid 90% income tax, buy diamonds, paintings, expensive autos... then only pay 5% per year on the real property, spreading the tax over 20 years. Buy land, buildings, houses, etc., get hit with the 5%, plus the local real estate taxes.
Paine -> Darrell in Phoenix... , December 08, 2018 at 09:33 AM
A 100% ground rent tax, i.e. a confiscatory location-value tax,
can be offset by credits earned with the costs of "real" land improvements
Paine -> Paine... , December 08, 2018 at 09:36 AM
Existing stocks of jewels and paintings should be taxed
to extract the socially created
value of the item
This is an analogue to location taxes
Yes, this can be avoided by donation to a non-profit museum archive
kurt -> Darrell in Phoenix... , December 10, 2018 at 03:00 PM
It really depends on what is consumed. Consumption can lead to malinvestment. For instance, buying 1960s Ferraris does very little for the current economy. This is an exceptionally low-multiplier activity.
Soul Super Bad said in reply to anne... , December 07, 2018 at 06:37 PM
"inequality have bad effect on the growth of the lower deciles of the distribution as Roy and I"

-- BM

keep in mind that there are many directions of growth. there is growth that benefits the workers, the rank-and-file. there is growth that benefits the excessively wealthy. but now, finally, there's a third type of growth, the kind of growth that destroys the planet, and perhaps a 4th, a new channel of growth that would help us to preserve the planet. we need to think about some of these things.

https://www.zerohedge.com/sites/default/files/inline-images/Screen-Shot-2018-11-29-at-2.41.17-PM.png?itok=WhDnbuoT

thanks, gals and guys!
reason -> anne... , December 08, 2018 at 01:59 AM
One VERY important item is missing from that list - environmental sustainability - giving people control over much more resources than they need is a waste of something precious.
Paine -> reason... , December 08, 2018 at 05:35 AM
Capitalists
Owning the planets surface
and its natural resources and products
Is pathological
mulp -> reason... , December 10, 2018 at 01:16 AM
Ted Turner owning millions of acres of land he's restoring to prairie sustained by bison, prairie dogs, wolves, etc is bad?
I wish he had ten times as much land, or more, so that a million bison were roaming the west and supplying lots of bison steaks, hides, etc., as they did for thousands of years before about 1850.
anne , December 07, 2018 at 04:14 PM
https://glineq.blogspot.com/2018/12/first-reflections-on-french-evenements.html
December 5, 2018
First reflections on the French "événements de décembre"
Because I am suffering from insomnia (due to the jetlag) I decided to write down, in the middle of the night, my two quick impressions regarding the recent events in France -- events that watched from outside France seemed less dramatic than within.
I think they raise two important issues: one new, another "old".
It is indeed an accident that the straw that broke the camel's back was a tax on fuel that hit especially hard rural and periurban areas, and people with relatively modest incomes. It did so (I understand) not so much through the amount of the increase as by reinforcing the feeling among many that, after already paying the costs of globalization, neoliberal policies, offshoring, competition with cheaper foreign labor, and deterioration of social services, they are now, in addition, to pay what is, in their view and perhaps not entirely wrongly, seen as an elitist tax on climate change.
This raises a more general issue which I discussed in my polemic with Jason Hickel and Kate Raworth. Proponents of degrowth, and those who argue that we need to do something dramatic regarding climate change, are singularly coy and shy when it comes to pointing out who is going to bear the costs of these changes. As I mentioned in that discussion with Jason and Kate, if they were serious they would go out and tell Western audiences that their real incomes should be cut in half, and also explain to them how that should be accomplished. Degrowers obviously know that such a plan is political suicide, so they prefer to keep things vague and to cover up the issues under a "false communitarian" discourse: that we are all affected and that somehow the economy will thrive if we all just became fully conscious of the problem, without ever telling us what specific taxes they would like to raise or how they plan to reduce people's incomes.
Now the French revolt brings this issue into the open. Many western middle classes, buffeted already by the winds of globalization, seem unwilling to pay a climate change tax. The degrowers should, I hope, now come up with concrete plans.
The second issue is "old". It is the issue of the cleavage between the political elites and a significant part of the population. Macron rose on an essentially anti-mainstream platform, his heterogeneous party having been created barely before the elections. But his policies have from the beginning been pro-rich, a sort of latter-day Thatcherism. In addition, they were very elitist, often disdainful of public opinion. It is somewhat bizarre that such a "Jupiterian" presidency, by his own admission, would be lionized by the liberal English-language press when his domestic policies were strongly pro-rich and thus not dissimilar from Trump's. But because Macron's international rhetoric (mostly rhetoric) was anti-Trumpist, he got a pass on his domestic policies.
Somewhat foolishly, he deepened the cleavage between himself and ordinary people by both his patrician predilections and his love of lecturing others, which at times veered into the absurd (as when he took several minutes to teach a 12-year-old kid the proper way to address the President). At a time when, more than ever, Western "couches populaires" wanted politicians that at least showed a modicum of empathy, Macron chose the very opposite tack of berating people for their lack of success or failure to find jobs (for which they apparently just needed to cross the road). He thus committed the same error that Hillary Clinton committed with her "deplorables" comment. It is no surprise that his approval ratings have taken a dive, and, from what I understand, even they do not fully capture the extent of the disdain in which he is held by many.
It is under such conditions that "les evenements" took place. The danger however is that their further radicalization, and especially violence, undermines their original objectives. One remembers that May 1968, after driving de Gaulle to run for cover to Baden-Baden, just a few months later handed him one of the largest electoral victories -- because of demonstrators' violence and mishandling of that great political opportunity.
-- Branko Milanovic
Darrell in Phoenix said in reply to mulp ... , December 10, 2018 at 08:28 AM
"So, harvesting energy from the sun is unsustainable?"
No. I'm saying it is not scale-able.
How are you going to do it? Run diesel fuel powered tractors to dig pit mines to get metals, to be smelted in fossil fuel powered refineries. Burn fossil fuels to heat sand into glass. Use toxic solvents to purify the glass and to electroplate toxic metals. Then incinerate the solvents in fossil fuel powered furnaces.
That may get us to a 40% reduction in carbon, but it isn't getting us to 90% reduction.
Even then, how are you going to get nitrogen fertilizers for farms? Currently we strip H2 from CH4 (natural gas), then mix with nitrogen in the air, apply electricity, poof, nitrogen fertilizers, and LOTS of CO2. I have yet to see a proposal for large-scale farming that offers a method of obtaining nitrogen fertilizers without CO2 emissions.
AND, there is still a massive problem of storing the electricity from when the wind is blowing and sun is shining until times when it isn't.
"So, you are calling for global thermonuclear war to purge 6 billion people from the planet?"
Nope.
"You clearly believe the solution is not paying workers to work, but to not pay them so they must die."
I'm all about paying workers to work. I vehemently disagree with liberals when they broach the idea of "universal basic income"... a great way to end up like the old Soviet Union, where everyone has money, but waits in long lines to get into stores with nothing on the shelves for sale.
"The population is too high to support hunter-gathers and subsistence farming for 7 billion people plus."
Correct.
"You have bought into Reagan's free lunch framing and argue less trash, less processing of trash to cut costs, so everyone must earn less so they consume less, ideally becoming dead."
Not even close.
This is where Liberals pissed me off right after Trump won and was still talking "border adjustment tax". The cry from the likes of Robert Reich was "oh noooo... prices will go up and hurt the poor." Since when were progressives the "we need low prices" party? I thought we were the ones that wanted higher prices, if those higher prices were caused by higher wages to workers!
"I call for everyone paying high living costs to pay more workers to eliminate the waste of landfilling what was just mined from the land."
Not sure how that makes it magically possible to cut carbon emissions 90% though.
#### [Dec 12, 2018] The Neoliberal Agenda and the Student Debt Crisis in U.S. Higher Education (Routledge Studies in Education)
##### "... We only have to realize that the emperor has no clothes and reveal this reality. ..."
##### "... Indeed, the approach our money-dependent and money-driven legislators and policymakers have employed has been neoliberal in form and function, and it will continue to be so unless we help them to see the light or get out of the way. This book focuses on the $1.4+ trillion student debt crisis in the United States. It doesn't share hard and fast solutions per se. ..." ##### "... In 2011-2012, 50% of bachelor's degree recipients from for-profit institutions borrowed more than$40,000 and about 28% of associate degree recipients from for-profit institutions borrowed more than $30,000 (College Board, 2015a). ..." ###### Dec 12, 2018 | www.amazon.com Despite tthe fact that necoliberalism brings poor economic growth, inadequate availability of jobs and career opportunities, and the concentration of economic and social rewards in the hands of a privileged upper class resistance to it, espcially at universities, remain weak to non-existant. The first sign of high levels of dissatisfaction with neoliberalism was the election of Trump (who, of course, betrayed all his elections promises, much like Obma before him). As a result, the legitimation of neoliberalism based on references to the efficient and effective functioning of the market (ideological legitimation) is exhausted while wealth redistribution practices (material legitimation) are not practiced and, in fact, considered unacceptable. Despite these problems, resistance to neoliberalism remains weak. Strategics and actions of opposition have been shifted from the sphere of labor to that of the market creating a situation in which the idea of the superiority and desirability of the market is shared by dominant and oppositional groups alike. Even emancipatory movements such as women, race, ethnicity, and sexual orientation have espoused individualistic, competition-centered, and meritocratic views typical of ncolibcral dis- courses. 
Moreover, corporate forces have colonized spaces and discourses that have traditionally been employed by oppositional groups and movements. However, as systemic instability continues and capital accumulation needs to be achieved, change is necessary. Given the weakness of opposition, this change is led by corporate forces that will continue to further their interests but will also attempt to mitigate socio-economic contradictions. The unavailability of ideological mechanisms to legitimize neoliberal arrangements will motivate dominant social actors to make marginal concessions (material legitimation) to subordinate groups. These changes, however, will not alter the corporate co-optation and distortion of discourses that historically defined left-leaning opposition. As contradictions continue, however, their unsustainability will represent a real, albeit difficult, possibility for anti-neoliberal aggregation and substantive change. Connolly (2016) reported on a poll showing that some graduated student loan borrowers would willingly go to extremes to pay off outstanding student debt. Those extremes include experiencing physical pain and suffering and even a reduced lifespan: 35% of those polled would give up one year of life expectancy, and 6.5% would willingly cut off their pinky finger, if it meant ridding themselves of the student loan debt they currently held. Neoliberalism's presence in higher education is making matters worse for students and the student debt crisis, not better. In their book Structure and Agency in the Neoliberal University, Canaan and Shumar (2008) focus their attention on resisting, transforming, and dismantling the neoliberal paradigm in higher education. They ask: how can market-based reform serve as the solution to a problem that neoliberal practices and policies have engineered? It is like an individual who loses his keys at night and decides to look only beneath the street light.
This may be convenient because there is light, but it might not be where the keys are located. This metaphorical example could relate to the student debt crisis. What got us to where we are (escalating tuition costs, declining state monies, and increasing neoliberal influence in higher education) cannot get us out of the $1.4 trillion problem. And yet this metaphor may, in fact, be more apropos than most of us on the right, left, or center are as yet seeing, because we mistakenly assume the market we have is the only or best one possible. As Lucille (this volume) strives to expose, the systemic cause of our problem is "hidden in plain sight," right there in the street light for all who look carefully enough to see. We only have to realize that the emperor has no clothes and reveal this reality. If and when a critical mass of us do, systemic change in our monetary exchange relations can and, we hope, will become our funnel toward a sustainable and socially, economically, and ecologically just future where public education and democracy can finally become realities rather than merely ideals. Indeed, the approach our money-dependent and money-driven legislators and policymakers have employed has been neoliberal in form and function, and it will continue to be so unless we help them to see the light or get out of the way. This book focuses on the $1.4+ trillion student debt crisis in the United States. It doesn't share hard and fast solutions per se. Rather, it addresses real questions (and their real consequences). Are collegians overestimating the economic value of going to college?
What are we, they, and our so-called elected leaders failing or refusing to see, and why? This critically minded, soul-searching volume shares territory with, yet pushes beyond, that of Akers and Chingos (2016), Baum (2016), Goldrick-Rab (2016), Graeber (2011), and Johannsen (2016) in ways that we trust those critically minded authors -- and others concerned with our mess of debts, public and private, and unfulfilled human potential -- will find enlightening and even ground-breaking.
... ... ...
In the meantime, college costs have significantly increased over the past fifty years. The average cost of tuition and fees (excluding room and board) for public four-year institutions for a full year has increased from $2,387 (in 2015 dollars) for the 1975-1976 academic year to $9,410 for 2015-2016. The tuition for public two-year colleges averaged $1,079 in 1975-1976 (in 2015 dollars) and increased to $3,435 for 2015-2016. At private non-profit four-year institutions, the average 1975-1976 cost of tuition and fees (excluding room and board) was $10,088 (in 2015 dollars), which increased to $32,405 for 2015-2016 (College Board, 2015b).
The purchasing power of Pell Grants has decreased. In fact, the maximum Pell Grants coverage of public four-year tuition and fees decreased from 83% in 1995-1996 to 61% in 2015-2016. The maximum Pell Grants coverage of private non-profit four-year tuition and fees decreased from 19% in 1995-1996 to 18% in 2015-2016 (College Board, 2015a).
... ... ....
... In 2013-2014, 61% of bachelor's degree recipients from public and private non-profit four-year institutions graduated with an average debt of $16,300 per graduate. In 2011-2012, 50% of bachelor's degree recipients from for-profit institutions borrowed more than $40,000 and about 28% of associate degree recipients from for-profit institutions borrowed more than $30,000 (College Board, 2015a). Rising student debt has become a key issue of higher education finance among many policymakers and researchers. Recently, the government has implemented a series of measures to address student debt. In 2005, the Bankruptcy Abuse Prevention and Consumer Protection Act (2005) was passed, which barred the discharge of all student loans through bankruptcy for most borrowers (Collinge, 2009). This was the final nail in the bankruptcy coffin, which had begun in 1976 with a five-year ban on student loan debt (SLD) bankruptcy and was extended to seven years in 1990. Then in 1998, it became a permanent ban for all who could not clear a relatively high bar of undue hardship (Best & Best, 2014). By 2006, Sallie Mae had become the nation's largest private student loan lender, reporting loan holdings of $123 billion. Its fee income collected from defaulted loans grew from $280 million in 2000 to $920 million in 2005 (Collinge, 2009). In 2007, in response to growing student default rates, the College Cost Reduction Act was passed to provide loan forgiveness for student loan borrowers who work full-time in a public service job. The Federal Direct Loan will be forgiven after 120 payments have been made. This Act also provided other benefits for students to pay for their postsecondary education, such as lowering interest rates on GSLs, increasing the maximum Pell Grant (though, as noted above, not sufficiently to meet rising tuition rates), and reducing guarantor collection fees (Collinge, 2009).
In 2008, the Higher Education Opportunity Act (2008) was passed to increase transparency and accountability. This Act required institutions participating in federal financial aid programs to post a college price calculator on their websites in order to provide better college cost information for students and families (U.S. Department of Education [U.S. DoE], 2015a). Due to the recession of 2008, the American Opportunity Tax Credit of 2009 (AOTC) was passed to expand the Hope Tax Credit program: the amount of tax credit increased to 100% for the first $2,000 of qualified educational expenses and was reduced to 25% of the second $2,000 in college expenses. The total credit cap increased from $1,500 to $2,500 per student. As a result, federal spending on education tax benefits has increased substantially since then (Crandall-Hollick, 2014), benefits that, again, are reaped only by those who file income taxes.
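The AOTC formula described above (100% of the first $2,000 of qualified expenses plus 25% of the second $2,000, capped at $2,500) can be sketched in a few lines; the function name is mine, for illustration only, and this omits real-world details such as income phase-outs:

```python
def aotc_credit(qualified_expenses):
    """American Opportunity Tax Credit, per the figures quoted above:
    100% of the first $2,000 of qualified educational expenses plus
    25% of the second $2,000, for a maximum credit of $2,500."""
    first_tier = min(qualified_expenses, 2000)
    second_tier = max(0, min(qualified_expenses - 2000, 2000))
    return first_tier + 0.25 * second_tier
```

A student with $4,000 or more in qualified expenses reaches the $2,500 cap (2,000 x 1.00 + 2,000 x 0.25), which is the increase over the old $1,500 cap that the text describes.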
#### [Dec 11, 2018] John Taylor Gatto's book, The Underground History of American Education, lays out the sad fact of "western education", which has nothing to do with education, but is rather an indoctrination for inclusion in society as a passive participant. Docility is paramount in members of U.S. society so as to maintain the status quo
##### Neoliberalism looks like a cancer on society, or at least surprisingly close to one: malignant growth, unable to provide meaningful employment for people.
###### Dec 11, 2018 | www.ianwelsh.net
• Lee Grove permalink April 25, 2016
Add one -- a BIG ONE–to your list: The utter destruction of the K-12 classroom learning environment: students spend the vast majority of their time trying to surreptitiously–or blatantly–use their cellphones in class; and if not actually using them, they are preoccupied with the thought of using them. It has been going on for almost a decade now, and we will start to see the results in that we will have a population where nobody can do anything that requires focus; it will be as if the entire upcoming population of college students has ADHD.
Welcome to the high-tech third world.
• V. Arnold permalink April 25, 2016
Lee Grove
April 25, 2016
Well Lee, you have a clue, but miss the really big picture regarding the abject failure of western education (which is a misnomer).
John Taylor Gatto's book, The Underground History of American Education, lays out the sad fact of "western education"; which has nothing to do with education; but rather, an indoctrination for inclusion in society as a passive participant.
Docility is paramount in members of U.S. society so as to maintain the status quo; working according to plan, near as I can tell.
#### [Dec 08, 2018] Americans don't "meekly allow financial crimes." No, Americans hugely endorse them. More students keep enrolling in all the biz schools all the time -- much more than in any other field of study, health care being a distant second
##### "... The students not only continue to flock to the amorality skills courses, but also put themselves into mega-debt by student loans to turn themselves not just imaginatively and ethically over to the corporate idolatries, but also to do another double whammy on themselves. ..."
###### Dec 08, 2018 | www.alternet.org
kyushuphil -> Neo Conned 6 years ago ,
People don't "meekly allow these crimes," Neo. Americans hugely endorse them.
The students not only continue to flock to the amorality skills courses, but also put themselves into mega-debt by student loans to turn themselves not just imaginatively and ethically over to the corporate idolatries, but also to do another double whammy on themselves. They accept the servitude of massive student loan debt, and ensure by prolonged interest payments on that debt to keep bloating all the most cynically immoral of high finance.
And then all the other departments of corporate academe have seen how smoothly work the most rank of corporate habits to ensure most mediocrity for most rank careerisms -- and all have only increased departmentalism protocols over recent years. Tenure now means nothing more than max award for most-narrowed specialist minds and for all most-max conformists in all those niched fields.
Nuthin' "meek" about all this, Neo. The corporate disease, the cubicle culture, the deference to plutocracy, the reduced literacy, the tracking to numbers -- all has been only steroided since Citizens United quite flagrantly legally underlined what most genteel in corporate ed have been doing for years.
zonmoy > kyushuphil • 6 years ago
And how have students been pushed into those programs, with the problems pushed onto them, by the corporate crooks that own everything, including our government?
#### [Dec 06, 2018] Understanding Society Sexual harassment in academic contexts
###### Dec 06, 2018 | understandingsociety.blogspot.com
Sexual harassment of women in academic settings is regrettably common and pervasive, and its consequences are grave. At the same time, it is a remarkably difficult problem to solve. The "me-too" movement has shed welcome light on specific individual offenders and has generated more awareness of some aspects of the problem of sexual harassment and misconduct. But we have not yet come to a public awareness of the changes needed to create a genuinely inclusive and non-harassing environment for women across the spectrum of mistreatment that has been documented. The most common institutional response following an incident is to create a program of training and reporting, with a public commitment to investigating complaints and enforcing university or institutional policies rigorously and transparently. These efforts are often well intentioned, but by themselves they are insufficient. They do not address the underlying institutional and cultural features that make sexual harassment so prevalent.
The problem of sexual harassment in institutional contexts is a difficult one because it derives from multiple features of the organization. The ambient culture of the organization is often an important facilitator of harassing behavior -- often enough a patriarchal culture that is deferential to the status of higher-powered individuals at the expense of lower-powered targets. There is the fact that executive leadership in many institutions continues to be predominantly male, who bring with them a set of gendered assumptions that they often fail to recognize. The hierarchical nature of the power relations of an academic institution is conducive to mistreatment of many kinds, including sexual harassment. Bosses to administrative assistants, research directors to post-docs, thesis advisors to PhD candidates -- these unequal relations of power create a conducive environment for sexual harassment in many varieties. In each case the superior actor has enormous power and influence over the career prospects and work lives of the women over whom they exercise power. And then there are the habits of behavior that individuals bring to the workplace and the learning environment -- sometimes habits of masculine entitlement, sometimes disdainful attitudes towards female scholars or scientists, sometimes an underlying willingness to bully others that finds expression in an academic environment. (A recent issue of the Journal of Social Issues ( link ) devotes substantial research to the topic of toxic leadership in the tech sector and the "masculinity contest culture" that this group of researchers finds to be a root cause of the toxicity this sector displays for women professionals. Research by Jennifer Berdahl, Peter Glick, Natalya Alonso, and more than a dozen other scholars provides in-depth analysis of this common feature of work environments.)
The scope and urgency of the problem of sexual harassment in academic contexts is documented in excellent and expert detail in a recent study report by the National Academies of Sciences, Engineering, and Medicine ( link ). This report deserves prominent discussion at every university.
The study documents the frequency of sexual harassment in academic and scientific research contexts, and the data are sobering. Here are the results of two indicative studies at Penn State University System and the University of Texas System:
The Penn State survey indicates that 43.4% of undergraduates, 58.9% of graduate students, and 72.8% of medical students have experienced gender harassment, while 5.1% of undergraduates, 6.0% of graduate students, and 5.7% of medical students report having experienced unwanted sexual attention and sexual coercion. These are staggering results, both in terms of the absolute number of students who were affected and the negative effects that these experiences had on their ability to fulfill their educational potential. The University of Texas study shows a similar pattern, but also permits us to see meaningful differences across fields of study. Engineering and medicine provide significantly more harmful environments for female students than non-STEM and science disciplines. The authors make a particularly worrisome observation about medicine in this context:
The interviews conducted by RTI International revealed that unique settings such as medical residencies were described as breeding grounds for abusive behavior by superiors. Respondents expressed that this was largely because at this stage of the medical career, expectation of this behavior was widely accepted. The expectations of abusive, grueling conditions in training settings caused several respondents to view sexual harassment as a part of the continuum of what they were expected to endure. (63-64)
The report also does an excellent job of defining the scope of sexual harassment. Media discussion of sexual harassment and misconduct focuses primarily on egregious acts of sexual coercion. However, the authors of the NAS study note that experts currently encompass sexual coercion, unwanted sexual attention, and gender harassment under this category of harmful interpersonal behavior. The largest sub-category is gender harassment:
"a broad range of verbal and nonverbal behaviors not aimed at sexual cooperation but that convey insulting, hostile, and degrading attitudes about" members of one gender ( Fitzgerald, Gelfand, and Drasgow 1995 , 430). (25)
The "iceberg" diagram (p. 32) captures the range of behaviors encompassed by the concept of sexual harassment. (See Leskinen, Cortina, and Kabat 2011 for extensive discussion of the varieties of sexual harassment and the harms associated with gender harassment.)
The report emphasizes organizational features as a root cause of a harassment-friendly environment.
By far, the greatest predictors of the occurrence of sexual harassment are organizational. Individual-level factors (e.g., sexist attitudes, beliefs that rationalize or justify harassment, etc.) that might make someone decide to harass a work colleague, student, or peer are surely important. However, a person that has proclivities for sexual harassment will have those behaviors greatly inhibited when exposed to role models who behave in a professional way as compared with role models who behave in a harassing way, or when in an environment that does not support harassing behaviors and/or has strong consequences for these behaviors. Thus, this section considers some of the organizational and environmental variables that increase the risk of sexual harassment perpetration. (46)
Some of the organizational factors that they refer to include the extreme gender imbalance that exists in many professional work environments, the perceived absence of organizational sanctions for harassing behavior, work environments where sexist views and sexually harassing behavior are modeled, and power differentials (47-49). The authors make the point that gender harassment is chiefly aimed at indicating disrespect towards the target rather than sexual exploitation. This has an important implication for institutional change. An institution that creates a strong core set of values emphasizing civility and respect is less conducive to gender harassment. They summarize this analysis in the statement of findings as well:
Organizational climate is, by far, the greatest predictor of the occurrence of sexual harassment, and ameliorating it can prevent people from sexually harassing others. A person more likely to engage in harassing behaviors is significantly less likely to do so in an environment that does not support harassing behaviors and/or has strong, clear, transparent consequences for these behaviors. (50)
So what can a university or research institution do to reduce and eliminate the likelihood of sexual harassment for women within the institution? Several remedies seem fairly obvious, though difficult.
• Establish a pervasive expectation of civility and respect in the workplace and the learning environment
• Diffuse the concentrations of power that give potential harassers the opportunity to harass women within their domains
• Ensure that the institution honors its values by refusing the "star culture" common in universities that makes high-prestige university members untouchable
• Be vigilant and transparent about the processes of investigation and adjudication through which complaints are considered
• Create effective processes that ensure that complainants do not suffer retaliation
• Consider candidates' receptivity to the values of a respectful, civil, and non-harassing environment during the hiring and appointment process (including research directors, department and program chairs, and other positions of authority)
As the authors put the point in the final chapter of the report:
Preventing and effectively addressing sexual harassment of women in colleges and universities is a significant challenge, but we are optimistic that academic institutions can meet that challenge--if they demonstrate the will to do so. This is because the research shows what will work to prevent sexual harassment and why it will work. A systemwide change to the culture and climate in our nation's colleges and universities can stop the pattern of harassing behavior from impacting the next generation of women entering science, engineering, and medicine. (169)
#### [Nov 29, 2018] Literature, language, history are essential for a truly cultured human.
##### "... Tucker Carlson is the only media individual left that is brave enough to state the truth. So by implication the United States has zero democracy when it comes to our foreign policy. ..."
###### Nov 29, 2018 | turcopolier.typepad.com
Being on the affected side as a historian please let me add, that the students' majority studies microhistory, family, company, or even family members' personal events that is, which adds very little to our understanding of the world. It is overly and openly supported currently in most universities for a number of reasons.
This is why obviously ideologically biased works about major correspondences such as Piketty's or Niall Ferguson's, not to mention that young Israeli guy (Yair??) has so much effect. Because basically they are the only ones, or at least the ones with the chance to publish, who take the great effort of choosing the harder way and making the necessary research. There are too few willing to take the harder path.
Scientification, or should I say natural scientification of social sciences also does not help, because it promotes the 'publish or perish' principle. But social sciences aren't like natural sciences, where X hours in a laboratory or experimenting yields surely X or X/2 publications.
And on the top of that Marxist thinkers and intelligentsia, cast away from all meaningful positions to universities in the 50's and 60's fearing a communist influence have completely overtaken the higher education in the Western Hemisphere. In the Eastern European countries they managed to keep their positions.
To sum it up while most of your criticism is valid, international relations e.g. has its merit, but are taught mostly by neoliberals and Marxists, with the known results.
smoothieX12 -> Pat Lang , 17 hours ago
They are from the social sciences like Political Science or International Relations which are empty of real content.
Fully concur. They throw in sometimes some "game theory" to give that an aura of "science", but most of it is BS. If, just in case, I am misconstrued as fighting humanities field--I am not fighting it. Literature, language, history are essential for a truly cultured human. When I speak about "humanities" I personally mean namely Political "Science".
Eric Newhill -> Pat Lang , 18 hours ago
Sir, I stand corrected on the humanities into govt assertion. I do tend to get humanities and social sciences jumbled in my numbers/cost/benefit based thinking. I am open to people telling me how to do tasks that they have more experience performing and that I might need to know about. And I have curiosities about people's experiences and perspectives on how the world of men works, but I'm not so concerned about the world of men that I lose my integrity or soul or generally get sucked into their reality over my own. Of course that's just me. Someone like Trump seeks approval and high rank amongst men. So, yes, I guess he is susceptible; though I still think somewhat less than others. This is evident in how he refuses to follow the conventions and expectations of what a president should look and act like. He is a defiant sort. I like that about him. Of course needing to be defiant is still a need and therefore a chink in his armor.
Pat Lang Mod -> Eric Newhill , 17 hours ago
He is in thrall to the Israelis, their allies, the neocons, political donors and the popular media. An easy mark for skilled operators.
Harlan Easley -> Pat Lang , 14 hours ago
I agree with you and I believe their influence has deepened over the two years. The only pro neocon policy he ran on was regime change in Iran. Terrible idea no doubt. The vote was either potential regime change in Iran or a dangerous escalation with Russia in Syria. I voted for more time. He seemed to have some sense on Syria and Russia at the time. Of course Clinton was promising Apocalypse Now. You've stated the Neocon's have insinuated themselves into both parties. R2P and such. They basically control the foreign policy of both parties due to control by donors, organizational control of DNC, RNC, the moronic narrative, think tanks, media, probably security services, etc.
Tucker Carlson is the only media individual left that is brave enough to state the truth. So by implication the United States has zero democracy when it comes to our foreign policy. As far as I can tell the United States policy toward Russia continues toward escalation. Two current examples being the absurd Mueller "investigation" into collusion and the Ukraine provocation in the Sea of Azov. Are we heading into the last war?
Richard Higginbotham -> Pat Lang , 18 hours ago
Engineer here, "worked" on myself and not even by very skilled people. Manipulative people are hard to counteract, if you're not manipulative yourself the thought process is not intuitive. If you spend most of your life solving problems, you think it's everyone's goal. As I've gotten older I've only solidified my impression that as far as working and living outside of school, the best "education" to have would be history. Preferably far enough back or away to limit any cultural biases. I'm not sure that college classes would fill the gap though.
Any advice to help the "marks" out there?
Mark Logan -> Richard Higginbotham , 10 hours ago
I'll pitch in with a suggestion for those who are for whatever reason not fond of reading: An old history education series called The Western Tradition. Eugene Weber. A shrewd old guy who was interested in motivations which drove our history and culture. Will get your kids solid A's in history if nothing else, if you can get them hooked on it. Insightful narrative as opposed to dry facts helps retention. There are much worse starting points.
Moreover, most of the books which I believe constitute a canon of sorts are mentioned, and points made in them brought to bear. Leviathan, The Prince, Erasmus, how they affected general thought, which makes the viewer want to read them.

Re-reading TE Lawrence at the moment. Want to watch a "pro" work? Scary good, he was.
TTG -> Pat Lang , 10 hours ago
To this day, my favorite college course was "The Century of Darwin" taught by Dr. Brown in the history department of RPI in 1973. Dr. Brown was a bespectacled, white haired little man who looked like everyone's idea of a history professor. The course examined the history of scientific discovery, evolving and competing religious and scientific ideas leading up to the general acceptance of Darwin's works. It was a history of everything course, an intellectually exhilarating experience. I still have the textbooks. I heartedly recommend those books.
"Darwin's Century" by Loren Eiseley came out in 1958 and was reprinted in 2009 with a new foreword by Stephan Bertman. "The Death of Adam" by John Green first came out in 1960 and was reprinted in 1981. "Genesis and Geology" by Charles C. Gillespie came out in 1951. My paperback edition was published in 1973 and cost $2.45 new.
English Outsider -> Pat Lang , an hour ago
Colonel - Boswell's life of Johnson. A giant of a man seen through the eyes of a clever and observant pygmy. And they both know it.
That makes it an odd book, that interplay between the two. It's also the ultimate in tourism. One is dumped in the middle of eighteenth century London and very soon it becomes a second home.
For a long time that's all I got out of the book. Johnson himself emerges only slowly. A true intellectual giant with a flawless acuity of perception, an elephantine memory, and the gift of turning out the perfect exposition, whether a long argument or one of his famous pithy comments, is the starting point only.
As a person he can easily be read as a slovenly bully, at one time even as an unapologetic hired gun turning out the propaganda of the day. He was subject to long fits of depression alternating with periods of great industry. As he got older the industry fell away and he spent much of his time in the coffee house. It was there, often, that Boswell gathered up the materials - a fragment here, a disquisition there - that allow us to see through to Johnson's outlook.
It was an outlook, or one could call it a philosophy of life, that could not be more needed at this time of frantic and one sided ideological war.
It was no tidily worked-up outlook. Intensely patriotic yet ever conscious of the failings of his country. Honorable yet accepting that he lived at a time of great corruption. Loyal yet always yearning after an older dispensation. Robust common sense but fully recognizing the Transcendent. Narrowly prejudiced yet open to other cultures, recognizing their equal validity and worth while remaining rooted in his own.
It's an outlook that today would be despised by many because, as far as I can tell, he had no ideology, no millenarian solution into which all problems can be jammed. Merely a broad and humane normality and a recognition that, ultimately, each pilgrim must find his own way.
#### [Nov 28, 2018] Colonel Lang on importance of taking elective courses in Humanities (using Trump as a counterexample)
##### "... Unlike your brother a good recruiting case officer would never ignore you except maybe at the beginning as a tease. That also works with women that you want personally. ..."
###### Nov 28, 2018 | turcopolier.typepad.com
Yes. Trump says that is how he "rolls." The indicators that this is true are everywhere. He does not believe what the "swampies" tell him. He listens to the State Department, the CIA, DoD, etc. and then acts on ill informed instinct and information provided by; lobbies, political donors, foreign embassies, and his personal impressions of people who have every reason to want to deceive him. As I wrote earlier he sees the world through an entrepreneurial hustler's lens.
He crudely assigns absolute dollar values to policy outcomes and actions, values which rarely have much to do with the actual world, even if they might be relevant in the arena of contract negotiations.
He evidently learned about balance sheets at the Wharton School of Business at the University of Pennsylvania and wishes to apply the principle of the bottom line to everything. I will guess that he resisted taking elective courses in the Humanities as much as he could believing them to be useless. That is unfortunate since such courses tend to provide context for present day decisions.
I have known several very rich businessmen of similar type who sent their children to business school with exactly that instruction with regard to literature, history, philosophy, etc. From an espionage case officer's perspective he is an easy mark. If you are regular contact with him all that is needed to recruit him is to convince him that you believe in the "genius" manifested in his mighty ego and swaggering bluster and then slowly feed him what you want him to "know."
That does not mean that he has been recruited by someone or something but the vulnerability is evident. IMO the mistake he has made in surrounding himself with neocons and other special pleaders, people like Pompeo and Bolton is evidence that he is very controllable by the clever and subtle. pl
Col. Lang, I appreciate your insight on his personality which you have written about often and dead on for awhile.
The Cage , 3 hours ago
I have an aged wire haired Jack Russel Terrier. He is well past his time. He is almost blind, and is surely deaf. In his earlier days he was a force of nature. He still is now, but only in the context of food. He is still obsessed with it at every turn. Food is now his reality and he will not be sidetracked or otherwise distracted by any other stimuli beyond relieving himself when and where he sees fit. He lives by his gut feeling and damn everything else. There is no reason, no other calculus for him. Trump's trusting his "gut" is just about as simplistic and equally myopic. My dog is not a tragedy, he shoulders no burden for others and when he gets to the point of soiling himself or is in pain, he will be held in my arms and wept over for the gift he has been when the needle pierces his hide. Trump, well, he is a tragedy. He does shoulder a responsibility to millions and millions and for those to follow after he is long dead and gone. His willful ignorance in the face of reason and science reminds me of the lieutenant colonel of 2/7 Cav. you spoke of at LZ Buttons.
The number of folks who will pay the price for this are legion in comparison. His accomplices and "advisers" as you intone, will be deemed worthy of a Nuremburg of sorts when viewed in posterity. "Character must under grid talent or talent will cave in." His gut stove pipes him as a leader. I love and respect my dog. He follows his gut, because that is his end-state. It's honest. I will mourn the passing of one and and already rue the day the other was born.
Pat Lang Mod -> The Cage , 2 hours ago
Were you at LZ Buttons?
exSpec4Chuck , 4 hours ago
Just after I looked at this post I went to Twitter and this came up. I don't know how long it's been since Jeremy Young was in grad school but a 35% drop in History dissertations is shocking even if it's over a span of 3-4 decades.
Pat Lang Mod -> exSpec4Chuck , 4 hours ago
Yes. It's either STEM or Social Sciences these days and that is almost as bad as Journalism or Communications Arts. Most media people are Journalism dummies.
VietnamVet , 4 hours ago
Colonel,
Donald Trump is a Salesman. He stands out in the Supreme Court photo: https://www.washingtonpost....
He survived as a New York City Boss. He has the same problem as Ronald Reagan. He believes the con. In reality, since the restoration of classical economics, sovereign states are secondary to corporate plutocrats. Yes, he is saluted. He has his finger on the red button. But, he is told what they want him to hear. There are no realists within a 1000 yards of him. The one sure thing is there will be a future disaster be it climate change, economic collapse or a world war. He is not prepared for it.
Pat Lang Mod -> VietnamVet , 4 hours ago
You are a one trick pony. There are other forces that are effective in addition to plutocrats and they are mostly bad.
JerseyJeffersonian , 5 hours ago
Falling under the sway of those who know the price of everything, but the value of nothing is an unenviable estate. The concentrated wisdom discoverable through a clear-eyed study of the humanities can serve as a corrective, and if one is lucky, as a prophylaxis against thinking of this type.
I am commending study of the humanities as historically understood, not the "humanities" of contemporary academia, which is little better than atheistic materialism of the Marxist variety, out of which any place for the genuinely spiritual has been systematically extirpated in favor of the imposition of some sort of sentimentalism as an ersatz substitute.
Eric Newhill , 6 hours ago
My response to flattery, even if subtle, is, "Yeah? Gee thanks. Now please just tell me what you're really after". I'd think any experienced man should have arrived at the same reaction at least by the time he's 35. Ditto trusting anyone in an atmosphere where power and money are there for the taking by the ambitious and clever. As for a balance sheet approach, IMO, there is a real need for that kind of thinking in govt. Perhaps a happy mix of it + a humanities based perspective.
A lot of people come out of humanities programs and into govt with all kinds of dopey notions; like R2P, globalism, open borders, etc.
Pat Lang Mod -> Eric Newhill , 6 hours ago
That is what the smart guys all say before really skilled people work on them. Eventually they ask you to tell them what is real. The Humanities thing stung? I remember the engineer students mocking me at VMI over this.
Grazhdanochka -> smoothieX12 , 2 hours ago
As I wrote earlier the Issue in those Courses is they are actually pure and concentrated Fields...... Political Science, International Relations are ambigious enough that a candidate can appeal to many Sectors and it is accepted, expected they will be competent.... Whether that be Governance/Diplomacy, Business, Travel etc...
Thus if you have no Idea what you want - those Fields are good to study, learning relatively little.....
If you know what you want - you have a Path.... You can study more concentrated Fields, but you damn well have to hope there is a Job at the end of the Rainbow (Known at least a couple People who studied only to be told almost immediately - you will not find Jobs domestically)
Pat Lang Mod -> Grazhdanochka , an hour ago
No. PS and the other SS are artificial constructs in our universities that posit views of mankind that are false.
Pat Lang Mod -> smoothieX12 , 3 hours ago
"Political Science" as we understand it here is not among the Humanities. It is pseudo science invented in the 19th Century.
Pat Lang Mod -> Pat Lang , 3 hours ago
The Humanities as they have been known. https://en.wikipedia.org/wi...
https://socratic.org/questions/how-do-you-factor-3h-2-19h-20

# How do you factor 3h^2 + 19h + 20?
Jun 2, 2015
$f(h) = 3h^2 + 19h + 20$

Let's use the AC Method, with a twist...

$A = 3$, $B = 19$, $C = 20$

Look for a factorization of $AC = 3 \cdot 20 = 60$ into a pair of factors whose sum is $B = 19$.

The pair $B_1 = 4$, $B_2 = 15$ works.

Then for each of the combinations $(A, B_1)$ and $(A, B_2)$, divide by the $\text{HCF}$ (highest common factor) to get the coefficients of a factor of $f(h)$...

$(A, B_1) = (3, 4)$, $\text{HCF } 1 \rightarrow (3, 4) \rightarrow (3h + 4)$

$(A, B_2) = (3, 15)$, $\text{HCF } 3 \rightarrow (1, 5) \rightarrow (h + 5)$

So $f(h) = 3h^2 + 19h + 20 = (3h + 4)(h + 5)$
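As a side note, the search for the pair B1, B2 and the HCF division can be automated. The sketch below is my own illustration (the name `ac_factor` is made up, and the HCF-division step assumes a positive leading coefficient and a primitive quadratic, i.e. no common factor across A, B, C):

```python
from math import gcd

def ac_factor(a, b, c):
    """Return ((p, q), (r, s)) with (p*h + q)*(r*h + s) == a*h^2 + b*h + c."""
    ac = a * c
    # Find a factor pair of a*c that sums to b.
    for b1 in range(-abs(ac), abs(ac) + 1):
        if b1 != 0 and ac % b1 == 0 and b1 + ac // b1 == b:
            b2 = ac // b1
            break
    else:
        raise ValueError("no integer factorization exists")
    # Divide each combination (a, b1) and (a, b2) by its HCF
    # to get the coefficients of the two linear factors.
    g1, g2 = gcd(a, b1), gcd(a, b2)
    return (a // g1, b1 // g1), (a // g2, b2 // g2)

print(ac_factor(3, 19, 20))  # ((3, 4), (1, 5)), i.e. (3h + 4)(h + 5)
```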
http://www.theweinerworks.com/?p=635

# Calculus! #12: Early Transcendentals 1.6C
Okay. Let’s do it. Let’s get into inverse trig functions.
In my experience, inverse trig functions are one of those things that, for whatever reason, students dread. A lot of kids already dread regular trig, and now there’s an inverted version!
I admit, I once had a touch of that. But, in fact, inverse trig is relatively easy to work with, and opens up some cool possibilities when we get to integrals.
Remember, when confronted with new math that might make you nervous, don’t think of it as a scary impossible monster. It’s a tool you’re about to master.
So, let’s do it.
First off, inverse trig functions have to be modified slightly to make sense. As we discussed a while back, inverse functions are supposed to pass the horizontal line test. They're supposed to be "one-to-one," meaning that each y value comes from exactly one x value.

But trig functions cycle up and down! So, a given y value shows up at infinitely many x values. That's way too many times. So, clearly trig functions aren't naturally one-to-one.
As we’ve said in the past, sine is the root of all trig, so we’ll start there.
Inverse Sine
So, we compel them to be one-to-one by restricting the domain to a single non-repeating cycle (half a wavelength, in physics terms). So, for example, if we want to run arcsin on y = sinx, we restrict x to be between -π/2 and π/2. This may feel a bit like cheating, but you’ve actually done this before. The common example is this: when you say $y=\sqrt{x}$, you restrict the domain to positive x values if you don’t want imaginary numbers.
Or, think of it this way: For y = sin(x), the available y values are inclusively between -1 and 1, right? So, if we convert that to arcsin(y) = x, the available values are inclusively between arcsin(-1) and arcsin(1), aka -π/2 and π/2. (Credit to incognitoman on Twitter for this insight).
So, it’s really not so weird to be restricting the domain. Let’s move forth!
I wanna go over an example in the book with you because I think it gives some good insight into how working with inverse trig is not so hard.
Example 12-B: Evaluate tan(arcsin(1/3)).
Man, that looks like a bitch, right? But, step back, think, and put together everything you know.
First off, you can say arcsin(1/3) = something. We'll call that something theta (θ).

1) arcsin(1/3) = θ.

Since we know sine is the inverse of arcsin, we can take a further step:

2) 1/3 = sin(θ)

Since we know sine is the ratio of the opposite to the hypotenuse (I like to say "O/H"), we can construct a triangle with a hypotenuse of length 3 and a side opposite θ of length 1.

Go ahead and draw that triangle.

Now, recall from step 1 that we defined θ as arcsin(1/3), and recall from the beginning that we're trying to find out what in the world tan(arcsin(1/3)) is. By combining those two expressions, we get a new one:

3) tan(θ)
Now, look at your triangle. Since you know the opposite side is 1 and the hypotenuse is 3, you can use the Pythagorean Theorem to get the length of the adjacent side. When you solve, you should get $\sqrt{8}$.
Now you know all the sides of the triangle in question. All that’s left is to solve for the tangent. And, as you know, tangent is the ratio of the opposite side to the adjacent side. In this case, that’s $\dfrac{1}{\sqrt{8}}$.
So you see how by a little logic and a little algebra and by remembering your trig, you can turn an ugly expression into something nice and simple.
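As a quick numeric sanity check (mine, not the book's), you can confirm the triangle argument: tan(arcsin(1/3)) really does come out to 1/√8.

```python
import math

# Verify the worked example: tan(arcsin(1/3)) should equal 1/sqrt(8).
lhs = math.tan(math.asin(1 / 3))
rhs = 1 / math.sqrt(8)
print(lhs, rhs)  # both approximately 0.3535533906
```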
Lastly, remember that restricting the domain is a real thing you've done! It's not a thing you do with a wink at your textbook to allow you to solve certain problems. It puts real constraints on what you can do. The book defines these nicely as follows:
arcsin(sin(x)) = x, for [-π/2, π/2]
sin(arcsin(x)) = x, for [-1,1]
This is different from regular function inversion. If you have a function that adds 1 and you invert it by subtracting 1, that’s the whole mathematical picture. For trig, there’s a little more going on.
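Here's a small demo (my addition, not from the book) of why the restriction on arcsin(sin(x)) matters: you only get x back when x is inside [-π/2, π/2]. Outside that interval, you get the equivalent angle that *is* inside it.

```python
import math

x = 1.0  # inside [-pi/2, pi/2]: arcsin undoes sin exactly
print(math.asin(math.sin(x)))  # 1.0

x = 3.0  # outside the interval: you get the equivalent in-range angle
print(math.asin(math.sin(x)))  # pi - 3, about 0.14159, not 3.0
```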
Inverse Cosine
Cosine obeys pretty much the same rules, except we restrict the domain slightly differently.
arccos(cos(x)) = x, for [0, π]
cos(arccos(x)) = x, for [-1,1]
Make sense? Remember the graph of cos is just the graph of sin shifted over by π/2.
Inverse Tangent
Check out the graph of tangent and you'll readily see where we need to restrict. However, you may not notice that x never actually reaches ±π/2 (the graph has vertical asymptotes there). So, for arctan, the y values are the open interval (-π/2, π/2), as opposed to the closed interval [-π/2, π/2] for arcsine.
Let’s do the book’s example here again. I want to show how by using what you know about trig, inverse trig functions remain fairly easy to work with.
Example 13: Simplify the expression cos(arctan x)
The book wants you to know a certain trig shortcut, but I’m going to give you the complete under-the-hood version of how to answer this question. Here goes:
1) arctan(x) = θ

In other words, arctan(x) equals some angle, which I'll designate θ.

Now, draw a triangle with the following: An angle θ, the opposite side of which is labeled "O," the adjacent side of which is labeled "A," and the hypotenuse of which is labeled "H." This isn't standard nomenclature, but we're under the hood right now, so it'll help.

2) x = tan(θ)

This is obvious from (1). Really just algebra so far.

Now, from (2) and our knowledge that tangent is the ratio of the opposite to the adjacent side, we can say this:
3) O/A =x
That is, x is the ratio of opposite to adjacent. By some algebra, we can convert this into another form:
4) O = Ax
Now, from the Pythagorean Theorem, we know $O^2 + A^2 = H^2$. Combining that equation with (4), we get this:
5) $A^2 x^2 + A^2 = H^2$
Still with me? I just substituted Ax for O, which we established was legal in step (4).
Now, let’s simplify:
6) $H^2 = A^2 (1 + x^2)$
Now, let’s simplify again.
7) $\dfrac{H^2}{A^2} = 1 + x^2$
Now, you may remember that cosine is just the ratio of the adjacent and the hypotenuse, or (A/H). Knowing that, we can simplify (7) into this:
8) $\dfrac{1}{\cos^2 \theta} = 1 + x^2$
And now we can make that much prettier as:
9) $\cos \theta = \dfrac{1}{\sqrt{1+x^2}}$
OKAY, now remember at the outset we wanted cos(arctan x). Then, we redefined arctan x as θ. So, what we’re looking for is just cos(θ). And we have it in (9).
$\dfrac{1}{\sqrt{1+x^2}}$
WOOH! Now, that might seem like a lot of steps, but you could actually skip a lot if you know the trig identity that states:
$\sec^2 x = 1 + \tan^2 x$
That said, it’s always nice to know that everything under the hood makes sense.
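If you want to convince yourself numerically that the derivation came out right, here's a quick sanity check of cos(arctan x) = 1/√(1 + x²) on a few sample values (the values themselves are arbitrary, just my own picks):

```python
import math

# Spot-check the identity cos(arctan(x)) == 1/sqrt(1 + x^2)
# at a handful of sample points.
for x in [-3.0, -0.5, 0.0, 1.0, 7.25]:
    lhs = math.cos(math.atan(x))
    rhs = 1.0 / math.sqrt(1.0 + x * x)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("identity holds on all samples")
```

This is just a numeric check, not a proof, but it's a nice habit whenever you finish a long under-the-hood derivation.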
The Other Inverse Trig Functions
The chapter closes by going through the seldom-used arccosecant, arcsecant, and arccotangent. If you want’em, they’re in a tidy list on page 70. But, I suspect this is about the last you’ll ever see of them.
And that’s the end of Chapter 1! I’ll do a quick review post, then we’re on to REAL LIVE CALCULUS!
This entry was posted in Autodidaction, calculus.
### 2 Responses to Calculus! #12: Early Transcendentals 1.6C
1. Jason Dick says:
Actually, you have to restrict it to just half a cycle, or half a wavelength. With the sine, for instance, you have the half of the cycle where the sine increases, and the half where it decreases. You can only include one of these.
• ZachWeiner says:
Fixed!
http://ncatlab.org/nlab/show/cell+complex
# Contents
## Idea
A cell complex is an object in a category which is obtained by successively “gluing cells” via pushouts.
## Definition
Let $C$ be a category with colimits and equipped with a set $\mathcal{I} \subset Mor(C)$ of morphisms.
In practice $C$ is usually a cofibrantly generated model category with set $\mathcal{I}$ of generating cofibrations and set $\mathcal{J}$ of acyclic generating cofibrations.
An $\mathcal{I}$-cell complex in $C$ is an object $X$ which is connected to the initial object $\emptyset \to X$ by a transfinite composition of pushouts of the generating cofibrations in $\mathcal{I}$.
A relative $\mathcal{I}$-cell complex (relative to an object $A$) is any morphism $A \to X$ obtained this way, starting from $A$ instead of the initial object.
Revised on December 30, 2013 00:03:00 by Tim Porter (90.24.132.252)
https://math.stackexchange.com/questions/2833151/detailed-study-guides-for-mathematical-subjects-from-undergraduate-courses/2833176 | # Detailed study guides for mathematical subjects from undergraduate courses?
Several years ago, I found this course schedule from Cambridge. I also found some synopses from Oxford here$[1]$. They have been very useful because they contain complete course guides, for example (Cambridge):
Notice that it is very detailed, it has the total number of lectures, the number inside brackets is the (approximate) number of lectures to finish the contents in each paragraph. There are also very good book recomendations with a cross to mark some book which is particularly fit for the course.
Aside from that, the only other material I found that looks similar is Garrity's All the Mathematics You Missed: But Need to Know for Graduate School, it doesn't have all the details but it has a nice conversation about the subjects. I also found the following website: How to Become a Pure Mathematician (or Statistician).
I found these guides to be extremely useful.
• Do you know more universities that provide such useful syllabi/synopses/schedules? I have been looking for some time, but it seems that Oxford/Cambridge is really unique with respect to this: Some of them just give the name of one book and doesn't detail it, other may have something like this but it's restricted for official students.
• Do you know other books/websites/etc such as the ones I mentioned?
$[1]:$ I am a bit confused by the bureaucratic usage of the words "syllabus", "synopses", "schedules", etc. When I first searched for it, the name of the document was "syllabus", now it is "schedule".
Do you know more universities that provide such useful syllabi/synopses/schedules?
As you said, these are usually restricted to enrolled students, or - more commonly - they don't seem to exist at all. If anything, it's not clear to me why Cambridge produces these, at least for Part IA courses, since I'm fairly sure most e.g. introductory analysis courses will cover roughly the same material; these might be useful for the course lecturers, or perhaps for students who choose to avoid the lectures / lecture notes entirely while learning, but not much else.
Are you aware that many Cambridge lecturers actually provide their own notes for free online? These are variable in quality, and not always easy to find, but they exist and are often very good. One that sticks in my mind is Keith Carne's Geometry and Groups notes (PDF): not only does his contents page give a breakdown of what's covered in each lecture, but it even gives a breakdown of what (almost) every individual theorem means!
Do you know other books/websites/etc such as the ones I mentioned?
You might find a few things on MIT OpenCourseWare - e.g. reading off the titles here.
Also see my note at the end of this answer.
I am a bit confused by the bureaucratic usage of the words "syllabus", "synopses", "schedules", etc.
You are right to call it bureaucratic. There is no standardised usage.
Here's a question from me to you: why do you want these? Specifically, why do you want guides produced by universities? After all, this is the purpose that textbooks are designed to serve - and textbooks are written by the same people who teach at universities! So, by avoiding textbooks, it seems to me that you're missing out on quite a lot.
"They're expensive" won't wash, either: Amazon will usually give free previews of the contents pages of at least some modern textbooks. For instance, here is Burkill's Analysis book - the one with a cross next to it in your picture. The contents pages are freely viewable, and about as detailed as the Cambridge syllabus you've posted above. (This is hardly surprising, given that Burkill was at Cambridge when he wrote it.)
Similarly, many universities don't produce syllabi precisely because their lecturers will be working from an easily available textbook. This textbook will sometimes even be written by one of the lecturers, but will almost always be written by a trusted lecturer at some university. It will usually have developed out of a course that the lecturer has taught in the past, too.
• I have found the lecture notes from Oxford, there is a big file somewhere (easily findable). I'm not necessarily looking for something produced by universities, I just pointed that it seems they produce it and if there is more, I'd like to see. I have also already noticed that they point out a lot of books by professors working there, but I don't think this is a huge problem: A lot of the books are good. – Billy Rubina Jun 27 '18 at 2:23
• "[...] it's not clear to me why Cambridge produces these, [...]" Two reasons: firstly, like any venerable institution, because that's how things have been done at Cambridge for a long time, so why change? Secondly, this allows the lecturer much more freedom: because the schedules are minimal for lecturing and maximal for examining, the lecturer may cover the material however they want (within reason) and lecture whatever else they want: it is strictly their version of the course, not some textbook or other's. (1/2) – Chappers Jun 27 '18 at 13:00
• In my first year, I had lecturers who discussed the transcendentality of $e$ (in Numbers and Sets), and nonmeasurable sets (in Probability). In second year the Complex Analysis lecturer discussed some ideas he had about proving the Jordan Curve Theorem using groupoids (I don't think it worked, but that wasn't the point). In Part II Further Complex Methods, the lecturer explained some of his own research on integral transforms. (Of course, sometimes this doesn't stop lecturers setting inappropriate exam questions anyway.) (2/2) – Chappers Jun 27 '18 at 13:01
• @Chappers: I don't think I follow either of your comments. On the second: you're surely aware that lecturers at other institutions have exactly this freedom too? And in any case, this doesn't really address the question of why the university publishes these schedules; of course there always is a schedule, it's just normally kept inside the lecturer's mind or on a webpage available to students, not published online and en masse in the Reporter. I suspect the real answer is the one you first suggested - "we've always done x" - but I don't find that a convincing reason to do anything... – Billy Jun 27 '18 at 14:13
• Convincing or not, it's quite a common reason. And the Schedules aren't published in the Reporter; the Lecture List is, and the Class Lists, but not the Schedules. They're distributed to undergraduates, and made available on the Faculty website, as past Tripos papers are, and the Example Sheets are made available on the departments' websites. Some lecturers do guard their printed notes rather more jealously, often because people from other institutions have lifted from them without attribution (i.e. plagiarised them). – Chappers Jun 27 '18 at 14:54
http://mathhelpforum.com/algebra/17370-few-more-problems-class-today-print.html | # A few more problems from class today
• July 31st 2007, 01:05 PM
Ty Durdan
A few more problems from class today
1). "If 1 bogus = 15 zing, 42 klang = 25 zing, 10 klang = 88 zoot, how many bogus make 1 zoot?"
2). "Find the dimensions of a rectangle whose area is 240 cm² and perimeter is 62 cm."
• July 31st 2007, 01:36 PM
Jonboy
The area of a rectangle is $A\,=\,l\,\cdot\,w$
The perimeter of a rectangle is $P\,=\,2l\,+\,2w$
So we know the area is 240.
Using the area equation we get: $240\,=\,l\,\cdot\,w$
Also we know the perimeter is 62.
Utilizing the perimeter equation we get: $62\,=\,2l\,+\,2w$
So we have the system:
$240\,=\,l\,\cdot\,w$
$62\,=\,2l\,+\,2w$
The dimensions are $l$ and $w$. Solve for both using any methods you've learned about systems. :D
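If you want to check your algebra, here's a quick sketch of solving that system numerically. Since $l + w = 31$ and $lw = 240$, the dimensions are the roots of $t^2 - 31t + 240 = 0$:

```python
import math

# Rectangle with area 240 and perimeter 62:
# l + w = 31 and l*w = 240, so l and w are the roots of
# t^2 - 31t + 240 = 0 (quadratic formula below).
area, perimeter = 240, 62
s = perimeter / 2                 # l + w
disc = s * s - 4 * area           # discriminant
t1 = (s + math.sqrt(disc)) / 2
t2 = (s - math.sqrt(disc)) / 2
print(t1, t2)  # 16.0 15.0
```

So the rectangle is 15 cm by 16 cm, which you can verify: 15 · 16 = 240 and 2(15 + 16) = 62.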
• July 31st 2007, 02:20 PM
topsquark
Quote:
Originally Posted by Ty Durdan
1). "If 1 bogus = 15 zing, 42 klang = 25 zing, 10 klang = 88 zoot, how many bogus make 1 zoot?"
$\frac{1~zoot}{1} \cdot \frac{10~klang}{88~zoot} \cdot \frac{25~zing}{42~klang} \cdot \frac{1~bogus}{15~zing} \approx 0.004509~bogus$
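The same factor-label chain can be checked in a couple of lines (variable name is mine, just for illustration):

```python
# Chain the conversion factors exactly as in the setup above:
# zoot -> klang -> zing -> bogus.
bogus_per_zoot = (10 / 88) * (25 / 42) * (1 / 15)
print(round(bogus_per_zoot, 6))  # 0.004509
```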
-Dan
https://www.lmfdb.org/EllipticCurve/Q/98a/ | # Properties
Label: 98a
Number of curves: 6
Conductor: 98
CM: no
Rank: 0
# Related objects
Show commands for: SageMath
sage: E = EllipticCurve("98.a1")
sage: E.isogeny_class()
## Elliptic curves in class 98a
sage: E.isogeny_class().curves
| LMFDB label | Cremona label | Weierstrass coefficients | Torsion structure | Modular degree | Optimality |
| --- | --- | --- | --- | --- | --- |
| 98.a5 | 98a1 | [1, 1, 0, -25, -111] | [2] | 16 | $$\Gamma_0(N)$$-optimal |
| 98.a4 | 98a2 | [1, 1, 0, -515, -4717] | [2] | 32 | |
| 98.a6 | 98a3 | [1, 1, 0, 220, 2192] | [2] | 48 | |
| 98.a3 | 98a4 | [1, 1, 0, -1740, 22184] | [2] | 96 | |
| 98.a2 | 98a5 | [1, 1, 0, -8355, 291341] | [2] | 144 | |
| 98.a1 | 98a6 | [1, 1, 0, -133795, 18781197] | [2] | 288 | |
## Rank
sage: E.rank()
The elliptic curves in class 98a have rank $$0$$.
## Modular form 98.2.a.a
sage: E.q_eigenform(10)
$$q - q^{2} + 2q^{3} + q^{4} - 2q^{6} - q^{8} + q^{9} + 2q^{12} + 4q^{13} + q^{16} - 6q^{17} - q^{18} - 2q^{19} + O(q^{20})$$
## Isogeny matrix
sage: E.isogeny_class().matrix()
The $$i,j$$ entry is the smallest degree of a cyclic isogeny between the $$i$$-th and $$j$$-th curve in the isogeny class, in the Cremona numbering.
$$\left(\begin{array}{rrrrrr} 1 & 2 & 3 & 6 & 9 & 18 \\ 2 & 1 & 6 & 3 & 18 & 9 \\ 3 & 6 & 1 & 2 & 3 & 6 \\ 6 & 3 & 2 & 1 & 6 & 3 \\ 9 & 18 & 3 & 6 & 1 & 2 \\ 18 & 9 & 6 & 3 & 2 & 1 \end{array}\right)$$
## Isogeny graph
sage: E.isogeny_graph().plot(edge_labels=True)
The vertices are labelled with Cremona labels.
https://www.thejournal.club/c/paper/103715/ | #### Improved Parallel Construction of Wavelet Trees and Rank/Select Structures
##### Julian Shun
Existing parallel algorithms for wavelet tree construction have a work complexity of $O(n\log\sigma)$. This paper presents parallel algorithms for the problem with improved work complexity. Our first algorithm is based on parallel integer sorting and has either $O(n\log\log n\lceil\log\sigma/\sqrt{\log n\log\log n}\rceil)$ work and polylogarithmic depth, or $O(n\lceil\log\sigma/\sqrt{\log n}\rceil)$ work and sub-linear depth. We also describe another algorithm that has $O(n\lceil\log\sigma/\sqrt{\log n} \rceil)$ work and $O(\sigma+\log n)$ depth. We then show how to use similar ideas to construct variants of wavelet trees (arbitrary-shaped binary trees and multiary trees) as well as wavelet matrices in parallel with lower work complexity than prior algorithms. Finally, we show that the rank and select structures on binary sequences and multiary sequences, which are stored on wavelet tree nodes, can be constructed in parallel with improved work bounds, matching those of the best existing sequential algorithms for constructing rank and select structures.
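For readers unfamiliar with the data structure itself, here is a minimal *sequential* binary wavelet tree with a (slow) rank query, just to illustrate what the paper's parallel algorithms construct. The class and method names are my own; neither the parallel construction nor the constant-time rank/select structures from the paper are implemented here.

```python
class WaveletTree:
    """Minimal sequential binary wavelet tree over an integer alphabet."""

    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        if lo == hi or not seq:
            self.left = self.right = None   # leaf (or empty) node
            self.bits = []
            return
        mid = (lo + hi) // 2
        # One bit per symbol: 0 -> symbol goes left, 1 -> goes right.
        self.bits = [0 if c <= mid else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c <= mid], lo, mid)
        self.right = WaveletTree([c for c in seq if c > mid], mid + 1, hi)

    def rank(self, c, i):
        """Number of occurrences of symbol c in seq[:i]."""
        if self.left is None:
            return i
        mid = (self.lo + self.hi) // 2
        if c <= mid:
            # O(i) bit count here; the rank structures the paper
            # constructs answer this in constant time.
            return self.left.rank(c, self.bits[:i].count(0))
        return self.right.rank(c, self.bits[:i].count(1))


wt = WaveletTree([3, 1, 4, 1, 5, 1, 2, 6])
print(wt.rank(1, 6))  # -> 3 (three 1s among the first six symbols)
```

Each of the $O(\log\sigma)$ levels stores one bitvector, which is where the $O(n\log\sigma)$ work of naive construction comes from; the paper's algorithms reduce that bound.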
https://artofproblemsolving.com/wiki/index.php?title=2010_AMC_10B_Problems/Problem_6&oldid=34570 | # 2010 AMC 10B Problems/Problem 6
Assuming the reader does not readily see why $\angle ACB$ will always be right, I will continue with an easily understandable solution. Since $O$ is the center, $OC, OB, \text{ and } OA$ are all radii, so they are congruent. Thus, $\triangle COB$ and $\triangle COA$ are isosceles triangles. Also, note that $\angle COB$ and $\angle COA$ are supplementary, so $\angle COA = 180^\circ - 50^\circ = 130^\circ$. Since $\triangle COA$ is isosceles, $\angle OCA \cong \angle OAC$. They also sum to $50^\circ$, so each angle is $\boxed{\mathrm{(B)}\ 25^\circ}$.
https://math.stackexchange.com/questions/2144556/calculating-geodesic-curvature-for-a-general-curve | # Calculating geodesic curvature for a general curve
A cylinder of radius $R$ can be parameterized by $X(\theta, z) = [R\cos\theta, R\sin\theta, z]$, where $-\pi < \theta < \pi$ and $-\infty < z < \infty$.
Part b of a question I'm working on (studying for an exam) asks me to calculate the geodesic curvature for a general curve - I am stuck on this. Part a asks to find the metric and the normal to the surface, so I assume those quantities are useful in the part I am stuck on.
If anyone could give me guidance on how to calculate geodesic curvature for a general curve on the above surface, that would be great.
Thanks.
On the cylinder $C$ consider the curve $\gamma(t)=(\cos t,\sin t, h(t))$, where $h:\mathbb{R}\to\mathbb{R}$ is some smooth function.
the geodesic curvature of $\gamma$ is $$κ_g=\dfrac{h''(t)}{(1+h'(t)^2)^{3/2}}$$
The geodesic curvature of $\gamma$ vanishes if and only if $h(t)=at+b$ for certain constants $a$ and $b$ (a helix, which corresponds to a straight line in the plane under the unrolling isometry).
Use the isometry between the cylinder and plane to argue that the geodesic curvature of the curve $\gamma$ on the cylinder must be the same as that of the graph of $u= h(v)$ in the plane.
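The formula above is easy to evaluate numerically. Here is a small sketch (function name and sample values are my own) confirming that a helix, where $h'' = 0$, has vanishing geodesic curvature:

```python
def kappa_g(h1, h2):
    """Geodesic curvature of t -> (cos t, sin t, h(t)) on the unit
    cylinder, given h'(t) = h1 and h''(t) = h2 at the point of interest:
    kappa_g = h'' / (1 + h'^2)^(3/2).
    """
    return h2 / (1.0 + h1 * h1) ** 1.5

# A helix h(t) = a*t + b has h'' = 0, so it is a geodesic:
print(kappa_g(h1=2.0, h2=0.0))   # 0.0
# The curve h(t) = t**2 at t = 1 (h' = 2, h'' = 2):
print(kappa_g(h1=2.0, h2=2.0))   # 2 / 5**1.5, about 0.1789
```

Note this is exactly the signed curvature of the plane graph $u = h(v)$, which is the content of the isometry argument.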
http://math.stackexchange.com/questions/111225/number-of-linear-orders | # number of linear orders
It is well known that for every infinite cardinal $\kappa$ the number of non-isomorphic total orders of cardinality $\kappa$ is $2^\kappa$. Who first proved this, and in what context? Was it proved for $\kappa=\aleph_0$ first, and then for uncountable $\kappa$, or for all $\kappa$ right away?
Maybe you should provide a sketch of the proof or a reference to it. – Quinn Culver Feb 20 '12 at 14:30
Thank you for the suggestion. I am not sure what the original proof is, but I would prove it as follows: Take two sufficiently different countable linear orders $A$ and $B$. For every subset $S \subseteq \kappa$, replace each $i\in S$ by a copy of $A$, and each $j\in \kappa\setminus S$ by a copy of $B$; this will yield a linear order $L_S$, and if $A$ and $B$ were chosen suitably, all the different $L_S$ will be non-isomorphic. – g.castro Feb 20 '12 at 20:41
(continued) For example, one can choose $A$ to be $\omega+1$, and $B$ the converse ordering. From $L_S$ one can recover $S$ and $\kappa\setminus S$ by only looking at the non-isolated points in $L_S$, and checking whether they have an upper or lower neighbor. Or, as David Marker suggests in an exercise of his model theory book, let $A$ be $\mathbb Q + 1 + 1 + \mathbb Q$ (a copy of the rationals, followed by 2 discrete points, followed by another copy of the rationals), and let $B=\mathbb Q + 1 + 1 + 1 + \mathbb Q$. – g.castro Feb 20 '12 at 20:48
In short, the result for $\kappa=\aleph_0$ is due to Cantor (at least $2^{\aleph_0}$) and to Bernstein and Hausdorff, in 1901, independently (at most $2^{\aleph_0}$).
https://chemistry.stackexchange.com/questions/70172/do-the-letters-ol-in-a-substance-only-indicate-that-the-substance-is-an-alcohol | # Do the letters ol in a substance only indicate that the substance is an alcohol if they came at the end of the name of a substance?
Do the letters "ol" indicate that a substance is an alcohol only when they come together at the end of its name? For example, is glycerol an alcohol because the letters o and l come together at the end of its name, while olives and olive oil are not alcohols because the letters o and l do not come together at the end of any word in their names?
Yes, the ending ol in a name is a good hint that the compound contains an $\ce{OH}$ group, but only a hint:
• $\alpha$-tocopherol is not an aliphatic alcohol, but a substituted phenol
• mannitol does not contain one $\ce{OH}$ group, but six of them
It's getting even more confusing if you consider other languages, such as German. Benzol is the German name of benzene, which has no $\ce{OH}$ group at all. The same is true for styrol, the German word for styrene ;-)
https://learn.careers360.com/engineering/question-i-have-a-doubt-kindly-clarify-dual-nature-of-matter-and-radiation-jee-main/ | Q
# I have a doubt, kindly clarify. - Dual Nature of Matter and Radiation - JEE Main
Question is based on the following paragraph.
Wave property of electrons implies that they will show diffraction effects. Davisson and Germer demonstrated this by diffracting electrons from crystals. The law governing the diffraction from a crystal is obtained by requiring that electron waves reflected from the planes of atoms in a crystal interfere constructively (see figure).
Question : If a strong diffraction peak is observed when electrons are incident at an angle $i$ from the normal to the crystal planes with distance $d$ between them (see figure), de Broglie wavelength $\lambda _{dB}$ of electrons can be calculated by the relationship ($n$ is an integer)
Option 1) $d\; \cos i=n\lambda_{dB}\; \;$ Option 2) $d\sin i=n\lambda _{dB}$ Option 3) $2d\; \cos i=n\lambda_{dB}\; \;$ Option 4) $2d\sin i=n\lambda _{dB}$
As we learnt in Bragg’s formula:
$2d\sin \Theta = n\lambda$
where $d$ is the distance between the diffracting planes.
From the condition of constructive interference in Bragg's law:
$2d\sin\theta = n\lambda_{dB}$
Here $\theta = 90^{\circ} - i$, so
$2d\cos i = n\lambda_{dB}$
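The relation $2d\cos i = n\lambda_{dB}$ is easy to evaluate for concrete numbers. The values of $d$ and $i$ below are illustrative, not taken from the problem:

```python
import math

# Evaluate the constructive-interference condition 2*d*cos(i) = n*lambda
# for sample values (d and i here are made up for illustration).
d = 2.15e-10          # plane spacing in metres
i = math.radians(30)  # angle measured from the normal
n = 1
lam = 2 * d * math.cos(i) / n
print(f"{lam:.3e}")   # roughly 3.72e-10 m
```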
Option 1)
$d\; \cos i=n\lambda_{dB}\; \;$
This is an incorrect option.
Option 2)
$d\sin i=n\lambda _{dB}$
This is an incorrect option.
Option 3)
$2d\; \cos i=n\lambda_{dB}\; \;$
This is the correct option.
Option 4)
$2d\sin i=n\lambda _{dB}$
This is an incorrect option.
https://brilliant.org/discussions/thread/physics-doubts/ | ×
# Soumo's doubts in Physics
This note is for discussing my doubts in physics.
Note by Soumo Mukherjee
1 year, 6 months ago
Does charge need to be associated with mass?
Does charge have inertia?
"There is no known fundamental principle that forbids the existence of a mass-less charged particle, but such a particle has never been observed." - Somewhere on Internet
@Michael Mendrin , @John Muradeli help · 1 year, 6 months ago
Right now, there are only two known massless particles, photons and gluons. While photons do not carry electromagnetic charge, gluons do carry color charge. Gluons are the carriers, or bosons, of the strong nuclear force, while photons are the carriers, or bosons, of the electromagnetic force. So, don't confuse the different kinds of charges here.
This "somewhere on the internet" where that statement came from is Yahoo! Answers, not exactly a repository of accurate information on particle physics. · 1 year, 6 months ago
Yes, we can't rely on YA. But then we have Mr. Mendrin here :)
My doubt arose from this problem. The second statement.
John might require help from you. · 1 year, 6 months ago
That's funny; I was just trying to track down the answers to those questions yesterday. In some models, apparently, it is theoretically possible for a massless charged particle to exist, although none have ever been detected. As for charge having inertia ... well, I suppose it does if we look at it as "electrostatic mass". However, I'd like to see what Michael has to say regarding both questions before I venture any further on these matters. · 1 year, 6 months ago
Brian, supposing there was a massless particle with an electric charge. How do we make sense out of its "infinite acceleration" in any electric field? Then we'd have to artificially invoke the concept of a massless particle that still possesses an inertia. Well, I'm not aware of any feasible models of particle physics that would have such a thing--massless particles that act as if they nonetheless have mass! That's not to say that such models cannot theoretically exist, I just don't know of any. · 1 year, 6 months ago
Ah, o.k., thanks for setting me straight, (yet again). :) I was reading so much conflicting commentary and speculation on the subject I didn't know what to think. :P · 1 year, 6 months ago
Yeah Mendrin's your guy. Though I find that claim 'somewhere on the internet' quite stupid, since photons are the electromagnetic force-transmitting particles, that are massless. Though they don't have a 'charge' of their own; they ARE the charge.
As far as the relationship between them, I think there is one; check out Gravitomagnetism. I just know about the name, nothing else xD
Cheers! · 1 year, 6 months ago
So according to the parametric definition of dimensions, "an inflated swimming tube" is a $$2D$$ object?
@Michael Mendrin · 1 year, 6 months ago
As answered elsewhere, since a torus can be defined with $$2$$ parameters, it's a $$2D$$ object. · 1 year, 6 months ago
Yeah I'm afraid so - and it was annoying for me to get used to this. But this is VERY important if you're going to study the temporal dimensions: our universe is 4-dimensional, EMBEDDED in a 5th dimension. Otherwise all notions of 'probability' would lose meaning and our life would be a tape play-through.
But of course you can adjust your terminology and definitions to suit your needs - depending on the problem. Here's my rule of thumb: If something's 'set in stone', pour cement over it, rewrite. · 1 year, 6 months ago | 2016-10-28 21:57:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7042814493179321, "perplexity": 1487.895688209863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988725475.41/warc/CC-MAIN-20161020183845-00258-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://blog.frankgrecojr.com/elm-functional-programming-in-the-browser/ | I just got home from the Elm Workshop hosted by the Milwaukee Functional Programming User Group and RokkinCat.
Having only been introduced to the Elm language a few hours ago, I am going to attempt to give a high level overview and brief introduction of what it is, what's so special about it, and detail out a simple example - so bear with me. By the end of this post I hope to motivate you to research Elm on your own.
Before continuing, here are some key resources that will put you well on your way to becoming an Elm expert!
### What is it?
Elm was first introduced by Evan Czaplicki in his thesis, Elm: Concurrent FRP for Functional GUIs which he wrote in 2012.
At its core, Elm is just another functional programming language. There is, however, one key difference - it compiles to JavaScript. This makes it a tool for creating web applications. As a matter of fact, the Elm homepage was created using it! But why, amongst all of the other technology stacks that you can use to create your website, would you give any thought to Elm? Since Elm is a functional language, there are no mutable states. If there are no mutable states, how is content ever changed? It turns out that every change results in a new state. If this is beginning to sound familiar, that is because Elm competes with React.js. In fact, Elm outperforms React.js in certain benchmarks.
### Why is it so Special?
Elm was designed to be easy to learn and use. There are many other things that make Elm attractive, like the way it uses a Virtual DOM like React.js. However, one of the key benefits in my opinion is everything it inherits from being a functional language. In addition, Elm offers:
• No runtime errors in practice. No null. No undefined is not a function.
• Well-architected code that stays well-architected as your app grows.
• Automatically enforced semantic versioning for all Elm packages.
Elm also uses a pattern called the Elm Architecture for nesting components. Because of its modularity, code reuse, and testability, the Redux project translates the Elm architecture into JavaScript. This illustrates that even if you choose not to use Elm in a production application, there is still value in implementing its pattern into your application.
### Example
The following example is one from the official Elm guide. It reverses a string that is entered by the user:
```elm
import Html exposing (Html, Attribute, div, input, text)
import Html.App as Html
import Html.Attributes exposing (..)
import Html.Events exposing (onInput)
import String


main =
  Html.beginnerProgram { model = model, view = view, update = update }


-- MODEL

type alias Model =
  { content : String }

model =
  Model ""


-- UPDATE

type Msg
  = Change String

update : Msg -> Model -> Model
update msg model =
  case msg of
    Change newContent ->
      { model | content = newContent }


-- VIEW

view : Model -> Html Msg
view model =
  div []
    [ input [ placeholder "Text to reverse", onInput Change ] []
    , div [] [ text (String.reverse model.content) ]
    ]
```
Let's break this down...
#### Setup
The first few lines of code import modules that will be used in this app.
• import Html exposing (Html, Attribute, div, input, text) lets us use the Html, Attribute, div, input, and text functions in our app. Html is the core HTML library for Elm.
• import Html.App as Html lets us use an alias for Html.App.
• import Html.Attributes exposing (..) lets us use every function from the Html.Attributes module.
• import Html.Events exposing (onInput) lets us use the onInput event.
• import String lets us use String, one of the core Elm libraries.
The next line of code acts as the entry point into the application. Every Elm application must have this main value defined:
```elm
main =
  Html.beginnerProgram
    { model = model
    , view = view
    , update = update
    }
```
This code tells the app what the names of your model, view, and update functions are. These three functions form the basis of the Elm architecture. Note that when the state of the app changes, the update function is called, which creates a new model that is then rendered by the view function.
#### Model
The next thing the code does is create a type alias. Model represents a type and is simply an alias for { content: String }. Type aliases come in handy when your app becomes more complex.
Next, we create a variable model that is initialized to a Model where Model.content is the empty string.
#### Update
type Msg = Change String declares a new type. This is common in functional languages. Change is a data constructor that takes a String parameter. Whenever we create a value using the Change constructor, the type of that value is Msg.
Next, we define our update function. But before we do, we declare a type annotation.
```elm
update : Msg -> Model -> Model
update msg model =
  case msg of
    Change newContent ->
      { model | content = newContent }
```
Elm will infer types for us; however, for readability and to help out the compiler, it is best to define type annotations.
Here, the annotation says that update is a curried function. Note that the -> in Msg -> Model -> Model is right associative: it reads as Msg -> (Model -> Model), a function that takes a Msg as input and returns a function from Model to Model.
Next we have a function named update that takes two parameters: msg and model. This function pattern-matches on the msg parameter, and if it was built with the Change constructor, it binds the constructor's String argument to newContent and creates a new model where model.content = newContent.
#### View
view : Model -> Html Msg is another type annotation. This time we have a type named view that represents a function that takes a Model as input and returns a value of type Html (which is a type constructor with a Msg parameter).
```elm
view model =
  div []
    [ input [ placeholder "Text to reverse", onInput Change ] []
    , div [] [ text (String.reverse model.content) ]
    ]
```
Now div and input are simply Elm functions from the Html module that take two parameters:
• a list of attributes
• a list of child nodes
So, input [ placeholder "Text to reverse", onInput Change ] [] is the Html.input function called with a placeholder attribute and an onInput event handler as its attributes, and with no child nodes.
### Conclusion
Elm is a great functional programming language that allows you to create web applications. I encourage everyone to play around with it and give it a shot and even if you decide not to use it for your project, perhaps you can still benefit from its patterns. | 2018-08-18 20:08:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18179747462272644, "perplexity": 2640.3767326510033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213737.64/warc/CC-MAIN-20180818193409-20180818213409-00568.warc.gz"} |
https://www.vedantu.com/question-answer/let-a-and-b-be-independent-events-with-pa03-and-class-10-maths-cbse-5ef9bb858ba6a01435137bef | QUESTION
Let A and B be independent events with P(A)=0.3 and P(B)=0.4. Find: a) $P(A\cap B)$, b) $P(A\cup B)$, c) P(A|B), d) P(B|A)
Hint: Here we need to use the basic rules and formulas for the probability of union and intersection of events and conditional probability of two events.
We shall discuss the subparts of the question in the following points
By probability theory, the probability of intersection of two events corresponds to the situation where both events have occurred. The general formula is given by
$P(A\cap B)=P(A)\,P(B|A)=P(B)\,P(A|B)$
Where P(B|A) corresponds to the probability of the occurrence of B when A has already occurred and P(A|B) corresponds to the probability of the occurrence of A when B has already occurred. As the events are independent, the probability of occurrence of A or B is independent of occurrence of B or A respectively. Thus, P(B|A)=P(B) and P(A|B)=P(A).
Thus,
$P(A\cap B)=P(A)\,P(B)=0.3\times 0.4=0.12$
By probability theory, the probability of the union of two events corresponds to the probability of the occurrence of A or B or both. The general formula is given by
$P(A\cup B)=P(A)+P(B)-P(A\cap B)$
Using the values given in the question together with $P(A\cap B)=0.12$ found above
$P(A\cup B)=P(A)+P(B)-P(A\cap B)=0.3+0.4-0.12=0.58$
By probability theory, P(A|B) corresponds to the probability of the occurrence of A when B has already occurred. As the events are independent, the probability of occurrence of A is independent of occurrence of B. Thus, A is equally probable to occur and does not depend on whether B has occurred or not. Thus,
$P\left( A|B \right)=P\left( A \right)=0.3$
By the theory of probability, P(B|A) corresponds to the probability of the occurrence of B when A has already occurred. As the events A and B are given as independent, B is equally probable to occur and does not depend on whether A has occurred or not. Thus,
$P\left( B|A \right)=P\left( B \right)=0.4$
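As a quick numeric check of all four parts (a Python sketch; the union uses inclusion-exclusion, $P(A\cup B)=P(A)+P(B)-P(A\cap B)$):

```python
p_a, p_b = 0.3, 0.4

p_ab = p_a * p_b             # independence: P(A and B) = P(A) * P(B)
p_union = p_a + p_b - p_ab   # inclusion-exclusion for P(A or B)
p_a_given_b = p_ab / p_b     # P(A|B) = P(A and B)/P(B), reduces to P(A)
p_b_given_a = p_ab / p_a     # P(B|A) = P(A and B)/P(A), reduces to P(B)

assert abs(p_ab - 0.12) < 1e-9
assert abs(p_union - 0.58) < 1e-9
assert abs(p_a_given_b - p_a) < 1e-9 and abs(p_b_given_a - p_b) < 1e-9
```

The last assertion confirms that for independent events the conditional probabilities collapse to the unconditional ones.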
Note: We note that the rules for union and intersection of probability of two events is not the same as in set theory as in set theory, it corresponds to the space occupied by the events whereas in probability, it corresponds to the probability of the occurrence of the events. | 2020-07-11 05:53:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471354246139526, "perplexity": 128.93936127795942}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00242.warc.gz"} |
https://socratic.org/questions/what-is-the-line-formula-of-ch-3ch-2ch-2c-ch-3-3 | # What is the line formula of CH_3CH_2CH_2C(CH_3)_3?
##### 1 Answer
Apr 17, 2015
Every end and every top is carbon atom. As you see in fomula, there are 7 carbon atoms. | 2019-03-24 17:54:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6563049554824829, "perplexity": 4022.883331641908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203464.67/warc/CC-MAIN-20190324165854-20190324191854-00137.warc.gz"} |
https://9to5science.com/prove-parametric-equations-trochoid | # Prove parametric equations trochoid
Take a hard look at the picture. Take some fixed point anywhere on the circle; when the circle is rolled along, the curve drawn by that fixed point is the TROCHOID.
In the diagram I have drawn, I labelled it as point D somewhere on the circle; I have also taken the angle to be $a$.
From the diagram you can see that point D is on line CP. P started initially from the origin and the circle has rolled through an angle $a$, so arc $PR = OR = ra$ ($PCR$ is a sector with angle $a$ and arc length $ra$).
Our job is to find the x and y coordinates of D, which traces the curve.
From triangle CDQ:
$CD = d$, which is given in our question,
$CQ = d\cos a$ and $DQ = d\sin a$.
Now the x-coordinate of point D is $OR - DQ$, so
$x = ra - d\sin a$
and the y-coordinate of point D is $CR - QR$:
$y = r - d\cos a$
Hence the parametric representation of the curve is
$x = ra - d\sin a$
$y = r - d\cos a$
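These equations are easy to check numerically (a Python sketch; the specific values of $r$ and $d$ used below are illustrative):

```python
import math

def trochoid(r, d, a):
    """Point traced by a point at distance d from the center of a
    circle of radius r, after rolling through angle a along the x-axis."""
    x = r * a - d * math.sin(a)
    y = r - d * math.cos(a)
    return x, y

# d = r is the cycloid: the tracing point touches the ground at a = 0
assert trochoid(1.0, 1.0, 0.0) == (0.0, 0.0)

# d = 0 traces the circle's center, which stays at constant height r
x, y = trochoid(2.0, 0.0, math.pi)
assert math.isclose(x, 2.0 * math.pi) and y == 2.0
```

The two special cases (cusp on the ground for $d=r$, horizontal line at height $r$ for $d=0$) behave exactly as the geometry predicts.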
### Sam Creamer
Updated on August 01, 2022
• Sam Creamer 1 day
I have to show that the parametric equations of a trochoid are:
$x = r\theta - d\sin\theta$ and $y=r-d\cos\theta$
where r is the radius and d is the distance between the center of the circle and a point P.
Can someone please explain this to me? I'm in my second week of advanced Calculus, thanks
• Spine Feast almost 9 years
en.wikipedia.org/wiki/Trochoid has a reasonable derivation, have you studied it?
• Sam Creamer almost 9 years
not really... it was just thrown at me and I had never seen it before and I don't think I fully understand it
• Mr. Math almost 9 years
my pleasure hope it helps | 2022-10-06 20:14:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7716321349143982, "perplexity": 1111.0167576019194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00413.warc.gz"} |
https://mathhelpboards.com/threads/aju051000s-questions-at-yahoo-answers-involving-trigonometry.6263/ | # aju051000's questions at Yahoo! Answers involving trigonometry
#### MarkFL
Staff member
Here are the questions:
Ok so our teacher gave us 34 questions to do for an assignment and I got all of them except two. Please help me figure these out!
A two-person tent is to be made so that the height at the center is a = 4 feet (see the figure below). If the sides of the tent are to meet the ground at an angle 60°, and the tent is to be b = 8 feet in length, how many square feet of material will be needed to make the tent? (Assume that the tent has a floor and is closed at both ends, and give your answer in exact form.)
The figure below shows a walkway with a handrail. Angle α is the angle between the walkway and the horizontal, while angle β is the angle between the vertical posts of the handrail and the walkway. Use the figure below to work the problem. (Assume that the vertical posts are perpendicular to the horizontal.)
Find α if β = 62°.
thank you so so much!
I have posted a link there to this topic so the OP can see my work.
#### MarkFL
Staff member
Hello aju051000,
1.) I would first draw a diagram:
We are given the following:
$$\displaystyle a=4\text{ ft},\,b=8\text{ ft},\,\theta=60^{\circ}$$
So, we need to find $s$ and $w$. We may use:
$$\displaystyle \tan\left(60^{\circ} \right)=\frac{a}{w/2}=\frac{2a}{w}$$
$$\displaystyle w=2a\cot\left(60^{\circ} \right)=\frac{8}{\sqrt{3}}\text{ ft}$$
We should recognize that the triangular ends of the tent are equilateral (and so $s=w$), but if we didn't we could write:
$$\displaystyle \sin\left(60^{\circ} \right)=\frac{a}{s}$$
$$\displaystyle s=a\csc\left(60^{\circ} \right)=\frac{8}{\sqrt{3}}\text{ ft}$$
So, to find the surface area of the tent, we see that there are two congruent triangles and three congruent rectangles.
The area of the two triangles is:
$$\displaystyle A_T=2\cdot\frac{1}{2}\cdot\frac{8}{\sqrt{3}}\text{ ft}\cdot4\text{ ft}=\frac{32}{\sqrt{3}}\text{ ft}^2$$
The area of the three rectangles is:
$$\displaystyle A_R=3\cdot\frac{8}{\sqrt{3}}\text{ ft}\cdot8\text{ ft}=64\sqrt{3}\text{ ft}^2$$
Hence, the total surface area $A$ of the tent is:
$$\displaystyle A=A_T+A_R=\frac{32}{\sqrt{3}}\text{ ft}^2+64\sqrt{3}\text{ ft}^2=\frac{224}{\sqrt{3}}\text{ ft}^2$$
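As a numeric cross-check of this total (a Python sketch using the given dimensions; it relies on $s = w = 8/\sqrt{3}$, so all three rectangles are congruent):

```python
import math

a, b = 4.0, 8.0                # height at center and length, in feet
theta = math.radians(60)

w = 2.0 * a / math.tan(theta)  # base width of each triangular end
s = a / math.sin(theta)        # slant side; here s == w == 8/sqrt(3)

triangles = 2.0 * 0.5 * w * a  # the two triangular ends
rectangles = 3.0 * s * b       # two roof panels plus the floor
total = triangles + rectangles

assert math.isclose(s, w)
assert math.isclose(total, 224.0 / math.sqrt(3))
print(round(total, 2))         # about 129.33 square feet
```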
2.) Again, let's first draw a diagram:
We can now easily see that $\alpha$ and $\beta$ are complementary, hence:
$$\displaystyle \alpha+\beta=90^{\circ}$$
$$\displaystyle \alpha=90^{\circ}-\beta$$
With $\beta=62^{\circ}$, we find:
$$\displaystyle \alpha=(90-62)^{\circ}=28^{\circ}$$ | 2021-12-07 23:33:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8559593558311462, "perplexity": 495.6166158790795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363420.81/warc/CC-MAIN-20211207232140-20211208022140-00539.warc.gz"} |
https://physics.stackexchange.com/questions/438621/significance-of-the-complex-component-in-the-underdamped-harmonic-motion-equatio | # Significance of the complex component in the underdamped harmonic motion equation [closed]
The following differential equation represents the motion of a body of mass $$m$$ and displacement $$x$$ from the mean position, that is attached to a spring of force constant $$a$$ and viscous damping coefficient $$b$$ : $$\bbox[5px,border:1px solid black] { m\frac{d^2x}{dt^2}=-ax-b\frac{dx}{dt}}$$
On rearrangement of the above equation we get the following: $$\bbox[5px,border:1px solid black] { \frac{d^2x}{dt^2}+\frac bm\cdot\frac{dx}{dt}+\frac amx=0 }$$ Defining $$k\triangleq\frac{b}{2m}$$ and $$\omega^2=\frac am$$, we have: $$\bbox[5px,border:1px solid black] { \frac{d^2x}{dt^2}+2k\frac{dx}{dt}+\omega^2x=0 }$$
Hence, we solve it as:
In the situation of the undamped oscillation (shown in the image below):
• What is the significance of the term $$i(A-B)\sin(\omega't)$$?
• And also is the line highlighted with green justified? Why or why not?
## closed as off-topic by Aaron Stevens, Kyle Kanos, ZeroTheHero, Jon Custer, user191954 Nov 16 '18 at 17:03
• Term with iota? Did you mean "phi" ($\phi$)? – Vinicius ACP Nov 3 '18 at 8:50
• No i(A-B)sin(w't) – Abhishek Ghosh Nov 3 '18 at 8:51
• Also, I'm voting to move this to Mathematics SE, since this is not a physics question, even though it has applications in physics. – Aaron Stevens Nov 3 '18 at 11:03
• The green part along with the purple part are just assignment of two new variables (C and $\phi$) instead of the old variables (A and B). Note that the new variables can be complex. The final line in the image presents the solution using these new variables. The advantage of the new presentation is that you can easily relate it to initial conditions, say $x(t=0)$ and $v(t=0)$. Initial conditions, being real, will enforce C and $\phi$ to be real as well. – npojo Nov 3 '18 at 14:40
We have a 2nd order Linear Homogeneous Ordinary Differential Equation (LH-ODE): $$\frac{d^2x}{dt^2}+2k\frac{dx}{dt}+\omega^2x=0 \tag{I}$$
Its general solution is, as pointed in the question: $$x(t)=e^{-kt}\cdot(Ae^{i\omega't}+Be^{-i\omega't})$$
## What is the significance of the term $$i(A-B)\sin(\omega't)$$ ?
Using Euler's formula $$(e^{i\theta}=\cos\theta+i\sin\theta)$$, we have: $$\begin{cases} e^{i\omega't}=\cos(\omega't)+i\sin(\omega't) \\ e^{-i\omega't}=\cos(-\omega't)+i\sin(-\omega't)=\cos(\omega't)-i\sin(\omega't) \end{cases}$$
Therefore: \begin{align} x(t) & = e^{-kt}\cdot\big[A\cos(\omega't)+iA\sin(\omega't)+B\cos(\omega't)-iB\sin(\omega't)\big]\\ & =e^{-kt}\cdot\big[(A+B)\cos\omega't+i(A-B)\sin\omega't\big]\\ & =e^{-kt}\cdot(A+B)\cos\omega't+i\cdot e^{-kt}(A-B)\sin\omega't \tag{II} \end{align}
Note that the equation $$\text{(II)}$$ above is still the general solution of $$\text{(I)}$$. Now let's remember the linearity of the solutions and the superposition principle valid for any LH-ODE:
If $$x_1(t)$$ and $$x_2(t)$$ are particular solutions, then $$x_1(t)+x_2(t)$$ is also a particular solution;
If $$x(t)$$ is a particular solution, then $$u\cdot x(t)$$, where $$u\in\mathbb{C}$$ is an arbitrary constant, is also a particular solution;
The general solution in given by: $$x(t)=u\cdot x_1(t)+v\cdot x_2(t)$$, where $$u,v\in\mathbb{C}$$ are arbitrary constants.
So, comparing the above general solution with $$\text{(II)}$$: $$\begin{cases} x_1(t)=e^{-kt}\cdot\cos(\omega't) & \text{and}\space\space\space\space u=A+B \\ x_2(t)=e^{-kt}\cdot\sin(\omega't) & \text{and}\space\space\space\space v=i\cdot(A-B) \end{cases}$$ Therefore, $$\space i\cdot(A-B)\sin\omega't\space$$ by itself has no meaning, but $$\space i\cdot (A-B)e^{-kt}\sin\omega't\space$$ does: it is one of the particular solutions of $$\text{(I)}$$.
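One can also verify numerically that $e^{-kt}\sin(\omega' t)$ solves $\text{(I)}$ (a Python sketch via central finite differences; the values of $k$ and $\omega$ are arbitrary underdamped choices, with $\omega'=\sqrt{\omega^2-k^2}$):

```python
import math

k, w = 0.5, 2.0                # arbitrary underdamped parameters (k < w)
wp = math.sqrt(w * w - k * k)  # omega' = sqrt(omega^2 - k^2)

def x(t):
    # the particular solution x_2(t) = e^{-kt} sin(omega' t)
    return math.exp(-k * t) * math.sin(wp * t)

h = 1e-5                       # finite-difference step
for t in (0.3, 1.0, 2.7):
    x1 = (x(t + h) - x(t - h)) / (2.0 * h)          # approximates x'(t)
    x2 = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2  # approximates x''(t)
    residual = x2 + 2.0 * k * x1 + w * w * x(t)     # left side of (I)
    assert abs(residual) < 1e-4   # vanishes up to discretization error
```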
## What is the justification for the use of $$\space i\cdot(A-B)=C\cos\phi$$ ?
First, let's find the values of $$A$$ and $$B$$ : \begin{align} \begin{cases} A+B=C\sin\phi\\ i\cdot(A-B)=C\cos\phi \end{cases} \iff \begin{cases} A=\frac C2\left(\sin\phi+i\cos\phi\right)\\ B=\frac C2\left(\sin\phi-i\cos\phi\right)\\ \end{cases} \,\end{align} We can see that $$A=B^{\space*}$$, where $$*$$ denotes complex conjugation. Moreover, $$\frac C2\left(\sin\phi+i\cos\phi\right)$$ and $$\frac C2\left(\sin\phi-i\cos\phi\right)$$ are the polar forms of $$A$$ and $$B$$. If the values of $$A$$ and $$B$$ are substituted in $$\text{(II)}$$, we have: \begin{align} x(t) &= e^{-kt}\cdot\big[C\space\sin\phi\space\cos(\omega't)+C\space\cos\phi\space\sin(\omega't)\big]\\ &=C e^{-kt}\cdot\sin(\omega't+\phi) \end{align} So, the question now becomes "Why were the constants $$A$$ and $$B$$ chosen in this way?"
For three reasons, which I'll list below:
• In a physical problem, $$x(t)$$ must be real (We can't have the position $$x(t)=7+3i$$ meters, for example) and the choice of $$A=B^{\space*}$$ ensures that $$x(t)$$ is real. This becomes more evident if we take the rectangular form of $$A$$ and $$B$$ and substitute them in $$\text{(II)}$$:
\left. \begin{aligned} A &\triangleq a+ib\\ B &\triangleq a-ib\ \end{aligned} \right\} \implies \begin{aligned}[t] x(t) &= e^{-kt}\cdot\big[(a+ib+a-ib)\cos\omega't+i(a+ib-a+ib)\sin\omega't\big]\\ &= e^{-kt}\cdot\big[(2a)\cos\omega't+(-2b)\sin\omega't\big]\\ &\therefore\space \forall t, \space x(t)\in\mathbb{R} \end{aligned}
• The choice of $$\sin\phi+i\cos\phi$$ instead of the "more natural" way $$\cos\phi+i\sin\phi$$ is arbitrary, because if we use $$\phi=\frac{\pi}{2}-\theta\space$$ (without loss of generality), it's possible to see that: $$\sin\phi+i\cos\phi=\sin\left(\frac{\pi}{2}-\theta\right)+i\cos\left(\frac{\pi}{2}-\theta\right)=\cos\theta+i\sin\theta$$ So if we make the substitutions in $$\text{(II)}$$:
– If we choose $$A=B^{\space*}=\frac C2\left(\sin\phi+i\cos\phi\right)$$, then we have $$x(t)=C e^{-kt}\cdot\sin(\omega't+\phi)$$
– If we choose $$A=B^{\space*}=\frac C2\left(\cos\phi+i\sin\phi\right)$$, then we have $$x(t)=C e^{-kt}\cdot\cos(\omega't+\phi)$$
(And both forms are valid to represent the position $$x(t)$$ in the underdamped harmonic motion)
• The choice of $$\frac C2$$ and not just $$C$$ is for convenience:
– If we choose $$A=B^{\space*}=\frac C2\left(\sin\phi+i\cos\phi\right)$$, then we have $$x(t)=C e^{-kt}\cdot\sin(\omega't+\phi)$$
– If we choose $$A=B^{\space*}=C\left(\sin\phi+i\cos\phi\right)$$, then we have $$x(t)=2 C e^{-kt}\cdot\sin(\omega't+\phi)$$
(And both forms are valid to represent the position $$x(t)$$ in the underdamped harmonic motion, but no one uses the latter, because if we define $$D\triangleq2C$$ it's easy to see that the two forms are equivalent)
• I don't have at least 15 reputation so I can't upvote – Abhishek Ghosh Nov 3 '18 at 18:54
• But I wish I could – Abhishek Ghosh Nov 3 '18 at 18:55
This differential equation is linear with constant coefficients. Mathematically, the simplest way to solve such equations is to find the solutions which are complex numbers, even when the coefficients of the equation are real.
The reason is that if the auxiliary polynomial (used to find the values of $$\alpha$$ in the notation of the OP's images) is of degree $$n$$, it always has $$n$$ complex roots, which lead to different solutions of the differential equation.
If the coefficients of the differential equation are all real, the roots are either real or are in complex conjugate pairs, as in the OP's example when the motion is underdamped, so you can combine the two complex solutions for a complex pair of roots to get one solution which is real and another which is pure imaginary (i.e. the real part is zero).
Since the equation is linear, you can multiply any solution by a constant (either real or complex), so multiplying the pure imaginary solution by $$i$$ (or more usually, by $$-i$$, since $$-i\cdot i = 1$$) gives a second real solution.
It is often "neater" to do the math using the complex solutions until the final step of interpreting the results, which physically must be real of course, since $$Ce^{i\omega t}$$ where $$C$$ is a complex constant is more compact than $$A \sin(\omega t) + B \cos(\omega t)$$ or $$A\sin(\omega t + \phi)$$.
• you mean to say that in the above situation exp(-kt).(A+B)cos(w't) is a purely real solution while exp(-kt).(i(A-B)sin(w't)) is the purely imaginary solution ... The differential equation being linear, we can multiply exp(-kt)(i(A-B)sin(w't)) by '-i' to get another purely real solution. Did I get you right?? – Abhishek Ghosh Nov 3 '18 at 13:58
http://billybob884.deviantart.com/art/Fallout-3-Ammo-Assembled-154119191

### Details
Submitted on
February 14, 2010
Image Size
5.5 MB
Resolution
2285×3799
### Camera Data
Make
Canon
Model
Canon PowerShot SD750
Shutter Speed
1/13 second
Aperture
F/2.8
Focal Length
6 mm
ISO Speed
200
Date Taken
Feb 14, 2010, 3:27:51 PM
# Fallout 3 Ammo Assembled

by billybob884
~^~^~ UPDATE 3! ~^~^~
Added a shot of everything I have down here with me for scale. w00t!
Also, the missile wouldn't fit in the case too... but considering I only designed it for the shotgun, it didn't work out too badly.
- - - - - - - - - - - - - - - - - - - - - -
~^~^~ UPDATE 2! ~^~^~
Added some shots of the Mesmetron Power Cell; looks exactly like the Energy Cell, except the batteries are blue instead of yellow. I actually used the same gray block for both, just swapped the batteries for the shot.
Also, replaced the glowing Alien Power Cell picture with a better shot.
- - - - - - - - - - - - - - - - - - - - - -
~^~^~ UPDATE ~^~^~
At the request of PitchblackDragon, I have added the Railway Spikes to this collection. They are technically untested, but they're pretty simple, I doubt there will be a problem.
- - - - - - - - - - - - - - - - - - - - - -
I've gotta admit, my motivation behind this set was twofold. It was at least partially due to my frustration with the inefficiency of ammo farming off of raiders to get more Alien Power Cells, so I figured, if I couldn't (easily) have more in the game, why not make more for my model?!
Yea... ok, and I just haven't done anything in a while and wanted to try and force myself to get back into it with something easy. Sue me (not you, Bethesda!). So after I quick did one for the APC, I had to do one for the Combat Shotgun, since I brought it down to school with me and everything, and then again for the Laser Pistol, and then it just kinda snowballed from there.
I've actually included several other types in the PDF, but didn't actually bother to build them all because they either didn’t really interest me, or only appear as simple boxes of bullets, and what fun is that? The only ones I didn't actually make patterns for were some of the ones that came in messy crates I was just too lazy to do, like the .308 and the 5mm, oh, and it turns out I forgot the Railroad Spikes... whatever. If I do get interest for any of them I suppose I could make a pattern, but otherwise the list stands as-is.
Set Includes:
.32 Caliber (.32 Pistol, Hunting Rifle) . . . . . . . . . . . . . . . . . . . . . . .Tin
.44 Round, Magnum (Scoped .44 Magnum) . . . . . . . . . . . . . . . . . .Box
5.56mm Magazine (Assault Rifle) . . . . . . . . . . . . . . . . . . . . . . . . . Magazine
10mm (10mm Pistol, 10mm SMG, etc) . . . . . . . . . . . . . . . . . . . . . .Box
Alien Power Cell (Alien Blaster) (2 versions) . . . . . . . . . . . . . . . . . . Cell
BB's (BB Gun) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tin
Darts (Dart Gun) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Box
Electron Charge Pack (Gatling Laser) . . . . . . . . . . . . . . . . . . . . . .Cell
Energy Cell (Laser Pistol, Plasma Pistol) . . . . . . . . . . . . . . . . . . . .Cell Clip
Flamer Fuel (Flamer) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Tank
Mesmetron Power Cell (Mesmetron) . . . . . . . . . . . . . . . . . . . . . . .Cell Clip
Microfusion Cell (Laser Rifle, Plasma Rifle) . . . . . . . . . . . . . . . . . . Cell
Mini Nuke (Fat Man, MIRV) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bomb
Missile (Missile Launcher) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Bomb
Railway Spikes (Railway Rifle) . . . . . . . . . . . . . . . . . . . . . . . . . . . Spikes
Shotgun Shell (Sawed-Off Shotgun, Combat Shotgun) . . . . . . . . Box
They're all made to scale, except for the Darts. I would have had to cut the entire thing in half since it’s just too long to fit on a page, and it would have looked kinda crappy.
Heh, the Mesmetron Power Cell and Electron Charge Pack were actually the easiest for me to do, as they have the exact same mesh as the Energy Cell and Microfusion Cell (respectively). All I had to do was swap out the textures in PePaKuRa, and wham! Two more on the list.
The reason there are two version of the Alien Power Cell is better explained in the PDF (on the first page), but one is made to have the LEDs and the other isn't. Anyway! PDO’s are included, as usual, though you really shouldn’t need them for half of these... so have fun! Oh, and my final builds measured to:
Alien Power Cell . . . . . . . . . . . . . . . . . . . . . . . . .1.5" (3.5cm) x 1.5" (3.5cm) x 4" (10cm)
.32 Caliber (approx.) . . . . . . . . . . . . . . . . . . . . . . 4" (10.5cm) x 7" (17.5cm) x 2" (5.5cm)
.44 Round, Magnum (approx.) . . . . . . . . . . . . . . . 5.5" (14cm) x 2.5" (6.5cm) x 3" (7.5cm)
10mm (approx.) . . . . . . . . . . . . . . . . . . . . . . . . . . 3" (7.5cm) x 5.5" (14cm) x 2.5" (6.5cm)
5.56mm Magazine (approx.) . . . . . . . . . . . . . . . . .5.5" (14.5cm) x 1" (2cm) x 6.5" (17cm)
BB's . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3" (7.5cm) x 3" (7.5cm) x 1" (2.5cm)
Darts (approx.) . . . . . . . . . . . . . . . . . . . . . . . . . . .4.5" (11.5cm) x 1.5" (4.5cm) x 8.5" (22cm)
Energy Cell/Mesmetron Power Cell . . . . . . . . . . 2.5" (6cm) x 2.5" (5.5cm) x 1" (3cm)
Flamer Fuel (approx.) . . . . . . . . . . . . . . . . . . . . . .4.5" (11cm) x 4.5" (11cm) x 11.5" (28.5cm)
Microfusion Cell/Electron Charge Pack . . . . . . . .2" (5cm) x 2" (5cm) x 3" (7.5cm)
Mini Nuke . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5" (12.5cm) x 5" (12.5cm) x 9.5" (23.5cm)
Missile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5" (6.5cm) x 2.5" (6.5cm) x 17" (42.5cm)
Railway Spikes (approx.) . . . . . . . . . . . . . . . . . . . 5" (12.5cm) x 3" (7.5cm) x 3" (7cm)
Shotgun Shells . . . . . . . . . . . . . . . . . . . . . . . . . .4" (10cm) x 4" (10cm) x 3" (7.5cm)
PDF of Pieces
PePaKuRa PDO's
This is part of a series:
Fallout 3 Ammo
Combat Shotgun
Alien Blaster
AEP7 Laser Pistol
[Difficulty chart: a 1-to-5 scale running from "Piece o' Cake" to "Holy Crap!", with the marker placed around the middle of the scale.]
Note, difficulty varies between models, above rating is "on average"
Have you built this or one of my other models? Feel free to let me know how hard you thought they were!
How did you take the game models and make them into pepakura files?
Hi, nice collection, I use your Mini Nuke model to decorate Fallout RobCo terminal. toudi2.deviantart.com/art/Fall…
looks great!
Holy talon company. Paint me green and call me leo, this is impressing. Looks like the absolute thing
Sep 15, 2012 Student Digital Artist
this looks awesome!! Really good job!
thanks!
Jul 28, 2012 General Artist
I like the mini nuke the best. good work on all of them.
Where the hell can I find these things to print out.....
https://www.physicsforums.com/threads/calculus-algebra-question.64924/

# Calculus - Algebra question
1. Feb 25, 2005
### lektor
The Numerical Values of the circumference and area of a circle add up to 16.
Determine the radius to 4 sf.
Well so far me and my friend have been working on this and all of our results have been different to the final question, :<
We first tried to manipulate with the formula pi x r^2 = a without any success, it would be appreciated if someone could clarify this question.
Yes, maybe not a very hard question but it has confused us.
Last edited: Feb 25, 2005
2. Feb 25, 2005
### Integral
Staff Emeritus
You know that the:
Area + Circumference = 16
Area = $\pi r^2$
Circumference = $\pi r$
all you need do is solve the above relationship for r.
3. Feb 25, 2005
### danne89
Denote the area as x and the circumference as y
x + y = 16
x=Pi(y/2)^2
Then substitute y in terms of x in the second equation and solve it.
4. Feb 25, 2005
### scholar
Probably a typo, but I'm fairly sure that Circumference = $2\pi r$
5. Feb 25, 2005
### lektor
hah
Btw for thoose who are wondering the answer was 1.4683
6. Feb 25, 2005
### WORLD-HEN
why is this a "calculus-algebra" question?
7. Feb 25, 2005
### lektor
In the New Zealand Curriculum.
Calculus is comprised of
Complex numbers/algebra
Differentiation
Integration
Conics
So by New Zealand standards this is a Calculus - Algebra question
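For completeness (this check is added here, not part of the thread): the condition reduces to the quadratic $\pi r^2 + 2\pi r - 16 = 0$, whose positive root gives the radius:

```python
import math

# pi*r^2 + 2*pi*r = 16  ->  pi*r^2 + 2*pi*r - 16 = 0
a, b, c = math.pi, 2 * math.pi, -16.0
r = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)   # take the positive root
print(round(r, 4))  # 1.4684
```

(so the thread's quoted 1.4683 appears to be a truncation rather than a rounding of $r \approx 1.46839$).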
https://forum.dynare.org/t/epstein-zin-preferences-in-basic-new-keynesian-model/18137/6

# Epstein-Zin Preferences in basic New Keynesian model
Dear Dynare forum,
I am trying to implement a basic NK model with Epstein-Zin preferences. I am aware that professor Pfeifer has replicated some papers that use EZ preferences (thanks a lot for that btw), but I am avoiding the explicit preferences suggested by EZ and using a friendlier format:
V_t = u(C_t, L_t) + \beta ( E_t V_{t+1}^{1-\gamma_V} )^{1/(1-\gamma_V)}
The main issue with this implementation is that I am using an instantaneous CRRA utility format, implying that u(.) < 0 everywhere. To accommodate this feature I adjusted the recursive preferences format accordingly:
V_t = u(C_t, L_t) - \beta( E_t (-V_{t+1})^{1-\gamma_V} )^{1/(1-\gamma_V)}
This implies some small changes in the stochastic discount factor. It also implies that the risk aversion parameter, \gamma_V, should be negative to capture risk aversion.
I am having trouble to solve the model. Blanchard-Kahn and Rank conditions are not being met. I don’t know what is wrong because the calibration seems standard, it should work fine, unless this specification of EZ preferences does not work.
Please find the mod file attached. For a reference on the specification of EZ preferences please check: “The Bond Premium in a DSGE Model with Long-Run Real and Nominal Risks”, from Glenn D. Rudebusch and Eric T. Swanson (2012).
Appreciate any help.
epstein_dynare.mod (3.3 KB)
I hope this will help:
Please first verify that the model works with CRRA preferences. One immediate issue is An infinity of steady states with Taylor rules
It helps a lot! I am a fan of professor Eric Sims' notes. I am quite surprised that I did not see these. In the notes, he cites the problem of the negativity of utility, but does not include the solution proposed by the authors.
Although his implementation appears to work fine, those equations do not avoid taking a root of a negative number to compute the model steady state. I will send an email to him requesting the mod file. Thanks!
Thanks for you reply professor ! I was not aware of this problem, it makes a lot of sense. To correct it I defined an additional parameter called “Pi_ss” instead of using the STEADY_STATE() operator in the Taylor rule. I’ve attached this updated mod file.
epstein_dynare.mod (3.3 KB)
However, Blanchard Kahn conditions are still not being met. The model works fine with standard CRRA preferences.
Your problem may be numeric. See
STEADY-STATE RESULTS:
https://math.stackexchange.com/questions/376900/binomial-model-probability

# Binomial Model Probability
Can someone explain how to solve the following stats problem:
68% of students study for an exam. Of those who study, 97% pass. Of those who do not study, 60% pass. What is the probability that a teenager who passes the exam did not study?
Let $P$ denote the event the student passes, and let $NS$ denote the event the student did not study. We want the conditional probability $\Pr(NS|P)$. By the definition of conditional probability, we have $$\Pr(NS|P)=\frac{\Pr(NS\cap P)}{\Pr(P)}.$$ We want to calculate the two probabilities on the right.
Let's do the hard part first, and find $\Pr(P)$. Passing can happen in two ways: (i) did not study and passed or (ii) studied and passed.
For the probability of (i), the probability a student does not study is $0.32$. Given she does not study, the probability she passes is $0.60$. So the probability of (i) is $(0.32)(0.60)$.
Remark: There is no strong connection between this problem and the binomial distribution.
Similarly, the probability of (ii) is $(0.68)(0.97)$.
For $\Pr(P)$, add the answers to (i) and (ii).
Now we want the numerator, the probability of $NS\cap P$. We have already computed this, it is the probability of (i).
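Carrying out the arithmetic described above (a quick numeric check added here, not part of the original answer):

```python
p_ns_and_p = 0.32 * 0.60         # (i): did not study and passed
p_s_and_p = 0.68 * 0.97          # (ii): studied and passed
p_p = p_ns_and_p + p_s_and_p     # Pr(P), total probability of passing
p_ns_given_p = p_ns_and_p / p_p  # Pr(NS | P)
print(round(p_ns_given_p, 4))    # 0.2255
```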
• This is extremely helpful, thank you. – user75133 Apr 30 '13 at 5:02
• You are welcome. Perhaps if you have further questions, you could indicate what you have tried, so that answers can focus on your individual source of difficulty. – André Nicolas Apr 30 '13 at 5:05
https://www.clawpack.org/release_5_8_0.html

# v5.8.0 release notes¶
Clawpack 5.8.0 was released on February 4, 2021. See Installing Clawpack.
Permanent DOI: http://doi.org/10.5281/zenodo.4503024
Changes relative to Clawpack 5.7.1 (Sept. 11, 2020) are shown below.
To see more recent changes that are in the master branch but not yet released, see Changes to master since v5.8.0.
## Changes that are not backward compatible¶
• For AMRClaw and GeoClaw, the data file amr.data now created from setrun.py now includes an additional line with the parameter memsize specifying the initial length of the alloc array used for allocating memory to patches when adaptive refinement is used. This can be specified in setrun.py by setting amrdata.memsize. If it is not set, then default values are used that are similar to past default values; see Specifying AMRClaw run-time parameters in setrun.py. So this is backward compatible in the sense that no changes to setrun.py are required, but the old amr.data files will not work so you may need to do make data to create a new version.
• In GeoClaw, refinement “regions” can no longer be specified implicitly when listing a topo, dtopo, or qinit file. See the geoclaw section below. Note: You may need to explicitly declare new regions or flagregions to produce the same behavior as in past versions of GeoClaw.
• The GeoClaw transverse Riemann solver rpt2_geoclaw.f has been improved and produces slightly different computed results in some cases. For more details see the riemann and geoclaw sections below.
• For AMRClaw and GeoClaw, an additional short array is saved in a checkpoint file for use in a restart. Due to this change, a checkpoint file created using a previous version of Clawpack cannot be used for a restart with the new version.
## General changes¶
The travis tests that automatically run on pull requests no longer test using Python2, only Python3. See Dropping support for Python 2.7.
## Changes to visclaw¶
• ClawPlotAxes.skip_patches_outside_xylimits does not work properly if there is a mapc2p function defining a grid mapping, so it is now ignored in this case.
## Changes to riemann¶
• The GeoClaw transverse solver rpt2_geoclaw.f was modified to fix some long-standing bugs and change some of the logic.
The new version gives slightly different results on most problems, but extensive testing indicates the new results are at least as good as the old. The new version has also been refactored to make the logic clearer and to avoid some unnecessary work, and generally runs faster. In some cases where instabilities had been observed in long-duration runs (particularly for storm surge), the new version appears to provide better stability.
In particular, the left- and right-going waves are now split up transversely using states in the cell to the left (resp. right) in which the splitting is performed, rather than using Roe averages based on the cell from which the wave originates.
## Changes to amrclaw¶
• An additional short array is saved in a checkpoint file for use in a restart. Due to this change, a checkpoint file created using a previous version of Clawpack cannot be used for a restart with the new version.
• A memsize parameter can now be set in setrun.py, see above and Specifying AMRClaw run-time parameters in setrun.py.
• src/2d/prepc.f was improved to use less storage from the work array alloc that is used for memory allocation for AMR patches. For large-scale problems this can be a substantial savings and allow running larger problems.
## Changes to geoclaw¶
Several changes were made to fix long-standing bugs. These fixes lead to slightly different results than those obtained with previous versions of GeoClaw. In all the tests performed so far the changes are minor and it is thought that the new version is at least as accurate as the old version. Please let the developers know if you run into problems that may be related to these changes.
• In filpatch.f90: The slope chosen for interpolating from a coarse grid to the ghost cells of a fine-grid patch had an index error that could affect the sign of the slope used in momentum components. Also, slopes were not always properly initialized to zero at the start of a loop.
• Some index errors were fixed in fgmax_interp.f90.
• Changes to riemann/src/rpt2_geoclaw.f90. These cause some change in results but tests have shown the new results appear to be at least as good as previous results and the code may be more stable in some situations. For more detail see the “Changes to riemann” above.
• The new flagregions introduced in v5.7.0 (see Specifying flagregions for adaptive refinement) were not implemented properly in GeoClaw, and in some situations refinement to a maxlevel that was indicated only in flagregion was not allowed as expected. This is now fixed.
• In previous versions of GeoClaw one could implicitly define AMR flag regions that are aligned with the spatial extent of topo, dtopo, or qinit files by specifying minlevel, maxlevel (and in the case of topo files, a time interval t1, t2) when the file name is given. This feature did not always work as advertised and was often confusing. If these values are specified then they are now ignored, as explained in more detail in the following items. Note that you may have to explicitly declare new flag regions now in order to have the expected refinement regions.
• When specifying topo files in setrun.py using the format:
[topotype, minlevel, maxlevel, t1, t2, fname]
the values minlevel, maxlevel, t1, t2 will now be ignored. To avoid warning messages, instead specify:
[topotype, fname]
• When specifying dtopo files in setrun.py using the format:
[topotype, minlevel, maxlevel, fname]
the values minlevel, maxlevel will now be ignored. To avoid warning messages, instead specify:
[topotype, fname]
• When specifying qinit files in setrun.py using the format:
[minlevel, maxlevel, fname]
the values minlevel, maxlevel will now be ignored. To avoid warning messages, instead specify:
[fname]
• A memsize parameter can now be set in setrun.py, see above and Specifying AMRClaw run-time parameters in setrun.py.
• An additional short array is saved in a checkpoint file for use in a restart. Due to this change, a checkpoint file created using a previous version of Clawpack cannot be used for a restart with the new version.
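As an illustrative sketch (not taken from the release notes; the attribute paths follow the usual GeoClaw setrun.py layout and the file names and values are placeholders), a setrun.py using the new conventions might contain:

```python
# Inside setrun() in setrun.py, after the rundata object has been created:

rundata.amrdata.memsize = 2**26   # optional: initial length of the alloc array

# topo files: [topotype, fname] -- minlevel, maxlevel, t1, t2 are now ignored
rundata.topo_data.topofiles.append([3, 'topo_file.tt3'])

# dtopo files: [dtopotype, fname] -- minlevel, maxlevel are now ignored
rundata.dtopo_data.dtopofiles.append([3, 'dtopo_file.tt3'])

# qinit files: [fname] -- minlevel, maxlevel are now ignored
rundata.qinit_data.qinitfiles.append(['qinit_file.xyz'])
```

Any refinement behavior that used to come from the implicit regions must now be declared explicitly as regions or flagregions.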
## Changes to PyClaw¶
For changes in PyClaw, see the PyClaw changelog.
See pyclaw diffs
# Other Clawpack Repositories¶
The repositories below are not included in the Clawpack tarfile or pip install, but changes to these repositories may also be of interest.
https://michalwojcik.com.pl/2020/11/14/tests-in-bash/

# Tests in bash
Tests are a fundamental feature of bash scripting. However, tests in bash are not the same as tests for an application: they do not check whether your script works correctly; they are a way to write expressions that can be true or false.
```
$ [ 1 = 0 ]
$ echo $?
1 // false
$ [ 1 = 1 ]
$ echo $?
0 // true
```
The `$?` variable is a special variable that gives you a number telling you the result of the last-executed command. If it returned true, the number will (usually) be `0`. If it didn't, the number will (usually) not be `0`.

Unlike in many programming languages, there is no difference between using a single equal sign and a double one:

```
$ [ 1 = 1 ]
$ echo $?
0 // true
$ [ 1 == 1 ]
$ echo $?
0 // true
```

# Space required

Since `[` and `]` are builtins, the space after them is required. `[` is a separate command, and spacing is how bash determines where one command ends and another begins.

```
$ [1 == 1]
$ echo $?
127
$ [ 1 == 1 ]
$ echo $?
0 // true
```

# [ vs [[

The difference between them is very subtle; try the code below to find out what it actually is.

```
$ unset DOESNOTEXIST
$ [ ${DOESNOTEXIST} = '' ]
bash: [: =: unary operator expected
$ echo $?
2 // misuse of builtin
$ [[ ${DOESNOTEXIST} = '' ]]
$ echo $?
0
```
The first command, with a single `[`, evaluates to `[ = '' ]` (the unset variable simply disappears), thus an error is thrown. The command with `[[` transforms the empty variable into an empty string, giving `[[ '' = '' ]]`, and the test is true.
In practice you should use `[[` unless there is a good reason not to. More on the differences between these two can be found here.
## Unary operators
```
$ echo $PWD
/home/root
$ [[ -z $PWD ]]
$ echo $?
1 // false
```
`-z` returns true only if the argument is an empty string. Interestingly, the test will also pass if we provide an empty (unset) variable:
```
$ echo $FAKE

$ [[ -z $FAKE ]]
$ echo $?
0 // TRUE!
```
Two other common unary operators are `-a` and `-d`.
```
$ touch file
$ [[ -a file ]]
$ echo $?
0 // TRUE = file exists
$ [[ -a second_file ]]
$ echo $?
1 // FALSE = file does not exist
$ mkdir folder
$ [[ -d folder ]]
$ echo $?
0 // TRUE = folder exists
$ [[ -d second_folder ]]
$ echo $?
1 // FALSE = folder does not exist
```
## Binary operators
```
$ [[ 10 -lt 2 ]]  // less than
$ [[ 10 -gt 1 ]]  // greater than
$ [[ 10 -eq 1 ]]  // equals
$ [[ 10 -ne 1 ]]  // not equals
```
## If statements
In the end, we are going to use tests very frequently in the if statements.
--- script.sh ---

```
#!/bin/bash
if [[ 10 -lt 5 ]]; then
    echo 'if block'
elif [[ 10 -gt 4 ]]; then
    echo 'elif block'
else
    echo 'else block'
fi
```

```
$ ./script.sh
elif block
```
http://mathematica.stackexchange.com/questions/28283/unwrapping-a-list-when-invoking-a-function

# Unwrapping a list when invoking a function [duplicate]
The minimal working example of my problem is as follows:
l = {1, 2, 3, 4}
f[a_, b_, c_, d_] = a + b + c + d
Now, I'd like to evaluate
f[l[[1]], l[[2]], l[[3]], l[[4]]]

but with a syntax like f[Unwrap[l]].
I don't have access to the code of 'f', and I can't simply change the way it is defined to accept a list
Basically, I am missing the functionality present in Python with the *,
l=[0,1,2,3]
def f(a,b,c,d):
return a+b+c+d
print f(*l)
## marked as duplicate by rm -rf♦Jul 8 '13 at 18:48
list = {1, 2, 3, 4}; f[a_, b_, c_, d_] = a + b + c + d; f[Sequence @@ list] ? – belisarius Jul 8 '13 at 18:36
f@@l does the trick ;) (python pah!) – Stefan Jul 8 '13 at 18:37
Related question. – Leonid Shifrin Jul 8 '13 at 18:49
@rm-fr If this link should be considered canonical Q&A maybe You will put there Operate based answer. – Kuba Jul 8 '13 at 19:15
There are two ways of doing this that are mostly equivalent. First,
f[ Sequence @@ l ]
(* 10 *)
But the use of Sequence is too many characters, in my opinion, and there is a better way. Essentially, the notation @@ is shorthand for the function Apply, which replaces the Head of an expression with another head. In the prior case, Sequence replaced the head List, and the result was then passed into f. But this can be used directly,
f @@ l
(* 10 *)
which replaces the head List with f.
and the third one (thanks to rm -rf)
l = {a, b, c, s};
Operate[f &, l]
f[a, b, c, s]
but in such simple case f@@l is what I use.
https://socratic.org/questions/the-position-of-an-object-moving-along-a-line-is-given-by-p-t-t-2-6t-3-what-is-t | # The position of an object moving along a line is given by p(t) = t^2 - 6t +3. What is the speed of the object at t = 3 ?
Mar 5, 2017
As the speed is the derivative of the position function, at $t = 3$, its speed is zero.
#### Explanation:
I have to assume you are working with calculus in this Physics course. The first derivative of the position with respect to time will give the velocity function:
$\frac{\mathrm{dp} \left(t\right)}{\mathrm{dt}} = v \left(t\right)$
$\frac{d}{\mathrm{dt}} \left({t}^{2} - 6 t + 3\right) = 2 t - 6$
If we evaluate this function at $t = 3$
$v = 2 \left(3\right) - 6 = 0$
The object has (momentarily) stopped at $t = 3$ s. (But note that this does not imply the position or the acceleration is zero. In fact, its position is ${3}^{2} - 6 \left(3\right) + 3 = - 6$.)
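A quick numerical check of the answer above (a sketch, not part of the original solution), using a central difference to approximate the derivative:

```python
# p(t) = t^2 - 6t + 3, the position function from the problem.
def p(t):
    return t**2 - 6*t + 3

# Central-difference approximation of v(t) = dp/dt = 2t - 6.
def v(t, dt=1e-6):
    return (p(t + dt) - p(t - dt)) / (2 * dt)

print(v(3))  # speed at t = 3: 0 (object momentarily at rest)
print(p(3))  # position at t = 3: -6
```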
And, for the record, its acceleration is the second derivative of $p \left(t\right)$ or the first derivative of $v \left(t\right)$, namely 2. | 2019-03-25 01:25:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8394557237625122, "perplexity": 158.29786226666775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203547.62/warc/CC-MAIN-20190325010547-20190325032547-00155.warc.gz"} |
http://marketplace.sasview.org/models/103/ | Core-Chain-Chain (CCC) Model
Description:
This form factor describes scattering from spherical cores (nanoparticle, micellar, etc.) that have chains coming off normal from their surface. In the case of
the Core-Chain-Chain (CCC) Model, these chains have two different regions of conformation, size, and scattering length density. The transition from region 1 (near
the nanoparticle core, CPB) to region 2 (SDPB) is given by the parameter $r_{c}$, which is the distance from the center of the nanoparticle to the junction point. See
fig. 1 of reference 1 for a schematic of the geometry.
The scattering intensity is the sum of 8 terms, and an incoherent background term $B$. This model returns the following scattering intensity:
$$I(q) = \frac{scale}{(V_{core} + N_{c}V_{c})} \times \left[ P_{core}(q) + N_{c}P_{CPB}(q) + N_{c}P_{SDPB}(q) + 2N_{c}F_{core}(q)j_{0}(qr_{core})F_{CPB}(q) + 2N_{c}F_{core}(q)j_{0}(qr_{c})F_{SDPB}(q) + N_{c}(N_{c} - 1)F_{CPB}(q)^{2}j_{0}(qr_{core})^2 + N_{c}(N_{c} - 1)F_{SDPB}(q)^{2}j_{0}(qr_{c})^2 + N_{c}^{2}F_{CPB}(q)j_{0}(qr_{core})j_{0}(qr_{c})F_{SDPB}(q) \right] + B$$
where $N_{c}$ is the number of chains grafted to the nanoparticle, $r_{core}$ is the nanoparticle radius, and $r_{c}$ is the position of the junction between the CPB and
SDPB regions. The terms $P_{core}(q)$, $P_{CPB}(q)$, and $P_{SDPB}(q)$ are the form factors of the spherical core, polymer in the CPB region, and polymer in the SDPB
region, respectively. $F_{core}(q)$, $F_{CPB}(q)$, and $F_{SDPB}(q)$ are the form factor amplitudes of the respective regions. For the nanoparticle core, these terms are related as:
$$P_{core}(q) = \left| F_{core}(q) \right|^{2} = \left| V_{core}(\rho_{core} - \rho_{solv})\frac{3j_{1}(qr_{core})}{qr_{core}} \right|^{2}$$
where $\rho_{core}$ is the scattering length density of the nanoparticle core, $\rho_{solv}$ is the scattering length density of the solvent/matrix, $V_{core}$ is the
volume of the nanoparticle core, and $j_{1}(.)$ is a spherical Bessel function.
Scattering from the CPB and SDPB regions is described by the form factor amplitudes and form factors of these regions. For a given region $i$, these terms read:
$$P_{i}(q,N_{i}) = V_{i}^{2}(\rho_{i} - \rho_{solv})^{2} \left[ \frac{1}{\nu_{i} U_{i}^{1/2\nu_{i}}}\gamma \left( \frac{1}{2\nu_{i}}, U_{i}\right) - \frac{1}{\nu_{i} U_{i}^{1/\nu_{i}}}\gamma \left(\frac{1}{\nu_{i}}, U_{i} \right) \right]$$
and
$$F_{i}(q,N_{i}) = V_{i}(\rho_{i} - \rho_{solv})\frac{1}{2\nu_{i} U_{i}^{1/2\nu_{i}}}\gamma \left( \frac{1}{2\nu_{i}}, U_{i}\right)$$
where $N_{i}$ is the degree of polymerization of the portion of polymer in region $i$, $V_{i}$ is the volume of polymer in region $i$, $\nu_{i}$ is the excluded volume parameter of the polymer in region $i$, $\rho_{i}$ is the scattering length density of polymer in region $i$, $U_{i} = q^{2} b^{2} N_{i}^{2\nu_{i}}/6$, $b$ is the polymer's Kuhn length, and $\gamma$ is the lower incomplete gamma function.
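To illustrate how $F_{i}$ behaves, here is a self-contained numerical sketch; the parameter values ($b$, $\nu_{i}$, $V_{i}$, and the contrast) are invented for demonstration, and the lower incomplete gamma function $\gamma(s,x)$ is evaluated with a series expansion rather than a library call. The model's actual implementation lives in core_chain_chain.py.

```python
import math

def lower_inc_gamma(s, x, terms=200):
    # gamma(s, x) = x^s e^{-x} * sum_{k>=0} x^k / (s (s+1) ... (s+k))
    if x <= 0:
        return 0.0
    term = 1.0 / s
    total = term
    for k in range(1, terms):
        term *= x / (s + k)
        total += term
    return x**s * math.exp(-x) * total

def F_region(q, N, b=1.0, nu=0.5, V=1.0, drho=1.0):
    # F_i(q, N_i) = V_i (rho_i - rho_solv) * gamma(1/(2 nu), U) / (2 nu U^{1/(2 nu)})
    # with U_i = q^2 b^2 N_i^{2 nu} / 6, as in the formula quoted above.
    U = q * q * b * b * N**(2 * nu) / 6.0
    s = 1.0 / (2.0 * nu)
    return V * drho * lower_inc_gamma(s, U) / (2.0 * nu * U**s)

# As q -> 0, F_i(q) -> V_i * (rho_i - rho_solv): the forward-scattering limit.
print(F_region(1e-4, N=100))  # ≈ 1.0
```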
Details:
Created By mjahore Uploaded Aug. 23, 2018, 12:09 a.m. Category Sphere Score 0 Verified This model has not been verified by a member of the SasView team In Library This model is not currently included in the SasView library. You must download the files and install it yourself. Files core_chain_chain.py | 2019-02-21 12:52:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7340837717056274, "perplexity": 858.3812094560842}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247504594.59/warc/CC-MAIN-20190221111943-20190221133943-00381.warc.gz"} |
https://zbmath.org/?q=an:1370.57003 | # zbMATH — the first resource for mathematics
On homotopy 3-spheres (reprint of the 1966 original). (English) Zbl 1370.57003
In this 50-year-old paper the author, who later solved the knot problem for diagrams of the unknot and, with the help of Appel and a computer, proved the Four Color Theorem, reduces the Poincaré conjecture to an analysis of the singularities of mappings of a disc (Theorem 2), a 2-sphere (Theorem 3) and a 3-sphere (Theorem 1) in the homotopy 3-sphere, with a remarkable series of explicit illustrations revealing that his impressive techniques are primarily visual – really no surprise for a microwave technologist who started out as a part-time topologist and earned his doctorate from Johann Wolfgang Goethe-Universität the hard way: honorarily.
See also the review of the original [W. Haken, Ill. J. Math. 10, 159–178 (1966; Zbl 0131.20704)].
##### MSC:
57M25 Knots and links in the $$3$$-sphere (MSC2010)
Full Text: | 2021-05-07 19:20:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5585751533508301, "perplexity": 1701.4689666658849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00633.warc.gz"} |
http://www.krejcovstvi-blatka.cz/products/c930eecd011447.html | ## iceland vertical cylindrical tank fire volume
### Can a Fireguard tank be pressure testable in the factory?
Because of its unique construction, each tank is pressure-testable in the factory and at the jobsite. With Fireguard®, there is no question of compliance with fire codes; the tank is shipped with factory-installed emergency vents on both the primary and the secondary containment tanks for protection if exposed to fire or excessive pressure. (Highland Tank Fireguard UL-2085 Thermally Protected)

### Highland Tank Fireguard UL-2085 Thermally Protected
Available in 500-12,000 gallon capacities, there is no question of compliance with fire codes; the tank is shipped with factory-installed emergency vents on both the primary and the secondary containment tanks for protection if exposed to fire or excessive pressure.

### How to calculate the volume of a vertical capsule tank?
To calculate the volume of a vertical capsule tank, treat the capsule as a sphere of diameter d split in half and separated by a cylinder of diameter d and height a, where r = d/2. (Tank Volume Calculator)

### What should the pressure of a vertical tank be?
Vertical tanks are required to be tested only to a pressure that exceeds 1.5 psig (10.3 kPag); however, they are also subject to liquid release upon failure, so the same service restrictions apply. (The International Organisation for Industrial Hazard Management - JOIF)

### CYLINDRICAL STEEL TANK STANDARD SPECIFICATION
Galvanized cylindrical steel bolted liquid storage tanks. These tanks are primarily used for the storage of water in the potable water, fire sprinkler and irrigation markets. The galvanized cylindrical tanks are site assembled using overlapping and bolted galvanized steel panels that are manufactured within the company's facility in the UK.

### 90000 Gallon Galvanized Steel Water Tank
Introduction to Corgal Tanks for Fire Protection Professionals. Size Info: Capacity (Gallons) 95000; Dimensions 39' Diameter x 22' 2" Height; Liquid Accessibility: Liquid Access Customized to Project Needs; Physical Features: Product Type Vertical Water Tank; Tank Shape Cylindrical; Tank Usability Stationary; Environment Above Ground
### API 2550 - Method of Measurement and Calibration of
scope: This standard describes the procedures for calibrating upright cylindrical tanks larger than a barrel or drum. It is presented in two parts: Part I (Sections to 41) outlines procedures for making the necessary measurements to determine total and incremental tank volumes; Part II (Sections 42 to 58) presents the recommended procedure for computing volumes.

### Aboveground Storage Tanks Regulations
Aboveground storage tanks (ASTs) are used to house a variety of liquids from waste water to petrochemicals. Because most ASTs house material that is flammable or toxic, the Federal government has enacted strict aboveground storage tank regulations to reduce or eliminate personnel injury or environmental contamination due to explosion or spilling.

### Calculating Tank Wetted Area - Chemical Processing
Page 2 of 17. Variables and Definitions Sidebar (See Figs. 1-5): a is the distance a horizontal tank's heads extend beyond (a > 0) or into (a < 0) its cylindrical or elliptical body section, or the depth the bottom extends below the cylindrical or elliptical body section of a vertical
### Chapter 3 Integral Relations - SFU
3.12 The pipe flow in Fig. P3.12 fills a cylindrical tank as shown. At time t = 0, the water depth in the tank is 30 cm. Estimate the time required to fill the remainder of the tank. (Fig. P3.12) Solution: For a control volume enclosing the tank and the portion of the pipe below the tank, $\frac{d}{dt}\left(\int_{CV} \rho \, dv\right) + \dot{m}_{out} - \dot{m}_{in} = 0$

### Content of Horizontal - or Sloped - Cylindrical Tank and Pipe
Volume of partly filled horizontal or sloped cylindrical tanks and pipes - an online calculator. The online calculator below can be used to calculate the volume and mass of liquid in a partly filled horizontal or sloped cylindrical tank if you know the inside diameter and the level of the liquid in the tank.

### Cylindrical Tank High Resolution Stock Photography
Fire officials said the fire broke out after tremors forced the lid of the cylindrical tank to partially open, allowing some oil to spill out. Pictures of the Year 2003, REUTERS/Kimimasa Mayama KM/TW. An oil storage tank containing naphtha in flames in Tomakomai, northern Japan, September 28, 2003, as a fire engine rushes toward the tank.
### Cylindrical Tanks McMaster-Carr
Choose from our selection of cylindrical tanks, including tanks, round plastic batch cans, and more. In stock and ready to ship.

### DESIGN RECOMMENDATION FOR STORAGE TANKS AND
A4 Above-ground, Vertical, Cylindrical Storage Tanks ----- 154; Appendix B Assessment of Seismic Designs for Under-ground Storage Tanks ----- 160. Chapter 2, 1. General, 1.1 Scope: This Design Recommendation is applied to the structural design of water storage

### Design, Construction and Operation of the Floating
Figure 1.1 Fire and explosion incidents in the tanks 6; Figure 1.2 Types of storage tank 7; Figure 1.3 Types of Fixed Roof Tanks 8; Figure 1.4 Single Deck Pontoon Type Floating Roof 9; Figure 1.5 Double Deck Type Floating Roof 10; Figure 1.6 Single Deck Floating Roof Tank 12
### Dished End Horizontal Cylinder Tank Calculator Spirax Sarco
Dished End Horizontal Cylinder Tank. Determine the size of the steam coil and its associated control valve and steam trap for a horizontal cylindrical tank. Note - You cannot use commas (,) as decimal points. Please use periods (.) Example: 1.02 not 1,02

### FIBREGLASS STORAGE TANKS FOR ALL INDUSTRIES
Sectional tanks fit into any limited space, as their structure utilizes horizontal and vertical spaces to the maximum through the use of diverse sizes of panels: 1m x 1m, 1m x 0.5m, 1m x 0.3m, 0.5m x 0.3m, 1m x 1.5m, 0.5m x 1.5m, 1m x 2m, 0.5m x 2m.
### Vertical cylindrical tank for petroleum products
If it is necessary to drain a large amount of liquid from a vertical cylindrical tank for large-volume oil products, then a siphon tap is used. The diameter of this device can be either 80 mm or 100 mm. The installation of these holes is carried out in the vertical walls of the tank.

### Spill Prevention Control and Countermeasure (SPCC) Plan
Tank Volume V_Tank B (ft³) = Shell Capacity (gal) x 0.1337 ft³/gal = 2,140 x 0.1337 = 286 ft³. (July 2011, Page 6 of 9)

### First Revision No. 147 - NFPA 20-2013 [Global Input]
Fuel supply tank(s) shall have a capacity at least equal to 1 gal per hp (5.07 L per kW), plus 5 percent volume for expansion and 5 percent volume for sump. Whether larger-capacity fuel supply tanks are required shall be determined by prevailing conditions, such
### Flow of Liquids from Containers - Volume Flow and
For height 1.5 m the volume flow is 0.026 m³/s. For height 0.5 m the volume flow is 0.015 m³/s. Draining Tank Calculator: this calculator is based on eq. (1b) and can be used to estimate the volume flow and time used to drain a container or tank through an aperture.

### Get Simple Volume - Microsoft Store en-IS
Simple Volume version 1.2.1. This program calculates the volume of liquid in certain types of tanks based on the depth of the liquid when measured from the bottom of the tank. The types of tanks are spherical, horizontal cylindrical, vertical cylindrical and rectangular. The dimensions to be entered are either in inches or centimeters.

### Horizontal Tank - an overview ScienceDirect Topics
For a horizontal tank: 75% of the total surface area, or the surface area to 9.14 m (30 ft) above grade, whichever is more. For a vertical tank: the first 9.14 m (30 ft) above grade of the vertical shell surface area; if the tank is on legs, use engineering judgment to evaluate the portion of
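The two flow figures quoted above (0.026 m³/s at 1.5 m head and 0.015 m³/s at 0.5 m head) are consistent with Torricelli's law, q = Cd · A · √(2gh). A small sketch; the discharge coefficient and aperture area below are guesses chosen only so the results roughly match those figures:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def discharge(h, cd=0.6, area=8.0e-3):
    # Torricelli outflow: volume flow through an aperture under liquid head h.
    # cd and area (m^2) are illustrative assumptions, not from the source.
    return cd * area * math.sqrt(2 * G * h)

print(discharge(1.5))  # ≈ 0.026 m^3/s
print(discharge(0.5))  # ≈ 0.015 m^3/s
```

Note that the ratio of the two flows is √(1.5/0.5) = √3 ≈ 1.73 regardless of the assumed aperture, which is exactly the ratio of the quoted numbers.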
### How to Classify Oil Tanks?
Vertical fixed roof oil tanks, which are composed of a fixed tank roof and a vertical cylindrical tank wall, are mainly used for storage of non-volatile oil, such as diesel oil and similar oils. The most commonly used volumes of fixed roof oil tanks range from 1,000 m³ to 10,000 m³.

### L/D Ratio of storage tank - Chemical engineering
Jan 31, 2018 · What is the significance of choosing the optimum L/D ratio in atmospheric storage tanks? As a normal practice the L/D ratio of an atmospheric storage tank is considered within the limit of 0.5 to 1.1. However, it is noticed that some tanks don't follow this limit. For example, a fresh amine storage tank.

### Liquid Volume of Horizontal Tank with Dished Ends
Jan 18, 2013 · I am the Head of Department for Quality Control & Environment. We calculated our horizontal tank volume (torispherical head); as per our knowledge it is giving spurious readings. Tank dia is 2400 mm, LOS is 3000 mm, torispherical head length from center is 472 mm. Internal lining material is natural rubber with thickness 4.3 mm (three layers).
### NFPA 30-2008 Basic Requirements for Storage Tanks
Feb 22, 2011 · UL 2080, Fire Resistant Tanks; UL 2085, Protected Aboveground Tanks. Chapter 21, General: maximum operating pressures for ambient pressure tanks are 0.5 psi gauge for vertical cone roof tanks; 1.0 psi gauge, if designed to Appendix F of API Standard 650; 1.0 psi gauge for horizontal cylindrical or rectangular tanks.

### Tank Baffle - Euromixers
Non-cylindrical mixing tanks are usually either rectangular or horizontal tanks, and as previously mentioned these tanks usually do not require baffles unless the required level of agitation is high. These tanks are asymmetrical with respect to the mixer shaft and as a result are self-baffling for applications where the applied mixer power
### Tank Storage Glossary Oiltanking
Oil is usually stored in vertical cylindrical tanks made of steel. The appropriate type of construction and materials for storing these products is defined by DIN standards. Beyond this, the respective federal state building regulations based on the Construction Products Act, and all applicable fire protection regulations, must also be observed.

### Tank Volume Calculator - Inch Calculator
tank volume = 73,287 cu in. Thus, the capacity of this tank is 73,287 cubic inches. Step Four: Convert Volume Units. The resulting tank volume will be in the cubic form of the initial measurements.

### Tank Volume Calculator
$A = \pi r^2$ where r is the radius, which is equal to d/2. Therefore $V(tank) = \pi r^2 h$. The filled volume of a vertical cylinder tank is just a shorter cylinder with the same radius, r, and diameter, d, but the height is now the fill height, f. Therefore $V(fill) = \pi r^2 f$.
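The vertical-cylinder formulas above translate directly into code. This is a generic sketch, not any particular site's calculator; the example numbers come from the "VERTICAL CYLINDER CALCULATOR" text elsewhere on this page (height = 12, diameter = 6, level = 3):

```python
import math

def vertical_cylinder_volume(diameter, height):
    # V(tank) = pi * r^2 * h, with r = d/2
    r = diameter / 2.0
    return math.pi * r**2 * height

def vertical_cylinder_fill(diameter, fill_height):
    # V(fill) = pi * r^2 * f: the filled part is just a shorter cylinder.
    return vertical_cylinder_volume(diameter, fill_height)

total = vertical_cylinder_volume(6, 12)
at_level = vertical_cylinder_fill(6, 3)
print(total)     # ≈ 339.29 cubic units
print(at_level)  # one quarter of the total, since 3/12 of the height is filled
```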
### The International Organisation for Industrial Hazard
1/3 x diameter of tank: horizontal and vertical tanks with emergency relief venting to limit pressures to 2.5 psi (gauge pressure of 17 kPa); approved inerting system(b) on the tank or approved foam system on vertical tanks: 1/2 x value in Table 22.4.1.1(b); 1/2 x value in Table 22.4.1.1(b)

### Useful Calculation sheets (excel and mathcad files) for DESIGN OF STEEL STORAGE TANKS AS PER API-650
SELF-SUPPORTED CONE ROOF DESIGN DATA: Service HSD SERVICE; Allowable Design St; Capacity 21 KL; Allowable Test Stre; Type of tank Self Supported Cone Roof; Specific Gravity of L; Dia of tank (feet) 10.004; Corrosion Allowance; Height of tank (feet) 9.512; Slope of roof 1:5; Slope of bottom Flat; Bottom Plate Data

### VERTICAL CYLINDER CALCULATOR
If you want to do calculations for a horizontal cylinder, then go to this link: Horizontal Cylinder Calculator. Example: Inputting tank height = 12, liquid level = 3 and tank diameter = 6, then clicking Inches will display the total tank volume in cubic inches and US Gallons and will also show the volume at the 3 inch level. If the tank is filled with water, this calculator also displays the
### Volume and Wetted Area of Partially Filled Horizontal
The calculation of a horizontal vessel's wetted area and volume is required for engineering tasks such as fire studies and the determination of level alarms and control set points. However, the calculation of these parameters is complicated by the geometry of the vessel, particularly the heads. This article details formulae for calculating the wetted area and volume of these vessels for various

### Volume of a circular truncated cone Calculator - High
Calculates the volume, lateral area and surface area of a circular truncated cone given the lower and upper radii and height: lower radius r1, upper radius r2, height h, volume V, lateral area F, surface area S. Circular truncated cone: (1) volume $V = \frac{1}{3}\pi \left(r_1^2 + r_1 r_2 + r_2^2\right) h$

### (PDF) HOW TO CALCULATE THE VOLUMES OF PARTIALLY
To calculate the fluid volume in a vertical or horizontal tank can be complicated, depending on the fluid height and the caps. This article makes a synthesis of the calculations for the volumes of
Sep 05, 2017 · both horizontal and vertical tanks with spherical heads. The calculation of the liquid in the heads is approximate. The graph shows lines for tank diameters from 4 to 10 ft, and tank lengths from 1 to 50 ft. The accuracy of the liquid volume depends on certain approximations and the precision of interpolations that may be required.

### Aboveground Tanks Fireguard & Horizontal Tanks
Sizes available: 250 gallons to 20,000 gallons. Cylindrical, rectangular or vertical designs available. Tanks can have multiple compartments (multi-product). Insulated with a lightweight monolithic material, for secondary containment, that is 75% lighter than concrete (110% containment of primary volume). Interstitial space can be monitored for leak detection.

### Chapter 2. Secondary Containment Facility
Spherical tank fluid volume (end sections of tank): $V = \frac{\pi h^2 (1.5D - h)}{3}$. Total tank fluid volume: $V = V_a + V_b$, with separate expressions for (a) horizontal tanks, (b) vertical tanks, and (c) cone-bottom tanks (fluid level above the cone and fluid level within the cone); see Figure 2.3.
### Circular Cylinder Rectangular Prism Volume Conversion Calculator

### TANK VOLUME CALCULATOR [How to Calculate Tank Capacity]
Jun 20, 2019 · Vertical Cylindrical Tank. The total volume of a vertical cylindrical tank is calculated by using the formula $$Volume = \pi \times Radius^2 \times Height$$ Where $$Radius = {Diameter \over 2}$$ Rectangular Tank. The total volume of a rectangular tank

### PAPER OPEN ACCESS Numerical Simulation of Large
Apr 09, 2020 · tanks, and these accidents often cause very serious losses. Compared to other types of fires, full-surface fires formed after a fire in a crude oil storage tank have the characteristics of high burning rate, high flame temperature, intense radiant heat and high challenge in safety assessment of the tank after fire.
### What is the total volume of a cylinder shaped tank?
Total volume of a cylinder shaped tank is the area, A, of the circular end times the height, h. $A = \pi r^2$ where r is the radius, which is equal to d/2. (Tank Volume Calculator)

### Tank Volume Calculator for Ten Various Tank Shapes
Jan 14, 2020 · Cylindrical tank volume formula. To calculate the total volume of a cylindrical tank, all we need to know is the cylinder diameter (or radius) and the cylinder height (which may be called length, if it's lying horizontally). Vertical cylinder tank: the total volume of a cylindrical tank may be found with the standard formula for volume - the area of the base multiplied by height.

### September 2017 VENTING GUIDE - National Petroleum
Find tank size on Table B, which can be found on pages 8-9. The table lists wetted area and SCFH for common sized vertical tanks. For a 10 x 17 tank, wetted area = 534 sq. ft. and required vent capacity = 366,920 SCFH. Proceed to Step 5. STEP 2 Wetted Area Table: If tank size is NOT listed on Table B, page 8-9, wetted area
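For the horizontal ("lying") cylinder mentioned above, the filled volume at liquid depth h follows from the circular-segment cross-section. This is the standard textbook formula, sketched here for illustration rather than taken from any of the quoted calculators:

```python
import math

def horizontal_cylinder_fill(diameter, length, depth):
    # Cross-section is a circular segment of a circle of radius r at depth h:
    # A = r^2 * acos((r - h)/r) - (r - h) * sqrt(2*r*h - h^2);  V = A * L
    r = diameter / 2.0
    h = depth
    area = r**2 * math.acos((r - h) / r) - (r - h) * math.sqrt(2*r*h - h**2)
    return area * length

full = horizontal_cylinder_fill(2.0, 10.0, 2.0)  # depth = diameter: full tank
half = horizontal_cylinder_fill(2.0, 10.0, 1.0)  # depth = radius: half full
print(full, half)  # full = pi * r^2 * L, half = full / 2
```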
### Spill Prevention Control and Countermeasure (SPCC) Plan
Displacement Volume, DV_Tank2 (ft³), where c (ft) is the containment wall height used in Step 2 of A. Repeat to calculate the displacement of each additional horizontal cylindrical tank located with the largest tank in the dike or berm. 2. Calculate the total displacement volume from the additional vertical cylindrical tanks in the

### Tanks - ScienceDirect
Jan 01, 2014 · Volume of liquid in vertical cylindrical tanks. Measure the depth of the liquid and either the diameter or circumference of the tank, then the volume in Gallons = 0.0034 d²h or 0.00034 c²h. Barrels = 0.000081 d²h or 0.00082 c²h. Gallons = 5.88 D²H or 0.595 C²H. Barrels = 0.140 D²H or 0.0142 C²H. where d = Diameter, inches; c = Circumference, inches
Mail Us 24/7 For Customer Support At storagetank@yeah.net | 2021-09-28 21:56:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5738661885261536, "perplexity": 6736.335628479925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780060908.47/warc/CC-MAIN-20210928214438-20210929004438-00115.warc.gz"} |
https://tex.stackexchange.com/questions/309100/arial-font-baseline-for-letters-i-and-l/309120 | # Arial font baseline for letters 'i' and 'L'
I am attempting to use the uarial font in latex. But the vertical alignment of characters appears to be off. If
\documentclass{article}
\usepackage[english]{babel}
\usepackage[T1]{fontenc}
\usepackage{uarial}
\renewcommand{\familydefault}{\sfdefault}
\begin{document}
\Huge{tial}
\end{document}
Produces the following:
Notice that the letters 'i' and 'l' have a different baseline than do 't' and 'a'. Is there a way to adjust this? That is not a feature of Arial
• Take or leave, I'm afraid. – egreg May 11 '16 at 16:54
I don't think that uarial is a good choice. It is a rather curious mix between Arial and Helvetica. As you can see in the following picture, the "C", "t" and "a" are from Helvetica, while the "G" and "R" are from Arial. Also, as you discovered, the metrics are not really good. It is naturally possible to correct this by manipulating the tfm, but I don't think that it is worth the trouble.
%needs lualatex or xelatex
\documentclass{article}
\usepackage{fontspec}
\setmainfont{Arial}
\setsansfont{TeX Gyre Heros}
\begin{document}
\Huge
CGRtial (Arial)\par
{\sffamily CGRtial} (Helvetica/TeX Gyre Heros)\par
\fontencoding{T1}\fontfamily{ua1}\selectfont CGRtial (uarial)
\end{document} | 2019-05-19 06:18:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817377090454102, "perplexity": 2524.6516577846965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254253.31/warc/CC-MAIN-20190519061520-20190519083520-00056.warc.gz"} |
http://www.phy.ntnu.edu.tw/ntnujava/msg.php?id=4342 | 1. For the pendulum system, the equation of motion is
$\frac{d^2 \theta}{dt^2}= -\frac{g}{\ell} \sin\theta$
The motion of the pendulum will be the same if the water always leaks out in the radial direction
(this does not change the momentum in the tangential direction).
However, the pendulum motion will change if the leaking water carries momentum in the tangential direction of the pendulum motion.
2. Does the mass double every time it reaches maximum length, or is it doubled just once?
You can guess what will happen with the following analysis:
The potential energy for a spring is $U(x)=\frac{1}{2}k x^2$, where x is the displacement.
All the potential energy will convert to kinetic energy when it reaches the equilibrium position.
i.e. $\frac{1}{2}k x^2 =\frac{1}{2} m v^2$.
If the mass is doubled, then the velocity will become smaller $v'=v/\sqrt{2}$,
and the oscillation frequency will be smaller too. $\omega= \sqrt{k/m}$ | 2017-10-18 11:29:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6535189747810364, "perplexity": 1118.2904898414322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822930.13/warc/CC-MAIN-20171018104813-20171018124813-00059.warc.gz"} |
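The energy argument above can be checked numerically. This is a minimal sketch; the function name and the sample values of `k`, `m` and `x` are illustrative assumptions, not from the original post:

```python
import math

def speed_at_equilibrium(k, m, x):
    # (1/2) k x^2 = (1/2) m v^2  =>  v = x * sqrt(k / m)
    return x * math.sqrt(k / m)

k, m, x = 100.0, 2.0, 0.05      # spring constant [N/m], mass [kg], amplitude [m]
v = speed_at_equilibrium(k, m, x)
v_doubled = speed_at_equilibrium(k, 2 * m, x)

# Doubling the mass scales the speed, and the frequency omega = sqrt(k/m),
# by a factor of 1/sqrt(2)
assert math.isclose(v_doubled, v / math.sqrt(2))
assert math.isclose(math.sqrt(k / (2 * m)), math.sqrt(k / m) / math.sqrt(2))
```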
https://www.physicsforums.com/threads/curved-space-and-curvilinear-coordinates.875153/ | Curved space and curvilinear coordinates
mertcan
Hi, I really wonder what the difference is between curvilinear coordinates in a Euclidean space and embedding a curved space into a Euclidean space. They resemble each other to me, so could you explain or spell out the difference? Thanks in advance...
An embedding is usually of a lower-dimensional manifold. Curvilinear coordinates are used to describe the Euclidean space itself.
You cannot introduce Euclidean local coordinates in a curved space
mertcan
I think curvilinear coordinates generally define tangent space, but in curved space also defines normal component besides the tangent space. Am I right? I saw some close definition like this. Is it true? | 2022-12-10 06:39:28 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8270660638809204, "perplexity": 877.969685332189}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00745.warc.gz"} |
https://math.stackexchange.com/questions/1383725/what-is-the-difference-between-orthogonal-and-orthonormal-in-terms-of-vectors-an/1383726 | What is the difference between orthogonal and orthonormal in terms of vectors and vector space?
I am a beginner in linear algebra. I want a detailed explanation of the difference between these two, and of how they are interpreted geometrically.
Two vectors are orthogonal if their inner product is zero. In other words $\langle u,v\rangle =0$. They are orthonormal if they are orthogonal, but each vector has norm $1$. In other words $\langle u,v \rangle =0$ but $\langle u,u\rangle = \langle v,v\rangle =1$.
Example
For vectors in $\mathbb{R}^3$ let
$$u \;\; =\;\; \left[ \begin{array}{c} 1\\ 2\\ 0\\ \end{array} \right ] \hspace{2pc} v \;\; =\;\; \left [ \begin{array}{c} 0\\ 0\\ 3\\ \end{array} \right ].$$
The vectors $u$ and $v$ are orthogonal since
$$\langle u, v\rangle \;\; =\;\; 1\cdot 0 + 2\cdot 0 + 0\cdot 3 \;\; =\;\; 0$$
but they are not orthonormal since $||u|| = \sqrt{\langle u,u\rangle } = \sqrt{1 + 4} = \sqrt{5}$ and $||v|| = \sqrt{\langle v,v\rangle } = \sqrt{3^2} = 3$. If we define new vectors $\hat{u} = \frac{u}{||u||}$ and $\hat{v} = \frac{v}{||v||}$ then $\hat{u}$ and $\hat{v}$ are orthonormal since they each now have norm $1$, and orthogonality is preserved since $\langle \hat{u}, \hat{v}\rangle = \frac{\langle u,v\rangle }{||u||\cdot ||v||} = 0$.
You can think of orthogonality as vectors being perpendicular in a general vector space. And for orthonormality what we ask is that the vectors should be of length one. So vectors being orthogonal puts a restriction on the angle between the vectors whereas vectors being orthonormal puts restriction on both the angle between them as well as the length of those vectors. These properties are captured by the inner product on the vector space which occurs in the definition.
For example, in $\mathbb{R}^2$ the vectors $(0,2)$ and $(1,0)$ are orthogonal but not orthonormal because $(0,2)$ has length $2.$
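The arithmetic in the worked example above is easy to verify with NumPy (a sketch assuming NumPy is available; it is not part of the original answer):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 0.0, 3.0])

# Orthogonal: the inner product <u, v> is zero
assert np.isclose(u @ v, 0.0)

# Not orthonormal: ||u|| = sqrt(5) and ||v|| = 3, neither is 1
assert np.isclose(np.linalg.norm(u), np.sqrt(5))
assert np.isclose(np.linalg.norm(v), 3.0)

# Normalizing gives unit vectors and preserves orthogonality
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
assert np.isclose(np.linalg.norm(u_hat), 1.0)
assert np.isclose(np.linalg.norm(v_hat), 1.0)
assert np.isclose(u_hat @ v_hat, 0.0)
```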
• Does orthogonality of vectors mean they are always perpendicular, or are all perpendicular vectors orthogonal and the rest not? What is meant by orthogonality of vector spaces? – Sanjeev Aug 4 '15 at 5:01
• I feel that orthogonality is a generalization of perpendicularity. For visual purposes, or to get a feel for what exactly is happening, you can think of vectors being orthogonal as the same as vectors being perpendicular, say in $\mathbb{R}^2$. But for an abstract vector space we don't define perpendicularity; we say vectors are orthogonal, which has properties similar to perpendicular vectors in $\mathbb{R}^2$. – Makarand Sarnobat Aug 4 '15 at 5:15
• For the second question: two vector spaces are orthogonal when both are contained in an ambient space which is endowed with an inner product. In that case we say that two subspaces $V$ and $W$ are orthogonal if $\langle v,w\rangle = 0$ for all $v \in V$ and $w \in W.$ – Makarand Sarnobat Aug 4 '15 at 5:19 | 2020-01-18 12:51:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8721372485160828, "perplexity": 138.8672781553166}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592565.2/warc/CC-MAIN-20200118110141-20200118134141-00357.warc.gz"}
http://awty.osteriarougeroma.it/natural-cubic-spline-calculator.html | ## Natural Cubic Spline Calculator
A cubic spline: with N experimental data points, N − 1 splines [polynomials, f(x)] define the overall curve [1]. The advantage of the spline model over the full ARIMA model is that it provides a smooth historical trend as well as a linear forecast function. Then we can create a function that represents this data by simply connecting each point with a straight line. You can use a cubic meter calculator to work between SI (international system) units, also called metric units, and the traditional feet and inches; you can refer to any number of online calculator tools to convert with ease between cubic meters and other units of volume. This produces a so-called natural cubic spline and leads to a simple tridiagonal system which can be solved easily to give the coefficients of the polynomials. A cubic spline is a piecewise polynomial with a set of extra constraints (continuity, continuity of the first derivative, and continuity of the second derivative). Interpolating cubic splines need two additional conditions to be uniquely defined. The cubic Hermite spline method is piecewise cubic Hermite interpolation. Cubic splines create a series of piecewise cubic polynomials. This type of cubic spline fits a natural cubic spline to the 4-point neighborhood of known data points surrounding the x value at which we wish to evaluate. The matrix equation for the remaining coefficients $C_0, \dots, C_n$ is a tridiagonal linear system [garbled matrix omitted]. The algorithm given in w:Spline interpolation is also a method of solving the system of equations to obtain the cubic function in the symmetrical form. 'first column is a cubic spline interpolation of your data; each subsequent 'column is a higher-order derivative.
The smooth linear spline is composed of linear segments with quintics connecting them - these quintics operate at the specified maximum acceleration (curvature). Use natural cubic spline. 1 Derivation using Cubic Hermie interpolation Since we have similar piecewise cubic polynomials to the Piecewise Cubic Hermite polynomials on each subinterval. Panel B shows the. Cubic spline interpolation is satisfactory in many engineering applications,. Hussain and Sarfraz used a C 1 piecewise rational cubic function to visualize the data arranged over a rectangular grid [17]. This interpolant is a piecewise cubic function, with break sequence x, whose cubic pieces join together to form a function with two continuous derivatives. S₁(x) = 4 + k1(x) + 2x^2 - x^3/6 check at end point of region [0 , 1]. Uniform cubic B-spline curves are based on the assumption that a nice curve corresponds to using cubic functions for each segment and constraining the points that joint the segments to meet three continuity requirements: 1. How to Solve a Cubic Equation – Part 1 Another way to write this is ()212 23 2 2 2 2 tu t s tv su s vu δδ δδ δδ δδ ⎡⎤ ⎡⎤ v ⎡ ⎤⎡ ⎢⎥=− ⎢⎥ ⎤ ⎢ ⎥⎢⎥ ⎣⎦⎣ ⎦⎣⎣⎦ ⎦ This is just the transformation equation for a quadratic polynomial. Let us see if the cubic spline can do better. Applied Mathematics and Computation 29 :3, 231-244. Using this function's linear interpolation option, I get 0. To approximate it with polyline we should do the following:. Linear interpolant is the straight line between the two known co-ordinate points (x0, y0) and (x1, y1). Yet, I have not found out the solution:confused:. resulting in the natural cubic spline. Differentiate at point t. 1, and with N number of experimental data points, N f1 number of splines [poly-nomials, f(x)] define the overall curve [1]. What is cubic spline? Cubic splines are a straight forward extension of the methods underlying parabolic spline. 
$p_j(x) = a_j + b_j(x - x_{j-1}) + c_j(x - x_{j-1})^2 + d_j(x - x_{j-1})^3$. Suppose we know the nodal curvature $M_j := p_j''(x_j)$ as well as the nodal values $y_j$. An alternative, and often superior, approach to modeling nonlinear relationships is to use splines (P. The favorable range for p is often near $1/(1 + h^3/6)$, where h is the average spacing of the data sites. (See the Numerical Recipes in C book for code.) The resulting curve is a natural cubic spline through the values at the knots (given two extra conditions specifying that the second derivative of the curve should be zero at the two end knots). It is also possible to introduce a quadratic spline. Since the interpolant is piecewise cubic, if those four conditions hold, then it is a single cubic on the two adjoining intervals, not two cubics meeting at the knot. Example (the integral of a spline): approximate the integral of $f(x) = e^x$ on $[0,3]$, which has the value $\int_0^3 e^x\,dx = e^3 - 1 \approx 19.0855$. We can easily mix terms in GAMs, some linear and some nonlinear, and then compare those models using the anova() function, which performs an ANOVA test for goodness of fit. It applies only in one dimension, but is useful for modeling yield curves, forward curves, and other term structures. Ariffin and Karim [9] used two types of cubic spline functions—cubic spline interpolation with C2 continuity and Piecewise Cubic Hermite Spline (PCHIP) with C1 continuity—for interpolating data. 'Parameter: NMAX is the largest anticipated value of n. The natural cubic spline has zero second derivatives at the endpoints. Arc Length Parameterization of Spline Curves John W. The cubic Hermite spline method is piecewise cubic Hermite interpolation. This leads us to our next section. Use a natural cubic spline to interpolate through the discrete data values.
The term “natural” cubic spline refers to the property that x(t)is a linear function of toutside the knot range, and consists of cubic polynomial pieces that are continuous and have continuous first and second derivatives at the knot times. Piecewise cubic spline interpolation and approximated calculation of first and second derivative at the interpolation point. A restricted cubic spline (aka natural cubic spline) is a cubic spline with an additional restriction where the first and last sub-functions beyond the boundary knots are linear functions instead of cubic functions. Here is an example *simulate som data; *using probabilites depening on sin(t); data simulation; do i=1 to 10000; t=rand('uniform',0,10); p=1/(1+exp(sin(t))); y=rand('bernoulli',p); output; end; run; *model a natural cubic spline; *and store the result in "mystore"; proc. This command takes the form » yy = spline. Performs and visualizes a cubic spline interpolation for a given set of points. Linear and Cubic Spline Interpolation On June 7, 2015 June 13, 2015 By Elena In Numerical Analysis In this post on numerical methods I will share with you the theoretical background and the implementation of the two types of interpolations: linear and natural cubic spline. For each profile peak j= 1,…, m, determine the supremum height Zpj. Generalization to splines of general order is relatively straightforward. If there is no additional information, it is considered that these are natural conditions. The origins of thin-plate splines in 2D appears to be [1,2]. 5 1 x1 x2 x3 x4 x5 Data Spline Clamped Splines Specify the first derivative is at the first and last points −1 −0. Input the set of points, choose one of the following interpolation methods ( Linear interpolation, Lagrange interpolation or Cubic Spline interpolation) and click "Interpolate". The cubic spline interpolation pooling method proposed in the present study is excellent to avoid the abovementioned problems. 
pj(x) = aj + bj(x − xj−1) + cj(x − xj−1)2 + dj(x − xj−1)3 Suppose we know the nodal curvature Mj := pj (xj) as well as the nodal values yj. Martin x Abstract In this paper some of the relationships between B-splines and linear control theory is examined. the distinct x values in increasing order, see the ‘Details’ above. Enter data as comma separated pairs (X,Z), with each pair on a new line (or copy and past cells from a spreadsheet). In addition to their use in interpolation, they are of particular interest to engineers because the spline is defined as the shape that a thin flexible beam (of constant flexural stiffness) would take…. There is also a constrainedcubicspline() function which has clamped ends. but a picture says more than a thousand words: Basically, you define a number of points in 2D or 3D space, and using these points to create a "spline", a curve which smoothly goes through all points. q Consider the same data:. A cubic spline is a spline constructed of piecewise third-order polynomials which pass through a set of control points. means that there is a tangent to the curve of the cubic spline. Enter n for (n+1) nodes, n: 3. To quantify the convex-shape-preserving capability of spline fits, we consider a basic shape of convex corner with two line segments in a given window. It generates a basis matrix for representing the family of piecewise-cubic splines with the specified sequence of interior knots, and the natural boundary conditions. Among other numerical analysis modules, scipy covers some interpolation algorithms as well as a different approaches to use them to calculate an interpolation, evaluate a polynomial with the representation of the interpolation, calculate derivatives, integrals or roots with functional and class. In addition, for cubic splines ( $$k=3$$) with 8 or more knots, the roots of the spline can be estimated. 
Which is simplified by using the substitution , giving: To guarantee the smooth continuity of the interpolating Spline , we have the following conditions: 1) So that the splines properly interpolate the given points. It generates a basis matrix for representing the family of piecewise-cubic splines with the specified sequence of interior knots, and the natural boundary conditions. Lecture7: SplinesandGeneralizedAdditiveModels Splines Splinesforclassification ExampleinR class<-glm(I(wage>250) ˜ ns(age,3),data=Wage,family=’binomial’). Hi, I am new calculator. It is possible to also introduce quadratic spline, i. Splines describe a smooth function with a small number of parameters. CUBIC SPLINE INTERPOLATION Natural Splines: S00(x 1) = S00(x n) = 0, so c 1 = c n = 0 Linear system equations are a \tridiagonal" system c Example: \Runge. Task 4 Cubic splines. Computes the H-infinity optimal causal filter (indirect B-spline filter) for the cubic spline. Clearly this behaviour is unacceptable for. 811 and the slope of the last point to 2. means that there is a tangent to the curve of the cubic spline. The natural cubic spline has zero second derivatives at the endpoints. This ensures from the outset that values and first derivatives match, and you only have to solve a linear system that forces second derivatives to match, too. of the natural cubic splines in 1D. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of equations. In class we have studied cubic splines, i. The MPIMotionTypeSPLINE generates a "natural" cubic spline. In fact, the natural cubic spline is the smoothest possible function of all square integrable functions. cubicspline finds a piecewise cubic spline function that interpolates the data points. A cubic spline, or cubic. 
The other possibility is that the utility is performing cubic spline interpolation but is making some assumption about the end boundary conditions. I have two lists to describe the function y(x): x = [0,1,2,3,4,5] y = [12,14,22,39,58,77] I would like to perform cubic spline interpolation so that given some value u in the domain of x, e. spline" with components. 94); linear cubic-bezier cubic bezier curves value of t mdn cubic-bezier cubic bezier curves equation svg formula cubic bezier bezier vs cubic spline understand cubic bezier numbers. Results can be compared using correlation. A cubic spline is a spline in which all sub-functions are cubic curves. Enter x(0) and f(x(0)) on separate lines. , having zero residuals). 08553692 −1 = 19. In this case, INTERPOLATE will remove those entries. Natural Cubic Splines Natural Cubic Splines Cubic spline is a spline constructed of piecewise third-order polynomials which pass through a set of m control points. There are five stages nessesary in the cluster analysis and calculation of node positions, summerised as follow: 1. I will store splines as a list of maps. Bruce and Bruce 2017). com Abstract It is often desirable to evaluate parametric spline curves at points based on their arc-length instead of the curveÕs original parameter. Natural Cubic Splines. 'Parameter: NMAX is the largest anticipated value of n. If method = "fmm", the spline used is that of Forsythe, Malcolm and Moler (an exact cubic is fitted through the four points at each end of the data, and this is used to determine the end conditions). Natural cubic splines - example • We find z 0 = 0. Annals of the Faculty of Engineering Hunedoara-International Journal of Engineering, Vol. The second derivative is chosen to be zero at the first point and last point. If small deflections are considered, the curvature is approximated by the second derivative of the assumed curve. 
The other possibility is that the utility is performing cubic spline interpolation but is making some assumption about the end boundary conditions. Linear spline: with two parameters and can only satisfy the following two equations required for to be continuous:. Stay on top of important topics and build connections by joining Wolfram Community groups relevant to your interests. By default, the algorithm calculates a "natural" spline. pj(x) = aj + bj(x − xj−1) + cj(x − xj−1)2 + dj(x − xj−1)3 Suppose we know the nodal curvature Mj := pj (xj) as well as the nodal values yj. (See Numerical Recipes in C book for code. But how do I define natural splines in mathematica, i. At each data point, the values of adjacent splines must be the same. For the data set x x 0 x 1 x n y f 0 f 1 f n where a= x. It is possible to also introduce quadratic spline, i. We can easily mix terms in GAMs,some linear and some Non Linear terms and then compare those Models using the anova() function which performs a Anova test for goodness of fit. That is, spline also gives you a cubic spline, but with a better choice of end conditions than the natural ones, which are often an issue themselves. 1 Cubic Splines The cubic spline is what you may have come across in other drawing programs, a smooth curve that connects three or more points. Linear and Cubic Spline Interpolation On June 7, 2015 June 13, 2015 By Elena In Numerical Analysis In this post on numerical methods I will share with you the theoretical background and the implementation of the two types of interpolations: linear and natural cubic spline. If you want more information about the behavior of the. Cubic Spline Interpolation clamped boundary condition. Extrapolate leading and trailing nulls, besides cubic spline interpolation. If small deflections are considered, the curvature is approximated by the second derivative of the assumed curve. order to generate a unique cubic spline ,two other conditions must be imposed upon the system. 
Theorem (no proof): If $f(x)$ is four times continuously differentiable and $S$ is a cubic spline interpolant with knot spacing $h$, then for $x \in [a,b]$, $|f(x) - S(x)| \le \frac{5}{384}\, h^4 \max_{x \in [a,b]} |f^{(4)}(x)|$. How the basis matrix is generated is quite complicated and probably something you'll just want to take on faith, like I do. The csaps spline is cubic only and has the natural boundary condition type. Without regularity constraints, we have $4|I|-4=12-4$ equations (we have removed $4$ equations, $2$ each in both boundary regions, because they involve quadratic and cubic polynomials). The advantage of the spline model over the full ARIMA model is that it provides a smooth historical trend as well as a linear forecast function. Spline interpolation is interpolation by a piecewise cubic polynomial with continuous first and second derivatives. In class we have studied cubic splines. We require adjacent splines to have matching values at the endpoints. Compared to the cubic spline method, the cubic Hermite method has a better local property. Agree with Rick, plotting splines is fairly simple with effect statements. The cubic spline is what you may have come across in other drawing programs: a smooth curve that connects three or more points. $L^1$ splines have been under development for interpolation and approximation of irregular geometric data. Note that repeating the solve command requires a bit of fiddling as indicated below. This draws a smooth curve through a series of data points. Splines provide a way to smoothly interpolate between fixed points, called knots. Solving for the second derivatives, I can then plug them back into the cubic spline equation.
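The "solve a tridiagonal system for the second derivatives, then plug back in" procedure described on this page can be sketched as follows. This is a minimal illustration with names of my own choosing, and a dense `numpy.linalg.solve` stands in for a dedicated tridiagonal solver:

```python
import numpy as np

def natural_spline_second_derivatives(x, y):
    """Solve the tridiagonal system for the knot curvatures M_j of a
    natural cubic spline, with the end conditions M_0 = M_n = 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0                      # natural end conditions
    for i in range(1, n):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    return np.linalg.solve(A, rhs)

def eval_spline(x, y, M, t):
    """Plug the curvatures M back into the piecewise cubic and evaluate at t."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    i = int(np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2))
    h = x[i + 1] - x[i]
    a, b = (x[i + 1] - t) / h, (t - x[i]) / h
    return (a * y[i] + b * y[i + 1]
            + ((a**3 - a) * M[i] + (b**3 - b) * M[i + 1]) * h**2 / 6.0)

# Sanity checks: data on a straight line has zero curvature everywhere,
# and the spline reproduces values along the line exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0
M = natural_spline_second_derivatives(x, y)
assert np.allclose(M, 0.0)
assert np.isclose(eval_spline(x, y, M, 1.5), 4.0)
```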
Probably combine two 1D examples? TODO: Hermite’s two-point interpolation formula?. A cubic spline is a piecewise cubic polynomial such that the function, its derivative and its second derivative are continuous at the interpolation nodes. cubic: twice derivable and 2nd order derivative is continuous (C2). spline" with components. The origins of thin-plate splines in 2D appears to be [1,2]. the weights used at the unique values of x. This is free software that adds several spline and linear interpolation functions to Microsoft Excel. Regression with restricted cubic splines in SAS. recall your Gerschgorin Disks from MA385 Exercise 93 Find the Natural Cubic from ENG 101 at Heriot-Watt. Computes the H-infinity optimal causal filter (indirect B-spline filter) for the cubic spline. Spline functions include Cubic spline, bessel spline, and 'OneWay' spline (which is a monotonic spline). Interpolation Calculator. S₁(x) = 4 + k1(x) + 2x^2 - x^3/6 check at end point of region [0 , 1]. In this math activity, the students graph parabolas and other functions on the calculator with the intention of analyzing the graph. Spline Returns the Y which lies on the cubic (or natural) spline curve at the given X Interpolate Returns the Y which lies on an interpolated curve at the given X Interp Returns the Y which lies on an interpolated curve at the given X using the defaults of Interpolate XatY Returns the X value at the Max. But how do I define natural splines in mathematica, i. Three types of Splines Natural splines This first spline type includes the stipulation that the second derivative be equal to zero at the endpoints. As p moves from 0 to 1, the smoothing spline changes from one extreme to the other. A tiny Matlab implementation of cubic spline interpolation, based on work done for the 18. Given x i, v i, and dt i, and requiring that the velocity be continuous, it is simple to calculate the equations for motion at any given interval between the specified PVT points. 
Now we can represent the Model with truncated power. The order of continuity is = \ ( (d – 1) \) , where \ (d\) is the degree of polynomial. What is special about the interpolating Hermite cubic if = 3 4? 10. We can use the cubic interpolation formula to construct the bicubic interpolation formula. calculate the. Remember you will have to get all the fundamental polynomials and add them together to give the lagrange interpolating polynomial. Details about the mathematical background. the fitted values corresponding to x. De nition (Cubic Spline) Let f(x) be function de ned on an interval [a;b], and let x 0;x 1;:::;x n be n + 1 distinct points in [a;b], where a = x 0 < x 1 < < x n = b. Graphing Calculator. , having zero residuals). We take a slightly different approach, by first drawing it as a B-Spline. Whereas the spline function built by natural splines with the same supporting points would look like this There is a small difference between these two graphs: On the periodic spline function the slope and function value at the end and the slope and function value at the beginning are equal. Natural Cubic Spline C Codes and Scripts Downloads Free. Cubic splines for four points. def spline_func(x, y, periodic=False): if periodic: spline = CubicSpline(x, y, bc_type='periodic') else: spline = CubicSpline(x, y) return spline # for a function f(t) of points, computes the complex Fourier coefficients # pts: numpy array of ordered points (n x 2) that define your curve # nvec: number of Fourier components to calculate. Cubic Spline Interpolation. The second derivate of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of m-2 equations. interpolation of soil water characteristic data with natural cubic spline functions. The segments can be linear, quadratic, cubic, or even higher order polynomials. This leads us to our next section. 
At the endpoints, the second derivative is set to zero, which is termed a “natural” spline at the. 5 1 x1 x2 x3 x4 x5 s 1(x1. We created new PrusaPrinters website for all Prusa and RepRap fans. Splines are a great way of calculating extra points between these key points to allow you to create much more organic and natural looking regions. Compare your interpolated values with the values of the function f(x) = ex 2 1 + 25x2. Each map is one piece of the spline and has: $$u$$: Start of the interval $$v$$: End of the interval. Spline Returns the Y which lies on the cubic (or natural) spline curve at the given X Interpolate Returns the Y which lies on an interpolated curve at the given X Interp Returns the Y which lies on an interpolated curve at the given X using the defaults of Interpolate XatY Returns the X value at the Max. By default, the algorithm calculates a "natural" spline. A log transformation is a relatively common method that allows linear regression to perform curve fitting that would otherwise only be possible in nonlinear regression. The Matrix equation to calculate the h parameters contains many elements that are 0 and due to this fact there can by an improvement to solve this equation. Text Book: Numerical Analysis by Burden, Faires & Burden. Show that the set of natural cubic splines on a given knot partition x 0 > Natural_spline. A cubic spline is a spline constructed of piecewise third-order polynomials which pass through a set of control points. https://en. com Abstract It is often desirable to evaluate parametric spline curves at points based on their arc-length instead of the curveÕs original parameter. What is special about the interpolating Hermite cubic if = 3 4? 10. An inflection point of a cubic function is the unique point on the graph where the concavity changes The curve changes from being concave upwards to concave downwards, or vice versa. Key words: curve fitting, spiral spline, nonlinear spline, least energy, interpolation. 
Suppose that there are variables as follows: observetime, censor, variablex (the independent. The idea of a spline interpolation is to extend the single polynomial of linear interpolation to higher degrees. This leads us to our next section. ) Calculate the Hermite cubic function which interpolates x 2 1 0 1 2 f 0 1 4 1 1 4 0 f0 0 0 0 with = 1. For example, if y is a vector, then: y(2:end-1) gives the function values at each point in x. Note that for notational simplicity later on the entries of i are numbered in a non-standard way, starting at = 2. To Interpolate Y from X. These BSplines constitute basis for any Spline. For natural cubic splines "A natural cubic splines adds additional constraints, namely that function is linear beyond the boundary knots. Natural cubic spline has been established to calculate dose maps from field characteristics. Abstract $$L^1$$ splines have been under development for interpolation and approximation of irregular geometric data. 3 Piecewise Cubic Spline interpolation NDOF: 4N ¡3(N ¡1) = N +1+2) specify f(xi) at x0;:::;xN. Natural Cubic Splines. Input MUST have the format: AX3 + BX2 + CX + D = 0. Lectur e #15: Natural Splines, B-Splines, and NURBS Prof. Direct Method of Interpolation: Cubic Interpolation - Part 1. Conceptual background. It is simple to use because the new functions work just like all other existing Excel functions. A web based polynomial or Cubic Splines interpolation tool. ECE 1010 ECE Problem Solving I Chapter 6: Interpolation 6–8 Cubic-Spline Interpolation • As we can see from the previous example, linear interpola-tion produces a rather jagged result if the data points are not closely spaced and don’t lie in a straight line • An improved interpolation procedure is to replace the straight. The favorable range for p is often near 1/(1 + h 3 /6), where h is the average spacing of the data sites. Use natural cubic spline. AMS(MOS) subject classifications: 65D07, 65D10, 41A15. 
Similar to cubic spline interpolation, cubic B-spline interpolation also fits the data in a piecewise fashion, but it uses 3rd-order Bézier splines to approximate the data. Restricted cubic splines are also called "natural cubic splines." A natural spline defines the curve that minimizes the potential energy of an idealized elastic strip. A cubic spline is comprised of a sequence of cubic polynomials, so to draw the curve we approximate each cubic piece with a polyline. If I use LINEST() to fit a cubic polynomial to points 8–11 (which should be equivalent to your interpolating-polynomial algorithm), I get 0. Cubic splines tend to be poorly behaved at the two tails (before the first knot and after the last knot). Cubic splines create a series of piecewise cubic polynomials. The interpol.pro application can be used with the "/spline" keyword on an irregular grid. Cubic Equation Calculator. EXAMPLE: If you have the equation 2X³ − 4X² − 22X + 24 = 0. Optimal distribution of interpolation nodes. We calculate the value of the polynomial at the point x* = 0.5 using Horner's scheme. Ariffin and Karim [9] used two types of cubic spline functions — cubic spline interpolation with C² continuity and Piecewise Cubic Hermite Interpolation (PCHIP) with C¹ continuity — for interpolating data. Derivation using cubic Hermite interpolation: we have piecewise cubic polynomials similar to the piecewise cubic Hermite polynomials on each subinterval. The other method used quite often is the cubic Hermite spline, which gives us the spline in Hermite form. But a parabola always has a vertex. Using a constrained cubic spline instead of a natural cubic spline eliminates overshoot and undershoot in the HHT (Hilbert–Huang transform). Natural cubic splines: if we have n+1 control points then we specify n cubic splines. The point where two splines meet is sometimes referred to as a node. A spline is a collection of polynomial segments. A restricted cubic spline (aka natural cubic spline) is a cubic spline with an additional restriction: the first and last sub-functions beyond the boundary knots are linear functions instead of cubic functions. Given two (x, y) pairs and an additional x or y, compute the missing value. Linear spline: with two parameters, a segment can only satisfy the two equations required for continuity. The centroid of each cluster becomes a node through which a natural spline is fitted. If y is a vector that contains two more values than x has entries, then spline uses the first and last values in y as the end slopes for the cubic spline. An object of class "smooth.spline" with components. The linear interpolant is the straight line between the two known coordinate points (x0, y0) and (x1, y1). Details about the mathematical background. Finding all the right weights is a global calculation (solve a tridiagonal linear system).
This applies to all interior points (where two functions meet) ⇒ 2(n−1) constraints. Stay on top of important topics and build connections by joining Wolfram Community groups relevant to your interests. The higher the order is, the smoother the spline becomes. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of m−2 equations. Then (8) differs from the natural cubic spline only in that the latter is required to be linear on the end intervals. A cubic spline with knots (cutpoints) at $\xi_K,\ K = 1, 2, \dots, k$ is a piecewise cubic polynomial with continuous derivatives up to order 2 at each knot. Does anybody know another method to join the points of an airfoil, or a smoother method to use with airfoils? I use splines to improve the visualization, but at the leading edge I want a smoother curve, more like a circle. Recall your Gerschgorin disks from MA385; Exercise 93: find the natural cubic spline (ENG 101, Heriot-Watt). The lm() function has several additional parameters that we have not discussed. Not long ago, David Dailey sent us a link to an article (with SVG demo) based on work on the natural version of this spline, resulting in the natural cubic spline. Natural cubic splines: derivation of the algorithm. Cubic splines (cont): in general, the i-th spline function for a cubic spline can be written with four coefficients; for n data points there are n−1 intervals and thus 4(n−1) unknowns to evaluate to solve for all the spline-function coefficients. "Cubic spline support" is not very enlightening, in the sense that it may actually mean having cubic-spline smoothing as an option when creating X–Y plot graphs (this is the most widely used case). This is illustrated in Figures 1 and 2, where a natural cubic spline is fitted to hypothetical and somewhat unusual distillation and pump curves. The degree of the curve is d and must satisfy 1 ≤ d ≤ n. That is, the function values and derivatives are specified at each nodal point. A restricted cubic spline is a cubic spline in which the pieces are constrained to be linear in the two tails. This is the basic algorithm for natural splines. Results can be compared using correlation. Cubic spline interpolation with clamped boundary condition. MATH 400, Spring 2005, efficient algorithm for cubic splines: to determine the cubic spline, we must find the coefficients C₁, …, C₍ₙ₊₁₎. Select desired data. Interpolations include linear, cubic spline, Bessel and monotonic "constrained" splines, as well as a "flexible spline" that allows you to specify the slope at each data point. This video looks at an example of how we can interpolate using cubic splines; both the natural and clamped boundary conditions are considered. B-splines and control theory: in this paper some of the relationships between B-splines and linear control theory are examined; in particular, the controls that produce the B-spline basis are constructed and compared to the basis elements for dynamic splines. The resolution of super-resolution microscopy based on single-molecule localization is in part determined by the accuracy of the localization algorithm. Math 4446 Project I, Natural and Clamped Cubic Splines (Mark Brandao, March 4, 2014): the goal of this project is to employ our linear algebra, calculus, and Matlab skills for a specific application in the area of spline interpolation.
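The tridiagonal system mentioned above can be solved in O(n) with the Thomas algorithm. Here is a sketch that computes the natural-spline second derivatives ("moments") directly; the function name is my own, not from the text:

```python
def natural_spline_moments(x, y):
    """Second derivatives M_i of the natural cubic spline through (x_i, y_i).

    Natural boundary conditions fix M_0 = M_{n-1} = 0; the interior moments
    solve a tridiagonal system, handled here with the Thomas algorithm in O(n).
    """
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    m_count = n - 2  # number of interior unknowns M_1 .. M_{n-2}
    sub = [h[i] / 6.0 for i in range(1, m_count)]            # sub-diagonal
    diag = [(h[i] + h[i + 1]) / 3.0 for i in range(m_count)]  # diagonal
    sup = [h[i + 1] / 6.0 for i in range(m_count - 1)]        # super-diagonal
    rhs = [(y[i + 2] - y[i + 1]) / h[i + 1] - (y[i + 1] - y[i]) / h[i]
           for i in range(m_count)]
    # forward elimination
    for i in range(1, m_count):
        w = sub[i - 1] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    # back substitution
    m = [0.0] * m_count
    m[-1] = rhs[-1] / diag[-1]
    for i in range(m_count - 2, -1, -1):
        m[i] = (rhs[i] - sup[i] * m[i + 1]) / diag[i]
    return [0.0] + m + [0.0]

# collinear data: every second derivative of the spline must vanish
print(natural_spline_moments([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))
```

As a sanity check, collinear data gives all-zero moments, and for the symmetric data (0,0), (1,1), (2,0) the single interior moment works out to −3.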
This ensures from the outset that values and first derivatives match, and you only have to solve a linear system that forces the second derivatives to match, too. c) Neighboring cubic functions have equal second derivatives at their common point — this means the acceleration is continuous at the points of interpolation. In addition, the use of so-called "virtual data points" enables the user to perform approximation of the data by hand. Computes the H-infinity optimal causal filter (indirect B-spline filter) for the cubic spline. Students generate graphs on the calculator. Another choice for the 2 degrees of freedom is to make s'''(x) continuous at x(1) and x(n−1). Splines provide a way to smoothly interpolate between fixed points, called knots. Using the ns function in the splines package, we can create a basis matrix that allows us to fit a natural cubic spline using regular regression functions such as lm and glm. If method = "fmm", the spline used is that of Forsythe, Malcolm and Moler (an exact cubic is fitted through the four points at each end of the data, and this is used to determine the end conditions). We can use the cubic interpolation formula to construct the bicubic interpolation formula. This calculator uses provided target-function table data in the form of points {x, f(x)} to build several regression models, namely linear, quadratic, cubic, power, logarithmic, hyperbolic, ab-exponential, and exponential regression. To construct a cubic spline from a set of data points we need to solve for the coefficients sk0, sk1, sk2 and sk3 for each of the n−1 cubic polynomials. A spline function is a function that consists of polynomial pieces joined together with certain smoothness conditions. A first-degree spline function is a polygonal function with linear polynomials joined together to achieve continuity; the points t₀, t₁, …, tₙ at which the function changes its character are the knots. Solving a cubic spline system: assume natural splines; this is a tridiagonal system and can be solved in O(n) operations — do an LU factorization and solve, which with the tridiagonal structure requires O(7n) operations. How the basis matrix is generated is quite complicated and probably something you'll just want to take on faith, like I do. The details of determining this NCS are given in Green and Silverman (1994). I have two lists describing the function y(x): x = [0,1,2,3,4,5] and y = [12,14,22,39,58,77]; I would like to perform cubic spline interpolation so that, given some value u in the domain of x, I can evaluate the spline there. The second derivative is chosen to be zero at the first point and last point. Exercise: a natural cubic spline for a function f(x) is defined piecewise; find the values of a1 and a2. The cubic spline interpolation pooling method proposed in the present study avoids the above-mentioned problems. The most commonly used spline is a cubic spline, which we now define.
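The "restricted" (natural) property — linearity beyond the boundary knots — can be made concrete with the standard truncated-power construction of restricted cubic spline basis terms. This is a sketch; the function name is illustrative and not from the text:

```python
def rcs_basis(x, knots):
    """Restricted (natural) cubic spline basis terms C_j(x) for the given knots.

    Each C_j is built from truncated cubics so that, beyond the last knot,
    the cubic and quadratic terms cancel and C_j is exactly linear in x —
    the 'linear in the tails' restriction described above.
    """
    p = lambda u: u ** 3 if u > 0 else 0.0  # truncated cubic (u)_+^3
    tk, tk1 = knots[-1], knots[-2]
    out = []
    for tj in knots[:-2]:
        cj = (p(x - tj)
              - p(x - tk1) * (tk - tj) / (tk - tk1)
              + p(x - tk) * (tk1 - tj) / (tk - tk1))
        out.append(cj)
    return out

# beyond the last knot (3.0) the basis term grows with a constant slope
vals = [rcs_basis(v, [0.0, 1.0, 2.0, 3.0])[0] for v in (5.0, 6.0, 7.0)]
print(vals)  # equally spaced x -> equal successive differences
```

With knots 0, 1, 2, 3 the first basis term at x = 5, 6, 7 increases by the same amount each step, confirming the linear tail.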
Create a new worksheet with the input data. Natural cubic spline function interpolation. Intermediate values will be calculated by creating a natural cubic spline based on the rates. Plus 2 extra conditions. Solutions of Homework 6, CS321, Fall 2010: assume the cubic spline polynomial and determine the parameters a, b, c, d and e so that S is a natural cubic spline. I think the fact that the SAS documentation refers to restricted cubic splines as "natural cubic splines" has prevented some practitioners from realizing that SAS supports restricted cubic splines. We can solve this problem by building a cubic spline with the spline1dbuildcubic function and calling spline1ddiff for each of the new nodes (see below). (1989) End conditions for cubic spline interpolation derived from integration. A cubic spline. Amongst all twice continuously differentiable functions, natural cubic splines yield the least oscillation about the function which is interpolated. Let us have a cubic polynomial defined on the interval [x1, x2]. We investigate the advantages, in terms of shape preservation and computational efficiency, of calculating univariate cubic $$L^1$$ spline fits using a steepest-descent algorithm to minimize a global data-fitting functional under a constraint implemented by a local analysis-based approach. Lecture 11: Splines (36-402, Advanced Data Analysis): a natural way to do this, in one dimension, is to minimize the spline objective over piecewise cubic polynomials. Let g denote the vector (g₁, …, gₙ)ᵀ. I googled persistently for "Gnumeric" and "cubic spline interpolation" and found a couple of references to the "Time Series Analysis Functions" plugin. The new functions can be used for data analysis, forecasting, and many other applications. Here is an online linear-interpolation calculator for determining the linearly interpolated values of a set of data points within fractions of a second. The function will return a list of four vectors representing the coefficients. A new envelope algorithm for the Hilbert–Huang transform. 'Parameter: NMAX is the largest anticipated value of n. The natural cubic smoothing spline estimator can be obtained as follows.
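Horner's scheme, mentioned above for evaluating the spline's cubic pieces, can be sketched in a few lines; the coefficients here are the cubic 2X³ − 4X² − 22X + 24 from the earlier example:

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x**n with Horner's scheme (n mults, n adds)."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# p(x) = 24 - 22x - 4x^2 + 2x^3 at x = 0.5
print(horner([24.0, -22.0, -4.0, 2.0], 0.5))  # prints 12.25
```

Evaluating each spline piece this way costs only three multiplications and three additions per point.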
def spline_func(x, y, periodic=False):
    if periodic:
        spline = CubicSpline(x, y, bc_type='periodic')
    else:
        spline = CubicSpline(x, y)
    return spline

# For a curve f(t) of points, computes the complex Fourier coefficients.
# pts: numpy array of ordered points (n x 2) that define your curve
# nvec: number of Fourier components to calculate

The feasibility of using natural cubic splines to calculate dose maps for linac radiation-therapy fields in a homogeneous phantom has been demonstrated. Splines exhibit less severe oscillatory behavior than interpolating polynomials. There is a command in MATLAB that will fit a cubic spline to a set of data. Here is an example in SAS — simulate some data with probabilities depending on sin(t), then model a natural cubic spline and store the result in "mystore":

data simulation;
  do i=1 to 10000;
    t=rand('uniform',0,10);
    p=1/(1+exp(sin(t)));
    y=rand('bernoulli',p);
    output;
  end;
run;
proc … (truncated)

Which is simplified by using the substitution, giving: to guarantee the smooth continuity of the interpolating spline, we have the following conditions: 1) the splines properly interpolate the given points. In addition, for cubic splines ($$k=3$$) with 8 or more knots, the roots of the spline can be estimated. On cardinal natural cubic spline functions. The pieces take the form sᵢ(x) = aᵢ + bᵢ(x − xᵢ) + cᵢ(x − xᵢ)² + dᵢ(x − xᵢ)³. 2. A flexible strip of wood or rubber used by draftsmen in laying out broad sweeping curves, as in railroad work. I am able to input into R all of the necessary data that would be present in Matlab, but my spline output differs from Matlab's on average. (There is a more elegant derivation of this in [3] as well.) values = csapi(x,y,xx) returns the values at xx of the cubic spline interpolant to the given data (x,y), using the not-a-knot end condition. The X and/or Y arrays may have missing values (#N/A). This section shows how to perform a regression fit by using restricted cubic splines in SAS. It seems Excel uses a spline (as one might expect), but there are many different kinds of splines, and he has found the right one. Unlike these splines, the performance of the csaps algorithm depends only on the data size and the data dimension. Arc Length Parameterization of Spline Curves (John W. …). Natural and Clamped Cubic Splines. At first the author shows how to calculate linear spline interpolation; the overall shape is good, but to receive better results one should use cubic spline interpolation, which extends linear interpolation. The weights used at the unique values of x. The polynomial pieces join continuously at the knots.
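The piecewise form sᵢ(x) = aᵢ + bᵢ(x − xᵢ) + cᵢ(x − xᵢ)² + dᵢ(x − xᵢ)³ quoted above maps directly onto SciPy's coefficient array. A sketch with illustrative data (SciPy stores the highest-degree coefficient first in `cs.c`):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])  # illustrative data
cs = CubicSpline(x, y, bc_type='natural')

# cs.c[k, i] is the coefficient of (x - x[i])**(3 - k) on piece i,
# so column i holds (d_i, c_i, b_i, a_i) of the piecewise form above
d_i, c_i, b_i, a_i = cs.c[:, 1]
t = 1.5 - x[1]
manual = a_i + b_i * t + c_i * t**2 + d_i * t**3
print(manual, float(cs(1.5)))  # the two evaluations agree
```

This makes it easy to export a fitted spline as plain polynomial coefficients for use elsewhere.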
At each point, the first derivatives of adjacent splines must be equal (this applies to all interior points) ⇒ n−1 constraints. In case I am using normal cubic interpolation, how about I loop through the N sample points? The math is similar to ridge regression. In the case of three points, the values k₀, k₁, k₂ are found by solving the tridiagonal linear equation system. [From GSL:] Cubic spline with natural boundary conditions. To complete the description one usually sets the first and last second derivatives to zero. A cubic spline is a function f constructed by piecing together cubic polynomials pₖ(x) on the intervals [x[k], x[k+1]]. For each x-y ordered pair. The segments can be linear, quadratic, cubic, or even higher-order polynomials. Splines that are linear left of the first knot. 11 CubicSplinesIntersection: x value of the intersection point between two cubic splines. Natural spline interpolation: cubic corrections. This type of cubic spline fits a natural cubic spline to the 4-point neighborhood of known data points surrounding the x value at which we wish to evaluate. Interpolation Calculator. You can take the log of both sides of the equation; for example, the nonlinear function Y = e^(B0) · X₁^(B1) · X₂^(B2). 1 Definition of B-Spline Curves: a B-spline curve is defined for a collection of n+1 control points {Qᵢ}ᵢ₌₀ⁿ by X(t) = Σᵢ₌₀ⁿ Nᵢ,d(t) Qᵢ (1); the control points can be of any dimension, but all of the same dimension. The functions Nᵢ,d(t) are the B-spline basis functions. Generalization to splines of general order is relatively straightforward. Spline functions include the cubic spline, Bessel spline, and 'OneWay' spline (which is a monotonic spline). Calculate. You'd need to calculate separate splines for… The resulting spline s is completely defined by the triplet (x, y, d), where d is the vector of derivatives at the xᵢ: s'(xᵢ) = dᵢ (this is called the Hermite form). How to solve interpolation on a Casio fx-991MS calculator. Natural cubic splines task: find S(x) such that it is a natural cubic spline. In this case, INTERPOLATE will remove those entries. For each sample length i = 1, …, CN. Plus 2 extra conditions. 8 CubicSplineDifferentiate: a natural cubic spline with continuous second derivative in the interior and zero second derivative at the endpoints. We will now look at an example of constructing a natural cubic spline function. Wolfram Community forum discussion: fit a cubic spline to the centerline data points? Introduction. Computational Maths 2003–2004: comment on your results.
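The "given two (x, y) pairs and an additional x or y, compute the missing value" task mentioned earlier is plain linear interpolation; a minimal sketch (function names are my own):

```python
def lerp_y(x0, y0, x1, y1, x):
    """y on the straight line through (x0, y0) and (x1, y1) at a given x."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def lerp_x(x0, y0, x1, y1, y):
    """The inverse problem: x at which the same line reaches a given y."""
    return x0 + (x1 - x0) * (y - y0) / (y1 - y0)

print(lerp_y(1.0, 10.0, 3.0, 20.0, 2.0))  # midpoint -> 15.0
```

Each cubic-spline piece degenerates to exactly this formula when the two bracketing second derivatives are zero.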
In ridge regression, you add a quadratic penalty on the size of the regression coefficients. 12 LinearSplineInterpolate. 'The first column is a cubic spline interpolation of your data; each subsequent 'column is a higher-order derivative. Lecture 7: Splines and Generalized Additive Models — splines for classification; example in R: class <- glm(I(wage>250) ~ ns(age,3), data=Wage, family='binomial'). In general, a cubic spline with K knots uses a total of 4 + K degrees of freedom. There are m−2 cubic polynomial curve segments Q₃ … Q_m and m−1 knot points t₃ … t₍ₘ₊₁₎; each segment Qᵢ of the B-spline curve is defined over a knot interval by 4 of the control points P₍ᵢ₋₃₎ … Pᵢ, and the segments are blended together into smooth transitions via the blending functions on [tᵢ, …]. Compared to the cubic spline method, the cubic Hermite method has better local properties. 1 Cubic Splines: the cubic spline is what you may have come across in other drawing programs — a smooth curve that connects three or more points. This section provides an example of using splines in PROC GLMSELECT to fit a GLM regression model. [Image from Carl de Boor's webpage.] Catmull–Rom is a good spline algorithm to use if you need the line to pass through the points. Whereas the spline function built by natural splines with the same supporting points would look slightly different, there is a small difference between the two graphs: for the periodic spline function, the slope and function value at the end equal the slope and function value at the beginning. Cubic interpolating functions, chosen so that the second derivatives are zero at the endpoints: f₁(x) = a₁(x − x₁)³ + b₁(x − x₁) + y₁ for x₁ ≤ x ≤ x₂. Cubic $$L^1$$ spline fits have been developed for geometric-data approximation and show excellent performance in shape preservation. 2 Linear Interpolating Splines: a simple piecewise polynomial fit is the continuous linear interpolating spline. As you can see from the figure, it provides a smooth curve that appears to fit the data well. The Spline tool uses an interpolation method that estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points. The user is asked to enter a set of x- and y-axis data points, and each of these is then joined by a cubic polynomial. At each data point, the values of adjacent splines must be the same. This gives us our spline functions S₀(x) = …; Figure 1 illustrates the case of N = 5. Find the natural cubic spline that interpolates the points $(1, 1)$, $\left( 2, \frac{1}{2} \right)$, $\left( 3, \frac{1}{3} \right)$, and $\left( 4, \frac{1}{4} \right)$. Calculate cubic spline interpolation with natural end conditions (zero bending moment at the endpoints) from vector data points. You can make the process of transferring the application to your calculator simple with Texas Instruments' handy TI Connect software. Cubic splines are used to fit a smooth curve to a series of points with a piecewise series of cubic polynomial curves. GAMs are additive. In addition to spline conditions, one can choose piecewise cubic polynomials that satisfy Hermite interpolation conditions (sometimes referred to by the acronym PCHIP, Piecewise Cubic Hermite Interpolating Polynomials). x y Figure 1.
Among other numerical-analysis modules, scipy covers interpolation algorithms as well as different ways to use them: calculate an interpolation, evaluate a polynomial representation of the interpolant, and compute derivatives, integrals, or roots via either a functional or a class interface. Restricted cubic splines (Bruce and Bruce 2017). Wikipedia has a very nice article on Bézier curves that includes animations. 10 illustrates the interpolation for the data of October 1998, which is shaded in Exhibit 6. Panel B shows the … (truncated). The slope of the line extrapolating the leading nulls is equal to the slope of the cubic spline at the first non-null value ('2013-09-29'). Construct a natural cubic spline for this data set and use it to calculate interpolated values for each x value. James O'Brien, University of California, Berkeley. Solving for the second derivatives, I can then plug them back into the cubic spline equation. Cubic: twice differentiable, and the second-order derivative is continuous (C²). The matrix equation for the remaining coefficients C₀, C₁, …, C_n is tridiagonal. Although a cubic spline may have only two points, it then ends up as a straight line. An example of such a tool is … The given values meet the periodic boundary conditions. We want to calculate function values on a new grid x₂ using cubic splines. EFFECT spl = spline(x / knotmethod=percentilelist(5 27. If the red points above represent displacements along a one-dimensional line, cubic spline interpolation can be used to produce the black curve. Anyway, why do you think you need to use a natural cubic spline, anyway?
You would usually be better off using that which spline itself produces. In case of three points the values for k 0 , k 1 , k 2 {\displaystyle k_{0},k_{1},k_{2}} are found by solving the tridiagonal linear equation system. %Cubic spline interpolation between discrete points. 2 A flexible strip of wood or rubber used by draftsmen in laying out broad sweeping curves, as in railroad work. Hello Friends, I am Free Lance Tutor, who helped student in completing their homework. This section provides an example of using splines in PROC GLMSELECT to fit a GLM regression model. In addition, for cubic splines ( $$k=3$$) with 8 or more knots, the roots of the spline can be estimated. Hit the button Show example to see a demo. Two types of splines, natural and periodic, are supported. Property 1 supplies n constraints, and properties 2,3,4 each supply an additional n-2 constraints.
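The tridiagonal construction mentioned above is compact enough to write out in full. Here is a from-scratch sketch in plain Python (the data points are made up for illustration; in practice `scipy.interpolate.CubicSpline(x, y, bc_type='natural')` does the same job): it solves the tridiagonal system for the knot second derivatives with the natural conditions M_0 = M_n = 0 via the Thomas algorithm, then evaluates the piecewise cubic.

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline through the points (xs[i], ys[i]).

    Solves the tridiagonal system for the knot second derivatives M_i,
    with the natural boundary conditions M_0 = M_n = 0, and returns a
    callable that evaluates the piecewise cubic.
    """
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    M = [0.0] * (n + 1)  # second derivatives; the two ends stay 0 (natural)
    if n >= 2:
        # One equation per interior knot i = 1 .. n-1:
        #   h[i-1] M[i-1] + 2(h[i-1]+h[i]) M[i] + h[i] M[i+1] = d_i
        b = [2.0 * (h[i - 1] + h[i]) for i in range(1, n)]
        d = [6.0 * ((ys[i + 1] - ys[i]) / h[i]
                    - (ys[i] - ys[i - 1]) / h[i - 1]) for i in range(1, n)]
        for k in range(1, n - 1):          # Thomas algorithm: forward sweep
            w = h[k] / b[k - 1]
            b[k] -= w * h[k]               # sub- and super-diagonals are both h
            d[k] -= w * d[k - 1]
        M[n - 1] = d[-1] / b[-1]           # back substitution
        for k in range(n - 3, -1, -1):
            M[k + 1] = (d[k] - h[k + 1] * M[k + 2]) / b[k]

    def s(x):
        # locate the interval [xs[i], xs[i+1]] containing x (clamped)
        i = n - 1
        for j in range(n):
            if x <= xs[j + 1]:
                i = j
                break
        t0, t1 = xs[i + 1] - x, x - xs[i]
        return ((M[i] * t0 ** 3 + M[i + 1] * t1 ** 3) / (6 * h[i])
                + (ys[i] / h[i] - M[i] * h[i] / 6) * t0
                + (ys[i + 1] / h[i] - M[i + 1] * h[i] / 6) * t1)
    return s

s = natural_cubic_spline([0.0, 1.0, 2.0], [0.0, 1.0, 0.0])
print(s(0.5))    # 0.6875
print(s(1.0))    # 1.0 -- the spline interpolates the knots

# with only two points, the natural spline is just a straight line
line = natural_cubic_spline([0.0, 2.0], [0.0, 4.0])
print(line(1.0)) # 2.0
```

The two-point case at the end matches the remark above: with natural boundary conditions both end second derivatives are zero, so the "spline" degenerates to the straight line through the two points.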
| 2020-07-13 17:23:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5721167922019958, "perplexity": 906.7917483497629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146247.90/warc/CC-MAIN-20200713162746-20200713192746-00412.warc.gz"}
https://zwaltman.wordpress.com/category/category-theory/ | # Today I Learned
Some of the things I've learned every day since Oct 10, 2016
## 89: Functors
In category theory, a functor is a map between categories $\mathcal{C}, \mathcal{D}$ consisting of two components:
1. a map $f$ from the objects of $\mathcal{C}$ to those of $\mathcal{D}$
2. a map from each morphism $g: X \rightarrow Y$ in $\mathcal{C}$ to a morphism $g': f(X) \rightarrow f(Y)$ in $\mathcal{D}$.
This map must preserve the identity morphisms and composition. That is, it must satisfy the following properties:
1. $f(1_X) = 1_{f(X)}$ for objects $X$ in $\mathcal{C}$
2. $f(a \circ b) = f(a) \circ f(b)$
Functors can be thought of as kinds of homomorphisms between categories. With functors as arrows, categories can then form a category called $\mathbf{Cat}$, the category of categories.
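As a toy illustration of the two laws (not from the original post), consider the list construction on sets: it sends a set $X$ to the lists over $X$ and a function $g$ to its elementwise map. A quick Python check of both functor laws on sample data:

```python
# The "list functor" lifts g : X -> Y to a function on lists, map(g).
def fmap(g):
    return lambda xs: [g(x) for x in xs]

identity = lambda x: x
compose = lambda a, b: (lambda x: a(b(x)))  # a after b

g = lambda n: n + 1      # a morphism X -> Y
h = lambda n: 2 * n      # a morphism Y -> Z
sample = [1, 2, 3]

# Law 1: identities are preserved, fmap(1_X) = 1_{List(X)}
assert fmap(identity)(sample) == identity(sample)

# Law 2: composition is preserved, fmap(h . g) = fmap(h) . fmap(g)
assert fmap(compose(h, g))(sample) == compose(fmap(h), fmap(g))(sample)

print("functor laws hold on the sample:", fmap(g)(sample))  # [2, 3, 4]
```

Of course a finite check is not a proof, but it shows concretely what the two conditions demand of the morphism map.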
## 86: Forgetful Functors
Forgetful functors are, as the name implies, functors between categories that forget something about the structure or properties of the objects and arrows in the source category of the functor.
Examples:
• The functor $U: \mathbf{Grp} \rightarrow \mathbf{Set}$, which maps groups to their underlying sets and group homomorphisms to themselves. $U$ forgets the group structure and that group homomorphisms are anything other than functions between sets.
• The functor $V: \mathbf{Ab} \rightarrow \mathbf{Grp}$ from the category of abelian groups to the category of groups. $V$ simply maps groups and group homomorphisms to themselves, essentially forgetting that the source groups are abelian.
## 85: The Group as a Category
A group can be viewed as equivalent to a specific kind of category, namely a category $C$ with only a single object, all of whose morphisms are isomorphisms. In this equivalence, the elements of the group correspond to the morphisms of $C$, the group operation to $\circ$ (composition), and the group identity to the identity morphism on the single object of $C$. Since all morphisms in $C$ are isomorphisms, each 'element' has an inverse, and $\circ$ is associative, analogous to the associativity of the group operation.
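The correspondence can be spot-checked mechanically. The sketch below (an illustration, not from the original post) encodes $\mathbb{Z}/3\mathbb{Z}$ as a one-object category whose morphisms are the residues 0, 1, 2 with composition given by addition mod 3, and verifies the identity law, associativity, and that every morphism has a two-sided inverse:

```python
# Z/3 viewed as a one-object category: morphisms are the group elements,
# composition is addition mod 3, and the identity morphism is 0.
morphisms = [0, 1, 2]
compose = lambda g, h: (g + h) % 3
identity = 0

# the identity morphism behaves like the group identity
assert all(compose(identity, g) == g == compose(g, identity) for g in morphisms)

# composition is associative, matching associativity of the group operation
assert all(compose(compose(a, b), c) == compose(a, compose(b, c))
           for a in morphisms for b in morphisms for c in morphisms)

# every morphism is an isomorphism: each one has a two-sided inverse
inverses = {g: next(h for h in morphisms
                    if compose(g, h) == identity == compose(h, g))
            for g in morphisms}
print(inverses)  # {0: 0, 1: 2, 2: 1}
```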
## 79: Hom-Set
In category theory, the hom-set between 2 objects $X, Y$ in a category $C$, often denoted as
$\textrm{hom}_C (X, Y)$
or simply
$\textrm{hom} (X, Y)$
is the collection of arrows (morphisms) in $C$ from $X$ to $Y$. Note that despite the name, the hom-set is not a set in general. | 2018-03-23 16:49:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504061937332153, "perplexity": 306.0063103930778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00313.warc.gz"} |
https://tw.answers.yahoo.com/question/index?qid=20060811000014KK13421 | # 麻煩各位幫忙翻譯2句英文,還有一個單字解釋
1. I think therefore I am
2. Good breeding consists in concealing how much we think of ourselves and how little we think of the other person
concealing <= the word to explain
### 6 answers
• Anonymous user
10 years ago
Best answer
I think therefore I am.
"I think, therefore I am." --- a famous saying of the French philosopher Descartes
Good breeding consists in concealing how much we think of ourselves and how little we think of the other person.
"Good breeding lies in concealing how much we think of ourselves, and how little we think of others."
--- a famous quote from the American writer Mark Twain
concealing means "to hide; to conceal"
• 10 years ago
Sophia's translation is the BEST!! Awesome job, even I couldn't think of it instantly..!
• 10 years ago
1. I think, therefore I am
2. Good breeding lies in concealing how much we think of ourselves and how little we think of others
concealing: to hide, to conceal
• ?
Lv 6
10 years ago
Sentence translations: ↓
1. I think therefore I am
A: 1. Therefore I am think I
2. Good breeding consists in concealing how much we think of ourselves and how little we think of the other person
A: 2. How much we recall ourselves and how small we recall the other people, good breeding lies in hiding
Word translation: ↓
Concealing
A: to hide
Source: me
• 10 years ago
Sentences:
1. It is I think therefore I
I think therefore I am
2. Good rearing lies in hiding how much we recall our own selves, very little we recall the other people
Good breeding consists in concealing how much we think of ourselves and how little we think of the other person
Words:
1. to hide
Source: the Chinese seems a bit off
• 10 years ago
1. I think therefore I am
Therefore I am think I
2. Good breeding consists in concealing how much we think of ourselves and how little we think of the other person
How much we recall ourselves and how small we recall the other people, good breeding lies in hiding
3. conceal (vt.) to hide; to conceal
Source: a dictionary | 2020-08-15 11:22:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.84419846534729, "perplexity": 13940.781147272754}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00475.warc.gz"}
https://www.codingame.com/training/medium/the-polish-dictionary |
Goal
You are given a mathematical expression in Reverse Polish Notation and you need to return the equivalent expression in Infix Notation with parentheses correctly placed.
Infix Notation
Infix notation is the standard for math, operators are inserted between the two operands they work on:
5 + 3
* and / have precedence over + and -. This can be altered by surrounding certain operations with parentheses so they have to be calculated first:
5 + 3 * 6 = 5 + 18 = 23 -> (5 + 3) * 6 = 8 * 6 = 48
Reverse Polish Notation
In reverse polish notation, the operators are placed after the two operands they work on:
5 3 +
When we calculate the result, we imagine going through the operators left to right and replacing each one with the result we get if we apply it onto the operands right beside it:
5 3 + 10 * 8 4 + -
8 10 * 8 4 + -
80 8 4 + -
80 12 -
68
In this notation we do not need to worry about operator precedence or parentheses, as the order in which the operators were written dictates the order.
In this puzzle you also have to worry about "variables", which are just combinations of letters that are not operators; they have the same role as numbers and should appear in the result in the same way:
apple 3 * -> apple * 3
Input
Line 1: An integer N for the number of operands and operators in the string.
Line 2: A string consisting of N operands and operators separated by spaces in reverse polish notation.
An operator can be either of the following: +, -, *, /.
An operand can be a number or a variable name.
Output
Line 1: The resulting infix notation expression with the minimum number of parentheses.
There should not be a space between the parentheses and their operands.
Constraints
1 ≤ N ≤ 100
1 ≤ Length of the operands in characters ≤ 10
Operands can only be a combination of letters and numbers.
Example
Input
3
5 3 +
Output
5 + 3
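One way to solve the puzzle (a sketch, not an official solution) is a single stack of (expression, precedence) pairs: operands get a pseudo-precedence higher than any operator, and when an operator is processed it parenthesizes its left operand only if that operand binds looser, and its right operand also at equal precedence for `-` and `/` (which are not associative on the right) — a common convention for producing a minimal number of parentheses:

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}
OPERAND = 3  # bare operands never need parentheses

def rpn_to_infix(tokens):
    """Convert RPN tokens to infix with a minimal number of parentheses."""
    stack = []  # holds (expression_string, precedence) pairs
    for tok in tokens:
        if tok in PREC:
            r_expr, r_prec = stack.pop()
            l_expr, l_prec = stack.pop()
            p = PREC[tok]
            if l_prec < p:                       # left side binds looser
                l_expr = '(' + l_expr + ')'
            if r_prec < p or (r_prec == p and tok in '-/'):
                r_expr = '(' + r_expr + ')'      # e.g. a - (b + c), a / (b / c)
            stack.append((l_expr + ' ' + tok + ' ' + r_expr, p))
        else:
            stack.append((tok, OPERAND))
    return stack[-1][0]

# For the real puzzle the two lines would come from stdin:
#   input()                                  # N, not strictly needed
#   print(rpn_to_infix(input().split()))
print(rpn_to_infix('5 3 +'.split()))               # 5 + 3
print(rpn_to_infix('5 3 + 10 * 8 4 + -'.split()))  # (5 + 3) * 10 - (8 + 4)
print(rpn_to_infix('apple 3 *'.split()))           # apple * 3
```

Note how the second example parenthesizes `5 + 3` on the left of `*` (looser precedence) and `8 + 4` on the right of `-` (equal precedence under a non-associative operator), but never anywhere else.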
| 2020-09-21 19:08:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5981858968734741, "perplexity": 676.0953529622996}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202007.15/warc/CC-MAIN-20200921175057-20200921205057-00637.warc.gz"}
https://newproxylists.com/tag/reach/ | ## Fortigate to Azure VPN – connected but cannot reach anything
I have configured an IPSec VPN between a Fortigate and Azure, according to the following instructions:
IPsec VPN to Microsoft Azure
The VPN connects the first time, but I cannot reach the virtual server from the local network, nor anything on the local network from the server.
My configuration is as follows:
• Local network: 10.1.0.1/21
• Azure v-net: 10.1.100.0/23
• Azure subnet: 10.1.100.0/25
• Azure Gateway Subnet: 10.1.101.0/24
I have tried to ping or RDP to my server (10.1.100.10) from my computer (on the local network), and to ping my computer from the server. Neither works (also with the firewall disabled, and when pinging other locations).
I've already created the static route and the rules in the Fortigate.
Although not listed in the instructions, I tried to create a routing table in Azure with the subnet of the local network going through the virtual network.
Any ideas on what I should try next?
Thank you!! – Luis
## Spells – Is there a limit to what you can move beyond the weight and reach of your magic projection?
My general manager and I are at odds with a debate on the applicability of an airway fate that I have reproduced below:
Move
Effect: This spell allows the caster to move inanimate objects without physical contact over a distance of a maximum speed equivalent to the value of flight 10. Its maximum weight is 60 pounds.
I would argue that this spell simply grants the caster the ability to move objects within his radius, while the GM has currently decided that he should target an object. As this spell has daily maintenance, this distinction has a great impact on its applicability to many situations. Is one of us right?
## How to reach Australian small businesses?
I want to generate traffic from Australia for my website. How do I get backlinks, and how do I reach Australian small businesses?
## Active Directory: Can the AD Domain Controller not reach the domain members?
Due to some limitations, a particular AD site can not have local domain controllers, nor site-to-site VPN tunnels can be established to other sites. Instead, domain members here use point-to-site / dial-in VPNs to connect to remote domain controllers.
Domain members can access domain controllers via VPN without problem. However, because of the firewall and the nature of the point-to-site VPN, domain controllers will never be able to establish connections with these isolated domain members.
Is this one-way permanent design acceptable to these domain members? Or will there be complications regarding certain features / scenarios?
## algorithms – What is the surest way to reach a target number without exceeding it?
I'm trying to write a small algorithm where I'm trying to reach a TARGET NUMBER from an INITIAL NUMBER, so as not to exceed the TARGET NUMBER, and to make sure that the rate at which I reach the number is not too slow or too fast, but a subtle, gradual increase over a pre-defined time interval.
So
```
**INPUT**:
var initial_number
var final_number
step_size = final_number - initial_number
```
So if I have to start from `initial_number`, how can I keep watching `step_size` and keep adding toward `final_number` while keeping the conditions above?
One way, for example:
1. Divide step_size in half and add that half to initial_number
What other subtle ways can we implement this, while keeping the algorithm able to run in a critical system?
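A minimal sketch of option 1 (the numbers and the `min_step` cutoff are made up for illustration): each tick closes half of the remaining gap, so the value rises quickly at first, decelerates as it nears the target, and can never overshoot:

```python
def approach(initial, target, min_step=1e-6):
    """Yield values that close half of the remaining gap each tick,
    never exceeding the target; snap onto the target at the end."""
    current = initial
    while target - current > min_step:
        current += (target - current) / 2.0  # gradual, decelerating increase
        yield current
    yield target

values = list(approach(0.0, 100.0, min_step=1.0))
print(values[:3])   # [50.0, 75.0, 87.5]
print(values[-1])   # 100.0
```

The `min_step` floor is what makes the loop terminate in finite time; in a time-critical system one could instead fix the number of ticks up front and derive the per-tick fraction from it.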
## [ Korea ] Open Question: Why are the Liberals so happy that Trump did not reach a nuclear deal with North Korea this week?
Do they expect a nuclear war to be able to blame Trump? .
## adb – Cannot reach fastboot from recovery mode
Beforehand: sorry for asking; I was sure it was not a duplicate.
I soft-bricked my phone. The story: I used Kingoroot or something similar, and was too hurried with the "yes, please, destroy my rooted device via a firmware update!" button.
Now I have arrived at this exact situation:
• I get output from `fastboot devices` in the Linux console when booting into fastboot (it's `WT98360 fastboot`)
• I can reach the recovery mode
• I get the right answer when testing root in recmode
What I think is true:
• I have downloaded the right firmware for my Huawei y360-u61 (shit but I can not afford a better one)
• When I put this file without further processing on an SD card and boot into recovery mode, I should be able to choose "apply update from external SD card" and recover my phone. — But unfortunately it does not work. It instead aborts, saying "installation canceled".
• When I choose "apply update from ADB" I should be able to `adb sideload update.zip` the file I downloaded, without having to perform other operations on that file, and recover my phone. — But unfortunately, the phone is not recognized in this case. The only thing I know is that `fastboot devices` gives me nothing when I choose "apply update from ADB"; that is all I can say, because I know nothing about adb magic.
• I should not be able to sideload the phone just by booting into fastboot, because before the crash I had not enabled what I would need for that, namely OEM unlocking and developer mode. — Indeed, booting into fastboot (where the phone is found by fastboot from the Linux shell) and trying to `adb sideload` the file gives me `error: no devices/emulators found`.
My best guess: update the bootloader / "open it up" for sideloading. I am here to ask whether that is correct and, if so, how to do it. And of course, I want to know whether my assumptions are correct or not.
And for the sake of God: how can I turn off the device when it is in fastboot or recovery mode?
## I think I reach the goal
"I think that achieving this goal depends on negotiation skills, and traders set their goal of earning money based on their learning abilities. my daily goal a saving of 20 pips with this policy This is not an instructive trading method, it still generates profits.In general, I found that many regulated did not prefer to provide this structure of negotiation.
But luckily, I am able to apply this technique in all major currencies of my AAFX trading platform. This regulated trading phase until the end of time provides the best trading environment for scalping, providing only a piping trading spread. So, it is possible for me to make sure that my goal is to make money on this platform in no time. "
## java – RabbitMQ / AMQP – how to reach a consumer for each message in the queue (or near it)?
I have several messages in queue and their processing is very long / heavy. I thought the solution to this type of bottleneck would be to increase the number of consumers removing a message from the queue.
It is unclear to me how one can reach a massive number of consumers (or one consumer per message) for the queue. What follows is my attempt.
I have created a SimpleRabbitListenerContainerFactory with a large number of consumers, as in the following.
```
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setConcurrentConsumers(50);
    factory.setMaxConcurrentConsumers(200);
    factory.setStartConsumerMinInterval(1L);
    return factory;
}
```
I cannot know how many messages will be in the queue, and I do not want to start too many consumers up front. I want to exploit the consumer scale-up algorithm for very fast scaling. I understand that I can scale up faster if I reduce `startConsumerMinInterval`. Is it possible to go even faster than a `startConsumerMinInterval` of less than 1?
## combinatorics – Expected number of balls to reach capacity C with two bins of unequal probability
The problem is to show that the expected number of balls that must be thrown into two bins of possibly unequal probability, until one of them is filled with C balls, is maximized when the two bins have equal probability.
At present, I have formulated the expectation as a sum over the outcomes in which the first bin reaches C balls while the other contains 0 to C−1 balls, plus the symmetric case with the bins exchanged:
$$\sum_{i = 0}^{C - 1} \binom{C + i}{i} (1-p)^i p^C (C + i) \;+\; \sum_{i = 0}^{C - 1} \binom{C + i}{i} p^i (1-p)^C (C + i)$$
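Before wrestling with the derivative, the claim can be sanity-checked numerically. The sketch below does not use the closed form above; it computes the exact expectation by dynamic programming over the state (a, b) = the current counts in the two bins, and confirms that p = 0.5 gives the largest value for a small capacity:

```python
from functools import lru_cache

def expected_throws(C, p):
    """Exact expected number of throws until either bin holds C balls,
    computed by dynamic programming over the state (a, b)."""
    @lru_cache(maxsize=None)
    def E(a, b):
        if a == C or b == C:
            return 0.0
        # one more throw, landing in bin 1 w.p. p and in bin 2 w.p. 1 - p
        return 1.0 + p * E(a + 1, b) + (1.0 - p) * E(a, b + 1)
    return E(0, 0)

C = 5
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, expected_throws(C, p))

# the symmetric case p = 0.5 gives the largest expectation
assert all(expected_throws(C, 0.5) >= expected_throws(C, p)
           for p in (0.1, 0.3, 0.7, 0.9))
```

This is only evidence at sample points, not a proof, but it agrees with the symmetry E(p) = E(1 − p), which already forces a stationary point at p = 0.5.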
When you take the derivative with respect to p, it is clear that p = 0.5 leads to a derivative of 0, but I am not sure how to show that it is the maximum. I have trouble showing that for 0 <= p <0.5 the gradient is positive and for 0.5 | 2019-03-25 06:52:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33040106296539307, "perplexity": 1546.6544622765994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203755.18/warc/CC-MAIN-20190325051359-20190325073359-00166.warc.gz"} |
https://www.khanacademy.org/test-prep/mcat/social-sciences-practice/social-science-practice-tut/e/behavior-and-genetics---passage-1 | # How similar are the personalities of twins?
### Problem
Over the course of one decade, a continued observation of monozygotic and dizygotic twins, reared apart after being separated during infancy, sought to clarify the sources of human psychological differences. The study subjected 100 sets of raised-apart twins to 3 days of intensive physical and psychological assessment. In support of previous studies, the 3-day observation concluded an approximately 70 percent association of IQ variance with genetic variance in the separated monozygotic twin population. Other areas of significant similarity between reared-apart monozygotic twins and those reared together include: temperament, personality, occupational interest, and social attitudes.
A consolidated report related to personality is presented in the table below. These findings support the hypothesis that genetic variance affects psychological variance through the indirect influence of the environment surrounding development. Strong genetic influence of psychological and behavioral traits does not diminish the value of propaedeutic intervention including parenting and education.
As environmental differences are more tightly controlled in a given population, the heritability of studied traits in that population is:
Please choose from one of the following options. | 2016-10-01 07:16:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.572070837020874, "perplexity": 6184.473442714796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662541.24/warc/CC-MAIN-20160924173742-00152-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://mathmatique.com/naive-set-theory/functions/images-and-preimages | # Naive Set Theory: Functions
## Images and Preimages
For a function $f : A \rightarrow B$ and a subset $A' \subseteq A$, the image of $A'$ under $f$ is the set of all values $b \in B$ such that $b = f(a)$ for some $a \in A'$ and is denoted as $f(A')$. In set builder notation, $f(A') = \{ f(a) : a \in A' \}$. Note that $f(A')$ has a subset of the domain between the parentheses and denotes a set of points called the image of $A'$ under $f$, while $f(a)$ has an element of the domain between the parentheses and denotes a value in the image set of $f$. Also note that the image of $A$ itself under $f$ is in fact the image set of $f$, so the reuse of the word image isn't entirely malicious.
Similarly, the preimage of a subset $B' \subseteq B$ under $f$ is the set of all $a \in A$ such that $f(a) = b$ for some $b \in B'$. The preimage is denoted as $f^{-1}(B')$. In set builder notation, $f^{-1}(B') = \{ a \in A : f(a) \in B' \}$. Note that the notation $f^{-1}(B')$ has so far only been defined when a subset of the codomain is between the parentheses. Placing an element of the codomain between the parentheses instead denotes use of the inverse of $f$, which will be defined in a later section.
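On finite sets these definitions are directly computable. The sketch below (with a made-up squaring function as $f$) also illustrates that the containments in Problems 2 and 3 can be strict:

```python
A = {-2, -1, 0, 1, 2}
f = lambda a: a * a            # a function A -> B with B = {0, 1, 4}

def image(f, subset):
    """f(A') = { f(a) : a in A' }"""
    return {f(a) for a in subset}

def preimage(f, domain, subset):
    """f^{-1}(B') = { a in domain : f(a) in B' }"""
    return {a for a in domain if f(a) in subset}

A_prime = {0, 1, 2}
print(sorted(image(f, A_prime)))                  # [0, 1, 4]
print(sorted(preimage(f, A, image(f, A_prime))))  # [-2, -1, 0, 1, 2]

# A' is contained in f^{-1}(f(A')), possibly strictly (Problem 2)
assert A_prime <= preimage(f, A, image(f, A_prime))

# f(f^{-1}(B')) is contained in B', possibly strictly (Problem 3)
B_prime = {1, 2}
assert image(f, preimage(f, A, B_prime)) <= B_prime
```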
## Problems
1. For any function $f : A \rightarrow B$, show that if $A' \subseteq A$, then $f(A') \subseteq f(A)$.
Let $f(a) \in f(A')$. Then $a \in A'$, and since $A' \subseteq A$, also $a \in A$, so $f(a) \in f(A)$. Therefore, $f(A') \subseteq f(A)$.
2. Let $f : A \rightarrow B$ be a function, and let $A' \subseteq A$. Show that $A' \subseteq f^{-1}(f(A'))$, then give an example of where $A' \neq f^{-1}(f(A'))$.
Proof: Let $a \in A'$. Then by the definition of image, $f(a) \in f(A')$. By the definition of preimage, $a \in f^{-1}(f(A'))$. Therefore $A' \subseteq f^{-1}(f(A'))$.
Example: Let $f : A \rightarrow B$ be a function where $A = \{-2,-1,0,1,2\}$ and $B = \{0,1,4\}$ whose rule is $f(x) = x^2$, and let $A' = \{0,1,2\}$. Then $f(A') = \{0,1,4\}$. However, we can see that $f^{-1}(f(A')) = \{-2,-1,0,1,2\}$.
3. Let $f : A \rightarrow B$ be a function, and let $B' \subseteq B$. Show that $f(f^{-1}(B')) \subseteq B'$, then give an example of where $f(f^{-1}(B')) \neq B'$.
Proof: Let $b \in f(f^{-1}(B'))$. Then by the definition of image, $b = f(a)$ for some $a \in f^{-1}(B')$. Since $a \in f^{-1}(B')$, then by the definition of preimage, $f(a) \in B'$. Therefore $b \in B'$, and therefore $f(f^{-1}(B')) \subseteq B'$.
Example: Let $f : A \rightarrow B$ be a function where $A = \{1,2,3\}$, $B=\{1,2,3,4\}$, and $f(x) = x + 1$, and let $B' = \{1,2\}$. Then $f(f^{-1}(B')) = f(\{1\}) = \{2\}$.
4. Let $f : A \rightarrow B$ be a function, and let $A_0, A_1 \subseteq A$. Prove the following properties of images
1. Inclusion is preserved: $A_0 \subseteq A_1 \implies f(A_0) \subseteq f(A_1)$.
2. Unions are preserved: $f(A_0 \cup A_1) = f(A_0) \cup f(A_1)$
3. The image of an intersection is a subset of the intersection of the images: $f(A_0 \cap A_1) \subseteq f(A_0) \cap f(A_1)$
4. The difference of two images is a subset of the image of the difference: $f(A_0) - f(A_1) \subseteq f(A_0 - A_1)$.
1. If $f(a) \in f(A_0)$, then $a \in A_0$. By definition of subset, $a \in A_1$. Therefore $f(a) \in f(A_1)$. Thus $f(A_0) \subseteq f(A_1)$.
2. First we show that $f(A_0) \cup f(A_1) \subseteq f(A_0 \cup A_1)$. Let $a \in A_0$ such that $f(a) \in f(A_0)$. Then $f(a) \in f(A_0 \cup A_1)$. Likewise, let $a \in A_1$ such that $f(a) \in f(A_1)$. Then $f(a) \in f(A_0 \cup A_1)$. Therefore $f(A_0) \cup f(A_1) \subseteq f(A_0 \cup A_1)$.
Next we show that $f(A_0 \cup A_1) \subseteq f(A_0) \cup f(A_1)$. Let $a \in (A_0 \cup A_1)$ such that $f(a) \in f(A_0 \cup A_1)$. Then $a$ is either in $A_0$ or $A_1$. If $a \in A_0$, then $f(a) \in f(A_0)$. Alternatively, if $a \in A_1$, then $f(a) \in f(A_1)$. Then $f(a) \in f(A_0) \cup f(A_1)$. Therefore $f(A_0 \cup A_1) \subseteq f(A_0) \cup f(A_1)$.
3. Let $a \in A_0 \cap A_1$ such that $f(a) \in f(A_0 \cap A_1)$. Then $a \in A_0$ and $a \in A_1$. Therefore $f(a) \in f(A_0)$ and $f(a) \in f(A_1)$. Therefore $f(A_0 \cap A_1) \subseteq f(A_0) \cap f(A_1)$.
4. Let $f(a) \in f(A_0) - f(A_1)$. Then $f(a) \in f(A_0)$ but $f(a) \notin f(A_1)$. Therefore $a \in A_0$ but $a \notin A_1$, so $a \in A_0 - A_1$. Accordingly, $f(a) \in f(A_0 - A_1)$. Therefore $f(A_0) - f(A_1) \subseteq f(A_0 - A_1)$.
5. Let $f : A \rightarrow B$ be a function, and let $B_0, B_1 \subseteq B$. Prove the following properties of preimages
1. Inclusions are preserved: $B_0 \subseteq B_1 \implies f^{-1}(B_0) \subseteq f^{-1}(B_1)$.
2. Unions are preserved: $f^{-1}(B_0 \cup B_1) = f^{-1}(B_0) \cup f^{-1}(B_1)$.
3. Intersections are preserved: $f^{-1}(B_0 \cap B_1) = f^{-1}(B_0) \cap f^{-1}(B_1)$.
4. Set differences are preserved: $f^{-1}(B_0 - B_1) = f^{-1}(B_0) - f^{-1}(B_1)$.
1. Let $a \in f^{-1}(B_0)$. Then $f(a) \in B_0$. By definition of subset, $f(a) \in B_1$. Therefore $a \in f^{-1}(B_1)$. As a result, $f^{-1}(B_0) \subseteq f^{-1}(B_1)$.
2. First we show that $f^{-1}(B_0 \cup B_1) \subseteq f^{-1}(B_0) \cup f^{-1}(B_1)$. Let $a \in f^{-1}(B_0 \cup B_1)$. Then $f(a) \in B_0 \cup B_1$. If $f(a) \in B_0$, then $a \in f^{-1}(B_0)$. Alternatively, if $f(a) \in B_1$, then $a \in f^{-1}(B_1)$. Therefore $f^{-1}(B_0 \cup B_1) \subseteq f^{-1}(B_0) \cup f^{-1}(B_1)$.
Next we show that $f^{-1}(B_0) \cup f^{-1}(B_1) \subseteq f^{-1}(B_0 \cup B_1)$. Let $a \in f^{-1}(B_0) \cup f^{-1}(B_1)$. If $a \in f^{-1}(B_0)$, then $a \in f^{-1}(B_0 \cup B_1)$. Likewise, if $a \in f^{-1}(B_1)$, then $a \in f^{-1}(B_0 \cup B_1)$. Therefore $f^{-1}(B_0) \cup f^{-1}(B_1) \subseteq f^{-1}(B_0 \cup B_1)$.
3. First we show that $f^{-1}(B_0 \cap B_1) \subseteq f^{-1}(B_0) \cap f^{-1}(B_1)$. Let $a \in f^{-1}(B_0 \cap B_1)$. Then $f(a) \in B_0 \cap B_1$. By definition of intersection, $f(a) \in B_0$ and $f(a) \in B_1$. Then $a \in f^{-1}(B_0)$ and $a \in f^{-1}(B_1)$. Therefore $a \in f^{-1}(B_0) \cap f^{-1}(B_1)$, so $f^{-1}(B_0 \cap B_1) \subseteq f^{-1}(B_0) \cap f^{-1}(B_1)$.
Next we show that $f^{-1}(B_0) \cap f^{-1}(B_1) \subseteq f^{-1}(B_0 \cap B_1)$. Let $a \in f^{-1}(B_0) \cap f^{-1}(B_1)$. Then $a \in f^{-1}(B_0)$ and $a \in f^{-1}(B_1)$. Therefore $f(a) \in B_0$ and $f(a) \in B_1$, so $f(a) \in B_0 \cap B_1$. As a result, $a \in f^{-1}(B_0 \cap B_1)$. Therefore $f^{-1}(B_0) \cap f^{-1}(B_1) \subseteq f^{-1}(B_0 \cap B_1)$.
4. Set differences are preserved: $f^{-1}(B_0 - B_1) = f^{-1}(B_0) - f^{-1}(B_1)$.
First we show that $f^{-1}(B_0 - B_1) \subseteq f^{-1}(B_0) - f^{-1}(B_1)$. Let $a \in f^{-1}(B_0 - B_1)$. Then $f(a) \in B_0 - B_1$. By definition of set difference, $f(a) \in B_0$ but $f(a) \notin B_1$. As a result, $a \in f^{-1}(B_0)$ but $a \notin f^{-1}(B_1)$, so $a \in f^{-1}(B_0) - f^{-1}(B_1)$. Therefore $f^{-1}(B_0 - B_1) \subseteq f^{-1}(B_0) - f^{-1}(B_1)$.
Next we show that $f^{-1}(B_0) - f^{-1}(B_1) \subseteq f^{-1}(B_0 - B_1)$. Let $a \in f^{-1}(B_0) - f^{-1}(B_1)$. Then $a \in f^{-1}(B_0)$ but $a \notin f^{-1}(B_1)$. Therefore $f(a) \in B_0$ but $f(a) \notin B_1$. As a result, $f(a) \in B_0 - B_1$, so $a \in f^{-1}(B_0 - B_1)$. Therefore $f^{-1}(B_0) - f^{-1}(B_1) \subseteq f^{-1}(B_0 - B_1)$. | 2023-04-02 04:56:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9982906579971313, "perplexity": 76.44075602118549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00606.warc.gz"} |
https://answers.gazebosim.org/answers/15116/revisions/ | # Revision history
I was not setting the size of the points, which is what caused the problem. Allocating memory for the points first fixed it:
trajectory_msgs::JointTrajectory joint_state;
std::vector<trajectory_msgs::JointTrajectoryPoint> points_n(3);  // allocate the points first
points_n[0].positions.resize(1);   // size each field before writing to it
points_n[0].velocities.resize(1);
...
https://crossminds.ai/video/is-long-horizon-reinforcement-learning-more-difficult-than-short-horizon-reinforcement-learning-606fe060f43a7f2f827bfd3d/ | Is Long Horizon Reinforcement Learning More Difficult Than Short Horizon Reinforcement Learning?
Dec 06, 2020
Learning to plan for long horizons is a central challenge in episodic reinforcement learning problems. A fundamental question is to understand how the difficulty of the problem scales as the horizon increases. Here the natural measure of sample complexity is a normalized one: we are interested in the number of episodes it takes to provably discover a policy whose value is $\varepsilon$-near to the optimal value, where the value is measured by the normalized cumulative reward in each episode. In a COLT 2018 open problem, Jiang and Agarwal conjectured that, for tabular, episodic reinforcement learning problems, there exists a sample complexity lower bound which exhibits a polynomial dependence on the horizon -- a conjecture which is consistent with all known sample complexity upper bounds. This work refutes this conjecture, proving that tabular, episodic reinforcement learning is possible with a sample complexity that scales only logarithmically with the planning horizon. In other words, when the values are appropriately normalized (to lie in the unit interval), this result shows that long horizon RL is no more difficult than short horizon RL, at least in a minimax sense. Our analysis introduces two ideas: (i) the construction of an $\varepsilon$-net for optimal policies whose log-covering number scales only logarithmically with the planning horizon, and (ii) the Online Trajectory Synthesis algorithm, which adaptively evaluates all policies in a given policy class using sample complexity that scales with the log-covering number of the given policy class. Both may be of independent interest.
Speakers: Ruosong Wang, Simon Du, Lin Yang, Sham Kakade