https://www.physicsforums.com/threads/atwoods-machine-with-a-cylinder.388800/ | # Homework Help: Atwood's machine with a cylinder
1. Mar 22, 2010
### dnp33
1. The problem statement, all variables and given/known data
A massless string of negligible thickness is wrapped around a uniform cylinder of mass m and radius r. The string passes up over a massless pulley and is tied to a block of mass m at its other end.
The system is released from rest. What are the accelerations of the block and the cylinder? Assume that the string does not slip with respect to the cylinder.
Use conservation of energy (after applying a quick F=ma argument to show that the two objects move downward with the same acceleration)
2. Relevant equations
F=ma
$K = \frac{1}{2}mv^2 + \frac{1}{2}I\omega^2$
$P = mgd$
where $\omega$ is the angular velocity
and $I = \frac{1}{2}mr^2$ is the moment of inertia.
3. The attempt at a solution
I wrote an F=ma equation for each mass; because they have the same mass and feel the same string tension, they should undergo the same acceleration.
I wasn't sure if that was as in depth as the question required.
Then I wrote a conservation of energy equation for the system
$mgd + mgd = \frac{1}{2}mv^2 + \frac{1}{2}mv^2 + \frac{1}{2}I\omega^2$
and solved for velocity, where I used the kinematics equation $v^2 = 2ad$, obtaining
$a = \frac{4}{5}g$
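The attempt's final step can be spelled out (a sketch, taking $\omega = v/r$ as the attempt implicitly does, with $I = \frac{1}{2}mr^2$):

```latex
2mgd = \frac{1}{2}mv^2 + \frac{1}{2}mv^2
     + \frac{1}{2}\left(\frac{1}{2}mr^2\right)\frac{v^2}{r^2}
     = \frac{5}{4}mv^2
\quad\Rightarrow\quad v^2 = \frac{8}{5}gd,
\qquad v^2 = 2ad \;\Rightarrow\; a = \frac{4}{5}g.
```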
https://mlopezm.wordpress.com/2014/02/11/retrospective-sampling-or-case-control-sampling/ | # Retrospective sampling or case-control sampling
When the prior probabilities of the classes we want to classify are very imbalanced, it is good to use retrospective or case-control sampling.
For example, you can do a logistic regression with case-control sampling. You have to use around 4-6 times more controls than cases, and then adjust the intercept $\beta_0$ of your model: https://class.stanford.edu/c4x/HumanitiesScience/StatLearning/asset/classification.pdf (page 16); see also http://support.sas.com/kb/22/601.html for an explanation.
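For instance, the intercept correction described in those slides can be sketched as follows; the prevalences and the fitted intercept below are hypothetical placeholders, not values from any real fit:

```python
import math

pi_pop = 0.01    # assumed case prevalence in the population
pi_samp = 1 / 6  # case prevalence in the case-control sample (~5 controls per case)

# Intercept fitted by logistic regression on the retrospective sample (placeholder)
beta0_samp = -1.2

# Replace the sample log-odds offset with the population log-odds offset
beta0_adj = (beta0_samp
             - math.log(pi_samp / (1 - pi_samp))
             + math.log(pi_pop / (1 - pi_pop)))
```

With scikit-learn you would apply the same shift to model.intercept_ after fitting; the slope coefficients are left untouched.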
https://socratic.org/questions/what-is-the-period-and-amplitude-for-y-2tan-3x-pi2 | # What is the period and amplitude for y=2tan(3x-pi2)?
Jun 19, 2015
Amplitude: undefined (effectively $\infty$)
Period: $\frac{\pi}{3}$
#### Explanation:
The $\tan$ function has no finite amplitude, because it is unbounded on each branch of its domain.
(graph of $y = \tan x$)
The period of $\tan$ is the change in $x$ needed for the "inside" of the $\tan$ function to increase by $\pi$.
I'll assume that
$y = 2 \tan \left(3 x - {\pi}^{2}\right)$
For one period, $3 \Delta x = \pi$
$\implies \Delta x = \frac{\pi}{3}$
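A quick numerical sanity check of the period (a sketch; it simply tests that $\tan$ repeats when its argument grows by $\pi$):

```python
import math

b, c = 3.0, -math.pi ** 2   # y = 2*tan(b*x + c)
period = math.pi / abs(b)   # tan(b*x + c) repeats when b*x increases by pi

# tan at x and at x + period should agree (for x away from the asymptotes)
for x in (0.1, 0.5, 1.2):
    assert abs(math.tan(b * (x + period) + c) - math.tan(b * x + c)) < 1e-8
```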
https://www.askiitians.com/forums/9-grade-science/what-is-molality-and-morality-and-what-is-the-form_264419.htm | # What is molality and molarity, and what is the formula for finding it?
| | Molality | Molarity |
|---|---|---|
| Based on | Mass of the solvent | Volume of the whole solution |
| Unit sign | expressed as $m$ | expressed as $M$ |
| Units | moles/kg | moles/liter |
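As a small sketch of the two formulas (the numbers below are made up for illustration):

```python
def molality(moles_solute, kg_solvent):
    # moles of solute per kilogram of solvent -> mol/kg
    return moles_solute / kg_solvent

def molarity(moles_solute, liters_solution):
    # moles of solute per liter of the whole solution -> mol/L
    return moles_solute / liters_solution

# e.g. 0.5 mol of solute dissolved in 0.2 kg of solvent, giving 0.25 L of solution
print(molality(0.5, 0.2))   # 2.5 mol/kg
print(molarity(0.5, 0.25))  # 2.0 mol/L
```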
https://tex.stackexchange.com/questions/425073/vertical-spacing-for-table-with-equations | # Vertical Spacing for Table with Equations [duplicate]
So I have a table with equations, below. It's a bit crunched up and I want to add vertical space in the rows. I'm using the booktabs package for the heavy lifting.
\begin{table}
\begin{tabular}{lcc}
\toprule
Name & Function & Derivative \\
\midrule
Sigmoid & $\phi(x) = \ddfrac{1}{1+e^{-x}}$ & $\phi'(x) = \phi(x)(1-\phi(x))$\\
TanH & $\phi(x) = \ddfrac{2}{1+e^{-2x}} - 1$ & $\phi'(x) = 1-\phi(x)^2$ \\
ReLU & $\phi(x) = \begin{cases} 0 & x \leq 0 \\ x & x > 0 \end{cases}$
& $\phi'(x) = \begin{cases} 0 & x \leq 0 \\ 1 & x > 0 \end{cases}$ \\
Leaky ReLU & $\phi(x) = \begin{cases} \alpha x & x \leq 0 \\ x & x > 0 \end{cases}$
& $\phi'(x) = \begin{cases} \alpha & x \leq 0 \\ 1 & x > 0 \end{cases}$ \\
\bottomrule
\end{tabular}
\caption{Activation functions and their derivatives.}
\label{tab:activation-functions}
\end{table}
Here's the produced output.
I tried adding: \renewcommand{\arraystretch}{2} before I write the table but it messes up more. Here's the result.
So any idea how to make it look pretty with the equations? Looking for a relatively quick solution because this is the only table I'll have with equations. But open to anything that works.
## marked as duplicate by leandriis, Community♦ Apr 5 '18 at 20:25
• The answers to this question contain a broad variety of possibilities on how to increase the height of table rows. – leandriis Apr 5 '18 at 19:25
• IMHO, the table will look much better without the vertical lines and left-aligning the formula columns. Additional space between rows can be specified with booktabs' \addlinespace. – Heiko Oberdiek Apr 7 '18 at 15:01
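Heiko Oberdiek's \addlinespace suggestion would look like this between rows (a sketch, not code from the original post):

```latex
% booktabs: \addlinespace inserts a small vertical gap between rows
Sigmoid & $\phi(x) = \dfrac{1}{1+e^{-x}}$      & $\phi'(x) = \phi(x)(1-\phi(x))$ \\
\addlinespace
TanH    & $\phi(x) = \dfrac{2}{1+e^{-2x}} - 1$ & $\phi'(x) = 1-\phi(x)^2$ \\
```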
You can add extra vertical space after \\ by writing \\[3ex], and adjust the amount of spacing by changing the numeral.
\documentclass[a4paper]{article}
\usepackage{booktabs}
\usepackage{amsmath}
\begin{document}
\begin{table}
\begin{tabular}{lcc}
\toprule
Name & Function & Derivative \\
\midrule
Sigmoid & $\phi(x) = \dfrac{1}{1+e^{-x}}$ & $\phi'(x) = \phi(x)(1-\phi(x))$\\[3ex]
TanH & $\phi(x) = \dfrac{2}{1+e^{-2x}} - 1$ & $\phi'(x) = 1-\phi(x)^2$ \\[3ex]
ReLU & $\phi(x) = \begin{cases} 0 & x \leq 0 \\ x & x > 0 \end{cases}$
& $\phi'(x) = \begin{cases} 0 & x \leq 0 \\ 1 & x > 0 \end{cases}$ \\[4ex]
Leaky ReLU & $\phi(x) = \begin{cases} \alpha x & x \leq 0 \\ x & x > 0 \end{cases}$
& $\phi'(x) = \begin{cases} \alpha & x \leq 0 \\ 1 & x > 0 \end{cases}$ \\
\bottomrule
\end{tabular}
\caption{Activation functions and their derivatives.}
\label{tab:activation-functions}
\end{table}
\end{document}
https://iidb.org/threads/another-fucking-mass-shooting-at-us-school.26119/page-18 | # Another Fucking Mass Shooting At US School
#### SLD
##### Veteran Member
You tell yourself it doesn’t happen around here. It’s other parts of the country that this happens. I drive home this evening and there’s a shit load of cops sheriffs and ambulances outside a neighborhood church. Two dead and one injured. About three blocks from my house. Fuck. In a church. Dude just walks in to a “boomer potluck” dinner and starts shooting. I know people who go there and am friends with many of the victims friends. Surprised I don’t know the victims. Pathetic. So many Christians worship guns more than they worship Jesus.
It’s a really sad day here. I’m just stunned. Utterly stunned.
And today in "Only in America", 2 dead, 1 wounded, this doesn't count as a mass shooting.
Since it happened in a church, Foxnews will play it up for sure. But then they’ll find out it was one of them liberal churches and it will be never mind.
It's been reported that the shooter is a 71 year old man who occasionally attended services at the church.
Boomers go wild!
70, and he was also a federal gun dealer with a history of illegal conduct: https://www.al.com/news/2022/06/chu...ned-by-federal-agents-about-missing-guns.html
Yeah. Talked to someone who knew him. Always been an odd person. And it’s 3 dead now.
#### Rhea
##### Cyborg with a Tiara
Staff member
article said:
The ATF report said agents found 86 firearms in Smith’s possession compared to 97 on his official dealer’s record. Smith failed to record the disposition of some firearms, the report said. He also failed to record the address of gun buyers, the report said.
So he’s selling guns to people that he’s not recording…
and so they sent him a letter.
My personal focus on “gun control” is coming down hard and NOW on sellers and compliance to the law. I don’t think this requires legislation, but rather the executive will to enforce.
So many deaths, including during crimes of robbery or gangs that use guns, would be eliminated by this ALREADY THE LAW enforcement.
#### Loren Pechtel
##### Super Moderator
Staff member
article said:
The ATF report said agents found 86 firearms in Smith’s possession compared to 97 on his official dealer’s record. Smith failed to record the disposition of some firearms, the report said. He also failed to record the address of gun buyers, the report said.
So he’s selling guns to people that he’s not recording…
and so they sent him a letter.
My personal focus on “gun control” is coming down hard and NOW on sellers and compliance to the law. I don’t think this requires legislation, but rather the executive will to enforce.
So many deaths, including during crimes of robbery or gangs that use guns, would be eliminated by this ALREADY THE LAW enforcement.
You send a letter first because it's probably just a simple mistake.
#### lpetrich
##### Contributor
Gun Sellers Stoke Fears to Boost Weapon Sales - The New York Times - "The number of firearms in the U.S. is outpacing the country’s population, as an emboldened gun industry and its allies target buyers with rhetoric of fear, machismo and defiance."
Even though helmets and body armor would make a heck of a lot more sense for self-defense.
Rep. Gerry Connolly on Twitter: "Let’s say you’re assaulted by your new boyfriend. You get a restraining order and he’s convicted of domestic violence.
Because you two weren’t married or living together, federal law says he can go buy a gun the next day.
That’s the boyfriend loophole, and we need to close it." / Twitter
#### lpetrich
##### Contributor
.@AOC tells me she is worried about the criminalization in the gun framework: “particularly, the juvenile criminalization, the expansion of background checks into juvenile records, I want to explore the implications of that and how specifically it’s designed and tailored.” 1/
“After columbine, we hired thousands of police officers into schools and while it didn’t prevent many of the mass shootings that we’ve seen now, it has increased the criminalization of teens in communities like mine.” 2/2
When I asked if she was worried if the mental health aspects would increase stigmatization, she said “absolutely. Because what people are blaming on mental health are really deeper issues of violent misogyny and white supremacy. And while there are mental health issues..”3/
“Attenuated like the deep isolation that we see with a lot of these folks, at the end of the day, we’re not addressing—there are some issues like the boyfriend loophole being closed,” she says. “The connection between domestic violence, and mass shootings, et cetera.”
STBull on Twitter: "@EricMGarcia @AOC Have you ever considered talking to someone who’s actually getting things done rather than just talking about the problems on social media and undermining Democrats’ progress? Might be a good use of your time." / Twitter
noting
Rep. Lauren Underwood on Twitter: "It’s more important than ever that students are able to access mental health care and support in school.
##### Loony Running The Asylum
Staff member
Uvalde police officers waiting outside a pair of Robb Elementary school classrooms where kids and teachers were trapped with a gunman didn’t try to open the door to save them, according to a new report.
Citing an unnamed law enforcement source close to the case, San Antonio Express-News said surveillance footage shows that officers did not try to open the door that led to the classrooms a single time in 77 minutes. The 18-year-old shooter ultimately killed 21 people, including 19 kids on May 24 and was shot dead by border patrol agents who stormed the classroom.
The report is the latest in a series of damning revelations about the police response to the mass shooting, which survivors and politicians have described as cowardly and negligent.
Uvalde has hired a private law firm in an effort to suppress body camera footage and other records surrounding the mass shooting, Motherboard reported last week. In a letter, the city’s private lawyer argued it should be exempted from releasing records in part because they include “highly embarrassing information” and may cause “emotional/mental distress.”
As many as 19 cops stood in the hallway outside the connecting classrooms while the rampage took place. Police initially said the gunman had locked the door and that they were waiting for keys. However, the source speaking to San Antonio Express-News said that while authorities may have assumed the door was locked—the doors are designed to lock automatically once closed—a malfunction means it may have been open the entire time, but officers didn’t try it.
The source also said the cops had access to a tool called a “halligan” that could have crowbarred a locked door open.
#### Tigers!
##### Veteran Member
Uvalde police officers waiting outside a pair of Robb Elementary school classrooms where kids and teachers were trapped with a gunman didn’t try to open the door to save them, according to a new report.
Citing an unnamed law enforcement source close to the case, San Antonio Express-News said surveillance footage shows that officers did not try to open the door that led to the classrooms a single time in 77 minutes. The 18-year-old shooter ultimately killed 21 people, including 19 kids on May 24 and was shot dead by border patrol agents who stormed the classroom.
The report is the latest in a series of damning revelations about the police response to the mass shooting, which survivors and politicians have described as cowardly and negligent.
Uvalde has hired a private law firm in an effort to suppress body camera footage and other records surrounding the mass shooting, Motherboard reported last week. In a letter, the city’s private lawyer argued it should be exempted from releasing records in part because they include “highly embarrassing information” and may cause “emotional/mental distress.”
If the footage is embarrassing to the police, tough.
The release may cause further distress to families but they are already distressed.
As many as 19 cops stood in the hallway outside the connecting classrooms while the rampage took place. Police initially said the gunman had locked the door and that they were waiting for keys. However, the source speaking to San Antonio Express-News said that while authorities may have assumed the door was locked—the doors are designed to lock automatically once closed—a malfunction means it may have been open the entire time, but officers didn’t try it.
The source also said the cops had access to a tool called a “halligan” that could have crowbarred a locked door open.
Cowards.
#### Jimmy Higgins
##### Contributor
The majority of shots were fired at 11:37 AM. The timestamp of officers in the hallway is 11:52 or so.
The new video footage isn't helping the Uvalde PD look remotely competent or remotely honest. Dealing with a mass shooting incident can't be easy. However, they did take some pretty pics indicating they had SWAT training. Granted, who knows what that actually consisted of, i.e., receiving the weapons from Government funding and no actual training or competent (refresher?) training? After all, Uvalde, Texas isn't ever going to have a guy with a semi-automatic weapon killing people. What are the odds? Almost no small town in America ever has that happen (except the few that do).
Hard to tell whether, if they had killed the shooter sooner, anyone's life could have been saved. Certainly the nightmare for the children who weren't dead could have ended quicker. The trouble seems to expand for the Uvalde PD when Border Patrol gets there, ready to go, and they are held back.
This is going to be a tragic what did they know, when did they know it, and when did they purposefully conceal or lie about it to the public thing. The tragedy should be the deaths of the children, not the apparent incompetent reaction by the police force.
#### Politesse
##### Lux Aeterna
The tragedy "should be" whatever people find tragic. It doesn't make people insensitive to the deaths of children to ask why the adults in their community failed to protect them. These kids didn't die because of some random natural event, they died because a lot of the people responsible for their care made decisions that got them killed. Some of those people are now doubling down and saying "we will make all those same decisions again. No number of dead children is too many as the price for our 'rights'." And you think we shouldn't find that tragic? It's tragic.
#### lpetrich
##### Contributor
How the NRA Rewrote the Second Amendment - POLITICO Magazine - "The Founders never intended to create an unregulated individual right to a gun. Today, millions believe they did. Here’s how it happened." - May 19, 2014
“A fraud on the American public.” That’s how former Chief Justice Warren Burger described the idea that the Second Amendment gives an unfettered individual right to a gun. When he spoke these words to PBS in 1990, the rock-ribbed conservative appointed by Richard Nixon was expressing the longtime consensus of historians and judges across the political spectrum.
Twenty-five years later, Burger’s view seems as quaint as a powdered wig.
Many are startled to learn that the U.S. Supreme Court didn’t rule that the Second Amendment guarantees an individual’s right to own a gun until 2008, when District of Columbia v. Heller struck down the capital’s law effectively banning handguns in the home. In fact, every other time the court had ruled previously, it had ruled otherwise. Why such a head-snapping turnaround? Don’t look for answers in dusty law books or the arcane reaches of theory.
The National Rifle Association’s long crusade to bring its interpretation of the Constitution into the mainstream teaches a different lesson: Constitutional change is the product of public argument and political maneuvering. The pro-gun movement may have started with scholarship, but then it targeted public opinion and shifted the organs of government. By the time the issue reached the Supreme Court, the desired new doctrine fell like a ripe apple from a tree.
Then going into the history around the 2nd Amendment.
The Federalists wanted a strong central government, but the Anti-Federalists didn't.
The foes worried, among other things, that the new government would establish a “standing army” of professional soldiers and would disarm the 13 state militias, made up of part-time citizen-soldiers and revered as bulwarks against tyranny. These militias were the product of a world of civic duty and governmental compulsion utterly alien to us today. Every white man age 16 to 60 was enrolled. He was actually required to own—and bring—a musket or other military weapon.
On June 8, 1789, James Madison—an ardent Federalist who had won election to Congress only after agreeing to push for changes to the newly ratified Constitution—proposed 17 amendments on topics ranging from the size of congressional districts to legislative pay to the right to religious freedom. One addressed the “well regulated militia” and the right “to keep and bear arms.” We don’t really know what he meant by it. At the time, Americans expected to be able to own guns, a legacy of English common law and rights. But the overwhelming use of the phrase “bear arms” in those days referred to military activities.
There is not a single word about an individual’s right to a gun for self-defense or recreation in Madison’s notes from the Constitutional Convention. Nor was it mentioned, with a few scattered exceptions, in the records of the ratification debates in the states. Nor did the U.S. House of Representatives discuss the topic as it marked up the Bill of Rights. In fact, the original version passed by the House included a conscientious objector provision. “A well regulated militia,” it explained, “composed of the body of the people, being the best security of a free state, the right of the people to keep and bear arms shall not be infringed, but no one religiously scrupulous of bearing arms, shall be compelled to render military service in person.”
So it was over most of the US's history, with the courts upholding laws on everything from where gunpowder could be stored to who could carry a gun.
Four times between 1876 and 1939, the U.S. Supreme Court declined to rule that the Second Amendment protected individual gun ownership outside the context of a militia. As the Tennessee Supreme Court put it in 1840, “A man in the pursuit of deer, elk, and buffaloes might carry his rifle every day for forty years, and yet it would never be said of him that he had borne arms; much less could it be said that a private citizen bears arms because he has a dirk or pistol concealed under his clothes, or a spear in a cane.”
Then the National Rifle Association.
The NRA was founded by a group of Union officers after the Civil War who, perturbed by their troops’ poor marksmanship, wanted a way to sponsor shooting training and competitions. The group testified in support of the first federal gun law in 1934, which cracked down on the machine guns beloved by Bonnie and Clyde and other bank robbers. When a lawmaker asked whether the proposal violated the Constitution, the NRA witness responded, “I have not given it any study from that point of view.” The group lobbied quietly against the most stringent regulations, but its principal focus was hunting and sportsmanship: bagging deer, not blocking laws. In the late 1950s, it opened a new headquarters to house its hundreds of employees. Metal letters on the facade spelled out its purpose: firearms safety education, marksmanship training, shooting for recreation.
But in a 1977 meeting, some activists did the "Revolt at Cincinnati". "Activists from the Second Amendment Foundation and the Citizens Committee for the Right to Keep and Bear Arms pushed their way into power."
This activist revolt was followed by tax revolts and the Sagebrush Rebellion against Interior Department land policies.
Politicians adjusted in turn. The 1972 Republican platform had supported gun control, with a focus on restricting the sale of “cheap handguns.” Just three years later in 1975, preparing to challenge Gerald R. Ford for the Republican nomination, Reagan wrote in Guns & Ammo magazine, “The Second Amendment is clear, or ought to be. It appears to leave little if any leeway for the gun control advocate.” By 1980 the GOP platform proclaimed, “We believe the right of citizens to keep and bear arms must be preserved. Accordingly, we oppose federal registration of firearms.” That year the NRA gave Reagan its first-ever presidential endorsement.
Today at the NRA’s headquarters in Fairfax, Virginia, oversized letters on the facade no longer refer to “marksmanship” and “safety.” Instead, the Second Amendment is emblazoned on a wall of the building’s lobby. Visitors might not notice that the text is incomplete. It reads:
“.. the right of the people to keep and bear arms, shall not be infringed.”
The first half—the part about the well regulated militia—has been edited out.
Then about the individual-rights interpretation, "If one delves into the claims these scholars were making, a startling number of them crumble."
"In the end, it was neither the NRA nor the Bush administration that pressed the Supreme Court to reverse its centuries-old approach, but a small group of libertarian lawyers who believed other gun advocates were too timid."
Then some lessons that left-wing activists can learn from the NRA's triumph. Like patience and there being no substitute for political organizing. "Before social movements can win at the court they must win at the ballot box."
"But even more important is this: Activists turned their fight over gun control into a constitutional crusade." and "Deep notions of freedom and rights have retained totemic power."
So it may be hard to reassure the gun nuts that they can keep their guns if they are well-behaved.
"Liberal lawyers might once have rushed to court at the slightest provocation. Now, they are starting to realize that a long, full jurisprudential campaign is needed to achieve major goals."
That's true of political movement building in general.
#### lpetrich
##### Contributor
How Often Do Police Stop Active Shooters? - The New York Times
Out of 433 US mass-shooting attacks over 2000 - 2021:
- 249 - ended before the police arrive
  - 185 - the attacker...
    - 113 - left the scene
    - 72 - committed suicide
  - 64 - a bystander...
    - 42 - subdued the attacker
    - 22 - shot the attacker
      - 12 - ordinary person
      - 7 - security guard
      - 3 - off-duty cop
- 184 - ended after the police arrive
  - 131 - the police...
    - 98 - shot the attacker
    - 33 - subdued the attacker
  - 53 - the attacker...
    - 38 - committed suicide
    - 15 - surrendered
Attacks per year, 2011 - 2021: 13, 21, 19, 20, 20, 20, 31, 29, 30, 40, 61
“It’s direct, indisputable, empirical evidence that this kind of common claim that ‘the only thing that stops a bad guy with the gun is a good guy with the gun’ is wrong,” said Adam Lankford, a professor at the University of Alabama, who has studied mass shootings for more than a decade. “It’s demonstrably false, because often they are stopping themselves.”
...
“The actual data show that some of these kind of heroic, Hollywood moments of armed citizens taking out active shooters are just extraordinarily rare,” Mr. Lankford said.
In fact, having more than one armed person at the scene who is not a member of law enforcement can create confusion and carry dire risks. An armed bystander who shot and killed an attacker in 2021 in Arvada, Colo., was himself shot and killed by the police, who mistook him for the gunman.
It was twice as common for bystanders to physically subdue the attackers, often by tackling or striking them. At Seattle Pacific University in 2014, a student security guard pepper sprayed and tackled a gunman who was reloading his weapon during an attack that killed one and injured three others. The guard took the attacker’s gun away and held the attacker until law enforcement arrived.
When a gunman entered a classroom at the University of North Carolina at Charlotte in 2019, a student tackled him. The student was shot and killed, but the police chief said the attack would have had a far worse death toll had the student not intervened.
...
Why attackers stop themselves is a hard thing to know, but Mr. Lankford, after studying shooters for years, has some guesses. One is that sometimes, shooters plan for a dramatic confrontation with the police that does not happen. Another possibility, he said, is that the reality of their actions sets in.
I like this comment from "Reader": "The good guy with a gun is a myth only seen in TV westerns." Some people seem to imagine some Hollywood-Western sort of confrontation, where the two sides face each other and get out their guns.
JS:
If you tell me that the only thing that stops a bad guy with a gun is a good guy with a gun (as Wayne LaPierre said after the Sandy Hook shooting), I'll tell you you're watching too much television.
BTW, less than a year after Sandy Hook, LaPierre went on safari in Botswana where he was unable to kill an elephant at point-blank range. He first wounded it and then fired three shots at it as it lay on the ground but still did not kill it. Read about it in the April 27, 2021 edition of the New Yorker - article by Mike Spies. The article includes a video of LaPierre's inept butchery of the elephant; it had to be finished off by the safari guide.
DeepSouthEric:
The Rambo fantasy too many people carry has to be one of history's greatest absurdities. Talk about playing too many video games...
The data shows that even trained officers, when returning fire under duress, miss their target badly 9 of 10 times. So, our trained police launch nine errant bullets for every one that hits.
Now, imagine your random citizen trying to take down a shooter in a crowd. Oy...
#### Gospel
##### Unify Africa
Yeah, so instead of lobbying to amend it to fit changing times they ignore it entirely in order to sell guns for profit to consumers who are only interested in defending themselves rather than the state. If the federal government decided one day that states' rights don't matter (seems like we're already there with the SCOTUS overturning state gun laws) there'd be no militia to keep them in check. Only a bunch of disorganized pussies waving guns at one another. The NRA is our savior indeed.
#### prideandfall
##### Veteran Member
If the federal government decided one day that states' rights don't matter (seems like we're already there with the SCOTUS overturning state gun laws) there'd be no militia to keep them in check. Only a bunch of disorganized pussies waving guns at one another. The NRA is our savior indeed.
well true, but the federal government hasn't given a shit about states' rights since probably... 1784 or so.
the notion that any population can rise up in armed conflict against its own government (in a geographically centralized landmass with an established civil and military infrastructure) is fucking absurd, and it always has been.
the 2nd amendment exists for the sole reason that it was a concession to slave owning states who wanted the ability to have private arsenals of weapons to keep their slave populations in check. that's it. that's why the 2nd amendment exists. any other explanation or excuse is a lie made to try to cover up for the fact that it's just there so slave owners can shoot at their property when it tries to get uppity.
#### Gospel
##### Unify Africa
Wow, never heard that take before. I was always under the impression it was written with the purpose of giving the states the right to defend their territory from any threat to the state (including the Federal Government if that were to happen). You've got my head popped right now.
#### prideandfall
##### Veteran Member
Wow, never heard that take before. I was always under the impression it was written with the purpose of giving the states the right to defend their territory from any threat to the state (including the Federal Government if that were to happen). You've got my head popped right now.
this isn't solely out of my ass btw.
TLDR:
when the US was being founded, they needed each territory that would become a state to agree to unify.
the southern states were adamant that the law allow them to have private gun arsenals for the purposes of keeping down slave rebellion.
since the natural inclination of the original founding would have been to follow the model of the rest of the world at the time which was extremely limited private weapon ownership, and mass weapons only being available to governments, they were worried that a federal armed response to a slave revolt wouldn't be fast enough and the entire system would come down.
the 2nd amendment was created to appease them and is worded the way that it is to basically say that slave patrols are fine to keep their own arsenals.
#### Gospel
##### Unify Africa
Bruh. That's wild. I'm not surprised given America's history but that's some wild shit.
Edit: I'll be looking into this but as of this moment my view of the second amendment has changed.
#### Elixir
##### Loony Running The Asylum
Staff member
Wow, never heard that take before. I was always under the impression it was written with the purpose of giving the states the right to defend their territory from any threat to the state (including the Federal Government if that were to happen). You've got my head popped right now.
It's true. Originally the 2nd was "A well regulated Militia, being necessary to the security of a free Nation, the right of the people to keep and bear Arms, shall not be infringed." Southern slave holding states objected. Their militias were the slave patrols. They didn't want the federal government to be able to call their slave patrols into action in other states leaving them defenseless against slave revolts.
So it was agreed to change the 2nd to "A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed" which was then ratified.
Heh, ninja'd by P&F.
#### Gospel
##### Unify Africa
I can't believe I missed this detail in all my years as an all-things-written-by-white-people-on-history hater. That would be my character on Dave Chappelle's Blackzilla & Playa Haters' Ball.
#### Politesse
##### Lux Aeterna
I think different segments of the colonial public wanted their private arsenals for various reasons. There was also considerable tension in the North, the western Appalachians, and the Ohio River valley, between the invading settler population and the lawful inhabitants of the land. Continued violence in the supposed wake of the "French and Indian War", even as special taxes were being demanded to reimburse the Crown for its supposed defense of those territories, was one of the motivating sticking points against King George for many inhabitants of the northern colonies, hence some of the bigoted language in the Declaration and the attitude of the Constitution regarding the Indian nations. I have a copy of a letter from one of my pioneer ancestors not long after the war, lamenting the need to arm her sons when they went on errands for fear of Indian attacks, a proviso which she clearly saw as necessary but not very pious when the errand was church.
This is not to say that the slave patrols weren't a part of this. The politics of the early Republic were not monolithic by any means.
Bottom line, the new government saw it as in its interest to have a heavily armed population, without whose contributions it might have fallen to a federal military or foreign mercenaries to keep the peace, a task that would have been both impossible to accomplish and extremely unpopular with the citizens. Forced disarmaments of Tory families were also a major source of criticism of the Revolutionary government during the war; this amendment could be seen as kind of a peace token in that regard.
#### Elixir
Company towns were also controlled by armed thugs, north and south. Can’t be making those illegal!
#### Loren Pechtel
##### Super Moderator
Staff member
How Often Do Police Stop Active Shooters? - The New York Times
Out of 433 US mass-shooting attacks from 2000 to 2021:
• 249 ended before the police arrived:
  • 185 ended by the attacker, who...
    • 113 left the scene
    • 72 committed suicide
  • 64 ended by a bystander, who...
    • 42 subdued the attacker
    • 22 shot the attacker (12 ordinary people, 7 security guards, 3 off-duty cops)
• 184 ended after the police arrived:
  • 131 ended by the police, who...
    • 98 shot the attacker
    • 33 subdued the attacker
  • 53 ended by the attacker, who...
    • 38 committed suicide
    • 15 surrendered
Attacks per year, 2011 - 2021: 13, 21, 19, 20, 20, 20, 31, 29, 30, 40, 61
“The actual data show that some of these kind of heroic, Hollywood moments of armed citizens taking out active shooters are just extraordinarily rare,” Mr. Lankford said.
In fact, having more than one armed person at the scene who is not a member of law enforcement can create confusion and carry dire risks. An armed bystander who shot and killed an attacker in 2021 in Arvada, Colo., was himself shot and killed by the police, who mistook him for the gunman.
It was twice as common for bystanders to physically subdue the attackers, often by tackling or striking them. At Seattle Pacific University in 2014, a student security guard pepper sprayed and tackled a gunman who was reloading his weapon during an attack that killed one and injured three others. The guard took the attacker’s gun away and held the attacker until law enforcement arrived.
Just because they were twice as likely to be jumped as shot doesn't make shooting the attacker extraordinarily rare. Most mass shootings occur in areas where people aren't allowed to be armed in the first place. In my book, 10% is not "extraordinarily rare".
### Journalists in Uvalde are stonewalled, hassled, threatened with arrest
A month after 19 children and two educators were killed at Robb Elementary School, a picture is emerging of a disastrous police response, in which officers from several law enforcement agencies waited for an hour outside an unlocked classroom where children were trapped with the attacker. But journalists who have flocked to Uvalde, Tex., from across the country to tell that story have faced near-constant interference, intimidation and stonewalling from some of the same authorities — and not only bikers claiming to have police sanction.
Journalists have been threatened with arrest for “trespassing” outside public buildings. They have been barred from public meetings and refused basic information about what police did during the May 24 attack. After several early, error-filled news conferences, officials have routinely turned down interview requests and refused to hold news briefings. The situation has been made even more fraught by the spider’s web of local and state agencies involved in responding to and investigating the shooting, some of which now blame each other for the chaos.
#### Artemus
##### Veteran Member
Someone shot up a July 4th parade from a rooftop in Highland Park, Illinois this afternoon. At least 6 dead at the time of writing. Not a school, though, so probably doesn't count as a national tragedy.
#### Cheerful Charlie
##### Contributor
Reports are that the shooter was a white male with long black hair. Lunatic? Incel? Boogaloo boy?
#### Artemus
##### Veteran Member
Reports are that the shooter was a white male with long black hair. Lunatic? Incel? Boogaloo boy?
Clearly Antifa. /s
#### marc
##### Veteran Member
Why wasn’t the door to the parade locked?!
#### Elixir
Reports are that the shooter was a white male with long black hair. Lunatic? Incel? Boogaloo boy?
Been reading that he’s Magat, been at Trump rallies, raised by Trumpsucking 2A zealots … not that it has anything to do with being a mass murderer killing random people in a democratic area…
Prob’ly just another innocent white victim of mental illness caused by the existence of black people or Hillary.
#### Jimmy Higgins
##### Contributor
Reports are that the shooter was a white male with long black hair. Lunatic? Incel? Boogaloo boy?
Been reading that he’s Magat, been at Trump rallies, raised by Trumpsucking 2A zealots … not that it has anything to do with being a mass murderer killing random people in a democratic area…
Prob’ly just another innocent white victim of mental illness caused by the existence of black people or Hillary.
Are you still talking about this? It was time to move on once the bodies got cold. FREEDOM!!!!
#### Copernicus
Bottom line, the new government saw it as in its interest to have a heavily armed population, without whose contributions it might have fallen to a federal military or foreign mercenaries to keep the peace, a task that would have been both impossible to accomplish and extremely unpopular with the citizens. Forced disarmaments of Tory families were also a major source of criticism of the Revolutionary government during the war; this amendment could be seen as kind of a peace token in that regard.
I wouldn't say that the goal was a heavily armed population. It had more to do with gun technology at the time. Militias depended on muzzle-loading weapons, which meant that it took time to reload them. For a military force to be effective with muzzle loaders, it needed to keep up a constant barrage of musket balls. Typically, units formed three rows of shooters. That allowed sufficient time for reloading to take place while one line was always stepping forward and firing. Hence, the need for a well-regulated militia. Coordinated firing was essential.
The two major threats at the state level were slave revolts and attacks from Indians, but militias were also depended upon to stop other rebellions by citizens opposed to, say, taxes on whiskey. The problem with a federal army wasn't just disarmament, but a state becoming too reliant on a federal standing army to protect them from perceived local threats. If abolitionists controlled the federal government, they might not allow federal troops to be used to put down slave rebellions. States needed their own armies to guarantee their security.
#### Politesse
##### Lux Aeterna
Bottom line, the new government saw it as in its interest to have a heavily armed population, without whose contributions it might have fallen to a federal military or foreign mercenaries to keep the peace, a task that would have been both impossible to accomplish and extremely unpopular with the citizens. Forced disarmaments of Tory families were also a major source of criticism of the Revolutionary government during the war; this amendment could be seen as kind of a peace token in that regard.
I wouldn't say that the goal was a heavily armed population. It had more to do with gun technology at the time. Militias depended on muzzle-loading weapons, which meant that it took time to reload them. For a military force to be effective with muzzle loaders, it needed to keep up a constant barrage of musket balls. Typically, units formed three rows of shooters. That allowed sufficient time for reloading to take place while one line was always stepping forward and firing. Hence, the need for a well-regulated militia. Coordinated firing was essential.
You honestly believe that the reason for that admonition in the Constitution was to provide formation advice? Why would that be a legal question?
#### Copernicus
Bottom line, the new government saw it as in its interest to have a heavily armed population, without whose contributions it might have fallen to a federal military or foreign mercenaries to keep the peace, a task that would have been both impossible to accomplish and extremely unpopular with the citizens. Forced disarmaments of Tory families were also a major source of criticism of the Revolutionary government during the war; this amendment could be seen as kind of a peace token in that regard.
I wouldn't say that the goal was a heavily armed population. It had more to do with gun technology at the time. Militias depended on muzzle-loading weapons, which meant that it took time to reload them. For a military force to be effective with muzzle loaders, it needed to keep up a constant barrage of musket balls. Typically, units formed three rows of shooters. That allowed sufficient time for reloading to take place while one line was always stepping forward and firing. Hence, the need for a well-regulated militia. Coordinated firing was essential.
You honestly believe that the reason for that admonition in the Constitution was to provide formation advice? Why would that be a legal question?
#### Jarhyn
##### Wizard
Bottom line, the new government saw it as in its interest to have a heavily armed population, without whose contributions it might have fallen to a federal military or foreign mercenaries to keep the peace, a task that would have been both impossible to accomplish and extremely unpopular with the citizens. Forced disarmaments of Tory families were also a major source of criticism of the Revolutionary government during the war; this amendment could be seen as kind of a peace token in that regard.
I wouldn't say that the goal was a heavily armed population. It had more to do with gun technology at the time. Militias depended on muzzle-loading weapons, which meant that it took time to reload them. For a military force to be effective with muzzle loaders, it needed to keep up a constant barrage of musket balls. Typically, units formed three rows of shooters. That allowed sufficient time for reloading to take place while one line was always stepping forward and firing. Hence, the need for a well-regulated militia. Coordinated firing was essential.
You honestly believe that the reason for that admonition in the Constitution was to provide formation advice? Why would that be a legal question?
So, the issue here is that the federal government gave a right to the states that the states abused.
The abusive states just declared the whole state a militia and minimized requirements.
Sounds like the federal government must define "militia" legally, and crack down on illegal militias.
#### Copernicus
So, the issue here is that the federal government gave a right to the states that the states abused.
The abusive states just declared the whole state a militia and minimized requirements.
Sounds like the federal government must define "militia" legally, and crack down on illegal militias.
There is a whole mythology surrounding the interpretation of "militia" wrt the 2nd amendment. The NRA would have us believe that all military-aged citizens belong to it. Strictly speaking, our modern National Guard is a creation of the early 20th century. Before that, National Guard units and Militia units had been treated separately by state governments. Nowadays, a few states still maintain militias separate from their National Guard units, but they don't play much of a role. The modern National Guard has superseded them and is thoroughly under federal control.
But what does a gun have to do with being well-organized? Any angry teenager can learn to use a modern military-style assault weapon and reload it quickly with high capacity magazines. Not a lot of training is necessary, just Youtube videos, web sites, and chat rooms to show them the basics. It wouldn't be that simple, if they needed to load powder, wadding, and ball down the barrel every time they had to fire the weapon.
#### Politesse
##### Lux Aeterna
That isn't what I said. The expression "well-regulated" was likely a reference to the type of training needed to make a group of soldiers with muzzle loaders an effective fighting force. Without that training, they would not be as effective. It's not about a specific formation but about training.
If that was the intent, it was never followed; the states never maintained permanent militias with ongoing training regimes, to my knowledge. Actually, the nation struggled to fund even the temporary associations they tried to levy, in the early years. They owed huge debts to their existing veterans, and raising militias to combat enemies internal or external proved a massive and persistent problem until the Civil War era.
There is nothing in the amendment about personal use, although the Heller decision extended the meaning to cover that implied interpretation. Weapons could be "kept", but not necessarily at home. That is what armories were used for. Nothing in there about restrictions or lack thereof on private ownership. State governments could have imposed standards on the type of guns to be used by the militia.
It's clear enough that military, not personal recreational use, is the intended focus of this Amendment; I think we agree on that much. I do think most communities assumed a store of weapons at home, though. This is implied by the documents I've read from the time, which due to my genealogy addiction, is more than a few. I think most frontier households had arms in the home and used them regularly (though more for animal defense than human). Perhaps less true of urban populations. But even the cities weren't really organized the same way they are now, especially not where coercive force was concerned. Most communities had no formal police or military structure beyond local voluntary organizations that functioned more like clubs than armies. If an actual war started, the state or nation paid you on a defined contract and assumed the cost of training alongside the year-and-day of your service. You could be conscripted without your consent, but not for longer than your term of service, and only in an emergency situation.
I also don't think the organization of the militia is as such the purpose of the Second, since that would be superfluous; the main document already establishes the right of the Congress to oversee the creation and regulation of the militias. The Bill of Rights is about personal, not institutional rights. I do not think it is credible therefore to assert that this Amendment was only meant to apply to collective armories; personal gun ownership has to have been its intended target. But I do agree that the use of arms they were thinking about was participation in the militias, not hanging out at the shooting range with the boys, shooting primary schoolers for the lulz, or collecting guns for their own sake.
Honestly, I think the entire country would be better off following the model of some European nations in which all adult citizens have mandatory firearm training and perhaps the responsibility to maintain a government-provided rifle, but otherwise have fairly limited and clearly regulated gun rights beyond what is necessary to maintain the citizen militia / conscription readiness. But our citizens are already heavily armed, and public opinion is not in favor of such a system.
EDIT TO CLARIFY: I'm not in favor of the so-called Originalist approach to interpreting the Constitution in the first place. It sets up a self-contradictory situation in which the document itself clearly attempts to describe a democracy, but is interpreted in such a way as to over-ride democratic opinion in favor of oligarchic rule by a nine person Court and their self-interested "interpretation" of the whims of ghosts. The law should reflect the desires and understanding of the people who are currently living and can express their own opinions, not the desires of deceased men whose hypothetical opinion on present situations can only be guessed at. The Founders were not unanimous in their opinions nor inflexible in changing them; using fanciful portrayals of what their sepulchres demand as the foundation of law is inherently doomed to inconsistency.
#### Copernicus
That isn't what I said. The expression "well-regulated" was likely a reference to the type of training needed to make a group of soldiers with muzzle loaders an effective fighting force. Without that training, they would not be as effective. It's not about a specific formation but about training.
If that was the intent, it was never followed; the states never maintained permanent militias with ongoing training regimes, to my knowledge. Actually, the nation struggled to fund even the temporary associations they tried to levy, in the early years. They owed huge debts to their existing veterans, and raising militias to combat enemies internal or external proved a massive and persistent problem until the Civil War era.
There is nothing in the amendment about personal use, although the Heller decision extended the meaning to cover that implied interpretation. Weapons could be "kept", but not necessarily at home. That is what armories were used for. Nothing in there about restrictions or lack thereof on private ownership. State governments could have imposed standards on the type of guns to be used by the militia.
It's clear enough that military, not personal recreational use, is the intended focus of this Amendment; I think we agree on that much. I do think most communities assumed a store of weapons at home, though. This is implied by the documents I've read from the time, which due to my genealogy addiction, is more than a few. I think most frontier households had arms in the home and used them regularly (though more for animal defense than human). Perhaps less true of urban populations. But even the cities weren't really organized the same way they are now, especially not where coercive force was concerned. Most communities had no formal police or military structure beyond local voluntary organizations that functioned more like clubs than armies. If an actual war started, the state or nation paid you on a defined contract and assumed the cost of training alongside the year-and-day of your service. You could be conscripted without your consent, but not for longer than your term of service, and only in an emergency situation.
I also don't think the organization of the militia is as such the purpose of the Second, since that would be superfluous; the main document already establishes the right of the Congress to oversee the creation and regulation of the militias. The Bill of Rights is about personal, not institutional rights. I do not think it is credible therefore to assert that this Amendment was only meant to apply to collective armories; personal gun ownership has to have been its intended target. But I do agree that the use of arms they were thinking about was participation in the militias, not hanging out at the shooting range with the boys or collecting guns for their own sake.
Honestly, I think the entire country would be better off following the model of some European nations in which all adult citizens have mandatory firearm training and perhaps the responsibility to maintain a government-provided rifle, but otherwise have fairly limited and clearly regulated gun rights beyond what is necessary to maintain the citizen militia / conscription readiness. But our citizens are already heavily armed, and public opinion is not in favor of such a system.
#### Jarhyn
##### Wizard
That isn't what I said. The expression "well-regulated" was likely a reference to the type of training needed to make a group of soldiers with muzzle loaders an effective fighting force. Without that training, they would not be as effective. It's not about a specific formation but about training.
If that was the intent, it was never followed; the states never maintained permanent militias with ongoing training regimes, to my knowledge. Actually, the nation struggled to fund even the temporary associations they tried to levy, in the early years. They owed huge debts to their existing veterans, and raising militias to combat enemies internal or external proved a massive and persistent problem until the Civil War era.
There is nothing in the amendment about personal use, although the Heller decision extended the meaning to cover that implied interpretation. Weapons could be "kept", but not necessarily at home. That is what armories were used for. Nothing in there about restrictions or lack thereof on private ownership. State governments could have imposed standards on the type of guns to be used by the militia.
It's clear enough that military, not personal recreational use, is the intended focus of this Amendment; I think we agree on that much. I do think most communities assumed a store of weapons at home, though. This is implied by the documents I've read from the time, which due to my genealogy addiction, is more than a few. I think most frontier households had arms in the home and used them regularly (though more for animal defense than human). Perhaps less true of urban populations. But even the cities weren't really organized the same way they are now, especially not where coercive force was concerned. Most communities had no formal police or military structure beyond local voluntary organizations that functioned more like clubs than armies. If an actual war started, the state or nation paid you on a defined contract and assumed the cost of training alongside the year-and-day of your service. You could be conscripted without your consent, but not for longer than your term of service, and only in an emergency situation.
I also don't think the organization of the militia is as such the purpose of the Second, since that would be superfluous; the main document already establishes the right of the Congress to oversee the creation and regulation of the militias. The Bill of Rights is about personal, not institutional rights. I do not think it is credible therefore to assert that this Amendment was only meant to apply to collective armories; personal gun ownership has to have been its intended target. But I do agree that the use of arms they were thinking about was participation in the militias, not hanging out at the shooting range with the boys, shooting primary schoolers for the lulz, or collecting guns for their own sake.
Honestly, I think the entire country would be better off following the model of some European nations in which all adult citizens have mandatory firearm training and perhaps the responsibility to maintain a government-provided rifle, but otherwise have fairly limited and clearly regulated gun rights beyond what is necessary to maintain the citizen militia / conscription readiness. But our citizens are already heavily armed, and public opinion is not in favor of such a system.
I'm not in favor of the so-called Originalist approach to interpreting the Constitution. It sets up a self-contradictory situation in which the document itself clearly attempts to describe a democracy, but is interpreted in such a way as to over-ride democratic opinion in favor of oligarchic rule by a nine person Court and their self-interested "interpretation" of the whims of ghosts. The law should reflect the desires and understanding of the people who are currently living and can express their own opinions, not the desires of deceased men whose hypothetical opinion on present situations can only be guessed at. The Founders were not unanimous in their opinions nor inflexible in changing them; using fanciful portrayals of what their sepulchres demand as the foundation of law is inherently doomed to inconsistency.
And my point is, it seemed that the 2nd was aimed at keeping the federal government from controlling the arming policies of the states so as to prevent federal control of munitions.
It's a right specifically granted to the states which they then abused by not actually regulating "the militia" at all.
And now we have a bunch of militias, really more terrorist cells, which are unregulated and running amok.
#### Copernicus
And my point is, it seemed that the 2nd was aimed at keeping the federal government from controlling the arming policies of the states so as to prevent federal control of munitions.
It's a right specifically granted to the states which they then abused by not actually regulating "the militia" at all.
And now we have a bunch of militias, really more terrorist cells, which are unregulated and running amok.
Again, I think that you make the mistake of thinking that "well-regulated" meant "well-governed" or "well-controlled". But why would personal ownership of a weapon be relevant to that sense of the expression? It made more sense if the authors were thinking of soldiers that could reload quickly and fire in a coordinated pattern. That is, the sense of "well-regulated" they intended was more probably "well-trained" in using single-shot muskets that took time to reload.
#### Jimmy Higgins
##### Contributor
And my point is, it seemed that the 2nd was aimed at keeping the federal government from controlling the arming policies of the states so as to prevent federal control of munitions.
It's a right specifically granted to the states which they then abused by not actually regulating "the militia" at all.
And now we have a bunch of militias, really more terrorist cells, which are unregulated and running amok.
Again, I think that you make the mistake of thinking that "well-regulated" meant "well-governed" or "well-controlled". But why would personal ownership of a weapon be relevant to that sense of the expression? It made more sense if the authors were thinking of soldiers that could reload quickly and fire in a coordinated pattern. That is, the sense of "well-regulated" they intended was more probably "well-trained" in using single-shot muskets that took time to reload.
I thought Jarhyn made a good point, but when you look at the text, it is explicitly talking to individual rights. The Bill of Rights was originally meant to protect the States and People from the Federal Government. But if the 2nd Amendment was enumerating the responsibility of gun ownership to the states, it is oddly worded. Additionally, gun ownership wasn't all too controversial in a country with a frontier and very rural.
#### Jarhyn
##### Wizard
And my point is, it seemed that the 2nd was aimed at keeping the federal government from controlling the arming policies of the states so as to prevent federal control of munitions.
It's a right specifically granted to the states which they then abused by not actually regulating "the militia" at all.
And now we have a bunch of militias, really more terrorist cells, which are unregulated and running amok.
Again, I think that you make the mistake of thinking that "well-regulated" meant "well-governed" or "well-controlled". But why would personal ownership of a weapon be relevant to that sense of the expression? It made more sense if the authors were thinking of soldiers that could reload quickly and fire in a coordinated pattern. That is, the sense of "well-regulated" they intended was more probably "well-trained" in using single-shot muskets that took time to reload.
I thought Jarhyn made a good point, but when you look at the text, it is explicitly talking to individual rights. The Bill of Rights was originally meant to protect the States and People from the Federal Government. But if the 2nd Amendment was enumerating the responsibility of gun ownership to the states, it is oddly worded. Additionally, gun ownership wasn't all too controversial in a country with a frontier and very rural.
More, "in the interests of the functions of a well ordered militia (specifically not 'army'), the federal government shall not tell the states how they shall arm the militia and warehouse its weapons."
What I see as happening is the states saying "ok, well, neither will we, so SUCK IT! Everybody is militia!"
#### Jimmy Higgins
##### Contributor
I thought Jarhyn made a good point, but when you look at the text, it is explicitly talking to individual rights. The Bill of Rights was originally meant to protect the States and People from the Federal Government. But if the 2nd Amendment was enumerating the responsibility of gun ownership to the states, it is oddly worded. Additionally, gun ownership wasn't all too controversial in a country with a frontier and very rural.
More, "in the interests of the functions of a well ordered militia (specifically not 'army'), the federal government shall not tell the states how they shall arm the militia and warehouse its weapons."
What I see as happening is the states saying "ok, well, neither will we, so SUCK IT! Everybody is militia!"
But was there that issue back then? Adversity between Feds and States over firearms? Regardless, the Articles of Confederation stated:
Articles of Confederation said:
No vessel of war shall be kept up in time of peace by any State, except such number only, as shall be deemed necessary by the United States in Congress assembled, for the defense of such State, or its trade; nor shall any body of forces be kept up by any State in time of peace, except such number only, as in the judgement of the United States in Congress assembled, shall be deemed requisite to garrison the forts necessary for the defense of such State; but every State shall always keep up a well-regulated and disciplined militia, sufficiently armed and accoutered, and shall provide and constantly have ready for use, in public stores, a due number of field pieces and tents, and a proper quantity of arms, ammunition and camp equipage.
Reading the 2nd Amendment in context with the text of the Articles of Confederation does muddy the waters.
#### Politesse
##### Lux Aeterna
The federal government does have direct control over state militias, though. This has nothing to do with the Bill of Rights, but Article 1 of the Constitution itself:
Clause 16. The Congress shall have Power... To provide for organizing, arming, and disciplining, the Militia, and for governing such Part of them as may be employed in the Service of the United States, reserving to the States respectively, the Appointment of the Officers, and the Authority of training the Militia according to the discipline prescribed by Congress.
This was later "interpreted" to apply to a national standing army, but its original intent is pretty obvious in the context of this discussion: ultimately, Congress has direct control in all matters pertaining to war, including the drawing and organization of state militias, at least in any matter of collective national defense.
#### ZiprHead
##### Loony Running The Asylum
Staff member
No need to argue over what was meant of the duties of the militias. They wrote it down.
#### Gospel
##### Unify Africa
Zoinks! There you have it, folks. It's time we start kicking down doors for those guns. It's not a complete loss, we'll leave you a National Guard recruitment pamphlet.
#### Jarhyn
##### Wizard
I thought Jarhyn made a good point, but when you look at the text, it is explicitly talking to individual rights. The Bill of Rights was originally meant to protect the States and People from the Federal Government. But if the 2nd Amendment was enumerating the responsibility of gun ownership to the states, it is oddly worded. Additionally, gun ownership wasn't all too controversial in a country with a frontier and very rural.
More, "in the interests of the functions of a well ordered militia (specifically not 'army'), the federal government shall not tell the states how they shall arm the militia and warehouse its weapons."
What I see as happening is the states saying "ok, well, neither will we, so SUCK IT! Everybody is militia!"
But was there that issue back then? Adversity between Feds and States over firearms? Regardless, the Articles of Confederation stated:
Articles of Confederation said:
No vessel of war shall be kept up in time of peace by any State, except such number only, as shall be deemed necessary by the United States in Congress assembled, for the defense of such State, or its trade; nor shall any body of forces be kept up by any State in time of peace, except such number only, as in the judgement of the United States in Congress assembled, shall be deemed requisite to garrison the forts necessary for the defense of such State; but every State shall always keep up a well-regulated and disciplined militia, sufficiently armed and accoutered, and shall provide and constantly have ready for use, in public stores, a due number of field pieces and tents, and a proper quantity of arms, ammunition and camp equipage.
Reading the 2nd Amendment in context with the text of the Articles of Confederation does muddy the waters.
There didn't need to be an issue. At the time, they had no reason to think that this was a bad arrangement of just leaving it loose, and trusting states to organize as they were able.
As ZiprHead points out, later clarification was forthcoming.
#### Gospel
##### Unify Africa
There didn't need to be an issue. At the time, they had no reason to think that this was a bad arrangement of just leaving it loose, and trusting states to organize as they were able.
My understanding is the culture was a bunch of Europeans doing the whole get rich or die trying thingamabob. They didn't foresee that half of America would change goals centuries later.
#### bilby
##### Fair dinkum thinkum
So, the issue here is that the federal government gave a right to the states that the states abused.
The abusive states just declared the whole state a militia and minimized requirements.
Sounds like the federal government must define "militia" legally, and crack down on illegal militias.
There is a whole mythology surrounding the interpretation of "militia" wrt the 2nd amendment. The NRA would have us believe that all military-aged citizens belong to it. Strictly speaking, our modern National Guard is a creation of the early 20th century. Before that, National Guard units and Militia units had been treated separately by state governments. Nowadays, a few states still maintain militias separate from their National Guard units, but they don't play much of a role. The modern National Guard has superseded them and is thoroughly under federal control.
But what does a gun have to do with being well-organized? Any angry teenager can learn to use a modern military-style assault weapon and reload it quickly with high capacity magazines. Not a lot of training is necessary, just Youtube videos, web sites, and chat rooms to show them the basics. It wouldn't be that simple, if they needed to load powder, wadding, and ball down the barrel every time they had to fire the weapon.
I disagree.
I understand ‘well regulated’ in the time and context of the writing of the second amendment to mean “under the strict control of the authorities”.
It had little to do with organisation, and nothing to do with training; It is an admonition against militias that don’t act as loyal servants of the properly elected and constituted government - anarchists, rebels, and cultists.
The writers of the second amendment were probably particularly concerned about the possibility of pro-English militias who might seek to overturn their revolution, and separatists who might seek to break up their new country.
This interpretation is certainly a better fit with my understanding of the way that the English language has changed since the C18th.
Organisation is a more modern fetish, which arose from the industrial revolution; Training would at the time have been referred to using language such as “well drilled” - “regulated” specifically meant “commanded by legitimate authority”, and again, other meanings for the word commonly used today derive from the rise of the machines, that hadn’t yet occurred when the second amendment was penned.
The pre-industrial world was a very different place, and many new concepts that arose during the industrial revolution co-opted words that had previously been used quite differently. “Regulated” is certainly one such word. In a military context, it’s a reference to what today would be called “communication, command and control” - the ability of the central authority to direct the actions of individual units in the field. | 2022-08-07 19:33:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19897307455539703, "perplexity": 4153.538707274711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00340.warc.gz"} |
http://mrpt.ual.es/reference/2.1.5/class_mrpt_hwdrivers_CFFMPEG_InputStream.html | # class mrpt::hwdrivers::CFFMPEG_InputStream
A generic class which process a video file or other kind of input stream (http, rtsp) and allows the extraction of images frame by frame.
Video sources can be open with “openURL”, which can manage both video files and “rtsp://” sources (IP cameras).
Frames are retrieved by calling CFFMPEG_InputStream::retrieveFrame
For an example of usage, see the file “samples/grab_camera_ffmpeg”
This class is an easy to use C++ wrapper for ffmpeg libraries (libavcodec). In Unix systems these libraries must be installed in the system as explained in MRPT’s wiki. In Win32, a precompiled version for Visual Studio must be also downloaded as explained in the wiki.
#include <mrpt/hwdrivers/CFFMPEG_InputStream.h>
class CFFMPEG_InputStream
{
public:
// structs
struct Impl;
// methods
bool openURL(
const std::string& url,
bool grab_as_grayscale = false,
bool verbose = false
);
bool isOpen() const;
void close();
double getVideoFPS() const;
bool retrieveFrame(mrpt::img::CImage& out_img);
};
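Putting the methods together, a minimal frame-grabbing loop might look like the following. This is an illustrative sketch only: it assumes MRPT is installed and linked as described in the wiki, and the file name is a placeholder.

```cpp
#include <iostream>
#include <mrpt/hwdrivers/CFFMPEG_InputStream.h>
#include <mrpt/img/CImage.h>

int main()
{
    mrpt::hwdrivers::CFFMPEG_InputStream video;

    // Works for local files and "rtsp://" streams (no user/password support).
    if (!video.openURL("myVideo.avi", /*grab_as_grayscale=*/false, /*verbose=*/true))
        return 1;  // error details are dumped to cerr by openURL()

    std::cout << "FPS: " << video.getVideoFPS() << std::endl;

    mrpt::img::CImage frame;
    while (video.retrieveFrame(frame))
    {
        // ... process `frame` here ...
    }

    video.close();  // also called automatically at destruction
    return 0;
}
```

For remote streams, `retrieveFrame` may block until enough data has arrived to decode a frame, so a real application would typically run this loop in its own thread.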
## Methods
bool openURL(
const std::string& url,
bool grab_as_grayscale = false,
bool verbose = false
)
Open a video file or a video stream (rtsp://). This can be used to open local video files (e.g.
“myVideo.avi”, “c:a.mpeg”) and also IP cameras (e.g. “rtsp://a.b.c.d/live.sdp”). However, note that there is currently no support for user/password in IP access. If verbose is set to true, more information about the video will be dumped to cout.
Returns:
false on any error (and error info dumped to cerr), true on success.
bool isOpen() const
Return whether the video source was open correctly.
void close()
Close the video stream (this is called automatically at destruction).
openURL
double getVideoFPS() const
Get the frame-per-second (FPS) of the video source, or “-1” if the video is not open.
bool retrieveFrame(mrpt::img::CImage& out_img)
Get the next frame from the video stream.
Note that for remote streams (IP cameras) this method may block until enough information is read to generate a new frame. Images are returned as 8-bit depth grayscale if “grab_as_grayscale” is true.
Returns:
false on any error, true on success. | 2023-02-04 08:14:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2842884361743927, "perplexity": 12857.643408598527}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00114.warc.gz"} |
https://www.dcode.fr/complex-number-modulus |
Complex Number Modulus/Magnitude
Tool for calculating the value of the modulus/magnitude of a complex number |z| (absolute value): the length of the segment between the point of origin of the complex plane and the point z
# Complex Number Modulus/Magnitude
### What is the modulus of a complex number? (Definition)
The modulus (or magnitude) is the length (absolute value) in the complex plane, qualifying the complex number $z = a + ib$ (with $a$ the real part and $b$ the imaginary part), it is denoted $| z |$ and is equal to $| z | = \sqrt{a ^ 2 + b ^ 2}$.
The modulus can be interpreted as the distance separating the point (representing the complex number) from the origin of the complex plane.
### How to calculate the modulus of a complex number?
To find the modulus of a complex number $z = a + ib$, carry out the computation $|z| = \sqrt {a^2 + b^2}$
Example: $z = 1+2i$ (of abscissa 1 and of ordinate 2 on the complex plane) then the modulus equals $|z| = \sqrt{1^2+2^2} = \sqrt{5}$
The calculation also applies with the exponential form of the complex number.
### How to calculate the modulus of a real number?
The modulus (or magnitude) of a real number is equivalent to its absolute value.
Example: $|-3| = 3$
### What are the properties of modulus?
For the complex numbers $z, z_1, z_2$ the complex modulus has the following properties:
$$|z_1 \cdot z_2| = |z_1| \cdot |z_2|$$
$$\left| \frac{z_1}{z_2} \right| = \frac{|z_1|}{|z_2|} \quad z_2 \ne 0$$
$$|z_1+z_2| \le |z_1|+|z_2|$$
A modulus is an absolute value, therefore necessarily positive (or null):
$$|z| \ge 0$$
The modulus of a complex number and its conjugate are equal:
$$|\overline z|=|z|$$
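The definition and the properties above can be checked numerically in Python, whose built-in `abs()` returns the modulus of a complex value:

```python
import math

# |z| = sqrt(a^2 + b^2); Python's abs() computes this for complex z.
z = 1 + 2j
assert math.isclose(abs(z), math.sqrt(1**2 + 2**2))  # sqrt(5)

# The modulus of a real number is its absolute value.
assert abs(complex(-3, 0)) == 3

# Properties of the modulus.
z1, z2 = 3 - 4j, 1 + 2j
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))   # multiplicative
assert math.isclose(abs(z1 / z2), abs(z1) / abs(z2))   # for z2 != 0
assert abs(z1 + z2) <= abs(z1) + abs(z2)               # triangle inequality
assert abs(z1) >= 0                                    # never negative
assert abs(z1.conjugate()) == abs(z1)                  # conjugate has same modulus
```

(`math.isclose` is used where floating-point rounding could make exact equality fail.)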
https://www.impan.pl/pl/wydawnictwa/czasopisma-i-serie-wydawnicze/annales-polonici-mathematici/all/82/1/84521/geometry-of-quotient-spaces-and-proximinality | JEDNOSTKA NAUKOWA KATEGORII A+
# Wydawnictwa / Czasopisma IMPAN / Annales Polonici Mathematici / Wszystkie zeszyty
## Geometry of quotient spaces and proximinality
### Volume 82 / 2003
Annales Polonici Mathematici 82 (2003), 9-18 MSC: 46B20, 46E30, 46E40. DOI: 10.4064/ap82-1-2
#### Abstract
It is proved that if $X$ is a rotund Banach space and $M$ is a closed and proximinal subspace of $X$, then the quotient space $X / M$ is also rotund. It is also shown that if ${\mit \Phi }$ does not satisfy the $\delta _2$-condition, then $h_{{\mit \Phi }}^0$ is not proximinal in $l_{{\mit \Phi }}^0$ and the quotient space $l_{{\mit \Phi }}^0/ h_{{\mit \Phi }}^0$ is not rotund (even if $l_{{\mit \Phi }}^0$ is rotund). Weakly nearly uniform convexity and weakly uniform Kadec–Klee property are introduced and it is proved that a Banach space $X$ is weakly nearly uniformly convex if and only if it is reflexive and it has the weakly uniform Kadec–Klee property. It is noted that the quotient space $X/M$ with $X$ and $M$ as above is weakly nearly uniformly convex whenever $X$ is weakly nearly uniformly convex. Criteria for weakly nearly uniform convexity of Orlicz sequence spaces equipped with the Orlicz norm are given.
#### Authors
• Yuan Cui, Department of Mathematics, Harbin University of Sciences and Technology, Harbin, P.R. China
• Henryk Hudzik, Faculty of Mathematics and Computer Science, Umultowska 87, 61-614 Poznań, Poland
• Yaowaluck Khongtham, Faculty of Science, Maejo University, Chiang Mai, Thailand
https://tex.stackexchange.com/questions/200786/pgfplots-generating-incorrect-boxplots | # pgfplots generating incorrect boxplots
While trying to reproduce some boxplots using pgfplots (and analyzing samples automatically), I noticed that in some cases I was getting different plots from Matlab. I have left all the settings in default, which means that the thresholds for outliers and whiskers should be the same in both cases. (Thresholds for outliers by default are q1-1.5*(q3-q1) and q3+1.5*(q3-q1) where q1 and q3 are the 25th and the 75th percentiles respectively). However, Matlab and pgfplots generate plots with different whisker and outlier positions.
Plot from pgfplots:
Plot from Matlab:
Following is the code and the data file that I am using to generate these plots:
Latex:
\documentclass{minimal}
\usepackage{pgfplots}
\usepgfplotslibrary{statistics}
\pgfplotsset{compat=1.8}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
boxplot/draw direction = y,
xmin = -2,
xmax = 4]
% read the data file and compute/draw the box plot
\addplot+[boxplot] table[y=Var1] {testdata.txt};
\end{axis}
\end{tikzpicture}
\end{document}
Matlab:
T=readtable('testdata.txt','delimiter','\t');
boxplot(T{:,:})
The data file:
Var1
0
1
2
3
4
5
3
10
14
-9
Any idea why they generate different plots? Which one is correct?
There are at least two different factors potentially contributing to differences in the boxplots produced by Matlab and pgfplots.
## 1. <= and >= (Matlab) vs < and > (pgfplots)
There is a difference in the definitions of whiskers and outliers.
From the manual of pgfplots (I have emphasized the key fact):
lower whisker is the smallest data value which is larger than lower quartile−1.5 · IQR
and
upper whisker is the largest data value which is smaller than upper quartile+1.5 · IQR
From the manual of Matlab (emphasis added):
Points are drawn as outliers if they are larger than Q3+W*(Q3-Q1) or smaller than Q1-W*(Q3-Q1), where Q1 and Q3 are the 25th and 75th percentiles, respectively. The default value 1.5 corresponds to approximately +/- 2.7 sigma and 99.3 coverage if the data are normally distributed. The plotted whisker extends to the adjacent value, which is the most extreme data value that is not an outlier.
## 2. Different methods for computing box limits / quartiles
It would be all too easy to say that the box limits are the 1st and 3rd quartile (a.k.a. 25th and 75th percentile, a.k.a. quantiles with probabilities 0.25 and 0.75) and leave it at that. Alas, there are many methods for computing quantiles. Without going into too much detail, there are no less than 9 different quantile() method variants in R. For the example data set, these methods give 7 unique results for the pair of numbers (25th and 75th percentile). These are:
• 0 and 5
• 0.5 and 4.5
• 0.75 and 6.25
• 0.9166667 and 5.4166667
• 0.9375 and 5.3125
• 1 and 5
• 1.25 and 4.75
Matlab finds the box limits to be 1 and 5. According to @Jake (see comments), the limits in pgfplots are 1.5 and 4.5, which can also be approximately confirmed by looking at the picture attached by the original poster. Note that this corresponds to yet another definition of quantile. The pgfplots manual, Revision 1.11 (2014/08/04), states the following about the computation of quantiles:
Matlab finds the box limits to be 1 and 5. According to @Jake (see comments), the limits in pgfplots are 1.5 and 4.5, which can also be approximately confirmed by looking at the picture attached by the original poster. Note that this corresponds to yet another definition of quantile. The pgfplots manual, Revision 1.11 (2014/08/04), gives a formula for the p-quantile of the sorted values x1 <= ... <= xn which (paraphrasing) averages the two order statistics around position p*n. I am not sure how exactly this maps the quartiles of the example data set to 1.5 and 4.5. We have x1=-9, x2=0, x3=1, x4=2, x5=x6=3, x7=4, x8=5, x9=10 and x10=14. Following the formula in the pgfplots manual, we get lower quartile 0.5*(x2+x3)=0.5*(0+1)=0.5 and upper quartile 0.5*(x7+x8)=0.5*(4+5)=4.5. One consequence of using the formula presented in the pgfplots manual is that for computing small quantiles, one would need the non-existent value x0.
## How this works with the example data
Assuming what I wrote above is correct and everything works as documented, we can work through a boxplot of the example data from the perspective of both Matlab and pgfplots.
Matlab
Assuming that the quartiles are 1 and 5, the inter-quartile range (IQR) is 4. Now 1.5*IQR is 6. Third quartile + 1.5*IQR is 11. Data value 14 is larger than that which makes it an outlier. However, 10 is not an outlier. Thus the upper whisker extends to 10.
pgfplots
Assuming that the quartiles are 1.5 and 4.5, the inter-quartile range (IQR) is 3. Now 1.5*IQR is 4.5. Third quartile + 1.5*IQR is 9. Data value 14 is larger than that which makes it an outlier. Also 10 is an outlier. 5 is the largest number which is not an outlier. Thus the upper whisker extends to 5.
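These computations are easy to reproduce. As a sketch for illustration, Python's standard library implements two of the many quantile definitions ('exclusive' and 'inclusive' in `statistics.quantiles`), and both produce (q1, q3) pairs from the list above — though neither matches Matlab's or pgfplots' method for this data:

```python
from statistics import quantiles

data = [0, 1, 2, 3, 4, 5, 3, 10, 14, -9]

# Two quantile definitions, two different (q1, q3) pairs from the list above.
q1_ex, _, q3_ex = quantiles(data, n=4, method='exclusive')
q1_in, _, q3_in = quantiles(data, n=4, method='inclusive')
print(q1_ex, q3_ex)  # 0.75 6.25
print(q1_in, q3_in)  # 1.25 4.75

def whiskers(data, q1, q3, w=1.5):
    """Most extreme data values inside the fences q1 - w*IQR .. q3 + w*IQR
    (Matlab's "adjacent value" rule; pgfplots uses strict inequalities, which
    only matters when a data value falls exactly on a fence)."""
    lo, hi = q1 - w * (q3 - q1), q3 + w * (q3 - q1)
    inside = [x for x in data if lo <= x <= hi]
    return min(inside), max(inside)

print(whiskers(data, 1, 5))      # (0, 10)  with Matlab's box limits
print(whiskers(data, 1.5, 4.5))  # (0, 5)   with pgfplots' box limits
```

The last two lines reproduce exactly the whisker positions worked out above: the choice of quartile method alone moves the upper whisker from 10 to 5.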
## Conclusion
I cannot tell for sure which of the boxplot computation methods is better or correct. I can add another data point: In case of the example data, boxplot() in R gives the same result as Matlab.
One should also note that due to the limitations of computer arithmetics and the discrete nature of the whisker locations, different implementations using the same formula may also produce a different result depending on the data set.
• It seems that the upper limits of the box also differ. I'll look into that. – mvkorpel Sep 12 '14 at 9:20
• There are different methods for computing quantiles, which further complicates the issue. – mvkorpel Sep 12 '14 at 9:34
• There is also a difference in how the quartiles are calculated between PGFPlots and Matlab, which is why also the boxes look different, not just the whiskers: PGFPlots finds Q1=1.5 and Q3=4.5, while Matlab finds Q1=1 and Q3=5. It's interesting that you find yet another set of values for Q1 and Q3 (1.25 and 4.75). How did you calculate those values? For what it's worth, Wolfram Alpha agrees with Matlab on the quartile values, but draws the whiskers to cover all the data. – Jake Sep 12 '14 at 9:36
• I went through the equations more thoroughly after reading your answer and it does appear that the equations are incorrect. Take the simple case of x1=1, x2=2 and x3=3. The median or the 0.5 quantile value should be 2 in this case, but the equation gives a result of 1.5. My guess is that the indices for the numbers should start from 0, not 1. This solves the issue outlined here, generates the expected values with the example data and also get rid of the "non existent x0 value" problem that you brought up. That still leaves the issue about different values being computed by different tools... – Adi Sep 12 '14 at 14:13
• Thanks for the question and for the detailed answer/comments. They will make their way into the boxplot handler for the next release. My plan is to add further strategies and probably change the default strategy. I also plan to address performance of the computation, support for nan values, and better support for small sample sizes (the current implementation chokes at 1,2,3). – Christian Feuersänger Sep 14 '14 at 18:02 | 2019-10-17 10:09:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7471246719360352, "perplexity": 983.4371450153674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986673538.21/warc/CC-MAIN-20191017095726-20191017123226-00082.warc.gz"} |
https://www.numerade.com/questions/although-exact-isotopic-masses-are-known-with-great-precision-for-most-elements-we-use-the-average-m/ | Composition
### Video Transcript
Oh, so in this question, we're talking about isotopes, and why, when we talk about atomic mass, we don't use the exact mass of any one atom but instead another value that I'll talk about later. Even though we know the mass of each atom of a particular element with great precision, that's not the mass that appears as the atomic mass. So it's all about isotopes and relative abundance. Let's talk about carbon, for example. The reason for this, again, is relative abundance — let me just write that down. What this means is that in any particular sample of an element there is a relative abundance of each kind of isotope, and each isotope has its own atomic mass. To refresh your memory, isotopes are atoms with the same number of protons but a different number of neutrons, so the total atomic mass of the atom is different. So we have carbon-12, 12 being its atomic mass; we have carbon-13; and also carbon-14 — all in atomic mass units. Carbon-12 weighs exactly 12 atomic mass units, and similarly carbon-13 and carbon-14 weigh about 13 and 14. Yet the atomic mass of carbon on the periodic table is 12.011. So where does that difference come from? The reason, again, is relative abundance: in nature, carbon-12 occurs with 98.9% abundance, carbon-13 with 1.1% abundance, and carbon-14 in a negligible amount — there is an extremely tiny amount of carbon-14 on our planet. That means if we scooped up carbon arbitrarily in our hands, 98.9% of those atoms would weigh 12 atomic mass units and 1.1% of those atoms would weigh 13 atomic mass units.
So then they actually would it make sense for us to just say that this is the weight of carbon. So So what we do is we add up 98.9% of 12 plus 1.1% of three teen, and that then gives us this number, which then accounts for each isotope, so that's where this number comes from.
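The weighted average described in the transcript can be checked directly. This is a minimal sketch; the abundances are the ones quoted above, and carbon-14's contribution is ignored as negligible.

```python
# Relative abundances and isotope masses (in atomic mass units, amu),
# as quoted in the transcript; carbon-14 is rare enough to ignore.
isotopes = {
    12.0: 0.989,   # carbon-12: 98.9% abundant
    13.0: 0.011,   # carbon-13: 1.1% abundant
}

# Atomic mass = abundance-weighted average of the isotope masses.
atomic_mass = sum(mass * abundance for mass, abundance in isotopes.items())
print(f"{atomic_mass:.3f} amu")  # 12.011 amu, the value on the periodic table
```

The result, 12.011 amu, matches the periodic-table entry for carbon.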
https://www.poritz.net/jonathan/past_classes/spring17/stat/index.html

## Colorado State University — Pueblo, Spring 2017
### Math 156, Introduction to Statistics: Course Policies and Procedures
Here is a shortcut to the course schedule/homework page.
Here is a shortcut to the summary table below of components of the grades for this course. [See below for explanation.]
Lectures: MWF 9:05-10am and 10:10-11:05am, both in GCB315
Office Hours: MWF 2-3pm and T10am-12pm, or by appointment
Instructor: Jonathan Poritz
Office: GCB 314D
E-mail: jonathan@poritz.net
Phone: 549-2044 (office — any time); 357-MATH (personal; please use sparingly)
Textbook: No (physical, commercial) book is required for this class. If you have accidentally purchased the book which is required for another section of Math 156, feel free to return it, sell it, or keep it for later consultation (it's not a bad book, aside from the price), as you like.
Instead, there will be frequent, substantial, required readings for this class, but they will always be freely available on the 'net — although, of course, you may print out the readings if you prefer to study off of hard copies. These readings will come from a variety of sources, including some created just for this class and others which are widely used, open, electronic resources. Exactly what is to be read, when, is detailed on the HW/schedule page for this course.
Prerequisites: Satisfactory placement exam score or Math 099 or equivalent.
Postrequisites: This course is one of the six classes which satisfy the Quantitative Reasoning Skill of the General Education Requirement. It is also required for the AIM major, the Biology major, the CM program, the Nursing major, and the Mass Communication BS degree, is one required option for the Chemistry major, the Liberal Studies program, and the Social Work program, a prerequisite for MATH 362, MATH 550, and NSG 351, and is one required option for several other classes. Actually, one could argue that a course like this is a requirement for any educated person to understand the modern world.
Course Content/Objective: The Catalog describes it as:
Introduction to data analysis. Binomial and normal models. Sample statistics, confidence intervals, hypothesis tests, linear regression and correlation, and chi-square tests.
In practice, we tend not to get all the way to the $\chi^2$ (that's a Greek letter, written in English as "chi" and pronounced in English like a hard "k" sound followed by the English word "eye") test. A more precise list of what you will know about by the end of this class is:
1. Describing data and distributions
• graphing
• measures of variation
• density curves
• Normal distributions, the 68-95-99.7 rule
2. Relationships in data
• scatterplots
• correlation — the least-squares line
• cautions: extrapolation, hidden variables, "correlation is not causation"
3. producing data
• simple random samples ("independent, identically distributed")
• matched-pair and block designs
• the placebo effect, double-blind experiments
• experimental ethics
4. probability
• outcome space, events, combining events, mutually exclusive events
• independent events
• the Law of Large Numbers
• distributions, cumulative distributions
• random variable
• the situation of repeated Bernoulli trials
• mean (expectation), variance, standard deviation
• sampling distributions
• the Central Limit Theorem
5. confidence intervals (for means with known and unknown population standard deviation)
• definitions
• confidence levels
• critical values on the Normal distribution
• dependence on sample size
• Student's T-distribution
6. hypothesis testing (tests of significance; for means with known and unknown population standard deviation)
• null hypothesis, alternative hypothesis
• test statistic
• p-values
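As a preview of items 1 and 5 above, here is a minimal sketch using only the Python standard library (one of the free computational options discussed under "Numerical computation" below); the sample mean, population standard deviation, and sample size are invented for illustration.

```python
from statistics import NormalDist

# Item 1: the 68-95-99.7 rule -- how much of a Normal distribution lies
# within 1, 2, and 3 standard deviations of the mean.
Z = NormalDist()  # standard Normal: mean 0, standard deviation 1
for k in (1, 2, 3):
    print(f"within {k} sd: {Z.cdf(k) - Z.cdf(-k):.4f}")
# prints 0.6827, 0.9545, 0.9973

# Item 5: a 95% confidence interval for a mean with known population sd.
xbar, sigma, n = 12.3, 2.0, 36        # made-up sample mean, population sd, n
z_star = Z.inv_cdf(0.975)             # critical value, about 1.96
margin = z_star * sigma / n ** 0.5
print(f"95% CI: ({xbar - margin:.2f}, {xbar + margin:.2f})")
```

Any scientific calculator with statistical functions can produce the same critical value and interval, which is all this class requires.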
General Education Student Learning Outcomes: This course satisfies the general education mathematics requirement which has the following learning outcomes:
• Critical Thinking – Identify, analyze and evaluate arguments and sources of information to make informed and logical judgments, to arrive at reasoned and meaningful arguments and positions, and to formulate and apply ideas to new contexts.
• Identify questions, problems, and arguments.
• Differentiate questions, problems, and arguments.
• Evaluate the appropriateness of various methods of reasoning and verification.
• State position or hypothesis, give reasons to support it and state its limitations.
• Identify stated and unstated assumptions.
• Assess stated and unstated assumptions.
• Critically compare different points of view.
• Formulate questions and problems.
• Construct and develop cogent arguments.
• Articulate reasoned judgments.
• Discuss alternative points of view.
• Defend or criticize a point of view in view of available evidence.
• Evaluate the quality of evidence and reasoning.
• Draw an appropriate conclusion.
• Quantitative Reasoning – Apply numeric, symbolic and geometric skills to formulate and solve quantitative problems.
• Select data that are relevant to solving a problem.
• Use several methods, such as algebraic, geometric and statistical reasoning to solve problems.
• Interpret and draw inferences from mathematical models such as formulas, graphs, and tables.
• Generalize from specific patterns and phenomena to more abstract principles and to proceed from abstract principles to specific applications.
• Represent mathematical information symbolically, graphically, numerically and verbally.
• Estimate and verify answers to mathematical problems to determine reasonableness, compare alternatives, and select optimal results.
This course is also in gT Pathways. This course is approved in the State of Colorado gT Pathways curriculum as GT-MA1. According to the Colorado Department of Higher Education website, "after starting on your higher education pathway at any public college or university in Colorado, and, upon acceptance to another, you can transfer up to 31 credits of previously and successfully (C- or better) completed gT Pathways (general education) coursework. These courses will automatically transfer with you and continue to count toward your general education core or graduation requirements for any liberal arts or science associate or bachelor's degree program."
Maximum number of Mathematics credits that are guaranteed to transfer: The total number of Mathematics credits guaranteed to transfer in the gtPathways curriculum is three (3).
Numerical computation: There are a lot of numbers in statistics, and often we want to do fairly elaborate arithmetic with them. We also like to assemble these numbers into pretty pictures (graphs). Both of these processes are made far simpler for the student (and the experienced statistician alike) by using electronic computational devices. There is a whole host of "scientific calculators" available for purchase which will do all of this tedious work for you, and any one you might already have is perfectly fine in this class so long as it has basic statistical functions and graphs — show it to your instructor if you aren't sure.
In addition, feel free to use any computer programs you like which will perform these tasks on a laptop, desktop (when you're home or in a campus computer lab working on homework), or smartphone. There are also many websites and free online tools which will do just fine. Your instructor will show many such tools in class, and is happy to work with you to find a cheap (free!) one that you can use on whichever device to which you have convenient access.
Note that there will be no problem with getting used to some electronic tools and then not having them when you take quizzes and tests since you will be allowed to use whatever devices you like at all times.
The Mathematics Department does have a TI-84 Plus calculator rental program, with a limited number of such calculators available on a first-come, first-served basis for a non-refundable fee of $20 per semester, payable at the Bursar's window in the Administration Building. For more information, contact Tracey Blanco in the Math Learning Center (PM 132).

Attendance and workload: Regular attendance in class is a key to success — don't skip class, don't be late. But more than merely attending, you are also expected to be engaged with the material in the class. In order for this to be possible, it is necessary to be current with required outside activities such as doing readings and homework problems: you are expected to spend 2-3 hours on this outside work per hour of class. This is not an exaggeration (or a joke!), but if you put in the time and generally approach the class with some seriousness you will get quite a bit out of it (certainly including the grade you need).

If you absolutely have to miss a class, please inform me in advance and I will video the class and post the video on the 'net. You should e-mail me no earlier than a few hours after class (to allow for upload time) asking for the link to that video, and you can then watch the class you missed in the comfort of your home and (hopefully) not fall behind. Classes I have videoed will have a video icon next to that day's entry on the schedule/homework page to remind you of the available video. Even if you are not the one who originally requested the video, you may want to watch it (as part of reviewing for a test, maybe) — but you have to e-mail me for the links, as the videos cannot simply be found by a search on YouTube.

Homework: Mathematics at this level is a kind of practical (although purely mental) skill, not unlike a musical or sports skill — and, like for those other skills, one must practice to build the skill.
In short, doing problems is the only way truly to master this material (in fact, it is the only way to pass this course). There will be frequent homework sets assigned and collected. Here are some details:

• Homework is due either in class or at my office, no later than 3pm.
• Homework is due as sets, but will be graded by problem. Each problem will be worth 5 points.
• Note that none of us is actually at all interested in the specific answers to these problems: homework is about learning how to do these kinds of problems; everyone knows that quote about giving someone a fish versus teaching them how to fish. In short, "showing your work" is not something extra that you can add to a homework assignment — it is the homework assignment.
• If we have agreed that homework — and this is true of everything else you hand in, including quizzes, tests, and ASEs — is a form of communication between student and instructor about what thought process the student is following, then some things are important to make that communication as clear as it can be. For example:
  • Always define all variables, clearly and completely and with units (if relevant).
  • Always explain all steps of every calculation you do — this could be something like
    • $s = 17$ (from calculator's STD DEV), or
    • $s = 17$ (used eqn 3.14 from such-and-such a reading), or
    • $s = 17$ (used def $s=\sqrt{\frac{1}{n-1}\sum_{i=1}^n(x_i-\bar{x})^2}$ from class).
  • Exception: you can skip explaining a step which amounts to $2+2=4$, or even one where you compute a very basic object from this class after we have been doing it for weeks. E.g., towards the end of the semester, you don't need to quote a formula or book equation every time you compute a sample mean $\bar{x}$.
  • Exception to the exception: on tests, every concept you learned in this class should be defined with a formula the first time you use it.
  • Experience shows that this issue of explaining your work is often quite difficult for students, at least at first — it is so very different from what you have been doing in math classes for years, probably, and it is hard to break old (bad) habits. But once you get into new (good) habits, they will make this part easy. And this is very important in using and talking about statistics outside of a math classroom. Furthermore, since quizzes and tests will be open-book, there really is very little point in merely checking whether you can plug numbers into a formula: what matters is whether you understand what you are doing, and what it means.
  • Always label all axes of graphs and parts of diagrams.
• Homework assignments appear on the schedule/homework web page on a regular basis. Please get used to going to that page frequently — at least every class day (for special announcements), and certainly before starting your work on a homework set.
• Late homework will count, but at a reduced value — generally, the score will be reduced by 20% for each day late, unless you use a Homework Late Pass [see below].
  • Exception: Late homework will count as zero, even if you try to use a Homework Late Pass, when handed in after the next major test (the next hour exam during the semester, and the final for the end of the course).
  • Exception to the exception: revisions of graded homework [see below] can always be handed in at the next class meeting after the graded work was returned, even if that is after the midterm ending a unit of the class.
• After you complete HW0, you will receive a sheet of 10 Homework Late Passes which may be used to hand in homework late but without penalty, subject to the restrictions mentioned above. It is your responsibility to keep track of these passes — don't lose them, they are valuable! Any unused passes may be turned in at the end of the term for general course extra credit.
• Your five lowest scores (on individual homework problems) will be dropped.
• Please label your homework clearly (make sure your name is there!). If you don't staple pages together, that's fine — just make sure each page then has your name and the assignment title on it. Please cut off the ragged edges of paper which come if you tear your HW out of a spiral-bound notebook.

Big Ideas: Part of the Critical Thinking mentioned above is an idea of assimilating material, understanding its assumptions and hypotheses, and being able then to articulate them. In order to help you practice this skill, you will be expected to write down (and hand in) a Big Idea [BI] for most classes. This will certainly include all classes in which new material is introduced or a complex idea is further examined, but generally will not include days like test days or review days when nothing new is done. BIs will always be due the very next class.

A good Big Idea is a short but complete explanation of a new idea, piece of terminology, formula, or algorithm, but is not just an example. Make sure you describe the context and define all variables used in a BI. For example, if in one class we discussed the Pythagorean Theorem, then a good BI to hand in for the next class would be:

Big Idea: The Pythagorean Theorem tells us that if a triangle has sides of lengths $a$, $b$, and $c$, and if the angle between the sides of lengths $a$ and $b$ is $90^\circ$, then $a^2+b^2=c^2$.

In contrast, the following would be bad BIs:
• "$a^2+b^2=c^2$." [Missing all the set-up, including defining variables and stating the hypothesis that it's a right triangle.]
• "In a right triangle, $a^2+b^2=c^2$." [Forgot to define the variables.]
• "In a right triangle with legs of length 3 and 4, the hypotenuse has length 5." [This is an example, not a general idea.]
Note that the content of a BI could come from class, but if you didn't take good notes or missed a class or just prefer to do so, you may use the reading assignment for a particular class as a source of material for a BI. The expectation is that it will take just a class or two to figure out what would make a good BI, after which you should always be getting perfect scores on them for the rest of the term. It will also turn out that if you keep track of your BIs (and don't simply throw them out when they are returned), then stapling together the bunch of them before each quiz or test will create for you a very complete and useful study guide of the important ideas you will have to know.

BIs are not part of the Late Homework Pass system, but they can be corrected and resubmitted for full credit. They are graded out of 2 points, as follows:
• 0 points: BI not handed in, or having no clear idea at all (e.g., if it is an example rather than an idea);
• 1 point: BI present and mostly on track, but missing an important piece like a crucial hypothesis or variable definition; and
• 2 points: BI present and complete.

Quizzes: Most Fridays, during weeks in which there is no hour exam, there will be a short (10-15 minute) quiz at the end of class. These will be "open book and notes," and calculators/laptops/smartphones will be allowed. The quizzes will often be quite similar to a homework problem from that week; if you can do the homework and have been awake in class, you should have no trouble with the quiz. Quizzes are each graded out of 10 points, and your lowest quiz score will be dropped.

Applied Statistical Exegeses [ASEs]: Roughly once a week you will write a 1-2 page explanation of a statistical result whose description you found on a website, in an article you read for pleasure or for your studies, in a textbook from another class, or in some other source you find on your own (after consultation with your instructor).

The idea for these write-ups will be to take information of a statistical nature you find elsewhere and to explain it in detail using the terminology and methods of this class — and then to think about it critically and to see if you can offer suggestions for how it might be improved. More information about these ASEs will follow soon.

Exams: We will have three midterm exams on dates to be determined (and announced at least a week in advance). Our final exam is scheduled for Wednesday, December 7th from 8:00-10:20am in our usual classroom. All exams will be open book/notes/calculator/laptop/smartphone.

Revision of work on homework, quizzes, ASEs, and tests: A great learning opportunity is often missed by students who get back a piece of work graded by their instructor and simply shrug their shoulders and move on — often depositing their graded work in a trash can without even looking at it! In fact, painful though it may be, looking over the mistakes on those returned papers is often the best way to figure out exactly where you tend to make mistakes. If you correct that work, taking the time to make sure you really understand completely what was missing or incorrect, you will often truly master the technique in question, and never again make any similar mistake.

In order to encourage students to go through this learning experience, I will allow students to hand in revised solutions to all homeworks, BIs, quizzes, ASEs, and midterms. There will be an expectation of slightly higher quality of exposition (more clear and complete explanations, all details shown, etc.) but you will be able to earn a percentage of the points you originally lost, so long as you hand in the revised work at the very next class meeting. The percentage you can earn back is given in the "revision %" column of the table below.

Green points: I am trying to reduce the carbon footprint of my classes.
So I ask that you reuse paper whenever possible, by taking any pages you can find that are blank on one side (handouts from other classes, drafts of your work for this or other classes, etc.), putting a big "X" over the previously used side, and doing your HW, ASEs, revisions, etc., for this class on the blank side. To encourage this, I will keep track of how many such reused pages you hand in, and they will be worth Green Points extra credit at the end of the term. Note that submitting work electronically is an even more eco-friendly approach, so if you submit any work by e-mail, you will get a Green Point for each page you saved in that way.

Grades: On quiz or exam days, attendance is required — if you miss a quiz or exam, you will get a zero as score; you will be able to replace that zero only if you are regularly attending class and have informed me [e.g., by e-mail], in advance, of your valid reason for missing that day.

In each grading category, the lowest n scores of that type will be dropped, where n is the value in the "# dropped" column. The total remaining points will be multiplied by a normalizing factor so as to make the maximum possible be 100. Then the different categories will be combined, each weighted by the "course %" from the following table, to compute your total course points out of 100. Your letter grade will then be computed in a manner not more strict than the traditional "90-100% is an A, 80-90% a B, etc." method. [Note that the math department does not give "+"s or "-"s.]

|               | pts each | # of such | # dropped | revision % | course % |
|---------------|----------|-----------|-----------|------------|----------|
| Homework      | 5/prob   | ≈65 probs | 5 probs   | 75%        | 15%      |
| Big Ideas     | 2        | ≈35       | 5         | 100%       | 5%       |
| Quizzes       | 12       | ≈10       | 1         | 75%        | 12%      |
| ASEs          | 1        | ≈10       | 2         | 75%        | 15%      |
| Midterm exams | >100     | 3         | 0         | 50%        | 36%      |
| Final exam    | >200     | 1         | 0         | 0%         | 17%      |
| Green points  | 1/page   | ≤200      | ?         | 0%         | XC       |

Contact outside class: Over the years I have been teaching, I have noticed that the students who come to see me outside class are very often the ones who do well in my classes. Now correlation is not causation, but why not put yourself in the right statistical group and drop in sometime?
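The drop-lowest / normalize / weight procedure described under Grades can be sketched in a few lines; the scores and the two categories shown here are hypothetical, just to make the arithmetic concrete (the real gradebook has more categories and different numbers).

```python
def category_score(scores, n_drop, max_each):
    """Drop the lowest n_drop scores, then rescale so the max possible is 100."""
    kept = sorted(scores)[n_drop:]            # drop the lowest n_drop scores
    return 100 * sum(kept) / (max_each * len(kept))

# Hypothetical student, hypothetical scores -- just to show the arithmetic.
quiz_pct = category_score([7, 9, 10, 4, 8], n_drop=1, max_each=10)   # 85.0
hw_pct   = category_score([5, 5, 4, 3, 5, 5], n_drop=1, max_each=5)  # 96.0

# Combine categories, each weighted by its "course %" column.
course_points = 0.12 * quiz_pct + 0.15 * hw_pct   # ... plus the other categories
print(round(quiz_pct, 1), round(hw_pct, 1), round(course_points, 1))
```

Here the dropped quiz (the 4) and the dropped homework problem (the 3) never enter the average, which is exactly what the "# dropped" column promises.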
I am always in my office, GCB 314D, during official office hours. If you want to talk to me privately and/or cannot make those times, please mention it to me in class or by e-mail, and we can find another time. Please feel free to contact me for help also by e-mail at jonathan@poritz.net, to which I will try to respond quite quickly (usually within the day, often much more quickly); be aware, however, that it is hard to do complex mathematics by e-mail, so if the issue you raise in an e-mail is too hard for me to answer in that form, it may well be better if we meet before the next class, or even talk on the telephone (in which case, include in your e-mail a number where I can reach you).

A request about e-mail: E-mail is a great way to keep in touch with me, but since I tell all my students that, I get a lot of e-mail. So to help me stay organized, please put your full name and the course name and time, like "Math 156 9:05", in the subject line of all messages to me.

Early alert: This course is part of CSU-Pueblo's general education program, and participates in the Early Alert program. Early in the semester, information about student performance in this class will be communicated to Student Academic Services. This information is then relayed to faculty academic advisors and to advisors in the first year program. Your advisor may then ask to meet with you to discuss your progress. The program is designed to promote success among our students through proactive advising, and through referral to appropriate student support centers. The effort continues throughout the semester, and instructor concerns can be posted to the Early Alert system at any time.

Academic integrity: Mathematics is more effectively and easily learned — and more fun — when you work in groups. However, all work you turn in must be your own, and any form of cheating is grounds for an immediate F in the course for all involved parties.
For details of what constitutes academic dishonesty, the processes that are started when it is violated, and your rights in such proceedings, see The Student Code of Conduct. In any case, it is always a good idea to ask your instructor if you want to do something which you are concerned might be, or even might appear to be, an act of academic dishonesty.

Nota bene: Most rules on due dates, admissibility of make-up work, etc., will be interpreted with great flexibility for students who are otherwise in good standing (i.e., regular classroom attendance, homework (nearly) all turned in on time, no missing quizzes and tests, etc.) when they experience temporary emergency situations. Please speak to me — the earlier, the better — in person should this be necessary for you.

Consequences of Open Book Policy: As noted above, you will always be allowed to use any book, notes, laptop, smartphone, calculator, website, etc. you like during quizzes and tests. Please note that this should not substantially change the way you study for this class, nor does it mean you can afford to read long explanations during a test to make up for the fact that you didn't study — there just isn't time to do this during a quiz. I've seen students who tried to do this fail the class!

The best way to think of the open-book policy is that you must know everything, be aware of all the formulæ and how to use them ... only you don't need to memorize every little detail of a formula, because you can check it in your notes or book before you use it on a test. E.g., the open book is good to help you with questions like "was I supposed to divide by $n$ or $n-1$ in the formula for the standard deviation", but not questions like "what in the world is the standard deviation" — if you are asking yourself something like that second question during a test, it is too late to use the book or your notes to get the answer.
So: study, but no need to have test anxiety about memorizing the ugly details of all the complicated formulæ.
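To see the "divide by $n$ or $n-1$" question above worked out once in your notes, here is a sample-standard-deviation computation done step by step in Python (one of the free tools you are welcome to use in this class); the data set is invented just for the illustration.

```python
import math
import statistics

# Made-up sample for illustration (any small data set works).
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)

# Step 1: sample mean, x-bar = (sum of the x_i) / n.
xbar = sum(data) / n                      # = 5.0

# Step 2: sum of squared deviations from the mean.
ss = sum((x - xbar) ** 2 for x in data)   # = 32.0

# Step 3: sample standard deviation -- note the n - 1 in the denominator.
s = math.sqrt(ss / (n - 1))

print(f"x-bar = {xbar}, s = {s:.4f}")

# Cross-check with the standard library, which also divides by n - 1:
assert math.isclose(s, statistics.stdev(data))
```

Here $s = \sqrt{32/7} \approx 2.138$; dividing by $n = 8$ instead would give $2.0$, the (different) population standard deviation, which is exactly why explaining each step matters.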
Words: One warning up front: I believe strongly that students should learn to think in the way of a subject they are learning, not merely that they become sophisticated calculators who can follow recipes. Therefore I will require you to explain all your work on HWs and tests and on the final. This doesn't mean that you have to write essay answers to purely computational questions, but it does mean that you have to tell me a word or two about what you are thinking as you do the calculations. In particular, you could hand in an answer to some problem with just a few numbers, all of which were correct — and get a 0; you could also hand in an answer with a few words explaining your numbers and get full credit, even if all of the numbers were actually wrong. I will try to give you feedback on HWs and in class on this requirement during the term, so that it does not come as a surprise during tests.
Tutoring Help: The Math Learning Center is open all semester, except for the week of Spring Break, through the Thursday of finals week (May 4th), offering registered CSU-Pueblo students free tutoring in math classes from Elementary Algebra to Calculus and Statistics. It is staffed by a Director and student tutors and is located in the Physics and Mathematics building, PM 132 — no appointment is necessary, just walk in and ask for help. The hours of operation are posted at the Center and on this page; typically, they are 8:30am-5:00pm Monday to Thursday and 8:30am-3:00pm on Friday. Note that there are specialist tutors for statistics M8:30-10:05, T12:20-2:25, W8:30-10:05, Θ12:20-2:25, and F8:30-11:10 — at other times there is general math help and the MLC director, if she is there, can help with statistics, but those are the times of the student tutors who specialize in statistics.
Accommodations: The University abides by the Americans with Disabilities Act and Section 504 of the Rehabilitation Act of 1973, which stipulate that no student shall be denied the benefits of education "solely by reason of a handicap." If you have a documented disability that may impact your work in this class for which you may require accommodations, please see the Disability Resource Coordinator as soon as possible to arrange accommodations. In order to receive accommodations, you must be registered with and provide documentation of your disability to the Disability Resource Office, which is located in the Library and Academic Resources Center, Suite 169.
It is easy to lie with statistics, but it is easier to lie without them.
Frederick Mosteller (1916 - 2006)
Forecasting is very difficult, especially about the future.
Edgar R. Fiedler (1929 - 2003)
(or maybe the Danish politician Karl Kristian Steincke; a version
is often attributed to the Nobel Laureate Niels Bohr, which is
probably based on another variant which is said to be a "Danish proverb")
Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write!
Samuel S. Wilks (1906 - 1964), paraphrasing Herbert G. Wells (1866 - 1946)
Luck is probability taken personally. It is the excitement of bad math.
Penn F. Jillette (1955 - )
The only statistics you can trust are those you falsified yourself.
Sir Winston Churchill (1874 - 1965) (Attribution to Churchill is ironically falsified)
Thirty years ago I was diagnosed with motor neurone disease, and given two and a half years to live.
I have always wondered how they could be so precise about the half.
Stephen Hawking (1942 - )
It is commonly believed that anyone who tabulates numbers is a statistician.
This is like believing that anyone who owns a scalpel is a surgeon.
Robert Hooke (1918 - ? .. not the Hooke who was a friend of Newton's!) | 2018-05-21 22:41:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3691656291484833, "perplexity": 2310.9063110124193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864558.8/warc/CC-MAIN-20180521220041-20180522000041-00355.warc.gz"} |
https://ftp.aimsciences.org/article/doi/10.3934/dcds.2014.34.4537

# American Institute of Mathematical Sciences
November 2014, 34(11): 4537-4553. doi: 10.3934/dcds.2014.34.4537
## Localization, smoothness, and convergence to equilibrium for a thin film equation
1. Department of Mathematics, Hill Center, Rutgers University, Piscataway, NJ 08854, United States
2. Department of Mathematics, Faculty of Education, Zirve University, Gaziantep, Turkey
Received April 2013; Revised February 2014; Published May 2014
We investigate the long-time behavior of weak solutions to the thin-film type equation $v_t =(xv - vv_{xxx})_x\ ,$ which arises in the Hele-Shaw problem. We estimate the rate of convergence of solutions to the Smyth-Hill equilibrium solution, which has the form $\frac{1}{24}(C^2-x^2)^2_+$, in the norm $|\!|\!| f |\!|\!|_{m,1}^2 = \int_{\mathbb{R}}(1+ |x|^{2m})|f(x)|^2 \, dx + \int_{\mathbb{R}}|f_x(x)|^2 \, dx.$ We obtain exponential convergence in the $|\!|\!| \cdot |\!|\!|_{m,1}$ norm for all $m$ with $1\leq m< 2$, thus obtaining rates of convergence in norms measuring both smoothness and localization. The localization is the main novelty, and in fact, we show that there is a close connection between the localization bounds and the smoothness bounds: Convergence of second moments implies convergence in the $H^1$ Sobolev norm. We then use methods of optimal mass transportation to obtain the convergence of the required moments. We also use such methods to construct an appropriate class of weak solutions for which all of the estimates on which our convergence analysis depends may be rigorously derived. Though our main results on convergence can be stated without reference to optimal mass transportation, essential use of this theory is made throughout our analysis.
Citation: Eric A. Carlen, Süleyman Ulusoy. Localization, smoothness, and convergence to equilibrium for a thin film equation. Discrete & Continuous Dynamical Systems, 2014, 34 (11) : 4537-4553. doi: 10.3934/dcds.2014.34.4537
https://www.physicsforums.com/threads/the-fundemental-theorum-of-calculus.233776/ | The Fundamental Theorem of Calculus
the fundamental theorem of calculus
Homework Statement
$$\int^{3}_{2} 12\,(x^2-4)^{5}\,x \, dx$$
U substitution.
The Attempt at a Solution
This is part of a FTC problem, but I find myself stumbling a little bit with the u substitution still. I'm not sure when du equals the derivative of u, and when it is just the numbers left over.
Like in this situation, I set u=x^2-4. Would du=2x or 12x?
Last edited:
If u = x^2 - 4, then the derivative du/dx = 2x. Although we technically shouldn't break up the derivative, it turns out we can do it without affecting results, and all our steps are justifiable with the chain rule. Commonly, however, we treat du/dx as a fraction, and find $$du = 2x\,dx \implies dx = du/(2x)$$.
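A quick numeric check (a sketch added here, not from the thread) confirms the substitution: with u = x^2 - 4 the integral of 12x(x^2-4)^5 from 2 to 3 becomes the integral of 6u^5 from u(2) = 0 to u(3) = 5, which is 5^6 = 15625.

```python
# Numeric check of the u-substitution for the integral of
# 12*x*(x**2 - 4)**5 over [2, 3].  With u = x**2 - 4 we have
# du = 2x dx, so the integral equals 6*u**5 integrated from 0 to 5,
# i.e. 5**6 = 15625.

def f(x):
    return 12 * x * (x**2 - 4)**5

def trapezoid(g, a, b, n=100_000):
    """Simple trapezoid-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

exact = 5**6  # value predicted by the substitution
numeric = trapezoid(f, 2.0, 3.0)
print(exact, numeric)
```

The trapezoid approximation agrees with the substitution's answer to several decimal places.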
https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-042j-mathematics-for-computer-science-spring-2015/structures/partial-orders-and-equivalence/vertical-d757201744eb/ | 2.7 Partial Orders and Equivalence
Equivalence Relations & Partial Orders
For each of the following relations, indicate whether it is an equivalence relation, a partial but not a total order, a total order, or none of the above.
1. $$\{(p,q) \;|\; p \text{ and } q \text{ are people of the same age}\}$$
Exercise 1
2. $$\{(a,b) \;|\; a \text{ is the age of someone who is not younger than anyone of age } b\}$$
Exercise 2
Ages can be translated into days or similar numerical units, which reveals that we have just given a somewhat awkward description of the relation greater-or-equal on these numbers.
3. $$\{(p,q) \;|\; p \text{ is a person whose age is an integer multiple of person } q\text{'s age}\}$$
Exercise 3
Two different people can be the same age, so the relation is not antisymmetric, ruling out partial order and total order. It is not symmetric, since a 4-year-old is related to a 2-year-old, but not conversely, ruling out equivalence relation. Note that as a relation on their ages, this would be the same as the divisibility relation on nonnegative integers, for which partial but not a total order would have been correct. Yes, this was a bit of a trick question.
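The divisibility comparison mentioned in the solution can be checked mechanically (a sketch added here, not from the course materials): on positive integers, "a is an integer multiple of b" is reflexive, antisymmetric and transitive, hence a partial order, but not total, since e.g. 4 and 6 are incomparable.

```python
# Sanity check of the divisibility relation on a small set of
# positive integers: partial order, but not a total order.

from itertools import product

nums = [1, 2, 3, 4, 6, 12]
R = {(a, b) for a, b in product(nums, repeat=2) if a % b == 0}

reflexive = all((a, a) in R for a in nums)
antisymmetric = all(a == b for a, b in R if (b, a) in R)
transitive = all((a, c) in R
                 for a, b in R
                 for c in nums if (b, c) in R)
total = all((a, b) in R or (b, a) in R
            for a, b in product(nums, repeat=2))

print(reflexive, antisymmetric, transitive, total)  # True True True False
```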
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=rm&paperid=4835&option_lang=eng
Uspekhi Mat. Nauk, 1973, Volume 28, Issue 1(169), Pages 65–130 (Mi umn4835)
$J$-expanding matrix functions and their role in the analytical theory of electrical circuits
A. V. Efimov, V. P. Potapov
Abstract: Chapter I establishes the essential properties of the $\mathscr A$-matrix of a passive multipole depending on the number of its branches. These properties are based on Langevin's theorem. A classification of the basic objects of investigation: $J$-expanding matrix functions (class $\mathfrak M$), and also positive matrix functions (class $\mathfrak B$), is introduced. Chapter II gives an account of a theory of matrix functions of class $\mathfrak M$. It also investigates the simplest (elementary and primary) matrices of this class. The fact is established that elementary (and primary) factors can be split off from a given matrix of class $\mathfrak M$. In particular, the factorizability of a rational reactive matrix of class $\mathfrak M$ is established.
Chapters III–IV set forth a theory of various subclasses of matrix functions of class $\mathfrak M$: $\mathfrak M_{sl}$, $\mathfrak M_{cgl}$, $\mathfrak M_{lr}$. The realizability of the matrix functions of each of these subclasses as $\mathscr A$-matrices of passive multipoles with the corresponding provision for branches is established.
The fact that they are realizable is proved by the construction of a corresponding multipole.
The last chapter is concerned with a generalization of Darlington's theorem, which leads to a realization of functions of the subclasses $\mathfrak M_{clr}$ and $\mathfrak M_{cglr}$ as $\mathscr A$-matrices or $z$-matrices of dissipative multipoles.
English version:
Russian Mathematical Surveys, 1973, 28:1, 69–140
UDC: 519.53+512.83
MSC: 15A48, 15A15, 15A23
Citation: A. V. Efimov, V. P. Potapov, “$J$-expanding matrix functions and their role in the analytical theory of electrical circuits”, Uspekhi Mat. Nauk, 28:1(169) (1973), 65–130; Russian Math. Surveys, 28:1 (1973), 69–140
Citation in format AMSBIB
\Bibitem{EfiPot73} \by A.~V.~Efimov, V.~P.~Potapov \paper $J$-expanding matrix functions and their role in the analytical theory of electrical circuits \jour Uspekhi Mat. Nauk \yr 1973 \vol 28 \issue 1(169) \pages 65--130 \mathnet{http://mi.mathnet.ru/umn4835} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=394287} \zmath{https://zbmath.org/?q=an:0268.94009|0285.94009} \transl \jour Russian Math. Surveys \yr 1973 \vol 28 \issue 1 \pages 69--140 \crossref{https://doi.org/10.1070/RM1973v028n01ABEH001397}
This publication is cited in the following articles:
1. D. Z. Arov, “Darlington realization of matrix-valued functions”, Math. USSR-Izv., 7:6 (1973), 1295–1326
2. S. A. Orlov, “Nested matrix disks analytically depending parameter, and theorems on the invariance radii of limiting disks”, Math. USSR-Izv., 10:3 (1976), 565–613
3. I. V. Kovalishina, “Analytic theory of a class of interpolation problems”, Math. USSR-Izv., 22:3 (1984), 419–463
4. Elsa Cortina, “j-Expansive matrix-valued functions and Darlington realization of transfer-scattering matrices”, Journal of Mathematical Analysis and Applications, 92:2 (1983), 435
5. N. K. Al'bov, “On a criterion for solvability of Fredholm equations”, Math. USSR-Sb., 55:1 (1986), 113–119
6. L. A. Sakhnovich, “Factorization problems and operator identities”, Russian Math. Surveys, 41:1 (1986), 1–64
7. Rainer Pauli, “Darlington's theorem and complex normalization”, Int J Circ Theor Appl, 17:4 (1989), 429
8. AndréC.M Ran, Leiba Rodman, “Laurent interpolation for rational matrix functions and a local factorization principle”, Journal of Mathematical Analysis and Applications, 164:2 (1992), 524
9. Vladimir Bolotnikov, “On a general moment problem on the half axis”, Linear Algebra and its Applications, 255:1-3 (1997), 57
10. Pedro Albgría, Mischa Cotlar, “Generalized Toeplitz Forms and Interpolation Colligations”, Math. Nachr, 190:1 (1998), 5
11. N.N. Chernovol, “The degenerate Carathéodory problem and the elementary multiple factor”, Zhurn. matem. fiz., anal., geom., 1:2 (2005), 225–244
12. Theory Probab. Appl., 51:2 (2007), 342–350
13. A.E.. Choque-Rivero, L.E.. Garza, “Moment perturbation of matrix polynomials”, Integral Transforms and Special Functions, 2014, 1
14. A. E. Choke Rivero, L. E. Garza Gaona, “Matrix orthogonal polynomials associated with perturbations of block Toeplitz matrices”, Russian Math. (Iz. VUZ), 61:12 (2017), 57–69
15. Yu. M. Dyukarev, “The zeros of determinants of matrix-valued polynomials that are orthonormal on a semi-infinite or finite interval”, Sb. Math., 209:12 (2018), 1745–1755
https://www.physicsforums.com/threads/expressing-an-integral-as-a-sum-of-terms.982843/ | Expressing an Integral as a sum of terms
Homework Statement:
Suppose a function f(x) is given, with x varying continuously from a to b.
Now suppose we are asked to sum the values of f(x) over x between a and b.
Relevant Equations:
e.g
Can we write it as
$$f(a)+f(a+dx)+f(2a+dx)+f(3a+dx)+.......f(b)=\int^b_a f(x)dx$$
e.g
Can we write it as
$$f(a)+f(a+dx)+f(2a+dx)+f(3a+dx)+.......f(b)=\int^b_a f(x)dx$$......(?)
Although $$\int f(x)dx$$ gives the area traced out by the function and the x-axis between a and b.
Thanks.
Last edited by a moderator:
I might be missing something here, but don't we need to multiply each of the $f(x_i)$ by a $\delta x$ to get an area? Otherwise, I thought that the area under a certain f(x) value for a continuous function is 0.
As to the rewritten version, what if $b < 2a$. To me it would seem more intuitive to write out something like:
$$Integral = f(a)\delta x + f(a + \Delta x)\delta x + f(a + 2\Delta x)\delta x + f(a + 3\Delta x)\delta x + ... + f(b)\delta x$$
Mark44
Mentor
Can we write it as
$$f(a)+f(a+dx)+f(2a+dx)+f(3a+dx)+.......f(b)=\int^b_a f(x)dx$$
No.
As already mentioned by another poster, your coefficients are in the wrong places. The sum above is often written as
##f(a)+f(a + \Delta x)+f(a + 2\Delta x)+f(a + 3\Delta x) + \dots + f(a + n \Delta x)##
where ##\Delta x = \frac {b - a} n##
This sum is called a Riemann sum, which is used to approximate a definite integral. For suitable functions (i.e., functions that are continuous on the interval [a, b]), the integral ##\int_a^b f(x) dx## is defined to be equal to the limit of the Riemann sum, as n goes to infinity.
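As a small illustration of that definition (a sketch added here, not part of the thread), a numeric experiment shows a left-endpoint Riemann sum approaching the integral; `riemann_sum` is our own helper name:

```python
# Illustration: the Riemann sum, with the Delta-x factor included,
# approaches the definite integral as n grows.
# Example: the integral of x**2 over [0, 1] equals 1/3.

def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

square = lambda x: x * x
for n in (10, 100, 10_000):
    print(n, riemann_sum(square, 0.0, 1.0, n))
# The printed values approach 1/3 as n grows; a bare sum of f-values
# without the dx factor would instead grow without bound.
```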
$$f(a)+f(a+dx)+f(2a+dx)+f(3a+dx)+.......f(b)=\int^b_a f(x)dx$$......(?)
The two sides will not be equal. As the user above said, the left side is a Riemann sum and is only used as an approximation.
Mark44
Mentor
As the user above said, the left side is a Riemann Sum
... that has no relationship with the integral on the right side.
https://pypi.org/project/django-ssify/0.2.6/ | Two-phased rendering using SSI.
## Project description
For full list of contributors see AUTHORS section at the end.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Dependencies
============
* Python >= 2.6
* Django >= 1.5
Installation
============
* ssify.middleware.SsifyMiddleware on top,
ssify.middleware.PrepareForCacheMiddleware after it,
* ssify.middleware.LocaleMiddleware instead of stock LocaleMiddleware.
3. Make sure you have 'django.core.context_processors.request' in your
TEMPLATE_CONTEXT_PROCESSORS.
4. Configure your webserver to use SSI ('ssi on;' with Nginx).
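For illustration only (the server name, port and upstream address below are placeholders, not values from this project), an Nginx server block with SSI processing enabled might look like:

```nginx
# Hypothetical example: enable SSI processing for responses proxied
# from the Django application, so the <!--#include ...--> directives
# emitted during two-phased rendering are expanded by the webserver.
server {
    listen 80;
    server_name example.com;

    location / {
        ssi on;                      # turn on SSI processing
        proxy_pass http://127.0.0.1:8000;
    }
}
```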
Usage
=====
1. Define your included urls using the @ssi_included decorator.
2. Define your ssi variables using the @ssi_variable decorator.
Authors
=======
## Project details
https://socratic.org/questions/if-the-sum-of-three-consecutive-even-integers-is-30-what-are-the-integers | # If the sum of three consecutive even integers is -30, what are the integers?
Apr 26, 2016
The three even integers are $- 12$, $- 10$ and $- 8$.
#### Explanation:
Let the three even integers be $x$, $x + 2$ and $x + 4$. Hence,
$x + x + 2 + x + 4 = - 30$ or
$3 x + 6 = - 30$ or $3 x = - 30 - 6 = - 36$
or $x = - \frac{36}{3} = - 12$
Hence the three even integers are $- 12$, $- 10$ and $- 8$.
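A brute-force check (a sketch added here, not part of the original answer) confirms the algebra:

```python
# Find all triples of consecutive even integers summing to -30.
solutions = [(x, x + 2, x + 4)
             for x in range(-100, 101, 2)
             if x + (x + 2) + (x + 4) == -30]
print(solutions)  # [(-12, -10, -8)]
```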
https://projecteuclid.org/euclid.aoap/1465905014 | ## The Annals of Applied Probability
### Bernoulli and tail-dependence compatibility
#### Abstract
The tail-dependence compatibility problem is introduced. It raises the question whether a given $d\times d$-matrix of entries in the unit interval is the matrix of pairwise tail-dependence coefficients of a $d$-dimensional random vector. The problem is studied together with Bernoulli-compatible matrices, that is, matrices which are expectations of outer products of random vectors with Bernoulli margins. We show that a square matrix with diagonal entries being 1 is a tail-dependence matrix if and only if it is a Bernoulli-compatible matrix multiplied by a constant. We introduce new copula models to construct tail-dependence matrices, including commonly used matrices in statistics.
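For intuition only (a Monte Carlo sketch of the definition, not a construction from the paper), a Bernoulli-compatible matrix is E[X X^T] for a random vector X with Bernoulli margins; here X has comonotone Bernoulli(0.5) and Bernoulli(0.3) coordinates, so E[X X^T] should be approximately [[0.5, 0.3], [0.3, 0.3]]:

```python
# Estimate B = E[X X^T] by Monte Carlo for a 2-dimensional vector X
# with Bernoulli margins driven by a single uniform variable.

import random

random.seed(0)
n = 200_000
acc = [[0, 0], [0, 0]]
for _ in range(n):
    u = random.random()
    x = (1 if u < 0.5 else 0, 1 if u < 0.3 else 0)  # comonotone Bernoullis
    for i in range(2):
        for j in range(2):
            acc[i][j] += x[i] * x[j]

B = [[acc[i][j] / n for j in range(2)] for i in range(2)]
print(B)
```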
#### Article information
Source
Ann. Appl. Probab. Volume 26, Number 3 (2016), 1636-1658.
Dates
First available in Project Euclid: 14 June 2016
https://projecteuclid.org/euclid.aoap/1465905014
Digital Object Identifier
doi:10.1214/15-AAP1128
Mathematical Reviews number (MathSciNet)
MR3513601
Zentralblatt MATH identifier
06618837
#### Citation
Embrechts, Paul; Hofert, Marius; Wang, Ruodu. Bernoulli and tail-dependence compatibility. Ann. Appl. Probab. 26 (2016), no. 3, 1636--1658. doi:10.1214/15-AAP1128. https://projecteuclid.org/euclid.aoap/1465905014
http://blog.inf.ed.ac.uk/squinney/ | ## User management improvements
Posted November 23, 2017 by squinney
Categories: Uncategorized
Management of local users and groups (i.e. those in /etc/passwd and /etc/group) is done using the LCFG auth component. One feature that has always been lacking is the ability to create a home directory where necessary and populate it from a skeleton directory (typically this is /etc/skel). The result of this feature being missing is that it is necessary to add a whole bunch of additional file component resources to create the home directory and that still doesn’t provide support for a skeleton directory.
Recently I needed something along those lines so I’ve taken the chance to add a couple of new resources – create_home_$ and skel_dir_$. When the create_home resource is set to true for a user the home directory will be created by the component and the permissions set appropriately. By default the directory will be populated from /etc/skel but it could be anything. This means it is now possible to setup a machine with a set of identically initialised local users.
For example:
auth.pw_name_cephadmin cephadmin
auth.create_home_cephadmin yes /* Ensure home directory exists */
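For illustration only, a rough Python sketch of what the new create_home behaviour amounts to (the real auth component is written in Perl and also sets ownership, which is skipped here): create the home directory if it is missing, populate it from a skeleton directory and restrict its permissions.

```python
# Sketch: create a home directory if absent, seed it from a skeleton
# directory (dotfiles included) and set mode 0700.

import os
import shutil
import stat
import tempfile

def create_home(home, skel="/etc/skel"):
    if not os.path.isdir(home):
        shutil.copytree(skel, home)      # also copies dotfiles
        os.chmod(home, stat.S_IRWXU)     # mode 0700

# Demonstration with throwaway temporary paths:
skel = tempfile.mkdtemp()
open(os.path.join(skel, ".bashrc"), "w").close()
home = os.path.join(tempfile.mkdtemp(), "cephadmin")
create_home(home, skel)
print(sorted(os.listdir(home)))  # ['.bashrc']
```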
## LCFG Core: resource types
Posted November 21, 2017 by squinney
Categories: Uncategorized
The recent round of LCFG client testing using real LCFG profiles from both Informatics and the wider community has shown that the code is now in very good shape and we’re close to being able to deploy to a larger group of machines. One issue that this testing has uncovered is related to how the type of a resource is specified in a schema. A type in the LCFG world really just controls what regular expression is used to validate the resource value. Various type annotations can be used (e.g. %integer, %boolean or %string) to limit the permitted values, if there is no annotation it is assumed to be a tag list and this has clearly caught out a few component authors. For example:
@foo %integer
foo
@bar %boolean
bar
@baz
baz
@quux sub1_$sub2_$
quux
sub1_$sub2_$
Both of the last two examples (baz and quux) are tag lists, the first just does not have any associated sub-resources.
The compiler should not allow anything but valid tag names (which match /^[a-zA-Z0-9_]+$/) in a tag list resource but due to some inadequacies it currently permits pretty much anything. The new core code is a lot stricter and thus the v4 client will refuse to accept a profile if it contains invalid tag lists. Bugs have been filed against a few components (bug#1016 and bug#1017). It’s very satisfying to see the new code helping us improve the quality of our configurations.

## yum cache and disk space

Posted November 15, 2017 by squinney

Categories: Uncategorized

At a recent LCFG Deployers meeting we discussed a problem with yum not fully cleaning the cache directory even when the yum clean all command is used. This turns out to be related to how the cache directory path is defined in /etc/yum.conf as /var/cache/yum/$basearch/$releasever. As the release version changes with each minor platform release (e.g. 7.3, 7.4) the old directories can become abandoned. At first this might seem like a trivial problem but these cache directories can be huge, we have seen instances where gigabytes of disk space have been used and cannot be simply reclaimed.

To help fix this problem I’ve added a new purgecache method to the LCFG yum component. This takes a sledgehammer approach of just deleting everything in the /var/cache/yum/ directory. This can be run manually whenever required or called regularly using something like cron. In Informatics it is now configured to run weekly on a Sunday like this:

!cron.objects mADD(yum_purge)
cron.object_yum_purge yum
cron.method_yum_purge purgecache
cron.run_yum_purge AUTOMINS AUTOHOUR * * sun

## LCFG autoreboot

Posted November 10, 2017 by squinney

Categories: Uncategorized

One of the tools which saves us an enormous amount of effort is our LCFG autoreboot component. This watches for reboot requests from other LCFG components and then schedules the reboot for the required date/time.
One nice feature is that it can automatically choose a reboot time from within a specified range. This means that when many similarly configured machines schedule a reboot they don’t all go at the same time, which could result in the overloading of services that are accessed at boot time.
Recently it was reported that the component has problems parsing single-digit times, which results in the reboot not being scheduled. Amazingly this bug has lain undetected for approximately 4 years, during which time a significant chunk of machines have presumably been failing to reboot on time. As well as resolving that bug I also took the chance to fix a minor issue related to a misunderstanding of the shutdown command options which resulted in the default delay time being set to 3600 minutes instead of 3600 seconds. Thankfully we change that delay locally so it never had any direct impact on our machines.
Whilst fixing those two bugs I discovered another issue related to sending reboot notifications via email: if that failed for any reason the reboot would not be scheduled. The component will now report the error but continue. This is a common problem we see in LCFG components where problems are handled with the Fail method (which logs and then exits) instead of just logging with Error. This is particularly a problem since an exit with non-zero code is not the same as dying, which can be caught with the use of the eval function. Since a call to Fail ends the current process immediately this can lead to a particularly annoying situation where a failure in a Configure method results in a failure in the Start method. This means that a component might never reach the started state, a situation from which it is difficult to recover. We are slowly working our way through eradicating this issue from core components but it’s going to take a while.
Recently we have had feedback from some of our users that the reboot notification message was not especially informative.
The issue is related to us incorporating the message into the message of the day, which sometimes leads to it being left lying around out-of-date for some time. The message would typically say something like “A reboot has been scheduled for 2am on Thursday”, which is fine as long as the message goes away once the reboot has been completed. To resolve this I took advantage of a feature I added some years ago which passes the reboot time as a Perl DateTime object (named shutdown_dt) into the message template. With a little bit of thought I came up with the following which uses the Template Toolkit Date plugin:
[%- USE date -%]
[%- USE wrap -%]
[%- FILTER head = wrap(70, '*** ', '*** ') -%]
This machine ([% host.VALUE %]) requires a reboot as important updates are available.
[%- END %]
[% IF enforcing.VALUE -%]
[%- FILTER body = wrap(70, ' ', ' ') -%]
It will be unavailable for approximately 15 minutes beginning at [% date.format( time = shutdown_dt.VALUE.epoch, format = '%H:%M %A %e %B %Y', locale = 'en_GB') %]. Connected users will be warned [% shutdown_delay.VALUE %] minutes beforehand.
[%- END %]
[% END -%]
This also uses the wrap plugin to ensure that the lines are neatly arranged and the header section has a “*** ” prefix for each line to help grab the attention of the users.
## LCFG Core: Resource import and export
Posted November 7, 2017 by squinney
Categories: Uncategorized
Tags: ,
As part of porting the LCFG client to the new core libraries the qxprof and sxprof utilities have been updated. This has led to the development of a new high-level LCFG::Client::Resources Perl library which can be used to import, merge and export resources in all the various required forms. The intention is that eventually all code which uses the LCFG::Resources Perl library (in particular the LCFG::Component framework) will be updated to use this new library.
The new library provides a very similar set of functionality and will appear familiar but I’ve taken the opportunity to improve some of the more awkward parts. Here’s a simple example taken from the perldoc:
# Load client resources from DB
my $res1 = LCFG::Client::Resources::LoadProfile("mynode","client");
# Import client resources from environment variables
my $res2 = LCFG::Client::Resources::Import("client");
# Merge two sets of resources
my $res3 = LCFG::Client::Resources::Merge( $res1, $res2 );
# Save the result as a status file
LCFG::Client::Resources::SaveState( "client", $res3 );
The library can import resources from: Berkeley DB, status files, override files, shell environment and explicit resource specification strings. It can export resources as status files, in a form that can be evaluated in the shell environment and also in various terse and verbose forms (e.g. the output styles for qxprof).
The LCFG::Resources library provides access to resources via a reference to a hash which is structured something like:
{
  'sysinfo' => {
    'os_id_full' => {
      'DERIVE' => '/var/lcfg/conf/server/releases/develop/core/include/lcfg/defaults/sysinfo.h:42',
      'VALUE' => 'sl74',
      'TYPE' => undef,
      'CONTEXT' => undef
    },
    'path_lcfgconf' => {
      'DERIVE' => '/var/lcfg/conf/server/releases/develop/core/include/lcfg/defaults/sysinfo.h:100',
      'VALUE' => '/var/lcfg/conf',
      'TYPE' => undef,
      'CONTEXT' => undef
    },
  }
}
The top level key is the component name, the second level is the resource name and the third level is the name of the resource attribute (e.g. VALUE or TYPE). The new LCFG::Client::Resources library takes a similar approach with the top level key being the component name but the value for that key is a reference to a LCFG::Profile::Component object. Resource objects can then be accessed by using the find_resource method which returns a reference to a LCFG::Resource object. For example:
my $res = LCFG::Client::Resources::LoadProfile("mynode","sysinfo");
my $sysinfo = $res->{sysinfo};
my $os_id_full = $sysinfo->find_resource('os_id_full');
say $os_id_full->value;
Users of the qxprof and sxprof utilities should not notice any differences but hopefully the changes will be appreciated by those developing new code.
## Testing the new LCFG core : Part 2
Posted May 18, 2017 by squinney
Categories: Uncategorized
Following on from the basic tests for the new XML parser, the next step is to check if the new core libs can be used to correctly store the profile state into a Berkeley DB file. This process is particularly interesting because it involves evaluating any context information and selecting the correct resource values based on the contexts. Effectively the XML profile represents all possible configuration states whereas only a single state is stored in the DB.
The aim was to compare the contents of the old and new DBs for each Informatics LCFG profile. Firstly I used rdxprof to generate DB files using the current libs:
cd /disk/scratch/profiles/inf.ed.ac.uk/
for i in $(find -maxdepth 1 -type d -printf '%f\n' | grep -v '^\.');\
do \
echo $i; \ /usr/sbin/rdxprof -v -u file:///disk/scratch/profiles/$i; \
done
This creates a DB file for each profile in the /var/lcfg/conf/profile/dbm directory. For 1500-ish profiles this takes a long time…
The next step is to do the same with the new libs:
find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) { eval { $p = LCFG::Profile->new_from_xml($_); \
$n =$p->nodename; \
$p->to_bdb( "/disk/scratch/results/dbm/$n.DB2.db" ) }; \
print $@ if$@ }'
This creates a DB file for each profile in the /disk/scratch/results/dbm directory. This is much faster than using rdxprof.
The final step was to compare each DB. This was done simply using the perl DB_File module to tie each DB to a hash and then comparing the keys and values. Pleasingly this has shown that the new code is generating identical DBs for all the Informatics profiles.
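The comparison step can be sketched as follows. This is an illustrative Python version (the post used the Perl DB_File module; here the standard library dbm module stands in for the Berkeley DB tie, and compare_dbs is a hypothetical helper name, not part of LCFG):

```python
import dbm


def compare_dbs(path_a, path_b):
    """Load each DB file into a dict, then diff the keys and values."""
    with dbm.open(path_a, "r") as a, dbm.open(path_b, "r") as b:
        da = {k: a[k] for k in a.keys()}
        db = {k: b[k] for k in b.keys()}
    only_a = set(da) - set(db)      # keys present only in the first DB
    only_b = set(db) - set(da)      # keys present only in the second DB
    changed = {k for k in set(da) & set(db) if da[k] != db[k]}
    return only_a, only_b, changed
```

Two identical DBs give three empty sets; any non-empty set pinpoints where the old and new code disagree.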
Now I need to hack this together into a test script which other sites can use to similarly verify the code on their sets of profiles.
## Testing the new LCFG core : Part 1
Posted May 17, 2017 by squinney
Categories: Uncategorized
Tags:
The project to rework the core LCFG code is rattling along and has reached the point where some full scale testing is needed. The first step is to check whether the new XML parser can actually just parse all of our LCFG profiles. At this stage I’m not interested in whether it can do anything useful with the data once loaded, I just want to see how it handles a large number of different profiles.
Firstly a source of XML profiles is needed, I grabbed a complete local copy from our lcfg server:
rsync -av -e ssh lcfg:/var/lcfg/conf/server/web/profiles/ /disk/scratch/profiles/
I then ran the XML parser on every profile I could find:
find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) { eval { LCFG::Profile->new_from_xml($_) }; print $@ if $@ }'
Initially I hit upon bug#971 which is a genuine bug in the schema for the gridengine component. As noted previously, this was found because the new libraries are much stricter about what is considered to be valid data. With that bug resolved I can now parse all 1525 LCFG XML profiles for Informatics.
## LCFG Core Project
Posted May 2, 2017 by squinney
Categories: Uncategorized
Tags: ,
Over the last few years I have been working on (and off) creating a new set of “core” libraries for LCFG. This is now finally edging towards the point of completion with most of the remaining work being related to polishing, testing and documentation.
This project originated from the need to remove dependencies on obsolete Perl XML libraries. The other main aims were to create a new OO API for resources/components and packages which would provide new opportunities for code reuse between client, ngeneric and server.
Over time several other aims have been added:
• Platform independence / portability.
• Make it possible to support new languages.
• Ensure resource usage remains low.
Originally this was to be a rewrite just in Perl but the heavy resource usage of early prototypes showed it was necessary to move at least some of the functionality into C libraries. Since that point the chance to enhance portability was also identified and included in the aims for the project. As well as making it possible to target other platforms (other Linux or Unix, e.g. MacOSX), the enhanced portability should make it much simpler and quicker to port to new Redhat based platforms.
The intention is that the new core libraries will be totally platform-independent and portable, for example, no hardwired paths or assumptions that the platform is Redhat/RPM (or even Linux) based. The new core is split into two parts: C and Perl libraries, with the aim that as much functionality as possible is in the C libraries to aid reuse from other languages (e.g. Python).
The aim is that these libraries should be able to co-exist alongside current libraries to ease the transition.
I have spent a lot of time on documenting the entire C API. The documentation is formatted into html and pdf using doxygen. I had not used this tool before but I am very pleased with the results and will definitely be using it more in the future. Although a slow task, documenting the functions has proved to be a very useful review process. It has helped me find many inconsistencies between functions with similar purposes and has led to numerous small improvements.
## LCFG Client
The client has been reworked to use new Core libraries. This is where the platform-specific knowledge of paths, package manager, etc, is held.
## Resource Support
| Format | Read | Write |
| --- | --- | --- |
| XML | YES | NO |
| DB | YES | YES |
| Status | YES | YES |
| Environment | YES | YES |
There is currently no support for reading header files or source profiles but this could be added later.
There is new support for finding the “diffs” between resources, components and profiles.
## Package Support
| Format | Read | Write |
| --- | --- | --- |
| XML | YES | YES |
| rpmcfg | YES | YES |
| rpmlist | YES | YES |
There is currently no support for reading package list files but this could be added later.
## Remaining Work
There is still work to be done on the top-level profile handling code and the code for finding the differences between resources, components and profiles needs reworking. Also the libraries for reading/writing XML files and Berkeley DB need documentation.
That is all the remaining work required on the “core” libraries. After that there will be some work to do on finishing the port of the client to the new libraries. I’ve had that working before but function APIs have changed since then; even so, I don’t expect it to require a huge amount of work.
## PostgreSQL 9.6
Posted September 29, 2016 by squinney
Categories: Uncategorized
I’m currently working on upgrading both the PkgForge build farm and the BuzzSaw log file processor services to SL7.2. Both of these services use PostgreSQL databases and have been stuck on 9.2 for a while pending the server upgrades. The latest version of PostgreSQL (9.6) is due to be released today so I thought I would give the release candidate a whirl to see how I get on. There are numerous benefits over 9.2; in particular I am planning to use the new jsonb column type to store PkgForge build information which was previously serialised to a YAML file, and being able to query that data directly from the DB should be very useful. The feature I am most interested in trying from 9.6 is parallel execution of sequential scans, joins and aggregates. This has the potential to make some of the large queries for the BuzzSaw DB much faster. My very simplistic first tests suggest that setting the max_parallel_workers_per_gather option to 4 will reduce the query time by at least 50%. It will need a bit more investigation and analysis to check it really is helpful, but that’s an encouraging result.
A 2ndQuadrant blog post has some useful information on the new parallel sequential scan feature.
## LCFG Client: Hasn’t died yet…
Posted August 2, 2016 by squinney
Categories: Uncategorized
Tags: ,
Coming back from holiday I was pleased to see that I have a v4 client instance which has now been running continuously for nearly 3 weeks without crashing. It hasn’t done a massive amount in that time but it has correctly applied some updates to both resources and packages.
In the time I’ve not been on holiday I’ve been working hard on documenting the code. For the C code I’ve chosen to use doxygen; it does a nice job of summarizing all the functions in each library and it makes it very simple to write the documentation using simple markup right next to the code for each function. I’ve also been working through some of the Perl modules and adding POD where necessary. It might soon be at the stage where others can pick it up and use it without needing to consult me for the details…
# NAG Library Function Document: nag_tsa_resid_corr (g13asc)
## 1 Purpose
nag_tsa_resid_corr (g13asc) is a diagnostic checking function suitable for use after fitting a Box–Jenkins ARMA model to a univariate time series using nag_tsa_multi_inp_model_estim (g13bec). The residual autocorrelation function is returned along with an estimate of its asymptotic standard errors and correlations. Also, nag_tsa_resid_corr (g13asc) calculates the Box–Ljung portmanteau statistic and its significance level for testing model adequacy.
## 2 Specification
#include <nag.h>
#include <nagg13.h>
void nag_tsa_resid_corr (Nag_ArimaOrder *arimav, Integer n, const double v[], Integer m, const double par[], Integer narma, double r[], double rc[], Integer tdrc, double *chi, Integer *df, double *siglev, NagError *fail)
## 3 Description
Consider the univariate multiplicative autoregressive-moving average model
$\phi(B)\Phi(B^s)\left(W_t - \mu\right) = \theta(B)\Theta(B^s)\epsilon_t$ (1)
where ${W}_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,n$, denotes a time series and ${\epsilon }_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,n$, is a residual series assumed to be Normally distributed with zero mean and variance ${\sigma }^{2}\left(>0\right)$. The ${\epsilon }_{t}$'s are also assumed to be uncorrelated. Here $\mu$ is the overall mean term, $s$ is the seasonal period and $B$ is the backward shift operator such that ${B}^{r}{W}_{t}={W}_{t-r}$. The polynomials in (1) are defined as follows:
$\phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p$
is the non-seasonal autoregressive (AR) operator;
$\theta(B) = 1 - \theta_1 B - \theta_2 B^2 - \cdots - \theta_q B^q$
is the non-seasonal moving average (MA) operator;
$\Phi(B^s) = 1 - \Phi_1 B^s - \Phi_2 B^{2s} - \cdots - \Phi_P B^{Ps}$
is the seasonal AR operator; and
$\Theta(B^s) = 1 - \Theta_1 B^s - \Theta_2 B^{2s} - \cdots - \Theta_Q B^{Qs}$
is the seasonal MA operator. The model (1) is assumed to be stationary, that is the zeros of $\varphi \left(B\right)$ and $\Phi \left({B}^{s}\right)$ are assumed to lie outside the unit circle. The model (1) is also assumed to be invertible, that is the zeros of $\theta \left(B\right)$ and $\Theta \left({B}^{s}\right)$ are assumed to lie outside the unit circle. When both $\Phi \left({B}^{s}\right)$ and $\Theta \left({B}^{s}\right)$ are absent from the model, that is when $P=Q=0$, then the model is said to be non-seasonal.
The estimated residual autocorrelation coefficient at lag $l$, ${\stackrel{^}{r}}_{l}$, is computed as:
$\hat{r}_l = \frac{\sum_{t=l+1}^{n} \left(\hat{\epsilon}_{t-l} - \bar{\epsilon}\right)\left(\hat{\epsilon}_t - \bar{\epsilon}\right)}{\sum_{t=1}^{n} \left(\hat{\epsilon}_t - \bar{\epsilon}\right)^2} , \qquad l = 1, 2, \ldots$
where ${\stackrel{^}{\epsilon }}_{t}$ denotes an estimate of the $t$th residual, ${\epsilon }_{t}$, and $\stackrel{-}{\epsilon }={\sum }_{t=1}^{n}{\stackrel{^}{\epsilon }}_{t}/n$. A portmanteau statistic, ${Q}_{\left(m\right)}$, is calculated from the formula (see Box and Ljung (1978)):
$Q_{(m)} = n(n+2) \sum_{l=1}^{m} \hat{r}_l^2 / (n-l)$
where $m$ denotes the number of residual autocorrelations computed. (Advice on the choice of $m$ is given in Section 8.) Under the hypothesis of model adequacy, ${Q}_{\left(m\right)}$ has an asymptotic ${\chi }^{2}$ distribution on $m-p-q-P-Q$ degrees of freedom. Let ${\stackrel{^}{r}}^{\mathrm{T}}=\left({\stackrel{^}{r}}_{1},{\stackrel{^}{r}}_{2},\dots ,{\stackrel{^}{r}}_{m}\right)$ then the variance-covariance matrix of $\stackrel{^}{r}$ is given by:
$\mathrm{Var}\left(\hat{r}\right) = \left( I_m - X \left(X^{\mathrm{T}} X\right)^{-1} X^{\mathrm{T}} \right) / n .$
The construction of the matrix $X$ is discussed in McLeod (1978). (Note that the mean, $\mu$, and the residual variance, ${\sigma }^{2}$, play no part in calculating $\mathrm{Var}\left(\stackrel{^}{r}\right)$ and therefore are not required as input to nag_tsa_resid_corr (g13asc).)
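As a numerical cross-check of the formulae above, here is a small Python sketch (illustrative only, not the NAG implementation; the names residual_acf and box_ljung are invented for this example) computing the residual autocorrelations and the Box–Ljung portmanteau statistic:

```python
def residual_acf(e, m):
    """Estimated residual autocorrelations r_hat(l) for l = 1..m, as defined above."""
    n = len(e)
    ebar = sum(e) / n
    denom = sum((x - ebar) ** 2 for x in e)
    return [sum((e[t - l] - ebar) * (e[t] - ebar) for t in range(l, n)) / denom
            for l in range(1, m + 1)]


def box_ljung(e, m, narma=0):
    """Portmanteau statistic Q(m); under model adequacy it is asymptotically
    chi-squared on m - narma degrees of freedom, where narma = p + q + P + Q."""
    n = len(e)
    r = residual_acf(e, m)
    q = n * (n + 2) * sum(r[l - 1] ** 2 / (n - l) for l in range(1, m + 1))
    return q, m - narma
```

Note that, exactly as in the text, neither the mean nor the residual variance enters the calculation.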
## 4 References
Box G E P and Ljung G M (1978) On a measure of lack of fit in time series models Biometrika 65 297–303
McLeod A I (1978) On the distribution of the residual autocorrelations in Box–Jenkins models J. Roy. Statist. Soc. Ser. B 40 296–302
## 5 Arguments
1: arimav – Nag_ArimaOrder *
Pointer to structure of type Nag_ArimaOrder with the following members:
p – Integer – Input
d – Integer – Input
q – Integer – Input
bigp – Integer – Input
bigd – Integer – Input
bigq – Integer – Input
s – Integer – Input
On entry: these seven members of arimav must specify the orders vector $\left(p,d,q,P,D,Q,s\right)$, respectively, of the ARIMA model for the output noise component.
$p$, $q$, $P$ and $Q$ refer, respectively, to the number of autoregressive $\left(\varphi \right)$, moving average $\left(\theta \right)$, seasonal autoregressive $\left(\Phi \right)$ and seasonal moving average $\left(\Theta \right)$ arguments.
$d$, $D$ and $s$ refer, respectively, to the order of non-seasonal differencing, the order of seasonal differencing and the seasonal period.
Constraints:
• $\mathbf{arimav}\mathbf{\to }\mathbf{p}$, $\mathbf{arimav}\mathbf{\to }\mathbf{q}$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}$, $\mathbf{arimav}\mathbf{\to }\mathbf{s}\ge 0$,
• $\mathbf{arimav}\mathbf{\to }\mathbf{p}+\mathbf{arimav}\mathbf{\to }\mathbf{q}+\mathbf{arimav}\mathbf{\to }\mathbf{bigp}+\mathbf{arimav}\mathbf{\to }\mathbf{bigq}>0$,
• if $\mathbf{arimav}\mathbf{\to }\mathbf{s}=0$, then $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}=0$ and $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}=0$.
2: n – Integer – Input
On entry: the number of observations in the residual series, $n$.
Constraint: ${\mathbf{n}}\ge 3$.
3: v[n] – const double – Input
On entry: ${\mathbf{v}}\left[\mathit{t}-1\right]$ must contain an estimate of ${\epsilon }_{\mathit{t}}$, for $\mathit{t}=1,2,\dots ,n$.
Constraint: v must contain at least two distinct elements.
4: m – Integer – Input
On entry: the value of $m$, the number of residual autocorrelations to be computed. See Section 8 for advice on the value of m.
Constraint: ${\mathbf{narma}}<{\mathbf{m}}<{\mathbf{n}}$.
5: par[narma] – const double – Input
On entry: the parameter estimates in the order ${\varphi }_{1},{\varphi }_{2},\dots ,{\varphi }_{p}$, ${\theta }_{1},{\theta }_{2},\dots ,{\theta }_{q}$, ${\Phi }_{1},{\Phi }_{2},\dots ,{\Phi }_{P}$, ${\Theta }_{1},{\Theta }_{2},\dots ,{\Theta }_{Q}$ only.
Constraint: the elements in par must satisfy the stationarity and invertibility conditions.
6: narma – Integer – Input
On entry: the number of ARMA arguments, $\varphi$, $\theta$, $\Phi$ and $\Theta$ arguments, i.e., ${\mathbf{narma}}=p+q+P+Q$.
Constraint: ${\mathbf{narma}}=\mathbf{arimav}\mathbf{\to }\mathbf{p}+\mathbf{arimav}\mathbf{\to }\mathbf{q}+\mathbf{arimav}\mathbf{\to }\mathbf{bigp}+\mathbf{arimav}\mathbf{\to }\mathbf{bigq}$.
7: r[m] – double – Output
On exit: an estimate of the residual autocorrelation coefficient at lag $\mathit{l}$, for $\mathit{l}=1,2,\dots ,m$. If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_G13AS_ZERO_VAR}}$ on exit then all elements of r are set to zero.
8: rc[${\mathbf{m}}×{\mathbf{tdrc}}$] – double – Output
On exit: the estimated standard errors and correlations of the elements in the array r. The correlation between ${\mathbf{r}}\left[i-1\right]$ and ${\mathbf{r}}\left[j-1\right]$ is returned as ${\mathbf{rc}}\left[\left(i-1\right)×{\mathbf{tdrc}}+j-1\right]$ except that if $i=j$ then ${\mathbf{rc}}\left[\left(i-1\right)×{\mathbf{tdrc}}+j-1\right]$ contains the standard error of ${\mathbf{r}}\left[i-1\right]$. If on exit, ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_G13AS_FACT}}$ or NE_G13AS_DIAG, then all off-diagonal elements of rc are set to zero and all diagonal elements are set to $1/\sqrt{n}$.
9: tdrc – Integer – Input
On entry: the stride separating matrix column elements in the array rc.
Constraint: ${\mathbf{tdrc}}\ge {\mathbf{m}}$.
10: chi – double * – Output
On exit: the value of the portmanteau statistic, ${Q}_{\left(m\right)}$. If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_G13AS_ZERO_VAR}}$ on exit then chi is returned as zero.
11: df – Integer * – Output
On exit: the number of degrees of freedom of chi.
12: siglev – double * – Output
On exit: the significance level of chi based on df degrees of freedom. If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_G13AS_ZERO_VAR}}$ on exit then siglev is returned as one.
13: fail – NagError * – Input/Output
The NAG error argument (see Section 3.6 in the Essential Introduction).
## 6 Error Indicators and Warnings
NE_2_INT_ARG_LT
On entry, ${\mathbf{tdrc}}=〈\mathit{\text{value}}〉$ while ${\mathbf{m}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdrc}}\ge {\mathbf{m}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_ARIMA_INPUT
On entry, $\mathbf{arimav}\mathbf{\to }\mathbf{p}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{d}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{q}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigd}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}=〈\mathit{\text{value}}〉$ and $\mathbf{arimav}\mathbf{\to }\mathbf{s}=〈\mathit{\text{value}}〉$.
Constraints on the members of arimav are:
$\mathbf{arimav}\mathbf{\to }\mathbf{p}$, $\mathbf{arimav}\mathbf{\to }\mathbf{q}$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}$, $\mathbf{arimav}\mathbf{\to }\mathbf{s}\ge 0$, $\mathbf{arimav}\mathbf{\to }\mathbf{p}+\mathbf{arimav}\mathbf{\to }\mathbf{q}+\mathbf{arimav}\mathbf{\to }\mathbf{bigp}+\mathbf{arimav}\mathbf{\to }\mathbf{bigq}>0$, if $\mathbf{arimav}\mathbf{\to }\mathbf{s}=0$, then $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}=0$ and $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}=0$.
NE_G13AS_AR
On entry, the autoregressive (or moving average) arguments are extremely close to or outside the stationarity (or invertibility) region. To proceed, you must supply different parameter estimates in the array par.
NE_G13AS_DIAG
This is an unlikely exit. At least one of the diagonal elements of rc was found to be either negative or zero. In this case all off-diagonal elements of rc are returned as zero and all diagonal elements of rc set to $1/\sqrt{n}$.
NE_G13AS_FACT
On entry, one or more of the AR operators has a factor in common with one or more of the MA operators. To proceed, this common factor must be deleted from the model. In this case, the off-diagonal elements of rc are returned as zero and the diagonal elements set to $1/\sqrt{n}$. All other output quantities will be correct.
NE_G13AS_ITER
This is an unlikely exit brought about by an excessive number of iterations being needed to evaluate the zeros of the AR or MA polynomials. All output arguments are undefined.
NE_G13AS_ZERO_VAR
On entry, the residuals are practically identical giving zero (or near zero) variance. In this case chi is set to zero, siglev to one and all the elements of r set to zero.
NE_INPUT_NARMA
On entry, $\mathbf{arimav}\mathbf{\to }\mathbf{p}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{q}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigp}=〈\mathit{\text{value}}〉$, $\mathbf{arimav}\mathbf{\to }\mathbf{bigq}=〈\mathit{\text{value}}〉$ while ${\mathbf{narma}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{narma}}=\mathbf{arimav}\mathbf{\to }\mathbf{p}+\mathbf{arimav}\mathbf{\to }\mathbf{q}+\mathbf{arimav}\mathbf{\to }\mathbf{bigp}+\mathbf{arimav}\mathbf{\to }\mathbf{bigq}$.
NE_INT_3
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$, ${\mathbf{n}}=〈\mathit{\text{value}}〉$, ${\mathbf{narma}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{narma}}<{\mathbf{m}}<{\mathbf{n}}$.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 3$.
NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.
## 7 Accuracy
The computations are believed to be stable.
## 8 Further Comments
### 8.1 Timing
The time taken by nag_tsa_resid_corr (g13asc) depends upon the number of residual autocorrelations to be computed, $m$.
### 8.2 Choice of $m$
The number of residual autocorrelations to be computed, $m$ should be chosen to ensure that when the ARMA model (1) is written as either an infinite order autoregressive process:
$W_t - \mu = \sum_{j=1}^{\infty} \pi_j \left(W_{t-j} - \mu\right) + \epsilon_t$
or as an infinite order moving average process:
$W_t - \mu = \sum_{j=1}^{\infty} \psi_j \epsilon_{t-j} + \epsilon_t$
then the two sequences $\left\{{\pi }_{1},{\pi }_{2},\dots \right\}$ and $\left\{{\psi }_{1},{\psi }_{2},\dots \right\}$ are such that ${\pi }_{j}$ and ${\psi }_{j}$ are approximately zero for $j>m$. An overestimate of $m$ is therefore preferable to an under-estimate of $m$. In many instances the choice $m=10$ will suffice. In practice, to be on the safe side, you should try setting $m=20$.
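As a concrete illustration of how quickly these weights decay (a sketch, not part of this function document): for an AR(1) model $W_t = \phi W_{t-1} + \epsilon_t$ the $\psi$-weights are ${\psi}_j = {\phi}^j$, which fall off geometrically, so a cut-off such as $m=20$ is comfortably safe for moderate $\left|\phi\right|$. The helper name below is illustrative:

```python
def ar1_psi_weights(phi, m):
    """psi_j = phi**j for an AR(1); negligible for j > m once |phi| < 1."""
    return [phi ** j for j in range(1, m + 1)]
```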
### 8.3 Approximate Standard Errors
When ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NE_G13AS_FACT}}\text{ or }{\mathbf{NE_G13AS_DIAG}}$ all the standard errors in rc are set to $1/\sqrt{n}$. This is the asymptotic standard error of ${\stackrel{^}{r}}_{l}$ when all the autoregressive and moving average arguments are assumed to be known rather than estimated.
## 9 Example
A program to fit an ARIMA(1,1,2) model to a series of 30 observations. 10 residual autocorrelations are computed.
### 9.1 Program Text
Program Text (g13asce.c)
### 9.2 Program Data
Program Data (g13asce.d)
### 9.3 Program Results
Program Results (g13asce.r)
# Contravariant functors
## Idea
A contravariant functor is like a functor but it reverses the directions of the morphisms. (Between groupoids, contravariant functors are essentially the same as functors.)
## Between categories
A contravariant functor $F$ from a category $C$ to a category $D$ is simply a functor from the opposite category $C^{op}$ to $D$.
To emphasize that one means a functor $C \to D$ as stated and not as a functor $C^{op} \to D$ one sometimes says covariant functor when referring to non-contravariant functors, for emphasis.
Equivalently, a contravariant functor from $C$ to $D$ may be thought of as a functor from $C$ to $D^{op}$, but the version above generalises better to functors of many variables.
Also notice that while the objects of the functor category $[C^{op}, D]$ are in canonical bijection with those in the functor category $[C, D^{op}]$ (both are contravariant functors from $C$ to $D$), the morphisms in the two functor categories are in general different, as
$[C^{op}, D] \simeq [C, D^{op}]^{op} \,.$
This matters when discussing a natural transformation from one contravariant functor to another.
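A standard concrete example (not discussed on this page): for a fixed object $d$ of $C$, precomposition gives the contravariant hom-functor
$\mathrm{Hom}_C(-, d) \colon C^{op} \to Set \,, \qquad a \mapsto \mathrm{Hom}_C(a, d) \,, \qquad (f \colon a \to b) \mapsto (g \mapsto g \circ f) \,.$
A morphism $f \colon a \to b$ of $C$ is sent to a function $\mathrm{Hom}_C(b, d) \to \mathrm{Hom}_C(a, d)$, reversing direction as in the definition above.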
## Between higher categories
Since n-categories (and also (infinity,n)-categories) have $2^n$ different kinds of opposite category depending on which of the $k$-morphisms are reversed for $1\le k\le n$ (see for instance opposite 2-category), they also have $2^n$ different kinds of “contravariant functor”.
## Abstractly
Categories, covariant functors, and natural transformations form a 2-category Cat. To include the contravariant functors as well, we can equip $Cat$ with a duality involution, or we can generalize to a 2-category with contravariance, or some more general structure that also includes extranatural transformations or dinatural transformations. There could also be higher-categorical versions, such as a 3-category with contravariance.
Last revised on June 17, 2016 at 13:36:09. See the history of this page for a list of all contributions to it. | 2018-12-18 21:38:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 21, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9922997355461121, "perplexity": 352.2366391417275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829812.88/warc/CC-MAIN-20181218204638-20181218230638-00166.warc.gz"} |
http://pbnlaplatatirecenter.newstagingsite.com/eotk/dihedral-group-d12-elements.html | # Dihedral group d12 elements
So $|S_4| = 4! = 24$. The nonabelian groups in this range are the dihedral groups $D_6$ and $D_7$, of order 12 and 14 respectively, together with the alternating group $A_4$ and the semidirect product $Z_3 \rtimes Z_4$ of a cyclic group of order 4 acting on a cyclic group of order 3. The elements that comprise the symmetry group of the equilateral triangle are three rotations (through 0°, 120°, and 240° counterclockwise about the center) and three reflections (about the three axes of symmetry indicated in the figure, omitted here). A group is a set $G$ together with an operation $\cdot$ (called the group law of $G$) that combines any two elements $a$ and $b$ to form another element, denoted $a \cdot b$ or $ab$. The dihedral group is generated by a rotation by $360/n$ degrees and a flip across an axis through the center and a vertex. Since isomorphic groups must have centers with the same number of elements, we conclude that $S_4$ is not isomorphic to $D_{12}$: the center of $S_4$ is trivial, while the dihedral group of order 24 has the half-turn rotation in its center. Abstract Algebra: Find all subgroups in $S_5$, the symmetric group on 5 letters, that are isomorphic to $D_{12}$, the dihedral group with 12 elements. In fact, every element of $A_4$ is either a 3-cycle like $(134)$, a product of two disjoint transpositions like $(13)(24)$, or the identity. [1] Dihedral groups as symmetries of $n$-gons: the dihedral group $G$ is the symmetry group of a regular $n$-gon. Mar 08, 2013 · This article gives specific information, namely element structure, about a family of groups, namely the dihedral group.
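The center comparison between $S_4$ and the dihedral group of order 24 can be checked by brute force. The sketch below encodes dihedral elements $r^k f^s$ as pairs $(k, s)$ and $S_4$ as permutation tuples; the helper names (`d_mul`, `center`) are mine, not from the source:

```python
from itertools import permutations, product

def d_mul(n):
    """Multiplication in the dihedral group of a regular n-gon, with
    elements r^k f^s encoded as pairs (k, s), s in {0, 1}."""
    def mul(x, y):
        (k1, s1), (k2, s2) = x, y
        return ((k1 + (-1) ** s1 * k2) % n, (s1 + s2) % 2)
    return mul

def center(elements, mul):
    return [z for z in elements
            if all(mul(z, g) == mul(g, z) for g in elements)]

# Dihedral group of order 24 (symmetries of a regular 12-gon).
D = list(product(range(12), range(2)))
mul_d = d_mul(12)
print(len(D), center(D, mul_d))    # 24 elements; center is {e, r^6}

# S4 as permutation tuples under composition.
S4 = list(permutations(range(4)))
def mul_s(p, q):
    return tuple(p[q[i]] for i in range(4))
print(len(center(S4, mul_s)))      # 1, since the center of S4 is trivial
```

Since the two order-24 groups have centers of different sizes (2 versus 1), they cannot be isomorphic, matching the argument in the text.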
eg: the staggered conformation of ethane. The angle between any blue C-H bond (C-H 1, C-H 2, C-H 3) and any red C-H bond (C-H 4, C-H 5, C-H 6) is a dihedral angle. While there is no geometry for general dihedral groups, Soergel performs an analogous algebraic construction to produce B_w for smooth elements. Let the dihedral group D_n be given via generators and relations, with generators a of order n and b of order 2, satisfying ba = a^{-1}b. We will first introduce the concept of Cayley graphs on dihedral groups and give some necessary auxiliary results in Section 2. There are 2n elements in total. Dihedral angle has a strong influence on dihedral effect, which is named after it. Thus, it has order 2n, and is generated by elements σ and τ, with relations σ^n = τ^2 = 1 and τσ = σ^{-1}τ. May 14, 2020 · The elements in a dihedral group are constructed by rotation and reflection operations over a 2D symmetric polygon. The dotted lines are lines of reflection: reflecting the polygon across them gives the reflection symmetries. Jul 15, 2011 · Element structure, by group: cyclic group Z12 (and cyclic groups in general); alternating group A4 (and alternating groups in general); dihedral group D12 (and dihedral groups in general); the direct product of Z6 and Z2. Dihedral group understanding: Let G be the dihedral group D12, and let N be the subgroup ⟨a^3⟩ = {e, a^3, a^6, a^9}. Mar 22, 2017 · Note that D12 has r^6 (rotation of 180 degrees) as a nontrivial element in its center.
Oct 27, 2010 · Recall the dihedral group that is defined by $$D_n = \langle a, b \mid a^n = b^2 = (ab)^2 = 1 \rangle$$. Reflecting in one axis of symmetry followed by reflecting in another axis of symmetry produces a rotation. G = D12 has order 24 = 2^3 · 3. The generators of the group returned are the elements corresponding to the factors: gap> DihedralGroup(8); <pc group of size 8 with 3 generators>. Feb 01, 2006 · Also, C3 × D4 is a new group of real genus 7 to be added to the list of [4, page 698]. Much like how we find "paths" in a group by taking an element in it and applying it over and over, we find "paths" in a group action by taking one of the objects being acted on and applying the entire group of functions only to that object. The groups D(G) generalize the classical dihedral groups, as evidenced by the isomorphism with the classical dihedral groups when G is cyclic. Oct 28, 2011 · Once a group has been selected, its group table is displayed to the right, and a list of its elements is shown on the left. If F is a reflection in the dihedral group, find all elements X such that X^2 = F and all elements X such that X^3 = F. Oct 08, 2008 · Notice that if g is an element of C_G(H), then ghg^{-1} = h for all h in H, so C_G(H) is a subgroup of N_G(H). By definition, "The group of symmetries of a regular polygon P_n of n sides is called the dihedral group of degree n and denoted by D(n)" (Bhattacharya, Jain, & Nagpaul, 1994). Note that conjugate group elements always have the same order, but in general two group elements that have the same order need not be conjugate. In the alternating groups every real element is strongly real, but this is not true in general; this means that the dihedral group D12 is a subgroup of the automorphism group. 29 Aug 2019 · Then in Section 3, we will give a complete characterization of the Pfaffian property of Cayley graphs on dihedral groups.
Like all dihedral groups, it has two generators: r of order 12 -- r¹² = e (the identity) f of order 2 -- f² = e Dec 27, 2017 · We compute all the conjugacy classed of the dihedral group D_8 of order 8. 3 Dihedral group D n The subgroup of S ngenerated by a= (123 n) and b= (2n)(3(n 1)) (i(n+ 2 i)) is called the dihedral group of degree n, denoted In [2] it was proved that every product G D AB of two periodic locally dihedral subgroups A and B is soluble (for a periodic group G this was already shown in [4]). Z(D10) = {e, $r^{5}$) This generalizes to Z(Dn Feb 27, 2016 · Dihedral groups describe the symmetry of objects that exhibit rotational and reflective symmetry, like a regular n-gon. To which well-known group is G/H isomorphic? Is the subgroup generated by b normal in D_8? (v) Viewing the square in the real plane, centred at the origin, write down the 2Ã-2 matrix ?(a) which represents the rotation a and the 2Ã-2 matrix ?(b) which represents the reflection b. It is conjectured that any (ordinary) difference set in a dihedral Since a2 and a2 are elements of L inducing the same inner automorphism of L and the center of L is trivial, we must have a2 = a2. If G is a finite group with a dihedral Sylow 2-subgroup of order 2n with n ≥ 3, then |Irr(B0(kG))| = 2n−2 + 3 and the values at non-trivial 2-elements of the ordinary irreducible characters in Irr(B0(kG)) are given by the non-trivial generalised According to Lagrange's theorem every element must have an order that divides 4. ( D12 denotes the dihedral group of order 24) coset quotient group order 16 +(24 (mod 33)〉 Z33/(24 (mod 33)〉 11 (11 (mod 37)) U(37)/(11 (mod 37)) D 121(a6〉 12 elements reset id elmn perm . On The Group of Symmetries of a Rectangle page we then looked at the group of symmetries of a nonregular polygon - the rectangle. !The dihedral group with two elements, D 2, and the dihedral group with four elements, D 4, are abelian. 
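The Dec 27, 2017 fragment above computes the conjugacy classes of the dihedral group of order 8. A brute-force check, using my own pair encoding $(k, s)$ for $r^k f^s$ (not notation from the source):

```python
from itertools import product

n = 4  # square; the dihedral group has order 2n = 8
def mul(x, y):
    (k1, s1), (k2, s2) = x, y
    return ((k1 + (-1) ** s1 * k2) % n, (s1 + s2) % 2)

def inv(x):
    k, s = x
    # rotations invert as r^k -> r^{n-k}; reflections are involutions
    return ((-k) % n, 0) if s == 0 else x

G = list(product(range(n), range(2)))
classes = {frozenset(mul(mul(h, g), inv(h)) for h in G) for g in G}
for c in sorted(classes, key=lambda c: (len(c), sorted(c))):
    print(sorted(c))
```

This prints five classes: {e}, {r^2}, {r, r^3}, {f, r^2 f}, and {rf, r^3 f}, the standard class structure of the order-8 dihedral group.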
Advanced Algebra: Sep 10, 2019: Homomorphisms and kernals: Advanced Algebra: Apr 30, 2018: Finding all ring homomorphisms: Advanced Algebra: Apr 8, 2017: Group homomorphisms and short exact sequences: Advanced Algebra: Apr 7, 2017 Compute the multiplication table of the quotient group D_8/H. From Lagrange's theorem we know that any non-trivial subgroup of a group with 6 elements must have order 2 or 3. 1 Throughout this paper, G is the dihedral group D n, X[r] is the set of all ordered r-element subsets of Xn={1, 2, , }, and n P r is n permutation r. To qualify as a group, the set and operation, (G, ⋅), must satisfy four requirements known as the group axioms: A group homomorphism that doesn't send one into one. Definition ( The Dihedral group, p ) The group, & á, is the group of all symmetries of a regular polygon. Let G=<a>: Since Ghas an element of in nite order, ak is of in nite order for some k 6=0 :Since jbj= jb−1j;(problem 4 on pg. More generally, a dihedral group is a group which can be generated by two distinct elements of order two. 15 15 a7c Feb 23, 2015 · The cycle graphs of dihedral groups consist of an n-element cycle and n 2-element cycles. Permutation Matrices Abstract Algebra: (Linear Algebra Required) The symmetric group S_n is realized as a matrix group using permutation matrices. r =counterclockwise rotationby 2ˇ=n A dihedral group is Abelian as well as cyclic if the group order is in {1,2} (Bilal et al. We rst The metabelian groups considered in this study are some nonabelian metabelian groups of order 24, which are the dihedral group, D12 as well as the semidirect products, R = ℤ3 ⋊ ℤ8 and S 정의. The elements in a dihedral group are constructed by rotation and reflection operations over a 2D symmetric polygon. There is a superficial resemblance between the dicyclic groups and dihedral groups; both are a sort of "mirroring" of an underlying cyclic group. 
If K represents an algebraic system, then Aut(K) (Inn(K)) will denote the group of automorphisms (inner automorphisms) of K. Examples include (Z, +), the integers under addition; D_{2n}, the rotations and flips of an n-gon; and S_n, the set of all permutations of n elements. If G is a group of order 2p (where p is prime), G is either the cyclic group C_{2p} or the dihedral group D_p. For example, with n = 6: Nov 06, 2019 · The dihedral group of all the symmetries of a regular polygon with n sides has exactly 2n elements, is a subgroup of the symmetric group S_n (which has n! elements), and is denoted by D_n or D_{2n} by different authors. What is the order of the four elements? 12 Sep 2012 · This group, usually denoted D_{12} (though denoted D_6 in an alternate convention), is defined in the following equivalent ways: ... 8 Mar 2013 · This article discusses the element structure of the dihedral group D_{2n} of degree n and order 2n, given by the presentation $\langle x, a \mid a^n = x^2 = 1, x a x^{-1} = a^{-1} \rangle$. 11 Nov 2011 · Yes, you are perfectly right. This lets us represent the elements of D_n as 2×2 matrices. Conjugating r by each kind of element gives
\begin{align} \quad r r r^{-1} &= r \\ \quad r^3 r (r^3)^{-1} &= r^3 r r = r^5 = r \\ \quad s r s^{-1} &= s r s = r^3 \\ \quad (rs) r (rs)^{-1} &= (rs) r (s^{-1} r^{-1}) = r (s r s^{-1}) r^{-1} = r \, r^3 \, r^{-1} = r^3 \end{align}
so the conjugacy class of r is {r, r^3}. In mathematics, the infinite dihedral group Dih_∞ is an infinite group with properties analogous to those of the finite dihedral groups. Unlike different conjugacy classes, ... Subgroups of Dihedral Group D12 ... In [9, 15] finite groups with all elements of prime order are classified. Dihedral effect is a critical factor in the stability of an aircraft about the roll axis (the spiral mode). Genevieve Maalouf & Taylor Walker, Conjugacy Class Graphs of Dihedral and Permutation Groups. Nov 09, 2010 · center, centralizer: Let D4 = {e, r, r^2, r^3, f, fr, fr^2, fr^3}, where r^4 = f^2 = e and rf = fr^{-1} = fr^3.
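The Nov 09, 2010 fragment defines D4 = {e, r, r², r³, f, fr, fr², fr³} with r⁴ = f² = e and rf = fr³. Its centralizers and center can be computed by brute force; the sketch below uses my own pair encoding (k, s) for r^k f^s:

```python
from itertools import product

n = 4
def mul(x, y):
    (k1, s1), (k2, s2) = x, y
    return ((k1 + (-1) ** s1 * k2) % n, (s1 + s2) % 2)

G = list(product(range(n), range(2)))
r, f = (1, 0), (0, 1)

def centralizer(a):
    return [x for x in G if mul(x, a) == mul(a, x)]

print(centralizer(r))   # the four rotations {e, r, r^2, r^3}
print(centralizer(f))   # {e, r^2, f, r^2 f}
Z = [z for z in G if all(mul(z, g) == mul(g, z) for g in G)]
print(Z)                # the center {e, r^2}
```

Note that no reflection commutes with r, and the only rotations commuting with f are e and r²; the center is the intersection of all centralizers.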
Generalized dihedral group: This is a semidirect product of an abelian group by a cyclic group of order two acting via the inverse map. A standard model of a cyclic group of order n is the multiplicative group Cn = {z ∈ C: zn = 1} of n-th roots of 1. The only groups that have a space of forms of dimension larger than 1 are isomorphic either to one of the cyclic groups Cl I Abstract Algebra: Consider the dihedral group with eight elements D8, the symmetries of the square. Any element in this group has order 1 or that prime, which means that either it is the identity or it is a generator for the whole group (again by Lagrange), which means that the group is cyclic (as all elements can't be the identity). - 0 0 e + - 1 1 a + - 2 2 a2 + - 3 3 a3 + - 4 4 a4 + - 5 5 a5 + - 6 6 c + - 7 7 ac + - 8 8 a2c + - 9 9 a3c + - 10 10 a4c + . Z(D10) = {e, $r^{5}$) This generalizes to Z(Dn Question: (1 Point) Determine The Order Of Each Of The Following Elements In The Respective Quotient Groups. the binary dihedral group of order 12 – 2 D 12 2 D_{12} correspond to the Dynkin label D5 in the ADE-classification. This project will make use of the definition that all of the permutations for each of the dihedral groups D(n) preserve the cyclic order of the vertices of each If is a reflection in the dihedral group find all elements X in such that and all elements in such that . It is the non-abelian group of order 2n gotten by taking an element g of order n, an element f of order 2 which is not equal to any power of g, and setting gf = fgn−1 = fg−1. A quick review of the properties of a group include a set Gwhich is closed under a binary operation which is associative, contains an identity, and has in-verses. The infinite dihedral group Dih (C ∞) is denoted by D ∞, and is isomorphic to the free product C 2 * C 2 of two cyclic groups of order 2. 
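The generalized dihedral construction just described, Dih(A) = C₂ ⋉ A with the order-two factor acting by inversion, can be sketched directly for a finite abelian A given as a product of cyclic groups (function and variable names are mine):

```python
from itertools import product

def dih(moduli):
    """Generalized dihedral group Dih(A) for A = Z_m1 x ... x Z_mk.

    Elements are (a, e) with a in A and e in {0, 1}; multiplication is
    (a1, e1)(a2, e2) = (a1 + (-1)^e1 * a2, e1 xor e2), i.e. the
    nontrivial element of C2 acts on A by inversion.
    """
    A = list(product(*(range(m) for m in moduli)))
    def mul(x, y):
        (a1, e1), (a2, e2) = x, y
        sign = -1 if e1 else 1
        a = tuple((u + sign * v) % m for u, v, m in zip(a1, a2, moduli))
        return (a, e1 ^ e2)
    return [(a, e) for a in A for e in (0, 1)], mul

# Dih(Z_6) recovers the ordinary dihedral group of order 12.
G, mul = dih((6,))
print(len(G))                 # 12
x, y = ((1,), 0), ((0,), 1)   # a "rotation" and a "flip"
print(mul(y, mul(x, y)))      # conjugating by the flip inverts: ((5,), 0)
```

When A is cyclic this is the classical dihedral group; for non-cyclic A (e.g. `dih((2, 4))`) the same formula gives the generalized dihedral groups mentioned in the text.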
Algebraically, the dihedral group of order 24 is the group generated by two elements, s and t, subject to the three relations In fact every group of order 6 is isomorphic to Zmod6 or S3 (symmetric group on 3 elements). Jun 06, 2015 · Let G be a generalised dihedral group of order 2 n and let S be an inverse-closed generating set for G not containing the identity. Stack Exchange network consists of 177 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. It is a non abelian groups (non commutative), and it is the group of symmetries of a regu DIHEDRAL GROUPS KEITH CONRAD 1. graph, the probability that an element of the dihedral groups fixes a set must elements of the dihedral groups and the lowest common multiple of the order of An example of an element of infinite order is the element 1 in the group The dihedral group of order 24 is D12 since Dn has 2n elements. Jan 20, 2012 · We're doing isomorphisms and I was just wondering, is the dihedral group $D_{12}$ isomorphic to the group of even permutations $A_4$? Answers and Replies Related Linear and Abstract Algebra News on Phys. D 6 D_6 is isomorphic to the symmetric group on 3 elements Dihedral groups $D_n$ with $n\ge 3$ are non-abelian contrary to cyclic groups. The dihedral group Dn is the full symmetry group of regular n-gon which includes both rotations and flips. - ' : ' (aka ' x| ') is the semi-direct Summary: This paper connects the twelve musical tones to elements in the dihedral group of order 24 (the symmetries of a regular dodecagon). CL] 3 Jun 2019 Work out its elements, and find the orbit and the stabiliser of each of the points 1, 1/2,1/3. You can think about elements of $D_{12}$ as about symmetry preserving rotations of a hexa is a group with identity f(e). 
To qualify as a group, the set and operation, ( , ), must satisfy four requirements known as the group axioms: closure, associativity, identity element, and inverse element. (c) (T–F) Find all Sylow subgroups (for all prime divisors of the group order) of the dihedral conjugate P by elements of D12. I don't understand the first part of the question, because surely the whole point of the infinite dihedral group is that it has infinite elements. Note an element $$a$$ forms a class by itself if and only if $$a$$ commutes with all of $$G$$. Like the PLR&group, the PS&group is isomorphic to D12 , the dihedral group of order 24, and only compositions of two operations are needed to generate all group elements. If a and b are elements in N G (H) then show ab-1 is an element Let a and b be elements in N G (H), further let a = g= b meaning gHg-1 =H=gHg-1. I have worked out the cayley table and found the center to be {e, a^2} and found the orders of the elements, but not sure what to do next. The probability that an element of a group fixes a set is Dec 05, 2008 · elements of order 6 and exactly 7 elements of order 2. Then find all subgroups and Dec 04, 2015 · The center consists of the identity and $r^{5}$, where r is a $\frac{1}{10}$ rotation. Symmetry Group of a Regular Hexagon The symmetry group of a regular hexagon is a group of order 12, the Dihedral group D 6. (a) List all Solution: Since G has no element of order 4, every subgroup of order A page shows a presentation of a group with: elements list, graph (if done), 8_3 b, dihedral group Dih8 (Heisenberg), < a,b,c | a2=b2=c2=abcbc >. ) Dihedral Group on 6 Vertices, White Sheet Subgroup Lattice: Element Lattice: Conjugated Poset: Alternate Descriptions: (* Most common) Name: Symbol(s) Dihedral D12: elements graph table table2 8 elements reset id elmn perm . 
Other examples include G = Z 3 and H = Z 6 (both of which have automorphism group isomorphic to Z 2), G= Z 7 and H= Z 18 (both of which have isomorphism group isomorphic to Z 6). Miller - Solution to HW #18: Dihedral Groups - Due Friday, 11/14/08 The so-called dihedral groups, denoted Dn, are permutation groups. In [2], we find that a non-Abelian group that is generated by two elements σ and τ where τ2 = e and τστ = σ−1 is isomorphic to a Dihedral group. A symmetry gis completely determined by the image gv, which can be any other vertex, and by gw, which can be either one of the two vertices Subscribe to this blog. It is well-known that the group of 12 transpositions and 12 inversions acting on the 12 pitch classes (T/I) is isomorphic to D12, as is the Riemann-Klumpenhouwer Schritt/Wechsel group (S/W). 1 The dihedral group of order 24 is the group of symmetries of a regular 12-gon, that is, of a 12-gon with all sides of the same length and all angles of the same measure. Thus we get: ( n 1 , 0) * ( n 2 , h 2 ) = ( n 1 + n 2 , h 2 ) Jul 06, 2019 · The infinite dihedral group, which is the case of the dihedral group and is denoted and is defined as: . Answer to on quotients of dihedral groups are given in Chapter7 The dihedral group (a) Write Down The Elements Of A Cyclic Subgroup I Of D12 Of Order 6. GroupTheory CharacterTable construct the character table of a finite group Calling "4a" and "4b" distinguish two distinct conjugacy classes of elements of order . Find an example of a group Gthat contains one element of order nfor every positive integer nand which also contains an element of order in nity. If a cyclic group has an element of in nite order, how many elements of nite order does it have? Solution. This project will make use of the definition that all of the permutations for each of the dihedral groups D(n) preserve the cyclic order of the vertices of each ective symmetry. 
We will describe the dihedral group D 2 p as the 2 p rotations and reflections of a regular p-sided polygon. There is an element of order 16 in Z 16 Z 2, for instance, (1;0), but no element of order 16 in Z 8 Z 4. In particular, consists of elements (rotations) and (reflections), which combine to transform under its group operation according to the identities , , and , where addition and subtraction are performed The only nontrivial relative difference set up to equivalence in a dihedral group known to the authors is as follows : Example 1. 일반화 정이면체군(영어: generalized dihedral group) () 는 다음과 같은 반직접곱이다. The homomorphic image of a dihedral group has two generators a ^ and b ^ which satisfy the conditions a ^ b ^ = a ^ - 1 and a ^ n = 1 and b ^ 2 = 1 , therefore the image is a dihedral group. Each group Dn is created as follows: • Draw a regular n-gon, and label its vertices 1,2,,nin a clockwise direction. - 0 0 ee + - 1 1 a + - 2 2 a2 + - 3 3 a3 + - 4 4 a4 + - 5 5 a5 + - 6 6 a6 + - 7 7 a7 + - 8 8 c + - 9 9 ac + - 10 10 a2c + - 11 11 a3c + - 12 12 a4c + - 13 13 a5c + - 14 14 a6c + . There are more group tables at the end of Alright, so $<r^2>=\{r^{2n}: n\in\Z \}$ (the representation is not unique, but that’s fine for our purposes) The approach to solving this may depend on your axioms, but whatever axioms used are equivalent to this: The dihedral group [ma Section 5. ( D12 denotes the dihedral group of order 24) coset quotient group order 16 +(24 (mod 33)〉 Z33/(24 (mod 33)〉 11 (11 (mod 37)) U(37)/(11 (mod 37)) D 121(a6〉 Skip to main content Search This Blog Nov 22, 2008 · D₁₂ is the group of symmetries of a dodecagon. The other two are given to show that it is possible to draw them like this, and omitted for other dihedral groups. (Informal) We say that a group is generated by two elements x, y Jun 09, 2020 · If F is a reflection in the dihedral group D, find all elements X in D, such that X? = F and all elements X in D, such that X³ = F. 
We will start by showing Ghas a normal 2-Sylow subgroup or a May 09, 2011 · Abstract Algebra: Consider the dihedral group with eight elements D8, the symmetries of the square. Article Google Scholar The dihedral group D, is, by definition, the (non-Abelian) group of symmetries of the n-sided regular polygon. Any of its two Klein four-group subgroups (which are normal in D 4 ) has as normal subgroup order-2 subgroups generated by a reflection (flip) in D 4 , but these subgroups are not normal in D 4 . The group of symmetries of Cconsists of all the Dec 04, 2015 · The center consists of the identity and $r^{5}$, where r is a $\frac{1}{10}$ rotation. Table 1: D 4 D 4 e ˆ ˆ2 ˆ3 t tˆ tˆ2 tˆ3 e e ˆ Oct 31, 2009 · The dihedral group Dn with 2n elements is generated by 2 elements, r and d, where r has order n, and d has order 2, rd=dr-1, and <d> n <r> = {e}. In two-dimensional geometry, the infinite dihedral group represents the frieze group symmetry, p1m1, seen as an infinite set of parallel reflections along an axis. Let Ω be the set of all subsets of all commuting elements of size two in the form of a,b, where a and b commute and ∣a∣= ∣b∣= 2. (Received 25 August 1989) Abstract--Permutations and combinations of n objects as well as the elements of the dihedral group of Dihedral Group The dihedral group of order , denoted by , consists of the six symmetries of an equilateral triangle. 1 The dihedral group of order 24 is the group of symmetries of a regular 12-gon, that is, of a 12-gon with all sides of the same length and all angles of the same measure. 28 84 G 28 1: Z 7 ⋊ Z 4: Binary dihedral group 86 G 28 3: Dih 14: Dihedral group, product 30 89 G 30 1: Z 5 × S 3: Product 90 G 30 2: Z 3 × Dih 5: Product Given any abelian group G, the generalized dihedral group of G is the semi-direct product of C 2 = {±1} and G, denoted D(G) = C 2 n ϕ G. 
As subgroups of the isometry group of the set of vertices of a regular n-gon they are different: the reflections in one subgroup all have two fixed points, while none in the other subgroup has (the rotations of both are the same). Since Cn is its own centralizer in Sn, any element This finite figure is a dihedral group of order 8 due to its eight reflections and eight rotations. The homomorphism ϕ maps C 2 to the automorphism group of G, providing an action on G by inverting elements. The corresponding dihedral group D_n has 2n elements: half are rotations and groups are an alternating group, a dihedral group, and a third less familiar group. As the matrix representations of dihedral group can be symmetric or skew-symmetric, and the multiplication of the group elements can be Abelian or non-Abelian, it is a good candidate to model the relations with all the Jul 11, 2000 · The Dihedral group D n is the symmetry group of the regular n-gon 1. Such groups consist of the rigid motions of a regular $$n$$-sided polygon or $$n$$-gon. As the matrix rep-resentations of dihedral group can be symmetric or skew-symmetric, and the multiplication of the arXiv:1906. We study degree n extensions of the p-adic numbers whose normal clo-sures have Galois group equal to D n, the dihedral group of order 2n. Show that this definition allows for only one infinite group, up to isomorphism, and determine its centre. We want xr=rx C(r)={e,r,r^2,r^3,f} want xf=fx C(f)={e,r,f,fr^2} center is elements that commute with every other Math 325 - Dr. Dihedral Group The dihedral group of order , denoted by , consists of the ten symmetries of a pentagon. (Informal) We say that a group is generated by two elements x, y if any element of the group can be written as a product of x’s and y’s. Inverse of r is rxr Mathematically, the dihedral group consists of the symmetries of a regular -gon, namely its rotational symmetries and reflection symmetries. 
Like all dihedral groups, it has two generators: r of order 12 -- r¹² = e (the identity) f of order 2 -- f² = e S11MTH 3175 Group Theory (Prof. !The composition of two symmetries of a regular polygon is again a symmetry of this object, giving us the algebraic structure of a nite group. (Tradi- The set of all such elements in Perm(P n) obtained in this way is called the dihedral group (of symmetries of P n) and is denoted by D n. Then I don't get, why ethene (see the picture Sep 12, 2014 · In this paper we study some properties of the dihedral group, & á, acting on unordered r-element subsets from the set : L <1,2… J =. The group operation is given by composition of symmetries: if aand bare two elements in D n, then a b= b a. The translation from pitch classes to integers modulo 12 allows for the modeling of musical works using abstract algebra. Algebraically, the dihedral group of order 24 is the group generated by two elements, s and t, subject to the three relations The cycle graphs of dihedral groups consist of an n-element cycle and n 2-element cycles. QDm is the quasi-dihedral group of order m, (1,3,5)(2,4,6) ) gap> IsSurjective( x ); false gap> Image( x ); Subgroup( d12, [ (1,5)(2,4) ] ). common example is the dihedral group D2n, which is 'defined' as Note that this group is not isomorphic to any of C12, C2 ×C6, A4,D12 (the first two 4 - Find all subgroups of DIHEDRAL GROUPS KEITH CONRAD 1. The nth dihedral group is represented in the Wolfram Language as Dihedr 4 (the symmetric group of permutations of four numbers) and D 12 (the dihedral group of symmetries of a regular 12-sided polygon). Introduction For n ≥ 3, the dihedral group D n is defined as the rigid motions 1 of the plane preserving a regular n-gon, with the operation being composition. This paper extends the concept of the PLR-group from the neo&Riemannian theory, which acts on the set of major and minor triads, to a PS-group , which acts on the set of major and minor seventh chords. 
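The music-theory fragments above identify the 12 transpositions and 12 inversions acting on the 12 pitch classes (the T/I group) with the dihedral group of order 24. A brute-force check that these 24 maps behave dihedrally (the tabulation is mine):

```python
from itertools import product

# Transpositions T_k(x) = x + k (mod 12) and inversions I_k(x) = -x + k
# (mod 12), tabulated as tuples of values on {0, ..., 11}.
T = [tuple((x + k) % 12 for x in range(12)) for k in range(12)]
I = [tuple((-x + k) % 12 for x in range(12)) for k in range(12)]
TI = T + I

def compose(p, q):
    # apply q first, then p
    return tuple(p[q[x]] for x in range(12))

print(len(set(TI)))   # 24 distinct operations
# closed under composition, so T/I is a group of order 24
assert all(compose(p, q) in set(TI) for p, q in product(TI, TI))
# an inversion conjugates a transposition to its inverse, which is the
# defining dihedral relation
print(compose(I[0], compose(T[1], I[0])) == T[11])
```

Since transpositions play the role of rotations and inversions the role of reflections, this matches the dodecagon description of the order-24 dihedral group given earlier.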
Dihedral group d8 Dihedral group d8 We then examined some of these dihedral groups on the following pages: The Group of Symmetries of the Equilateral Triangle. In this paper, the dihedral group of degree n, n 3, is the group Dn of symmetries of a regular n-sided polygon. up vote 1 down vote favorite So, could you please tell me what's the real difference between vertical and dihedral mirror planes? OK, in a link given in comments by Tyberius is said, that dihedral planes are such planes, which bisects as many bonds as possible, while "normal" vertical planes bisects as many atoms as possible. The dark vertex in the cycle graphs below of various dihedral groups stand for the identity element, and the other vertices are the other elements of the group. The six reflections consist of three reflections along the axes between vertices, and three reflections along the axes between edges. I'm not sure how to find the subgroups of orders 2 and 5, or rather, I've found one for each, but don't if I have found them all. Construct the dihedral group of degree n and order 2 * n on generators (1, 2, , n ) construct the group as a subgroup of the symmetric group on Full elements. For dihedral groups of even degree, it is not possible to construct a Abelian Group (25) Binary Operator (3) Cardinality (7) Cayley Table (2) Center (6) Centralizer (6) Commutativity (3) Conjugation (2) Counterexample (18) Cyclic Group (26) Dihedral Group (15) Direct Product (10) Fibers (6) Finite Field (9) Finite Group (8) General Linear Group (14) Generating Set (6) Group (16) Group Automorphism (5) Group lygon. The group is - Cn represents a Cyclic group of order n - Cbn is my own way for C(n/2)xC2 - Ccn is my own way for C(n/3)xC3 - Dn or Dihn represents a Dihedral group of order n - Dicn is the Dicyclic group of order n (Dic8=Q8) - Q8 is Quaternion group - Kn represents Klein group - ' x ' is the direct product operator. 
(Note: Some books and Mar 22, 2017 · Note that D12 has r^6 (rotation of 180 degrees) as a nontrivial element in its center. Jun 10, 2015 · Dihedral group in group theory|order of dihedral group|dihedral group in hindi|dihedral group - Duration: 37:31. Question: D12 = Dihedral Group Of 12 Elements = Symmetric Of The Regular Hexagon1) List The Elements Of D12. This constructive method, while useful for smooth elements DIHEDRAL p-ADIC FIELDS OF PRIME DEGREE CHAD AWTREY AND TREVOR EDWARDS Abstract. But every dihedral group $D_n$ (of order $2n$) has a cyclic subgroup of order [math]n[/math DIHEDRAL GROUPS KEITH CONRAD 1. (a) Show that bai = a−ib for all i with 1 ≤ i < n, and that any element of the form aib has order 2. There are eight motions of this square which, when performed one after the other, form a group called the Dihedral Group of the Square. Identifying The dihedral group D_n is the symmetry group of an n-sided regular polygon for n>1. New building marks new era for college at AU – The Augusta Chronicle; Schools in Bihar to teach Vedic maths – Hindustan Times; Grade Nine learners taught mathematics skills – Tembisan Aug 01, 2013 · To prove the main theorems of the paper we will need to describe the dihedral group and how to bound the distance between the given probability distribution and the uniform distribution. D12 := DihedralGroup(GrpPerm, 12); > D12; Permutation group D12 acting on One calls a subgroup H cyclic if there is an element h ∈ H such that H = {hn : n ∈ Z}. It turns out that $$D_n$$ is a group (see below), called the dihedral group of order $$2n$$. D 6 D_6 is isomorphic to the symmetric group on 3 elements Quotient groups of dihedral groups are dihedral, and subgroups of dihedral groups are dihedral or cyclic. The groups D2 (which is isomorphic to Z/2Z)and4 (which is isomorphic to the Vierergruppe Z/2 ×Z/2 ) are the only abelian dihedral Recent Posts. 
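One exercise above asks to list the elements of the symmetry group of the regular hexagon (D12 in the order-12 convention). They can be generated mechanically by closing a rotation and a reflection, here written as vertex permutations (the encoding is mine):

```python
def compose(p, q):
    # apply q first, then p
    return tuple(p[q[i]] for i in range(len(q)))

n = 6
r = tuple((i + 1) % n for i in range(n))   # rotation by 60 degrees
f = tuple((-i) % n for i in range(n))      # reflection fixing vertex 0

# generate the closure of {r, f} under composition
e = tuple(range(n))
G = {e}
frontier = {e}
while frontier:
    new = {compose(g, s) for g in frontier for s in (r, f)} - G
    G |= new
    frontier = new

print(len(G))   # 12: the six rotations r^k and the six reflections r^k f
```

The same loop works for any finite group given by generators; for the hexagon it stops after producing exactly the 12 symmetries.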
September 12 Find a pair of elements in D12 (let's call them α and β) such that every element in. The finite dihedral group Dih (C n) is commonly denoted by D n or D 2 n (the differing conventions being a source of confusion). Since we are allowed to choose our own labels, this function assumes these labels for the group elements; the assocation between these labels and the ones in the problem post is given by: Abstract Algebra: Find all subgroups in S5, the symmetric group on 5 letters, that are isomorphic to D12, the dihedral group with 12 elements. A symmetry gis completely determined by the image gv, which can be any other vertex, and by gw, which can be either one of the two vertices The aim of this paper is to study the Pfaffian property of Cayley graphs on dihedral groups. GAP ID:? Magma ID:? 3 Jul 2016 paper, we prove that D2, D4, D6, D8, and D12 are the only dihedral groups that appear whose group of units contains an element of order pr. γ The D 12 point group is generated by two symmetry elements, C 12 and a perpendicular C 2 ′ (or, non-canonically, C 2 ″). A dihedral angle or torsional angle (symbol: θ) is the angle between two bonds originating from different atoms in a Newman projection. An example of a group is the dihedral group on eight el-ements, denoted The dihedral group of order 6 – D 6 D_6. we determine which dihedral groups are the group of units of a ring, and our classification is stratified by characteristic. Proof: The composition of plane symmetries must be a plane symmetry (it must preserve distance and carry the gure onto itself) and hence the operation is binary. Prove, by comparing orders of elements, that the following pairs of groups are not isomorphic: (a) Z 8 Z 4 and Z 16 Z 2. In geometry the group is denoted by D n, while in algebra the same group is denoted by D 2n to indicate the number of elements. 
They are the rotation s given by the powers of r , rotation anti-clockwise through 2 pi /n , and the n reflections given by reflection in the line through a vertex (or the midpoint of an edge ) and the centre of the polygon . For even n there are two sets {(h + k + k, 1) | k in H}, and each generates a normal subgroup of type Dih n / 2. A more interesting example is G= Z 2 Z 2 and H= S 3, both of which have automorphism group isomorphic to S 3. The reader needs to know these definitions: group, cyclic group, symmetric group, dihedral group, direct product of groups, subgroup, normal subgroup. (b) Find all the subgroups of D14 The group $$D_3$$ is an example of class of groups called dihedral groups. The corresponding dihedral group D_n has 2n elements: half are rotations and By definition, “The group of symmetries of a regular polygon P n of n sides is called the dihedral group of degree n and denoted by D(n)” (Bhattacharya, Jain, & Nagpaul, 1994). Homework Equations The Attempt at a Solution My attempt (and what is listed in the official solutions) was to first consider the cyclic group generated by an element of order 6 in group G. The "generalized dihedral group" for an abelian group A is the semidirect product of A and a cyclic group of order two acting via the inverse map on A. Rahul Mapari 30,989 views Feb 27, 2016 · Dihedral groups describe the symmetry of objects that exhibit rotational and reflective symmetry, like a regular n-gon. To the direct point of your question Cauchy's theorem states that there is at least one element of order 3 and order 2, and every non-identity element must have that order (since not cyclic). The order of an element x in a finite group G is the smallest positive integer k, such that x k is the group identity. The elements of D n can be thought as linear transformations of the plane, leaving the given n-gon invariant. More generally, the symmetry group of a regular n-gon is called the dihedral group D n, and has 2n elements. 
The set of all possible such orders joint with the number of elements that This article was adapted from an original article by V. The Dihedral group, the group of all these symmetries, is thus a group of Note that in D12 the 3-Sylow is normal (it is {1,x2,x4}, the rest are 6 reflections. For example, you can call counter-clockwise Original file (SVG file, nominally 2,197 × 2,197 pixels, file size: 129 KB) the square 1 through 4, then the actions of the elements of the dihedral group can instead be viewed as simple arrangements of the vertices - an action that can be performed by elements of the symmetric group. As with all groups, the composition of two or more symmetries is itself one of the twelve symmetries. Introduction For n 3, the dihedral group D n is de ned as the rigid motions1 taking a regular n-gon back to itself, with the operation being composition. Feb 23, 2015 · The cycle graphs of dihedral groups consist of an n-element cycle and n 2-element cycles. ≅ ⋊ (/)여기서 / = {,} 는 크기가 2인 유일한 군이며, 군의 작용: / × → 는 다음과 같다. element of a parabolic subgroup, the closure of O w is P/B for the appropriate parabolic P; this is smooth, and one can deduce that B w = B J. , a function to (called the group law of ) that combines any two elements and to form another element, denoted or . View element structure of group families | View other specific information about dihedral group The semidirect product is isomorphic to the dihedral group of order 6 if φ(0) is the identity and φ(1) is the non-trivial automorphism of C 3, which inverses the elements. Conjugacy Class Graphs of Dihedral and Permutation Groups 2 Feb 2011 Let $latex n \geq 1$ and let $latex D_{2n}$ be the dihedral group of order \$latex 2n. dihedral group d12 elements
| 2020-10-31 00:55:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336938619613647, "perplexity": 613.7155692441947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107912593.62/warc/CC-MAIN-20201031002758-20201031032758-00653.warc.gz"}
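The notes above mix the two naming conventions; fixing the order-12 convention (so D12 is the symmetry group of the regular hexagon, n = 6), the elements, their orders, and the center can be enumerated directly. A minimal sketch; the (k, f) encoding of r^k s^f and all names are my own:

```python
from math import gcd

def dihedral(n):
    """Elements of the dihedral group of a regular n-gon, encoded as (k, f):
    rotation by 2*pi*k/n, followed by a reflection iff f == 1."""
    return [(k, f) for f in (0, 1) for k in range(n)]

def mul(a, b, n):
    """Compose symmetries: (k1,f1)*(k2,f2) = ((k1 + (-1)**f1 * k2) mod n, f1 xor f2)."""
    (k1, f1), (k2, f2) = a, b
    return ((k1 + (-1) ** f1 * k2) % n, f1 ^ f2)

def order(g, n):
    """Smallest m >= 1 with g**m equal to the identity (0, 0)."""
    k, f = g
    if f == 1:
        return 2                      # every reflection is an involution
    return 1 if k == 0 else n // gcd(n, k)

n = 6                                  # hexagon -> 2n = 12 elements
D12 = dihedral(n)
orders = sorted(order(g, n) for g in D12)
center = [g for g in D12 if all(mul(g, h, n) == mul(h, g, n) for h in D12)]

print(len(D12))    # 12
print(orders)      # [1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 6, 6]
print(center)      # [(0, 0), (3, 0)] -- identity and the 180-degree rotation
```

This matches the facts collected above: seven involutions (six reflections plus the half-turn), a normal cyclic rotation subgroup containing the Sylow 3-subgroup {1, x^2, x^4}, and a center of order 2 generated by the 180-degree rotation.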
https://gmatclub.com/forum/in-a-certain-classroom-there-are-80-books-of-which-24-are-95568.html |
# In a certain classroom, there are 80 books, of which 24 are
Manager
Status: Its Wow or Never
Joined: 11 Dec 2009
Posts: 181
Location: India
Concentration: Technology, Strategy
GMAT 1: 670 Q47 V35
GMAT 2: 710 Q48 V40
WE: Information Technology (Computer Software)
In a certain classroom, there are 80 books, of which 24 are [#permalink]
09 Jun 2010, 12:00
Difficulty: 25% (medium)
Question Stats: 80% (01:57) correct 20% (01:46) wrong based on 314 sessions
In a certain classroom, there are 80 books, of which 24 are fiction and 23 are written in Spanish. How many of the fiction books are written in Spanish?
(1) Of the fiction books, there are 6 more that are not written in Spanish than are written in Spanish.
(2) Of the books written in Spanish, there are 5 more nonfiction books than fiction books.
Official Guide 12 Question
Question: 47 Page: 26 Difficulty: 700
OPEN DISCUSSION OF THIS QUESTION IS HERE: https://gmatclub.com/forum/in-a-certain ... 35831.html
_________________
---------------------------------------------------------------------------------------
If you think you can, you can
If you think you can't, you are right.
Math Expert
Joined: 02 Sep 2009
Posts: 49430
Re: In a certain classroom, there are 80 books, of which 24 are [#permalink]
09 Jun 2010, 16:17
In a certain classroom, there are 80 books, of which 24 are fiction and 23 are written in Spanish. How many of the fiction books are written in Spanish?
Given:
Attachment: Fiction1.JPG
(1) Of the fiction books, there are 6 more that are not written in Spanish than are written in Spanish.
Attachment: Fiction2.JPG
So, $$x+x+6=24$$ --> $$x=9$$. Sufficient.
(2) Of the books written in Spanish, there are 5 more nonfiction books than fiction books
Attachment: Fiction3.JPG
So, $$x+x+5=23$$ --> $$x=9$$. Sufficient.
Answer: D. (Total # of books is redundant information).
Hope it helps.
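The two one-line equations above can also be checked by brute force over every possible overlap; the variable names below are my own:

```python
fiction, spanish = 24, 23

# x = number of fiction books written in Spanish.
candidates = range(min(fiction, spanish) + 1)

# Statement (1): fiction not in Spanish exceeds fiction in Spanish by 6.
s1 = [x for x in candidates if (fiction - x) - x == 6]

# Statement (2): Spanish nonfiction exceeds Spanish fiction by 5.
s2 = [x for x in candidates if (spanish - x) - x == 5]

print(s1, s2)  # [9] [9] -- each statement alone pins down x, hence answer D
```

As noted above, the total of 80 books is never needed.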
_________________
##### General Discussion
Math Expert
Joined: 02 Sep 2009
Posts: 49430
Re: In a certain classroom, there are 80 books, of which 24 are [#permalink]
28 Nov 2017, 02:46
mojorising800 wrote:
OPEN DISCUSSION OF THIS QUESTION IS HERE: https://gmatclub.com/forum/in-a-certain ... 35831.html
_________________
| 2018-09-24 20:09:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2237413078546524, "perplexity": 7136.04495949453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160641.81/warc/CC-MAIN-20180924185233-20180924205633-00308.warc.gz"}
https://scicomp.stackexchange.com/questions/35803/parallelisation-strategies-for-mixed-fe-formulations | Parallelisation strategies for mixed FE formulations
Mixed FE formulations with LBB-stable elements require two different meshes for the primary and the constraint variables, for example, displacement and pressure. With continuous approximation for the pressure field, I am finding it difficult to parallelise for distributed memory architectures.
I am interested in learning some commonly employed parallelisation strategies for such problems. I very much appreciate any useful resources on this topic.
Note that I use the PETSc library for solving the matrix system in my C++ code.
• Are the meshes for both fields the same, with just different polynomial degrees? Or, for example, do you use a triangular mesh for displacement and a rectangular mesh for pressure? Is it such a case? – Abdullah Ali Sivas Aug 24 '20 at 20:18
• The mesh is the same but with different orders of polynomials for different fields, like the Taylor-Hood elements, P2/P1 and Q2/Q1. – Chenna K Aug 24 '20 at 23:05
• Then there are many ways to handle it. Wolfgang Bangerth is one of the developers of deal.ii, so consider his answer and advice. But I am also very fond of the way MFEM handles it (mfem.org/performance) – Abdullah Ali Sivas Aug 24 '20 at 23:20
• Thank you for the link! MFEM is a great library. I will go through the documentation. – Chenna K Aug 24 '20 at 23:27
It's a misunderstanding that you need two different meshes: The proper way to see things is that you are using the same mesh, but different polynomial spaces for the two variables. For example, for the Stokes equation, you'd have quadratic polynomials for the velocity $$\mathbf u$$ and linear polynomials for the pressure $$p$$.
Appropriate parallelization strategies are then to partition the mesh among processors. This also induces a partitioning of degrees of freedom, and consequently of those rows of the matrix (and vector elements) each processor stores. It's really no different than if you had a scalar problem.
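A toy sketch of that induced partitioning, on a 1-D mesh with a Taylor-Hood-like layout (P2 velocity: vertex plus midpoint DOFs; P1 pressure: vertex DOFs). The ownership convention and all names are illustrative assumptions, not PETSc API:

```python
# Toy 1-D mesh: ncells cells, vertices 0..ncells; P2 adds one midpoint DOF per cell.
ncells, nranks = 8, 2
cell_owner = {c: c * nranks // ncells for c in range(ncells)}  # contiguous cell blocks

# A vertex shared by two cells goes to the lower-ranked adjacent cell (a common convention).
def vertex_owner(v):
    adjacent = [c for c in (v - 1, v) if 0 <= c < ncells]
    return min(cell_owner[c] for c in adjacent)

# P1 pressure DOFs live on vertices; P2 velocity DOFs on vertices and cell midpoints.
pressure_owner = {("vtx", v): vertex_owner(v) for v in range(ncells + 1)}
velocity_owner = dict(pressure_owner)
velocity_owner.update({("mid", c): cell_owner[c] for c in range(ncells)})

# Every velocity DOF that coincides with a pressure DOF has the same owner:
shared = all(velocity_owner[d] == pressure_owner[d] for d in pressure_owner)
print(shared)  # True: both fields inherit one and the same mesh partition
```

Because ownership is decided once, at the mesh level, the velocity and pressure DOFs sitting on the same vertex always land on the same process, exactly as for a scalar problem.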
• I was thinking the same, but then there are inf-sup stable finite elements where the velocity mesh is simplicial and the pressure mesh is tensor-product. I wonder if it is such a case. – Abdullah Ali Sivas Aug 24 '20 at 20:17
• Thank you @Wolfgang! I agree that we don't need two different meshes. With a single mesh, one can get processor IDs for each element and node in the mesh using METIS. However, processor id colouring is available only for either displacement nodes or pressure nodes. The straightforward technique would be to use the same colouring for pressure nodes as that of displacement nodes. This is what I am currently implementing. – Chenna K Aug 24 '20 at 22:55
• @ChennaK: Well, you should color cells and then infer the color of nodes based on that of the cells. Then you have the same partitioning for pressures and velocities. – Wolfgang Bangerth Aug 24 '20 at 23:00
• Regarding partitioning of the matrix: in the serial version of the code, I store displacement DOFs first and then pressure DOFs, so that I have the 2x2 block matrix format. Partitioning such a matrix across processors would be cumbersome and would lead to a complex and inefficient communication pattern. I guess I need to change the arrangement of the matrix so that the displacement DOFs on each processor are followed immediately by the pressure DOFs on the same processor. I would like to know if there are any other efficient ways of implementing this. – Chenna K Aug 24 '20 at 23:01
• @AbdullahAliSivas Right, one can concoct difficult choices of elements where the two variables are discretized in completely different ways (say, a global Fourier basis for $\mathbf u$ and a piecewise polynomial basis for $p$). But I don't think the OP was asking about such cases. – Wolfgang Bangerth Aug 24 '20 at 23:01
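The matrix-arrangement concern raised in the comments (a global [all u; all p] ordering vs. per-process blocks) can be sketched with a toy renumbering; the helper and data are illustrative, not PETSc's actual index-set machinery:

```python
# Global field-major numbering stores all displacement DOFs, then all pressure DOFs.
# For distributed storage it is usually better to renumber so each rank owns one
# contiguous range [its u-DOFs, then its p-DOFs], matching a row-wise matrix split.

def rank_contiguous_numbering(u_owner, p_owner, nranks):
    """Map (field, old_index) -> new global index, contiguous per rank."""
    new_id, counter = {}, 0
    for r in range(nranks):
        for field, owner in (("u", u_owner), ("p", p_owner)):
            for i, o in enumerate(owner):
                if o == r:
                    new_id[(field, i)] = counter
                    counter += 1
    return new_id

u_owner = [0, 0, 0, 1, 1, 1]   # owner rank of each displacement DOF (toy data)
p_owner = [0, 0, 1, 1]         # owner rank of each pressure DOF
numbering = rank_contiguous_numbering(u_owner, p_owner, 2)
# Rank 0 owns rows 0..4 (three u + two p), rank 1 owns rows 5..9.
print(numbering[("p", 0)], numbering[("u", 3)])  # 3 5
```

With such a numbering each rank owns one contiguous row range of the global matrix, which matches PETSc's default row distribution; the 2x2 block (fieldsplit) view can still be recovered via index sets.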
You do not want to have two or more different meshes, each partitioned differently; that would require massive communication. Degrees of freedom from multiple fields should be as close as possible to each other in the adjacency tree, to keep interprocessor communication to a minimum.
You have one mesh, but you have DOFs associated with different entities: for example, for an H1 space with piecewise-linear continuous elements, DOFs are on nodes, whereas for an L2 space with piecewise-linear discontinuous elements, DOFs are on cells. That is the simple case; for vectorial spaces, like H-div or H-curl, things are a bit more complicated. And for a hierarchical space, for example, you can have DOFs on vertices, edges, faces, and cells.
So you partition cells. Sub-entities, i.e. nodes, edges, and faces on the skin of a partition, are shared. DOFs on shared entities are typically owned by the partition with the lower rank; on the other partitions, DOFs on shared entities are so-called ghost DOFs. You can create a special vector with ghost DOFs; PETSc provides such vectors.
To partition cells, you need to build a graph; then you can use METIS or ParMETIS to partition it. There are many strategies for partitioning the graph, and the graph itself can be built in different ways. You can do it from the numbering of cells, building an adjacency matrix by finding neighbour cells through a bridge entity. The bridge entity can be a node, an edge, or a face. For classical FEM you would use a vertex as the bridge adjacency entity. For an H-div/L2 formulation the bridge adjacency entity should be a face, since for the H-div space DOFs are on faces (and volumes). When you are using an H-curl space, the bridge adjacency entity will be an edge. For discontinuous Petrov-Galerkin, the bridge adjacency entity will be a face, since DOFs are on the skeleton.
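The graph construction through a bridge entity can be sketched for the classical case (vertex as bridge) on a tiny 2x2 quad mesh; the mesh data below is made up for illustration:

```python
from collections import defaultdict

# Cells of a small 2x2 quad mesh, each listed by its vertex ids (toy data).
cells = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]

# Build cell-to-cell adjacency through a "bridge" entity, here a vertex,
# as the answer suggests for classical H1 FEM.
by_vertex = defaultdict(set)
for c, verts in enumerate(cells):
    for v in verts:
        by_vertex[v].add(c)

graph = defaultdict(set)
for v, cs in by_vertex.items():
    for a in cs:
        graph[a] |= cs - {a}

# This adjacency structure is what one would hand to METIS/ParMETIS,
# optionally with per-cell weights for heterogeneous approximation orders.
print(sorted(graph[0]))  # [1, 2, 3]: cell 0 touches every other cell via vertex 4
```

Swapping the bridge entity (faces for H-div/L2, edges for H-curl) only changes which entity-to-cell lists are collected; the rest of the construction is identical.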
Moreover, each cell can have a weight if you have a heterogeneous order of approximation. That is needed for load balancing, to distribute work equally among processors.
In the end, there are many solutions, many strategies.
But why do it yourself? I can point you to FEM code which does it all for you.
• Thank you @likask! The elements (Bezier elements) I am using are not yet available in any FEM package. (Please correct me if I am wrong about this.) So, I have to do it myself. I use METIS for mesh partitioning in my CFD/FSI code, but there I use pressure stabilisation, so there are no issues with book-keeping. Please point me to the FEM code concerned with the partitioning and DOF numbering for mixed elements. Thanks! – Chenna K Aug 24 '20 at 23:14
• You can see a parallel mixed problem here: mofem.eng.gla.ac.uk/mofem/html/mix_transport.html I can point you to other examples which use a Bernstein-Bezier basis, and to other coupled problems. For another case, see this problem with three fields and two element types: mofem.eng.gla.ac.uk/mofem/html/cell_forces_8cpp-example.html I developed the code, so I am not fully objective. – likask Aug 25 '20 at 14:12
• Thank you very much for the links! I am going to check these resources right away. – Chenna K Aug 25 '20 at 15:01 | 2021-06-23 00:34:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5333837270736694, "perplexity": 1116.3652550798215}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00253.warc.gz"} |
https://livrepository.liverpool.ac.uk/3082113/ | Structure of high-lying levels populated in the $^{96}$Y $\rightarrow ^{96}$Zr $β$ decay
Mashtakov, KR, Ponomarev, V Yu, Scheck, M, Finch, SW, Isaak, J, Zweidinger, M, Agar, O, Bathia, C, Beck, T, Beller, J
et al. (25 more authors) Structure of high-lying levels populated in the $^{96}$Y $\rightarrow ^{96}$Zr $\beta$ decay.
The nature of $J^{\pi}=1^-$ levels of $^{96}$Zr below the $\beta$-decay $Q_{\beta}$ value of $^{96}$Y has been investigated in high-resolution $\gamma$-ray spectroscopy following the $\beta$ decay as well as in a campaign of inelastic photon scattering experiments. Branching ratios extracted from $\beta$ decay allow the absolute $E1$ excitation strength to be determined for levels populated in both reactions. The combined data represents a comprehensive approach to the wavefunction of $1^-$ levels below the $Q_{\beta}$ value, which are investigated in the theoretical approach of the Quasiparticle Phonon Model. This study clarifies the nuclear structure properties associated with the enhanced population of high-lying levels in the $^{96}$Y$_{gs}$ $\beta$ decay, one of the three most important contributors to the high-energy reactor antineutrino spectrum. | 2020-11-23 22:28:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7211332321166992, "perplexity": 4738.671361092666}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141168074.3/warc/CC-MAIN-20201123211528-20201124001528-00647.warc.gz"} |
https://math.stackexchange.com/questions/2757895/is-frac00-indeterminate-or-undefined | # is $\frac{0}{0}$ indeterminate or undefined? [duplicate]
I know that in calculus the form $\frac{0}{0}$ is indeterminate. But outside of calculus, is it indeterminate or undefined in the real number field?
P.S. I know that $\frac{1}{0}$ is undefined, whether in calculus or ordinary arithmetic in the real number system. But in the projectively extended real number system, $\frac{1}{0}=\infty$; in that system, what would $\frac{0}{0}$ be? It becomes $0\cdot\infty$, also an indeterminate form. So in that system the problem of $\frac{1}{0}$ is solved, but $\frac{0}{0}$ remains.
P.P.S. Some answers to the question that has been identified as a duplicate of mine state that $\frac{0}{0}$ is indeterminate, but the Wikipedia page on division by zero states it is undefined. That question was not asked to clarify this ambiguity (indeterminate vs. undefined), and there is no clear answer to my problem there.
## marked as duplicate by GNUSupporter 8964民主女神 地下教會, copper.hat, José Carlos Santos, Delta-u, mathreadler Apr 28 '18 at 22:50
• idk exactly what it is but i think it is a matter of convenience as both are usually avoided in computations – The Integrator Apr 28 '18 at 19:31
• @GNUSupporter It does not answers to my problem – thomson Apr 28 '18 at 19:52
• Why this question having eleven answers doesn't answer your question? Your question is "is $\frac00$ indeterminate or undefined". Lehs' answer suggests "In the ordinary number systems division by zero is undefined.", and Bram28's answer suggests that "we say that $\frac00$ is 'indeterminate'" – GNUSupporter 8964民主女神 地下教會 Apr 28 '18 at 20:09
• How do you distinguish indeterminate from undefined? – copper.hat Apr 28 '18 at 21:16
• I think you are splitting hairs on a red herring... – copper.hat Apr 28 '18 at 21:56
Unlike $0\cdot\infty$, which is 0 if it is a zero measure times an infinite value (or a zero value times an infinite measure) but otherwise not defined, one never attributes a value to $0/0$. It's BOTH indeterminate AND undefined.
To be specific it's indeterminate because if you tried to give a meaning to it, you could justify anything. It's undefined because (unlike with $0\cdot\infty$) one doesn't define it by convention either.
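The "you could justify anything" point is the standard limit argument; one common illustration (my own wording, consistent with the answer):

```latex
% Three limits, each formally of the shape 0/0, with three different values:
\[
  \lim_{x\to 0}\frac{x}{x}=1, \qquad
  \lim_{x\to 0}\frac{2x}{x}=2, \qquad
  \lim_{x\to 0}\frac{x^{2}}{x}=0.
\]
% No single value assigned to 0/0 is compatible with all three, which is
% why the form is called indeterminate rather than given a conventional value.
```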
• can you give me some recommendation to further reading? and what is your source? – thomson Apr 28 '18 at 19:52
• For $0\cdot\infty=0$ in a measure theory context, any good book on real analysis using measure theory will cover that, for example Rudin's Real and Complex Analysis. As for never giving a value to $0/0$, I have a lot of mathematical experience and I've never seen $0/0$ or $\infty - \infty$ given values, whereas it is occasionally useful to define other expressions one doesn't usually define like $0\cdot\infty=0$ in a measure theory context or $0^0=1$ in a power series or polynomial context or $1/0=\infty$ in the case you cited. – C Monsour Apr 28 '18 at 21:01 | 2019-09-18 21:43:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44962969422340393, "perplexity": 654.1579497438479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573368.43/warc/CC-MAIN-20190918213931-20190918235931-00063.warc.gz"} |
https://tex.stackexchange.com/tags/crop/hot | # Tag Info
255
You can crop your image with graphicx:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
% trim from left edge
\includegraphics[trim={5cm 0 0 0},clip]{example-image-a}
% trim from right edge
\includegraphics[trim={0 0 5cm 0},clip]{example-image-a}
\end{document}

Use the trim option, which takes four space separated values. trim={<left>...
79
You can crop/trim a pdf when including it, using trim=left bottom right top. Full example:

\begin{figure}[htbp]
\centering
\includegraphics[clip, trim=0.5cm 11cm 0.5cm 11cm, width=1.00\textwidth]{gfx/BI-yourfile.pdf}
\caption{Title}
\label{fig:somthing}
\end{figure}

Note: Figuring out how far to trim can take time. To speed things ...
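Since figuring out how far to trim can take time, here is a small helper (my own, not from the thread) that converts a keep-box measured from the page's lower-left corner into the trim=<left> <bottom> <right> <top> amounts:

```python
def trim_values(page_w, page_h, box_left, box_bottom, box_right, box_top):
    """All lengths in the same unit (e.g. cm).

    (box_left, box_bottom)-(box_right, box_top) is the region to KEEP,
    measured from the lower-left corner of the page. Returns the four
    amounts to cut, in trim's order: left, bottom, right, top."""
    return (box_left, box_bottom, page_w - box_right, page_h - box_top)

# Keep the central 20cm x 7.7cm band of an A4 page (21cm x 29.7cm):
l, b, r, t = trim_values(21.0, 29.7, 0.5, 11.0, 20.5, 18.7)
print(f"trim={l}cm {b}cm {r}cm {t}cm, clip")  # trim=0.5cm 11.0cm 0.5cm 11.0cm, clip
```

The example reproduces the trim=0.5cm 11cm 0.5cm 11cm values used in the answer above for an A4 page.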
30
You can use the border=<len> class option to increase the width of the border around the cropped output: \documentclass[border=1pt]{standalone}% http://ctan.org/pkg/standalone \begin{document} $2^5 = x_5 \times y^8$ \end{document} As in your case, the border around the image is not present in the PDF, it's from my screen capture. It is also possible ...
24
To clip 50% of the right of your image without using extra packages, you can use a savebox to measure the natural size of the image first. This only requires the graphicx package, which is part of LaTeX itself and always installed. Note that all scaling/resizing is applied after the trimming. If you want the original image to be scaled to 5cm width and then 50% ...
19
If the coordinates of the valuable parts in your PDF images are fixed, then the following method can be automated. Use the following template to trim or crop images and compile it with xelatex. You will get 2 pages, one for navigation and the other one the cropped image.

% cropping.tex
\documentclass{article}
\usepackage{pstricks}
\usepackage{graphicx}...
17
To be able to crop a vector graphic reliably you must "print" it to see where the black dots are. "Printing" always involves a resolution: the black dots must have a positive size. pdfcrop uses the bbox device of ghostscript. According to the documentation of ghostscript the default resolution of this device is 4000 dpi. You can change this resolution ...
15
You do not need a cropped copy made with an external program; just add some options to \includegraphics. This MWE shows the same image twice (renamed to image.png), with and without the useless background. Both images are inside a framed box to show the edges:

\documentclass{article}
\usepackage{graphicx}
\begin{document}
\section*{Original image}
\fbox{\...
14
You have a couple of options with this "problem". You can use an external approach and trim the whitespace around the image. pdfcrop is capable of doing this and uses the following interface pdfcrop [options] <input[.pdf]> [output file] where [ ] denotes optional specifications. If your Dia-exported PDF image, (say) image.pdf, consists of entire ...
14
\includepdf accepts all the options that \includegraphics does; so trim=left bottom right top,clip should work (unless you're using XeLaTeX that, currently, doesn't support cropping). You have to compute the four dimensions, but probably left and right should be 0, while top and bottom depend on the document you have. Example (from your comment) \...
11
(Summarising the comments as an answer.) pdfTeX will work with px units, but you have to set these up appropriately using \pdfpxdimen. This is the physical width of one pixel, and has default value of 1 bp, meaning that images initially are assumed to be 72 dpi. \pdfpxdimen is a low-level dimen primitive, and so is best set using \dimexpr: \pdfpxdimen=\...
11
It is clearly a bug in the driver for package graphicx: pdftex.def: ok. dvips.def: ok for PostScript images, but clipping is not supported for bitmap images. xetex.def: Clipping is not supported at all. dvipdfm.def: The image is not trimmed, but distorted in the final area. dvipdfmx.def: The whole image is put in the final area without distortion, but empty ...
10
You can do this with pdfpages. The following example takes a two-up scan of a book, and crops and collates it to a one-up document.

\documentclass[letterpaper]{minimal}
\usepackage[pdftex,letterpaper]{geometry}
\usepackage{pdfpages}
\usepackage{ifthen}
\newcounter{pg}
\begin{document}
\setcounter{pg}{1}
%% my pdf file has 132 pages:
%% my pdf file has size ...
10
Since you'd like to "...set margins from each direction, and the PDF is then cropped accordingly..." in order to "...control what portion of the included PDF is visible in the final document..." I would suggest you try Briss. It's easy to use and gives you much more control than pdfcrop.
10
In the terminology of the geometry package, the paper<*> parameters refer to the physical size of the sheets of paper the document will be printed on. The layout<*> parameters, by contrast, refer to the logical size of the "paper", which will (one hopes) be no larger than the physical size of the sheet of paper. (Put differently, with regard to ...
9
There are two questions here: 1) How to specify the page size. This is best done with the geometry package, which has good documentation. The key idea is that you only specify as many parameters as necessary, and geometry fills in the rest. For example, you could say \usepackage[twoside,papersize={7in,10in},margin=1in]{geometry} and have the text width ...
9
\documentclass[dvipsnames,10pt]{article}
\usepackage[a6paper, margin=5mm]{geometry}
\usepackage[table]{xcolor}
\usepackage{longtable,array}
\usepackage{marvosym}
\usepackage{graphicx}
\pagestyle{empty}
\begin{document}
\centering\sf
\rowcolors{2}{Green!10}{Yellow!10}
\begin{longtable}{*{4}{>{\scriptsize}c}}
\rowcolor{Gray!20}
\raisebox{-0.5pt}{\...
9
To expand on Ulrike's excellent answer: The reason this is only guaranteed after rendering the font to an actual pixel map is that there is, in principle, no obligatory relation between a glyph's ink and its bounding box. Here's an example; a lower-case 'm' from URW Nimbus Sans: The left and right sidebearings (the space between the ink and the bounding ...
9
You should use the crop and trim options for the \includegraphics command. Example:

\includegraphics[trim = 10mm 80mm 20mm 5mm, clip, width=3cm]{image}

The syntax for trim is trim=left bottom right top, where each value is the amount cropped from the corresponding side. So, including only the right half would look like:

\newlength{\imagewidth}
\settowidth{\...
9
Package adjustbox Your preamble already contains package adjustbox. It provides the features you need for trimming and clipping I am not sure, which spacing is needed around the image. The following example lets the image behave as it would have the height of the upper case letter H and the depth of g. \documentclass[12pt,a4paper]{article} \usepackage{...
9
Something like this could be easily done in Lua. The code below not only prints only the first five items from the list but also shuffles the items so that the order of the questions is randomized. \starttext \startluacode -- https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle local function shuffle(list) for i = #list,2,-1 do local j =...
8
Export_fig does all the work automatically, including margins and pdf creation. The original Matlab save result is shown first; the export_fig result (picture size is the same) second. Note: this command is an improvement on the savefig command.
8
Markings are usually shown in a way such that they do not show when the page is cut. One way to see what is happening is to set the print paper size to A3. For example,

\setuppapersize[A5,landscape][A3]
\setuparranging[2TOP]
\setuplayout
  [topspace=1cm,
   backspace=1cm,
   header=0mm,
   footer=0mm,
   width=middle,
   height=middle,
   marking=on, ...
8
The viewport key of graphicx can also be used to simulate trimming or cropping. viewport has 4 space-separated length arguments: left bottom right top. The remaining code should be self-explanatory. \documentclass{article} \def\FirstScale{0.5}% scale for loading \def\SecondScale{1}% scale for final \def\FileName{example-image-a}% file name \usepackage{...
8
You can use the standalone document class for this. Note that the standalone bundle offers a lot more than just cropping (as detailed in the documentation). In particular, rather than including a cropped pdf in your file, you can \input the standalone file into the mainfile using (in your example) \documentclass{article} \usepackage{pgfplots} % need to ...
8
With tikz: Also with a more condensed font: Code: \documentclass{article} \usepackage{amsmath} \usepackage{graphicx} \usepackage{tikz} \usetikzlibrary{calc} \newcommand{\VAdjust}{0,-0.15ex}% \begin{document} \begin{tikzpicture} \node (A) {$\scalebox{1.5}{128} \sqrt{\text{e}\scalebox{1.5}{980}}$}; \draw [fill=gray!10, fill opacity=0.9, draw =none] (...
8
Sometimes the grid in the corner is to far away, when cropping to axis labels. So I made an update, where I add rectangles over the whole image. And I add an optional parameter to control how deep the rectangles are drawn. The thick lines have a distance of 10mm, the thin ones of 2mm. This is independent of the image if no width or height argument is passed ...
8
The extra whitespace you see is because a paragraph with the width \linewidth is created here. You can use the {varwidth}{<max width>} environment from the varwidth package to reduce the paragraph width to its minimum. This uses a minipage environment internally. \documentclass{article} \usepackage{varwidth} \usepackage[active,tightpage]{preview} \...
8
The shortcut convenience commands such as \vtwotone create frames starting from the page border. If you want them to bleed you have to define the frames using \newstaticframe and take the bleed into account. I don't like using internal commands in an answer, but I can't find anywhere in the crop documentation how to access the bleed dimension so I've had to ...
8
The \clipbox command from the trimclip package does this. \documentclass{article} \usepackage{trimclip} \newcommand{\crop}[1]{\clipbox*{0pt -.5ex {\width} .7ex}{#1}} \begin{document} \crop{ABCDefgh$\Omega$} \end{document} The starred version of \clipbox leaves visible everything within the specified coordinates. To crop more, decrease the .7ex. To get ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2019-08-19 20:51:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8965936899185181, "perplexity": 3376.518659580425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314959.58/warc/CC-MAIN-20190819201207-20190819223207-00268.warc.gz"} |
https://www.groundai.com/project/the-inner-wind-of-irc10216-revisited-new-exotic-chemistry-and-diagnostic-for-dust-condensation-in-carbon-stars/
# The inner wind of IRC+10216 revisited: New exotic chemistry and diagnostic for dust condensation in carbon stars
I. Cherchneff Departement Physik, Universität Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland
email: isabelle.cherchneff@unibas.ch
Received November 29, 2011; accepted April 26, 2012
###### Key words:
Stars: Carbon – Astrochemistry – Stars: AGB and post-AGB – Molecular processes
###### Abstract
Context:
Aims:We model the chemistry of the inner wind of the carbon star IRC+10216 and consider the effects of periodic shocks induced by the stellar pulsation on the gas to follow the non-equilibrium chemistry in the shocked gas layers. We consider a very complete set of chemical families, including hydrocarbons and aromatics, hydrides, halogens, and phosphorus-bearing species. Our derived abundances are compared to those derived from the latest observational data from large surveys and the Herschel telescope.
Methods:A semi-analytical formalism based on parameterised fluid equations is used to describe the gas density, velocity, and temperature from 1 R* to 5 R*. The chemistry is described by a chemical kinetic network of reactions, and a set of stiff, coupled, ordinary differential equations is solved.
Results:The shocks induce an active non-equilibrium chemistry in the dust formation zone of IRC+10216, where the collisional destruction of CO in the post-shock gas triggers the formation of O-bearing species such as H2O and SiO. Most of the modelled molecular abundances agree very well with the latest values derived from Herschel data on IRC+10216. The hydrides form a family of abundant species that are expelled into the intermediate envelope. In particular, HF traps all the atomic fluorine in the dust formation zone. The halogens are also abundant, and their chemistry is independent of the C/O ratio of the star. Therefore, HCl and other Cl-bearing species should also be present in the inner wind of O-rich AGB or supergiant stars. We identify a specific region ranging from 2.5 R* to 4 R* where polycyclic aromatic hydrocarbons form and grow. The estimated carbon dust-to-gas mass ratio derived from the mass of aromatics formed agrees well with existing values deduced from observations. This aromatic formation region is situated outside the hot layers where SiC2 is produced as a by-product of silicon carbide dust synthesis. MgS grains can form from the gas phase, but in lower quantities than those necessary to reproduce the strength of the 30 μm emission band. Finally, we predict that some molecular lines will show a flux variation with pulsation phase and time (e.g., H2O), while other species will not (e.g., CO). These variations merely reflect the non-equilibrium chemistry that destroys and reforms molecules over a pulsation period in the shocked gas of the dust formation zone.
Conclusions:
## 1 Introduction
In their late stages of evolution, low-mass stars (i.e., stars with initial masses on the Zero-Age Main Sequence between 1 and 8 M⊙) ascend the Asymptotic Giant Branch (AGB) and develop cool and strong stellar winds characterised by a great variety of chemical species in the outflow, detected through their ro-vibrational transitions (Olofsson 2008). With the launch of the submillimetre (submm) Herschel telescope and the beginning of science operations of the Atacama Large Millimetre Array (ALMA), our knowledge of the chemical composition of AGB winds is bound to increase dramatically with the discovery and identification of many new molecules. The wind of an AGB star develops in the dense and hot gas layers above the stellar photosphere, triggered by the formation of dust. To a first approximation, the chemical composition of the photosphere is determined by thermodynamic equilibrium (TE) owing to the high temperatures and densities of the gas (Tsuji 1973, McCabe et al. 1979). Carbon monoxide, CO, which is a very stable species, forms under TE conditions after the production of molecular hydrogen, H2. If the stellar photosphere is oxygen-rich, the excess oxygen not locked up in CO drives a wind chemistry dominated by oxygen-bearing molecules such as water, H2O, and silicon monoxide, SiO, and the dust forming in the wind includes silicates (e.g., forsterite Mg2SiO4) and metal oxides (e.g., alumina Al2O3). Conversely, if the star has experienced third dredge-up in the upper part of its AGB ascension, its photosphere may become carbon-rich. The chemistry of the wind then reflects the excess carbon not locked up in CO and is rich in C-bearing species such as acetylene, C2H2, and hydrogen cyanide, HCN. These stars form large amounts of carbon dust close to their photosphere. This simple picture was seriously questioned by the detection at millimetre (mm) wavelengths of HCN in O-rich AGB stars (Deguchi & Goldsmith 1985). In carbon stars, SiO was observed (Olofsson et al. 
1982), but because TE models of carbon stars predicted its formation, it did not come as a surprise to observe this O-bearing species in carbon-rich environments. However, the observed abundances were much higher than those derived from TE. Several mechanisms were proposed to explain these unexpected species, including an ion-molecule chemistry in the outer part of the stellar wind, which experiences the penetration of the ultraviolet (UV) interstellar radiation field and cosmic rays (Nejad & Millar 1982). The later detection by the Infrared Space Observatory, ISO, of hot vibrational transitions of carbon dioxide, CO2, in the cool supergiant NML Cyg (Justtanont et al. 1996) indicated that carbon-bearing species could form in the deep layers of O-rich stellar winds. The detection of water, H2O, with the SWAS satellite at submm wavelengths by Melnick et al. (2001) and that of hydroxyl, OH, by Ford et al. (2004) in the carbon star IRC+10216 provided additional evidence of the complex chemistry of AGB outflows, where comets (Melnick et al. 2001), grain-surface chemistry (Willacy 2004), or UV photodissociation in the intermediate envelope (Agúndez & Cernicharo 2006) were proposed as possible sources of water.
However, the possibility that these unexpected species could form by means of some non-equilibrium chemistry in the inner wind was soon found to be viable by surveys of specific species in large samples of objects, e.g., SiO (Schöier et al. 2006). Just above the photosphere, the gas experiences the passage of shocks triggered by the pulsation of the star, and these shocked gas layers are the locus of dust formation (Bowen 1988). These dense molecular gas layers have been detected by both observations with ISO (Tsuji et al. 1997) and near-infrared (IR) interferometry (Perrin et al. 2004). A semi-analytical model for the physics of these shocked regions, based on the work of Fox & Wood (1985) and Bertschinger & Chevalier (1985), was proposed by Cherchneff (1996) assuming that the shock energy was dissipated in the immediate post-shock gas by the collisional dissociation of H2 and that subsequent cooling was provided by adiabatic expansion. Using a similar formalism for the inner wind of IRC+10216, Willacy & Cherchneff (1998) modelled the gas-phase chemistry and showed that the collisional destruction of CO in the shocks could release free atomic oxygen and trigger the formation of OH and SiO. The formation of HCN, CO2, and CS was later described in the inner wind of the O-rich Mira star IK Tau by Duari et al. (1999). These two studies therefore highlighted the importance of shock-induced non-equilibrium chemistry in the inner wind of AGB stars in unleashing the synthesis of molecules that were not expected to form under photospheric TE conditions. More generally, Cherchneff (2006) studied the inner wind composition as a function of the carbon-to-oxygen (C/O) ratio of the stellar photosphere and confirmed the formation of O-bearing species in carbon stars and that of C-bearing molecules in O-rich AGB stars as a result of the non-equilibrium chemistry induced by periodic shocks. 
Several observations of high-energy rotational transitions of HCN, SiS, CS, and SiO in AGB and supergiant stars at mm and submm wavelengths corroborated these results (e.g., Schöier et al. 2006, 2007, Ziurys et al. 2007, 2009, Decin et al. 2008).
A next step in understanding the complexity of the wind chemistry close to the stellar photosphere was achieved with Herschel and the confirmation of the widespread presence of water in the dust formation zone of several carbon stars (Decin et al. 2010b, Neufeld et al. 2010, 2011a, 2011b). Agúndez et al. (2011) proposed that the partial dissociation of CO and SiO by UV photons penetrating deep inside the clumpy stellar wind could produce free atomic oxygen and lead to the subsequent formation of H2O. This scenario would destroy other species in the deep layers and have some impact on the water isotopologue abundances, as discussed by Neufeld et al. (2011a). Neither effect has yet been tested observationally. Updating the inner wind model for IRC+10216, Cherchneff (2011a) showed that the non-equilibrium chemistry triggered by the shocks could form water in the dust formation zone in competition with the synthesis of SiO, with abundances in excellent agreement with those derived from observations.
In this study, we present a complete, updated chemical model of the inner wind of the carbon star IRC+10216, based on the non-equilibrium chemistry approach mentioned previously. The synthesis of classical molecules, including water, and of several new species is considered in the gas phase, comprising hydrides, chlorine and phosphorus compounds, hydrocarbons and aromatics, and gas-phase precursors to silicon carbide and metal-sulphide dust. The goal of such a study is to confirm the formation of already detected species and to predict new molecules potentially observable by means of their high-excitation transitions in the inner wind. In Section 2, we give a brief description of the physics of the shocked layers; the chemistry is discussed in Section 3, the results for the various families of species are presented in Section 4, and a discussion follows in Section 5.
## 2 Physics of the inner wind
The stellar parameters used in this study for IRC+10216 are listed in Table 1. The stellar photosphere and the outer atmospheric layers above it are assumed to be at TE with a solar elemental composition (Asplund et al. 2009), except for carbon, for which the C/O ratio was set to 1.4 (Winters et al. 1994). The shocks are assumed to form at a radius r_s with a velocity V_s. The TE calculations are run out to r_s, which is characterised by the temperature and number density listed in Table 1, and the TE abundances are used to characterise the unshocked gas. Since the pre-shock gas at r_s has a high molecular content, the impact of the passage of periodic shocks is modelled by considering a post-shock gas that cools via both 1) the collision-induced dissociation of H2 and 2) adiabatic expansion. The shock jump in density, temperature, and velocity is described by the Rankine-Hugoniot jump conditions applied to the pre-shock gas at r_s for a shock velocity V_s (Ridgway & Keady 1981). According to Fox & Wood (1985), radiative processes do not operate in cooling the post-shock gas owing to the modest shock strength, for which the energy loss is provided by the endothermic dissociation of H2 by collisions, initiated by the prevalent chemical reaction
H2 + H2 → H + H + H2.    (1)
Reaction 1 has a reaction rate k_diss (in cm^3 s^-1) that is a strong function of the gas temperature T, and operates over a length l_diss defined as
l_diss = τ_diss × v_gas = [1 / (k_diss × n(H2))] × [V_s / N_jump],    (2)
where n(H2) is the number density of H2 in the post-shock gas, k_diss is the rate of reaction 1, V_s is the shock velocity at r_s as given in Table 1, and N_jump is the Rankine-Hugoniot velocity shock jump.
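Equation 2 can be evaluated directly once a rate law for reaction 1 is chosen. The sketch below is a minimal illustration, not the paper's calculation: the Arrhenius prefactor and barrier in `k_diss`, and all input values, are illustrative placeholders rather than the entries of Tables 1 and 2.

```python
import math

def k_diss(T):
    """Illustrative Arrhenius rate (cm^3 s^-1) for H2 + H2 -> H + H + H2.
    Prefactor and barrier are placeholder values, not the paper's."""
    return 1.0e-9 * math.exp(-5.2e4 / T)

def l_diss(T, n_H2, V_s, N_jump):
    """Cooling length of Eq. 2: l_diss = tau_diss * v_gas, with
    tau_diss = 1 / (k_diss(T) * n(H2)) and v_gas = V_s / N_jump."""
    tau_diss = 1.0 / (k_diss(T) * n_H2)   # s, H2 destruction timescale
    v_gas = V_s / N_jump                  # cm s^-1, post-shock gas velocity
    return tau_diss * v_gas               # cm

# Placeholder post-shock conditions: T = 2e4 K, n(H2) = 1e12 cm^-3,
# V_s = 20 km s^-1 = 2e6 cm s^-1, velocity jump N_jump = 6.
print("l_diss = %.3e cm" % l_diss(2.0e4, 1.0e12, 2.0e6, 6.0))
```

The hotter and denser the immediate post-shock gas, the faster reaction 1 proceeds and the shorter l_diss, which is what confines the H2-dissociation cooling to a thin layer behind the shock front.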
Once this cooling has proceeded over l_diss, we assume that subsequent cooling occurs via adiabatic expansion. Following Bertschinger & Chevalier (1985), the fluid equations that describe the conservation of mass, momentum, and energy are parameterised and solved for the boundary conditions imposed by stellar gravity and the return of the shocked gas to its initial pre-shock position. Typical excursions of the gas layers over several pulsation periods are illustrated in Willacy & Cherchneff (1998) and Cherchneff (2011a) for IRC+10216. The pre-shock gas temperature and density profiles are derived as a function of radius using the formalism of Cherchneff et al. (1992), where the impact of shocks on the gas density is described by an extended scale-height formalism for the initial conditions listed in Table 1. The derived pre- and post-shock gas parameters at various radii are those assumed in Cherchneff (2011a) and listed in Table 2.
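The Rankine-Hugoniot jump conditions mentioned above can be sketched for an ideal gas. This is a minimal illustration using the textbook adiabatic relations with an assumed γ = 7/5; it is not the paper's full treatment, which couples the jump to H2-dissociation cooling.

```python
def rankine_hugoniot(M, gamma=7.0 / 5.0):
    """Ideal-gas Rankine-Hugoniot jump ratios (post/pre) across an
    adiabatic shock of Mach number M: density, pressure, temperature."""
    rho = ((gamma + 1.0) * M**2) / ((gamma - 1.0) * M**2 + 2.0)   # density jump
    p = (2.0 * gamma * M**2 - (gamma - 1.0)) / (gamma + 1.0)      # pressure jump
    T = p / rho                                                    # temperature jump
    return rho, p, T

# Illustrative Mach-5 shock:
rho2, p2, T2 = rankine_hugoniot(5.0)
print("density x%.2f, pressure x%.2f, temperature x%.2f" % (rho2, p2, T2))
```

By mass conservation, the velocity jump in the shock frame is the inverse of the density ratio, which is how a jump factor such as N_jump in Eq. 2 relates to these ratios.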
## 3 Chemistry of the inner wind
New families of chemical species are considered along with the classical molecules already studied in previous models (Cherchneff 2006, 2011a). They include hydrides, halogens, in particular chlorine- and fluorine-bearing species, and finally phosphorus-bearing species. Small gas-phase molecular precursors to silicon carbide and metal-sulphide grains are also included. All species are listed in Table 3, where the largest molecule is the aromatic ring of benzene, C6H6. For metal hydrides, there is no available information on the chemical reaction rates of the formation processes of several of them, namely FeH, MgH, NaH, KH, and PH, at the high temperatures characterising the inner wind. We thus assume rates similar to those of documented reactions, using the principle of isovalence as a guideline. Generally, the dominant formation pathway consists of the reaction of the metal with H2, with a typical rate characterised by an Arrhenius factor ranging from 10 to 10 cm^3 s^-1, an activation energy barrier of a few 1000 K, and a mild temperature dependence (e.g., Cohen & Westberg 1983).
The chemistry of the halogens, including chlorine- and fluorine-bearing species, is rather well-documented at high and intermediate temperatures. However, this is not the case for phosphorus chemistry. Owing to the isovalence of phosphorus, P, with nitrogen, N, we assume for the P-bearing species chemical processes similar to those involving N, for which the rates are measured or calculated. Finally, the chemistry of silicon, Si, and sulphur, S, is poorly known and studied. For these elements, we restrict the chemical processes to the set of reactions for which reaction rates are documented.
All chemical pathways that lead to the formation of linear molecules, carbon chains, and aromatic rings include neutral-neutral processes such as termolecular, bimolecular, and radiative association reactions, whereas destruction is described by thermal fragmentation and neutral-neutral processes (i.e., oxidation reactions of hydrocarbons and all reverse processes of the formation reactions). No ions are considered in this chemistry because the UV stellar radiation field of IRC+10216 is too weak to foster the efficient photodissociation and ionisation of molecules. In total, 63 species were considered in a chemical network of 377 reactions. Details of all these processes are provided in the online Appendix 1 available at the CDS, which gives the reactions included in the chemical network and their reaction rates. It contains the following information: Column 1 lists the reaction number, Column 2 gives the reactants and the products, Column 3 lists the A coefficient, Column 4 lists the n factor, Column 5 lists the activation energy in kelvin, and Column 6 gives the reference for the data. The major differences between the present chemical network and that used in previous studies (e.g., Willacy & Cherchneff 1998, Cherchneff 2006) are explained in detail in the Appendix of Cherchneff (2011a). In terms of formalism, three major changes are implemented. Firstly, the treatment of the reverse reaction of a specific process is changed. Several new rates have been measured in combustion and aerosol chemistry and are now available. The calculation of the reverse rate from the equilibrium constant assuming detailed balance (i.e., Equation 4 in Willacy & Cherchneff 1998) can lead to erroneous values when the gas temperature and density decrease. Therefore, we prefer to directly enter the available measured or calculated rate values in the chemical network. 
When the information is unavailable, we make 'educated' guesses depending on the type of the reaction and its endo-(exo)thermicity. This approach allows us to test a variety of reaction paths that would be closed if one were to strictly apply detailed-balance considerations. Secondly, the chemistry involving atomic silicon, Si, and Si-bearing species is restricted to reactions for which rates have been measured or calculated. Thirdly, as previously mentioned, the chemistry now describes the formation of a larger set of chemical species. This chemical scheme is used to solve a set of 63 stiff, coupled, ordinary differential equations (ODEs) at each radius of the inner wind. These coupled ODEs are integrated over space (for the H2 cooling region defined by Equation 2) and time (for the adiabatic expansion over a pulsation period) for the radii and corresponding shock strengths of Table 2. The post-shock abundances of the species at the end of the pulsation period at a given radius are used as the pre-shock initial abundances at the next radius.
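The integration strategy described above — a stiff solver marched through the cooling zone and then through the adiabatic expansion — can be sketched with a toy system. The two-species network below is an arbitrary illustration (invented species and rate constants), not the 63-species network of the paper:

```python
from scipy.integrate import solve_ivp

# Toy network: A + A -> B (rate k1), B -> A + A (rate k2).
# Rate constants are invented for illustration.
k1, k2 = 1.0e-12, 1.0e-3   # cm^3 s^-1 and s^-1

def rhs(t, y):
    """dy/dt for the number densities y = [n_A, n_B]."""
    nA, nB = y
    form = k1 * nA * nA     # formation of B
    dest = k2 * nB          # destruction of B back to A
    return [-2.0 * form + 2.0 * dest, form - dest]

# March the stiff system over one 'pulsation period' with a BDF method,
# analogous to integrating the 63 coupled ODEs of the wind model.
sol = solve_ivp(rhs, (0.0, 1.0e4), [1.0e10, 0.0], method="BDF",
                rtol=1e-8, atol=1.0)
nA, nB = sol.y[0, -1], sol.y[1, -1]
print("n_A = %.3e cm^-3, n_B = %.3e cm^-3" % (nA, nB))
```

For this toy system the analytic equilibrium (k1 nA^2 = k2 nB, with conservation nA + 2 nB fixed) gives nA = 2e9 cm^-3, which the stiff integration recovers; in the actual model, the end-of-period abundances at one radius seed the pre-shock gas at the next.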
In all, the chemical network aims to provide a comprehensive and coherent chemical description of the wind, where known trends are reproduced or new trends are presented. It does not aim to give an exact description of all chemical processes in the wind, which is an impossible task to perform. Owing to the uncertainties in both our theoretical description of the inner wind and observational data analysis, we consider in the following sections that a good agreement has been reached when modelled abundances and derived values from observations differ by at most a factor of ten.
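For reference, the tabulated rate parameters of the network (the A coefficient, the n factor, and the activation energy in kelvin) correspond to the usual modified-Arrhenius parameterisation k(T) = A T^n exp(-Ea/T). A minimal evaluator is sketched below; the sample entry and its coefficients are invented for illustration:

```python
import math

def rate(A, n, Ea, T):
    """Modified-Arrhenius rate k(T) = A * T**n * exp(-Ea/T),
    with Ea expressed in kelvin as in the network's Column 5."""
    return A * T**n * math.exp(-Ea / T)

# Hypothetical network entry: label, A (cm^3 s^-1), n, Ea (K).
label, A, n, Ea = ("X + H2 -> XH + H", 1.0e-10, 0.0, 3000.0)
print("%s : k(2000 K) = %.3e cm^3 s^-1" % (label, rate(A, n, Ea, 2000.0)))
```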
## 4 Results
Results for the dominant chemical species in the inner wind are summarised in Table 4. We list the abundance values derived by applying TE to the gas conditions met at r_s, as well as the non-equilibrium abundances at r_s and 5 R*, and the abundances derived from the most recent observations. As stated before, all modelled and observational values differ by at most a factor of ten, except for PN (see §4.4). This good agreement is illustrated in Figure 1, where we compare the modelled and observed abundances with respect to H2 at 5 R*. Discrepancies between the values derived from TE and from non-equilibrium chemistry for O-bearing species are clear from inspection of Table 4. These discrepancies highlight once more the importance of the shock chemistry in the dust formation zone of AGB stars. We discuss below the results for the specific chemical families under study.
### 4.1 Prevalent molecules
A group of species dominates the molecular phase of the shocked inner wind along with H2. It consists of CO, HCN, CS, N2, C2H2, SiS, SiO, and H2O, whose abundances relative to H2 are illustrated as a function of radius in Figure 2, while their abundances with respect to the total gas as a function of pulsation phase at the shock formation radius r_s are shown in Figure 3. All molecules experience destruction in the post-shock gas, as seen in Figure 3. The destruction is more or less severe depending on the species, and no large discrepancies may exist between TE and non-TE abundances for specific molecules at the end of one oscillation. However, the TE abundance values of several species (CS, SiS, SiO, and H2O) differ drastically from those obtained from shock-induced, non-equilibrium chemistry, as already stressed in existing studies (Willacy & Cherchneff 1998, Cherchneff 2006, 2011a).
The formation of the O-bearing species, namely H2O and SiO, results from the collisional dissociation of CO in the post-shock gas. For the specific conditions of the IRC+10216 model, between 10% and 20% of the CO molecules are destroyed at r_s in the H2-dissociation cooling zone, while they quickly reform during the adiabatic expansion, making CO the main provider of atomic oxygen in the post-shock gas. The formation of H2O is thus triggered by the reactions (Cherchneff 2011a)
O+H2→OH+H (3)
and
OH+H2→H2O+H (4)
The reaction given in Eq. 4 is in competition with the formation of SiO via the reaction
Si+OH→SiO+H. (5)
The rate of the backward process of the reaction in Eq. 5 is unknown, but the reaction has an endothermicity of 40000 K at 2000 K. As mentioned in Section 3, we assume for that reaction a low rate that reflects its low efficiency over the temperature range of interest. We considered the resulting water abundance for different rate values and temperature dependences. The water abundance always shows a trend similar to that reported here, i.e., a high inner value that decreases at larger radii to reach a lower typical value. By contrast, in Willacy & Cherchneff (1998), the rate of the backward reaction in Eq. 5 was explicitly calculated from the equilibrium constant. This previous rate had a very low value for temperatures lower than 2000 K, and contributed in part to the non-replenishment of OH and the disappearance of H2O at larger stellar radii.
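The competition between the reactions in Eqs. 4 and 5 for OH can be made concrete with a steady-state branching estimate. The rate constants and densities below are purely illustrative assumptions, not the network's values; they are chosen only to show how a huge H2 reservoir can favour water even when the Si + OH channel is intrinsically faster:

```python
def branching_to_water(k4, n_H2, k5, n_Si):
    """Fraction of OH consumed by OH + H2 -> H2O + H (Eq. 4) when it
    competes with Si + OH -> SiO + H (Eq. 5), assuming these two
    channels are the only OH sinks."""
    r4 = k4 * n_H2   # s^-1, OH loss rate towards water
    r5 = k5 * n_Si   # s^-1, OH loss rate towards SiO
    return r4 / (r4 + r5)

# Illustrative case: k5 >> k4, but n(H2) >> n(Si).
f = branching_to_water(k4=1.0e-13, n_H2=1.0e12, k5=1.0e-10, n_Si=1.0e8)
print("fraction of OH ending in H2O: %.3f" % f)
```

With these placeholder numbers most OH still ends up in water, because the H2 collision rate overwhelms the faster but Si-starved channel; depleting Si into SiS and SiC shifts this balance further towards H2O, as described in §4.1.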
The prevalent molecules in Figure 2 come from different chemical families that are chemically linked together by the indirect key role of the over-abundant H2. Molecular hydrogen is primarily involved in the formation processes of both various members of chemical families (e.g., hydrides, hydrocarbons) and specific molecules such as OH and H2O. The destruction or formation of H2 can then impact all chemical families, which become interrelated. This is particularly true for water, owing to its link to the hydrocarbon family: the destruction of hydrocarbons, starting with C2H2, indeed releases H2. Through the synthesis of hydroxyl, OH, via the reaction of atomic O with H2 in the post-shock gas, H2O is thus linked to the hydrocarbon family. As seen above, water is also linked to the Si-bearing species, as H2O and SiO are competitors in the depletion of OH. Atomic Si is also efficiently included in silicon monosulphide, SiS, and in silicon carbide, SiC, in the hot post-shock gas at r_s. Therefore, the Si-bearing species, through their connection to SiO, impact the water abundance, whose value listed in Table 4 is slightly higher than the value derived by Cherchneff (2011a). The chemistry of SiC dust precursors has been extended to the first ring (SiC)2 in the present study, and as discussed in § 4.6, the formation of SiC and the rings SiC2 and (SiC)2 proceeds very early on. The more atomic Si is trapped in SiC, the less Si is able to react with OH to form SiO. Thus, the SiO abundance listed in Table 4 is lower than that derived by Cherchneff (2011a) by a factor of about two, resulting in a higher water abundance. However, the chemical trends and processes in both studies are alike. The formation pathways to other important molecules such as C2H2, CS, and HCN were discussed in detail by Cherchneff (2006) for a carbon star with a C/O ratio equal to 1.1, and similar chemical routes operate in the inner wind of IRC+10216. 
Overall and as seen from Figure 1, the most abundant species have modelled abundances that agree very well with values derived from observations.
### 4.2 Hydrides
Interest in studying light hydrides was rekindled with the launch of Herschel. The rotational spectra of these species lie in the submm and far-IR domains and are difficult to observe from Earth. The detection by HIFI onboard Herschel of the J=1–0, 2–1, and 3–2 transitions of HCl and of the J=1–0 transition of HF in IRC+10216 was reported by Agúndez et al. (2011).
The gas-phase chemistry of many light hydrides has never been clearly characterised and only a few measured rates have been documented. However, their production in the laboratory from the gas phase occurs via the reaction of a metal vapour with hydrogen (Ozin & McCaffrey 1984). For our model, we consider all known chemical reactions and extend the documented chemistry to some species according to the isovalence of specific elements (e.g., N and P). Following the prescriptions of experimental studies, we model the formation of hydrides according to
X+H2→XH+H, (6)
where X represents any atomic species. H2 is destroyed in the strongest shocks out to 3 R* but efficiently reforms in the post-shock gas (see Figure 3). The large H2 reservoir in the gas layers ensures that most hydrides form following the reaction in Eq. 6. The abundance variation with radius of the major hydrides is illustrated in Figure 4, and the variations depend on the species. The abundance of AlH is high out to 3 R* but decreases at larger radii owing to the formation of aluminium chloride, AlCl (see §4.3 below). A similar behaviour applies to NaH and KH with the formation of NaCl and KCl. In contrast, HCl and HF have consistently high abundances in the inner wind, while MgH, FeH, PH, and SH have extremely low abundances that are well below the PACS/SPIRE detection limits estimated by Cernicharo et al. (2010b).
The Herschel detection of the J=1–0 transition of HF by Agúndez et al. (2011) points to a constant abundance with respect to H2 extending from the inner envelope up to 45 R*. This value is lower by a factor of nine than that derived from TE in the photosphere, where most of the fluorine is in the form of HF. To reconcile these two values, they claim that F must be depleted on dust grains in the inner wind. Our calculated HF abundance in Table 4 has a constant value corresponding to the solar abundance of F. This value reflects the quick conversion of fluorine into HF by its reaction with H2 in the post-shock gas at all radii in the dust formation zone extending to 5 R*. The result confirms that HF acts as the main reservoir of fluorine in AGB stars. The discrepancy between our kinetic results and those of Agúndez et al. is difficult to quantify. Our abundance variation through the inner wind agrees well with the abundance profile that is required to reproduce the HF HIFI data, i.e., a constant abundance distribution extending up to 45 R*, but our calculated abundance value is higher. Agúndez et al. quote an error of a factor of two in the radiative transfer model, while the reaction rate for the gas-phase formation of HF from H2 has been measured and is well-documented. The use of other available rate values does not change the present result, which indicates that fluorine is totally depleted into hydrogen fluoride in the dust formation zone.
### 4.3 Chlorines
Chlorine (Cl) has a solar abundance with respect to hydrogen (Asplund et al. 2009) and is present at TE in the photosphere in the form of both HCl and atomic Cl, with comparable abundances with respect to H2. Chlorine-bearing species have long been observed in the circumstellar envelopes of AGB stars. Recent large molecular surveys confirm the presence of NaCl, AlCl, and KCl in IRC+10216 (Tenenbaum et al. 2010), while the hydride HCl was detected with Herschel and unambiguously found very close to the star (see § 4.2). Shinnaga et al. (2009) also identified KCl lines in their e-SMA observations, with a very compact distribution centred on the star. Results for the prevalent Cl-bearing species are shown in Figure 5. Clearly, HCl is still the dominant Cl-bearing species in the dust formation zone, and it readily forms at r_s with an almost constant abundance throughout the inner wind. Its value at 5 R* agrees perfectly with that derived by Agúndez et al. (2011) and is very close to the TE abundance of HCl at r_s. HCl is destroyed in the shock at r_s but reforms in the post-shock region to finally reach, after one oscillation, an abundance very close to the TE value. Other Cl-bearing species, namely AlCl, NaCl, and KCl in decreasing order of importance, are also present in the wind, and their production originates directly from that of HCl via the reaction
X+HCl→XCl+H, (7)
where X is the metal. All documented reactions of the type of Eq. 7 are fast, with activation energy barriers of a few thousand kelvin (e.g., Husain & Marshall 1986), and thus require high temperatures to proceed.
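The effect of such barriers can be quantified with the Boltzmann factor exp(-Ea/T). The sketch below uses an assumed placeholder barrier of 4000 K, not a measured value for any particular metal:

```python
import math

def barrier_efficiency(Ea, T):
    """Boltzmann suppression factor exp(-Ea/T) for a reaction with an
    activation barrier Ea (in K) at gas temperature T (in K)."""
    return math.exp(-Ea / T)

# Placeholder barrier of 4000 K for a reaction of the type X + HCl -> XCl + H:
for T in (500.0, 1000.0, 2000.0):
    print("T = %5.0f K : exp(-Ea/T) = %.2e" % (T, barrier_efficiency(4000.0, T)))
```

The factor grows by orders of magnitude between a few hundred and a few thousand kelvin, which is why these channels operate in the hot post-shock gas close to the star rather than in the cool outer envelope.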
Very much like H2 for the hydrides, HCl acts as the production agent of Cl-bearing species in the dust formation zone of AGB stars. The chemistry of chlorine depends essentially on the hydrogen and chlorine content of the gas through HCl, and is thus independent of the C/O ratio of the photosphere. Since HCl is a rather stable molecule (with a dissociation energy D = 4.4 eV), it should also form efficiently in oxygen-rich AGB stars, by a chemical pathway similar to the reaction in Eq. 6, owing to the large H2 reservoir of AGB winds. Other Cl-bearing species will also form via the reaction in Eq. 7 provided that metal atoms are available in the gas phase. HCl and other Cl-bearing species are thus expected to be present with intermediate-to-high abundances in the inner wind of oxygen-rich AGB stars. The detection of high-energy transitions of NaCl towards the O-rich supergiant VY CMa and the O-rich Mira IK Tau by Milam et al. (2007) indeed indicates that NaCl forms close to the star in these two sources, with abundances of 5 and 4, respectively. Our results corroborate these observations, and we predict that HCl, NaCl, and KCl should be observable in the dust formation zone of O-rich evolved stars. The abundance of AlCl may be lower because 1) the Al-bearing species AlOH has a high abundance in the wind acceleration zone - Tenenbaum & Ziurys (2010) derived an AlOH abundance of for VY CMa - and 2) a large fraction of Al is expected to be depleted in alumina, Al2O3, in the stellar wind. Indeed, AlCl was not detected in VY CMa by Tenenbaum & Ziurys (2010).
### 4.4 Phosphorus-bearing species
The phosphorus-bearing molecules HCP, CP, CP, and PN have been detected in the wind of IRC+10216 (Guélin et al. 1990, Agúndez et al. 2007, Milam et al. 2008, Halfen et al. 2008, He et al. 2008, Tenenbaum et al. 2010). While the shapes of the CP and CP line profiles are indicative of a shell-like distribution, with a formation locus in the outer envelope induced by UV photodissociation, HCP and PN have been found close to the star. The chemistry of phosphorus is poorly documented, and we use the isovalence of P with N to estimate the rates of a set of basic formation and destruction processes derived from the equivalent processes involving N. We consider a few molecules including PN, HCP, CP, and P2. The latter species was chosen to reflect the refractory nature of phosphorus and its ability to form clusters. Results for P-bearing molecules are shown in Figure 5. PN is the prevalent phosphorus compound, followed by HCP and P2. PN is chiefly formed at r by the two reactions
N + CP → PN + C (8)
and
CN + CP → PN + C2. (9)
Similar reactions for the isovalent element N have measured rates at high temperatures. The resulting PN abundances are quite high () through the inner wind, with a rapid formation at r where the molecule reaches its final abundance at phase in the post-shock excursion. This is consistent with the fact that PN mimics N2, which shows a similar behaviour: a rapid synthesis in the post-shock gas at r and a constant high value across the inner wind (see Figure 2). At r and at the early phases of the post-shock adiabatic excursion, phosphorus is quickly incorporated into CP, which later distributes P into HCP and PN. Milam et al. (2008) observed several rotational lines of PN and HCP in the inner wind of IRC+10216 and derived a low PN abundance of . Our modelled value is greater by a factor of 1000. However, our derived HCP abundance of agrees well with the observations of Agúndez et al. (2007) and Milam et al. (2008). The former study derived an abundance value of for radii larger than 20 R, and claimed that depletion onto dust grains is necessary to reconcile the high abundances derived assuming TE in the dust formation region with those at 20 R. The present results indicate that HCP depletion in dust may not be necessary, because the high TE abundances at r () quickly drop to at larger radii owing to the non-equilibrium chemistry and the partial conversion of P into CP and PN in the post-shock gas.
The high abundances of PN obtained in the model compared to observations force us to test the P chemistry and assess the conditions under which the conversion of CP into HCP and PN is effective. We decrease all chemical rates by a factor of ten, but without success in diminishing the PN abundance in the inner wind. We also lower the rates of the two forward reactions given in Eqs. 8 and 9 by a factor of 100, which results in decreasing the PN abundance to but in increasing the HCP abundance to , which is too high to agree with observations. The assumption of isovalence between P and N implies a phosphorus chemistry in which HCP and PN mimic HCN and N2. In the inner wind, nitrogen is distributed between these two species in almost equal amounts, a result corroborated by the excellent agreement of the HCN abundances with values derived from observations. In the case of P, this distribution is not observed, and the low PN abundance derived by Milam et al. (2008) indicates a low-efficiency channel for the conversion of CP into PN in the dust formation zone. According to this result, they deduce that if PN/N were approximately equal to P/N, the N2 abundance would be very low (). Such a low value contradicts the high N2 abundance given by TE in the photosphere and fostered by the non-equilibrium chemistry of the inner wind (see Table 4). The discrepancy regarding the abundances of PN questions the validity of the assumption of isovalence in the case of phosphorus, highlights the different chemical processes that may control the phosphorus chemical family, and points to the need for more high-resolution observations of high-energy transitions of PN in the inner wind.
Finally, inspection of Figure 5 reveals the presence of a modest amount of phosphorus dimers, P2, synthesized in the inner wind, with x(P2) at 5 R. As P has even lower abundances than P2, a small population of P clusters may grow from P2 coalescence, but the aggregation process will terminate with the formation of the stable tetrahedral P4 cluster, as observed in laser ablation of red phosphorus crystals (Bulgakov et al. 2002). At most, a P4 abundance of may form, with a left-over population of dimers with abundances . Therefore, phosphorus clusters should not be prevalent condensates in the dust formation zone of carbon stars.
### 4.5 Carbon dust precursors: Hydrocarbons and aromatics
The formation of the first aromatic ring, benzene (C6H6), represents a bottleneck to the formation of polycyclic aromatic hydrocarbon (PAH) species and their growth. In the chemical scheme, it is described by the recombination of two propargyl radicals, C3H3, which is the dominant ring-closure pathway, and by the reaction of buten-3-ynyl radicals, C4H3, with acetylene, C2H2. These two routes are the prevalent channels to aromatic formation in sooting flames on Earth (Cherchneff 2011b). The formation of C3H3 results from the reaction of C2H2 with methylene, CH2, in the immediate post-shock region at r. Abundances with respect to H are shown in Figure 6 for the 20 km s⁻¹ shock at r. At the high post-shock gas temperatures, only stable hydrocarbons such as CH, CH, and CH can form in large amounts. In Figure 7, the abundances of similar species are shown for a shock strength of 12.6 km s⁻¹ at 3 R, with the appearance of C6H6 at phases 0.4. As apparent in Figure 2, once C2H2 forms at r, it stays abundant over the inner wind region, providing a large reservoir from which to grow hydrocarbons, specifically CH. Once CH starts to form from the reaction of CH with H, C6H6 quickly builds up when the gas temperature drops. Indeed, higher gas temperatures favour the reverse, endothermic channels over the formation of C6H6. The clincher for building up the C6H6 ring is thus the lower gas temperatures encountered at phases 0.4. The abundances of hydrocarbons and aromatics as a function of radius are illustrated in Figure 8. The formation of aromatics is delayed to r 2.5 R because oxygen-bearing species are present at smaller radii. Water (H2O) and hydroxyl (OH) are the main oxidation agents of hydrocarbons and aromatics in the gas, and C6H6 only forms once the abundances of O-bearing species drop at r 2.5 R (see Figure 2). According to Figure 8, the prevalent hydrocarbon species that escape the inner envelope are CH, CH, and CH, although the growth and condensation of aromatics to amorphous carbon (AC) grains may alter this result to some extent.
To estimate the total amount of AC dust mass formed in the inner wind and ejected at 5 R, we consider a simple formalism whereby a parcel of gas moves gradually from 2 R to 5 R over a certain time span. Assuming a microturbulent velocity of between 1 and 5 km s⁻¹, a range characteristic of the inner wind before gas drag and acceleration by dust (Keady et al. 1988), and the stellar pulsation period of Table 1, 43 pulsations (and shocks) are necessary for that parcel to reach 5 R for a microturbulent velocity of 1 km s⁻¹, while the pulsation number drops to 9 for a microturbulent velocity of 5 km s⁻¹. For the modest shock velocities and moderate post-shock conditions found in the inner wind of IRC+10216, we assume that the AC dust is not destroyed in the hot post-shock gas at each shock passage, while PAH species are, but reform in the adiabatic expansion phase. Therefore, to estimate the total AC dust mass that possibly forms in the inner wind and is ejected at 5 R, we assume that the total CH mass is converted into AC grain mass, and we sum the derived masses over the number of pulsations required to reach 5 R, interpolating mass values from the data given in Table 5. We obtain a total dust-to-gas mass ratio that spans the range . When compared to values derived from observations ( to ), these numbers are satisfactory and point to a specific region in the inner wind where carbon dust grows from PAHs and graphene sheets. Our simple derivation is based on a 100 % growth efficiency of CH to CH and a 100 % conversion efficiency of CH into AC dust grains. These assumptions clearly maximise the AC dust mass value at 5 R. However, owing to the large C2H2 reservoir available in this region, an additional growth process not considered in the present derivation is the addition of C2H2 molecules at the surface of graphene sheets, a mechanism that would add mass to the final carbon dust budget of the inner wind.
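The pulsation counts quoted above can be reproduced with a back-of-the-envelope estimate: the parcel advances by v_turb × P per pulsation over the 3 R span between 2 R and 5 R. The stellar radius used below is an assumed, illustrative value (the paper takes its stellar parameters from Table 1, which is not reproduced here):

```python
import math

# Assumed stellar parameters (illustrative; the paper's values come from Table 1)
R_STAR = 8.0e13       # stellar radius in cm, assumed here for illustration
PERIOD_DAYS = 650.0   # pulsation period of IRC+10216

def n_pulsations(v_turb_kms, r_star=R_STAR, period_days=PERIOD_DAYS):
    """Number of pulsations for a gas parcel to drift from 2 R* to 5 R*.

    The parcel advances v_turb * P per pulsation cycle.
    """
    distance = 3.0 * r_star                             # 2 R* -> 5 R*, in cm
    step = v_turb_kms * 1.0e5 * period_days * 86400.0   # cm per pulsation
    return math.ceil(distance / step)

print(n_pulsations(1.0))  # 43 shocks at 1 km/s
print(n_pulsations(5.0))  # 9 shocks at 5 km/s
```

With the assumed radius, the estimate reproduces the 43 and 9 pulsation counts quoted in the text for the two limiting microturbulent velocities.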
Without pointing to a specific value, our derived dust-to-gas mass ratios agree closely with those derived from observations and indicate that AC dust formation proceeds efficiently at these specific radii in the inner wind of IRC+10216.
### 4.6 Other dust precursors: Carbides and sulphides
#### 4.6.1 SiC2 and SiC
Silicon carbide (SiC) dust has long been observed in the winds of carbon stars through its transition at 11 μm (Treffers & Cohen 1974, Speck et al. 1997), and studies of meteorites have confirmed an AGB origin for some of the pre-solar SiC inclusions (Zinner 2007). In the laboratory, the synthesis of SiC nanoparticles is achieved by various experimental methods (e.g., laser-induced pyrolysis of gas-phase mixtures of silane, SiH4, and hydrocarbons), and SiC grains are observed to form in the temperature range K. These temperatures are encountered in the inner wind, supporting the hypothesis that SiC forms from the gas phase by chemical kinetic processes similar to those active in SiC synthesis in the laboratory. Although the nucleation processes are not fully understood and characterised, a few gas-phase species have been identified as intermediates in the nucleation of SiC particles, including Si, C, and cyclic SiC2 (Fantoni et al. 1991). Silicon dicarbide, SiC2, has been detected in the inner wind of IRC+10216 by mm interferometry, with abundances that range from (Gensheimer et al. 1995) to (Lucas et al. 1995). Recent observations with HIFI onboard Herschel indicate a SiC2 abundance with respect to H of in the inner wind (Cernicharo et al. 2010a). We thus assume that the presence of SiC2 in the dust formation zone reflects the nucleation and condensation of SiC grains at high temperatures and densities, and the role this species plays as an intermediate.
There exist no documented reaction rates for SiC2 formation, and we rely on the isovalence of silicon with carbon to derive reasonable rates for specific processes. We also consider the identified nucleation routes for SiC clusters and the formation of SiC2 and (SiC)2 clusters according to Erhart & Albe (2005). The main production process for SiC2 in the gas phase is
SiC + SiC → SiC2 + Si, (10)
where destruction is governed by the reverse of the reaction in Eq. 10 and by thermal fragmentation. We also assume that two SiC molecules react to form the SiC dimer, (SiC)2. The results for SiC and SiC2 abundances are shown in Figure 9. The shock at r destroys both the SiC and SiC2 initially present in the photosphere under TE, but SiC reforms in the post-shock gas at 1.5 R and forms SiC2, via the reaction in Eq. 10, as well as SiC dimers. At 5 R, the SiC2 abundance agrees well with that derived by Cernicharo et al. (2010a), indicating that the molecule may be regarded as a by-product of the condensation of SiC clusters at small radii. The model shows that both SiC2 and (SiC)2 form in large amounts as early as 1.5 R, well before the aromatic formation zone (2.5 R - 4 R) discussed in § 4.5. SiC clusters thus represent a high-temperature condensate population independent of the synthesis of AC dust. We discuss the consequences of this situation for wind acceleration in more detail in § 5.
#### 4.6.2 MgS and FeS
A strong 30 μm emission band has been observed in carbon-rich evolved stars at various stages of their evolution, including AGB and post-AGB stars and planetary nebulae. The band was also reported in IRC+10216 and ascribed to solid magnesium sulphide, MgS, for which a low radiative temperature between 100 K and 450 K was derived (Goebel & Moseley 1985, Szczerba et al. 1999, Hony et al. 2002a). A band at 23 μm was observed in two carbon-rich planetary nebulae, and FeS in the form of troilite was proposed as a possible carrier (Hony et al. 2002b). FeS is also responsible for the 23 μm band detected in proto-planetary discs (Keller et al. 2002). However, there is no observational evidence that this band is present in the spectral energy distributions of C-rich evolved stars in general, including IRC+10216. Chemical models assuming TE in the inner wind predict that both MgS and FeS condense in carbon-rich environments (Lattimer et al. 1978, Lodders & Fegley 1999). Keeping in mind that dust formation is not an equilibrium process in stellar outflows, we test the formation of MgS and FeS molecules as the initial gas-phase precursors of MgS and FeS grains in the inner wind. We also consider the formation of other Mg- and Fe-bearing molecular species and clusters. These include the small pure iron and magnesium clusters, Fe2 and Mg2 respectively, although there exists no observational evidence of pure metal clusters in AGB environments; pure iron grains are, however, often proposed as a dust component of O-rich AGB winds to account for the near-IR opacity required to accelerate the wind (Woitke 2006). We also consider the hydrides MgH and FeH, and the gas-phase precursors to metal oxides, MgO and FeO.
Atomic magnesium is an alkaline earth metal that primarily reacts with oxygen compounds (e.g., H2O, O2, N2O, O3) to form magnesium oxides. Although reactions with sulphur-bearing compounds are not documented, we expect reactions between Mg and SO similar to those between Mg and O2, owing to the isovalence of sulphur with oxygen. We therefore assume that Mg reacts with SO according to the reaction
Mg + SO → MgS + O. (11)
Using again the isovalence of sulphur with oxygen, we assume that a reaction similar to that in Eq. 11 triggers the formation of molecular FeS. For the formation of iron monoxide, FeO, and magnesium monoxide, MgO, reactions of atomic Fe and Mg with O2 are considered. These processes have been extensively studied (e.g., Akhmadov et al. 1988) and their rates are well-documented. For both sulphides, we also consider the following radiative association reaction as a possible production channel
X + S → XS + hν, (12)
where X = Mg or Fe. Studies by Kimura et al. (2005a, 2005b) that explore various formation routes to MgS and FeS in the laboratory show very efficient synthesis from the reaction of the gaseous Mg (Fe) and S phases in gas flash evaporation methods, and support the occurrence of the reaction in Eq. 12 and its termolecular analogue. The formation of Fe2 is described by the reaction
Fe + Fe + M → Fe2 + M, (13)
where M is the gas collider. A rate for the reaction in Eq. 13 was derived by Giesen et al. (2003) in their study of pure iron cluster formation at high temperatures. The radiative association reaction between two Fe atoms is also included. Similar processes and rates are considered for the synthesis of Mg2. We assume that the reverse reactions of all chemical pathways are the only destruction processes operating on Fe2 and Mg2.
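A minimal sketch of the termolecular channel in Eq. 13 illustrates why metal dimers stay scarce when the atomic reservoir is modest: the formation rate is quadratic in the metal density. The rate coefficient and densities below are placeholders, not the measured values of Giesen et al. (2003):

```python
# Termolecular dimer formation, Eq. 13: Fe + Fe + M -> Fe2 + M.
# k3 is a placeholder three-body rate coefficient (cm^6 s^-1);
# n_fe and n_gas are number densities in cm^-3.

def dimer_formation_rate(k3, n_fe, n_gas):
    """Formation rate of Fe2 in cm^-3 s^-1: quadratic in n(Fe)."""
    return k3 * n_fe**2 * n_gas

K3 = 1.0e-33     # placeholder three-body rate coefficient, cm^6 s^-1
N_GAS = 1.0e10   # illustrative post-shock gas density, cm^-3

# A tenfold drop in the atomic Fe density suppresses dimer
# formation by a factor of one hundred:
ratio = (dimer_formation_rate(K3, 1.0e4, N_GAS)
         / dimer_formation_rate(K3, 1.0e3, N_GAS))
print(ratio)  # 100.0
```

This quadratic suppression is the rate-equation counterpart of the statement below that pure metal clusters fail to form when the free Fe and Mg atoms are not abundant enough.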
Abundances with respect to H for these species are shown in Figure 9, except for FeO and MgO, which have negligible abundances in the inner wind ((MgO, FeO) ). Apart from MgS and FeS, Mg- and Fe-bearing species have very low abundances. In particular, pure metal clusters do not form, as Fe2 and Mg2 have low abundances in the dust formation zone. According to our model, most of the Mg and Fe initially present in the photosphere and at r stays in atomic form. A moderate amount of Mg and Fe is first incorporated in the hydrides MgH and FeH (see Figure 3), but is preferentially included into MgS and FeS at 2 R through the reactions of Mg and Fe with SO following Eq. 11. As for H2O, sulphur monoxide formation is induced by the release of oxygen atoms in the collisional dissociation of CO in the hot post-shock gas.
An upper limit to the total mass of solid MgS and FeS produced is derived by assuming that all gas-phase MgS and FeS is depleted into clusters and grains once the gas has reached the temperature regime (T K) derived by Goebel & Moseley (1985). The mass limit for both MgS and FeS is at most of the carbon dust mass formed between r and 5 R. When modelling the spectral energy distribution of IRC+10216, Ivezić & Elitzur (1996) derive a dust composition in which MgS accounts for less than 10 % by mass. For the carbon-rich post-AGB star HD 56126, Hony et al. (2003) found that a MgS mass of 2 % of the carbon dust mass is necessary to account for the 30 μm band flux. One would thus expect the MgS mass to represent at most a few percent of the AC dust mass in the inner wind of carbon stars. This required mass is higher than our upper limit by a factor of ten. The discrepancy may result from several uncertainties in the sulphur chemistry, or may point either to a nucleation process for MgS that does not occur in the gas phase or to a different carrier for the 30 μm band. These various aspects are discussed in § 5.
Gas-phase FeS follows an abundance trend similar to that of MgS, as seen in Figure 9, because we assumed that the FeS and MgS chemistries are alike. Both Fe and Mg are mainly in atomic form and have very similar abundances at TE in the photosphere. Solid MgS and FeS possess identical clustering structures, going from (XS)2 (X = Mg, Fe), with its planar rhombic structure, to (XS), which quickly reaches a distorted cubic structure. However, MgS clusters are unstable in O-rich environments, contrary to FeS clusters. It was proposed by Begemann et al. (1994) that composite solid sulphides including both Mg and Fe could satisfactorily reproduce the 30 μm band in IRC+10216. In particular, magnesium-iron sulphides whose composition ranged from MgFeS to MgFeS provided the closest matches to the band. Such a carrier could indeed be synthesized in the dust formation zone of IRC+10216 in view of the Fe- and Mg-bearing species that form, and of the presence of both gas-phase MgS and FeS.
As pointed out before, most of the Mg and Fe in the inner wind region is in atomic form. High-resolution observations of optical absorption lines of several metals in IRC+10216 by Mauron & Huggins (2010) also detect atomic metals in the gas phase, in direct contradiction with TE condensation models, in which all metals are depleted into a solid phase once the condensation temperature of the solid is reached in the wind. However, the derived abundances of iron and calcium atoms point to some degree of depletion relative to the solar abundance values. According to this study, the depletion cannot result from trapping in a molecular phase, as corroborated by the present results, in which the abundances of metal-bearing species are always less than . The partial depletion of iron and calcium must thus result from either the incorporation of free-flying Fe and Ca atoms in the inner wind during the AC dust condensation process at , or the adsorption of these atoms onto the surfaces of dust grains at the lower gas temperatures encountered at larger envelope radii.
### 4.7 Line variability with time
Our model predicts a strong time-dependence for specific molecular abundances as a function of the phase of the pulsation period. Water is a good example of such variability: its abundance with respect to H as a function of radius and pulsation phase is shown in Figure 10. At 2 R, the H2O abundance spans almost six orders of magnitude over one pulsation period (P = 650 days), and this variation should be reflected in the intensity of its high-energy transitions. In these deep layers, the transitions are mainly pumped by IR radiation. Apart from the intrinsic variability of the stellar flux with pulsation phase, which affects all molecules, the large variations in abundances should have some impact on the high-excitation line fluxes. These changes in abundances are the consequence of the post-shock non-equilibrium chemistry and the destruction of molecular species in the hot post-shock gas at early phases.
Destruction is more or less severe depending on the species, and not all molecules behave like H2O. For example, with its strong molecular bond, CO does not experience such variations and shows a rather constant abundance distribution with radius and pulsation phase. Some species (e.g., SiO) show a time variation in abundance whose amplitude increases from large to small radii but never spans more than one order of magnitude. Finally, other molecules (e.g., SiC) show average-amplitude variations (of three orders of magnitude) at small radii but reach time-independent abundances deep inside the inner wind ( 2.4 R). Although the impact of time-varying abundances on the line fluxes is difficult to quantify without a proper radiative transfer model, we would expect some species to show little line variability (e.g., CO), others to be prone to moderate variability (e.g., SiO), and others still to undergo large line-flux changes with time (e.g., H2O). Water is clearly an excellent tracer of the time variability of high-energy molecular transitions and, as such, an excellent indicator of shock activity and shock-induced chemistry in the dust formation zone.
## 5 Summary and discussion
We have modelled the inner wind of the carbon star IRC+10216 assuming the periodic propagation of pulsation-driven shocks between 1 R and 5 R and considering a complete gas-phase chemistry that encompasses several chemical families. These shocks trigger a non-equilibrium chemistry in the hot post-shock gas that leads to the formation of molecules and dust precursors. The study points to the following new results and trends applicable in general to carbon stars:
• The model confirms the presence of a group of molecules, namely CO, HCN, SiO, CS, and H2O, that form efficiently between 1 R and 5 R. These species are expected to be present, albeit in different quantities, in the inner winds of all AGB stars, regardless of their C/O ratio, as already proposed by Cherchneff (2006, 2011a). The derived abundance values agree well with available observations. In particular, the dissociation of CO by collisions in the immediate post-shock gas triggers the formation of atomic O, OH, SiO, and H2O.
• We have found that some hydrides form a new category of abundant and stable species in the inner wind. These include AlH, HCl, and HF. Other hydrides do form in large amounts at r but are rapidly converted in the dust formation zone into chlorine-bearing species, leaving the wind acceleration region with low abundances. The most abundant hydrides will be released to the intermediate envelope and participate in an active chemistry at larger radii.
• Once formed from the reaction of H2 with Cl, HCl is the production agent of other Cl-bearing species such as AlCl, NaCl, and KCl. The formation chemistry of chlorine-bearing species is thus independent of the C/O ratio of the stellar photosphere, and Cl-bearing molecules including HCl, NaCl, and KCl should also be present in O-rich AGB and supergiant stars, albeit with different abundances. AlCl is expected to have a lower abundance in the dust formation zone of O-rich sources because of the depletion of Al in gas- or solid-phase metal oxide species.
• There exists a specific zone, extending from 2.5 R to 4 R, where the closure of the benzene (C6H6) ring occurs through the recombination of two propargyl radicals, with the C6H6 abundance peaking at 3 R. The available C2H2 abundances in the gas phase are high enough to secure growth to larger PAHs such as C24H12. These large PAHs will subsequently coalesce and coagulate to form AC dust. The estimated total dust-to-gas mass ratio spans the range and closely agrees with existing values derived from observations of IRC+10216. Within this zone, SiC2 molecules form efficiently as by-products of the synthesis of SiC clusters. Some MgS and FeS molecules are also produced in the gas phase, but their abundances are too low to account for the 26-30 μm emission band.
• The shock-induced scenario predicts a time variability of some molecular abundances over a pulsation period (e.g., H2O and SiO) that should induce a time variability in their high-excitation line fluxes. Other species show either negligible changes in abundance (e.g., CO) or small changes that do not affect the molecular line intensity (e.g., SiC). This predicted time variability is the direct result of the destruction of species in the hot shocked gas layers. Observations of the high-energy transitions of these species at different epochs of the pulsation period would help to confirm the predicted time variability and assess the impact of shocks on the gas chemistry.
As reported in § 4.5, we have found that the formation of PAH molecules, and both their coalescence and growth to AC dust, take place in a specific radius range. The growth of benzene, C6H6, to coronene, C24H12, via the HACA mechanism is expected to consume a large part of the benzene rings synthesised at these radii. At radii larger than 4 R, inspection of Table 4 shows that some benzene rings still form and can grow to larger aromatic species, as the C2H2 reservoir is still large. However, the lower gas densities and temperatures should hinder the coalescence of PAHs into large graphene structures. A population of free-flying PAHs not incorporated into AC dust should thus be expelled to larger radii once the wind is fully accelerated. The so-called unidentified infrared bands are observed in carbon stars that are part of binary systems (Speck & Barlow 1997, Boersma et al. 2006). For the carbon star TU Tau, the UV radiation field of the blue companion could excite the aromatics present in the carbon star wind. These excited PAHs might include the free aromatics that are synthesised beyond the aromatic formation zone highlighted in this study.
We have also found that SiC dimers form at 1.5 R, i.e., at far smaller radii than the aromatic growth region, which implies an independent population of SiC clusters at small radii. Owing to the extinction properties of SiC dust and the significant decrease in the Planck mean of its extinction efficiency at temperatures corresponding to the effective temperatures of AGB stars, this dust experiences an inverse greenhouse effect for a radiation field characteristic of carbon stars (Gilman 1974, McCabe 1982, Yasuda & Kozasa 2011). Since the radiation pressure force acting on dust grains is directly proportional to the Planck mean of the extinction efficiency, most of the acceleration of the wind is provided by AC dust grains (Cherchneff et al. 1991). The inner SiC cluster population, on the other hand, should experience a minor radiation pressure force and lag behind the AC clusters when expelled to larger radii. This situation may be reflected in the results of meteoritic studies. Pre-solar SiC grains bearing the isotopic fingerprint of the AGB s-process are not included in graphite spherules of AGB origin but form a separate pre-solar grain population (Hynes et al. 2007). This isolation of the SiC pre-solar grains may directly result from the non-equilibrium chemistry in the post-shock gas, which produces two main dust populations, SiC and AC grains, at two distinct positions in the wind acceleration zone.
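The Planck-mean weighting invoked above, Q̄_P(T) = ∫Q(λ)B_λ(T)dλ / ∫B_λ(T)dλ, can be sketched numerically. The power-law efficiency below is a generic placeholder, not measured SiC or AC optical data; it only illustrates how a hotter radiation field raises the Planck mean, and hence the radiation pressure, for a grain whose opacity rises toward short wavelengths:

```python
import numpy as np

H, C, K_B = 6.626e-27, 2.998e10, 1.381e-16  # Planck const, c, Boltzmann (cgs)

def planck(lam, T):
    """Planck function B_lambda(T); lam in cm, T in K (cgs units)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K_B * T))

def planck_mean(Q, T, lam=np.logspace(-5, -1, 4000)):  # 0.1 um .. 1000 um
    """Planck mean of an extinction efficiency Q(lam) at temperature T."""
    B = planck(lam, T)
    q = Q(lam)
    w = np.diff(lam)
    num = np.sum(0.5 * (q[1:] * B[1:] + q[:-1] * B[:-1]) * w)  # trapezoid
    den = np.sum(0.5 * (B[1:] + B[:-1]) * w)
    return num / den

# Placeholder efficiency falling as 1/lambda (Q = 1 at 1 um):
Q_pl = lambda lam: 1.0e-4 / lam

# A hotter radiation field weights Q toward short wavelengths,
# raising the Planck mean and the radiation pressure force:
print(planck_mean(Q_pl, 2300.0) > planck_mean(Q_pl, 1000.0))  # True
```

A species whose Planck-mean efficiency instead drops at carbon-star effective temperatures, as the text describes for SiC, receives correspondingly little radiative acceleration.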
No firm conclusions about the production of MgS or Mg-Fe sulphide dust grains in IRC+10216 can be drawn from the present results, as they can be interpreted in several ways. Firstly, the chemical model may underestimate the MgS and FeS abundances by a factor of ten or more because too much atomic sulphur is trapped in SiS, as indicated by the slightly higher SiS abundances listed in Table 3 relative to those derived from observations. A small amount of S not locked in SiS would result in higher SO abundances and, owing to the large reservoirs of free atomic Mg and Fe in the inner wind, in larger amounts of MgS and FeS. MgS and FeS clusters would then be produced from the gas phase at r 2 R, and the resulting estimated MgS dust mass could reach the few percent of the AC dust mass necessary to explain the 30 μm band. That the present model forms gas-phase FeS and MgS with similar efficiencies indicates that composite Mg-Fe sulphide clusters may form instead of pure MgS clusters, as proposed by Begemann et al. (1994). According to Kimura et al. (2005a, 2005b), MgS (FeS) nucleation from the gas phase produces spherical cubic MgS (FeS) clusters instead of the elongated network-like grains that form when gas-surface reactions are involved in the nucleation process. Hony et al. (2002b) studied the effect of dust shape and temperature on the band position, with a shift towards 26 μm when spherical and hot grains were considered. The emission band in IRC+10216 clearly peaks around 27 μm in the ISO spectrum, pointing to a possible formation pathway from gas-phase chemistry, as described in the present study. Secondly, if the MgS (FeS) abundances are indeed low in the dust formation zone, they point to either 1) a synthesis mechanism for MgS or Mg-Fe sulphides involving gas-surface processes, or 2) alternative carriers for the 30 μm band.
However, if MgS grain formation occurs at lower temperatures and on the surfaces of already-produced dust grains (see the comprehensive studies by Men'shchikov et al. 2001 and Zhukovska & Gail 2008), its growth is hindered by the lack of available atomic sulphur, which is chiefly depleted in SiS and CS in the dust production zone. A formation scenario involving surface chemistry would therefore also require a mechanism that returns sulphur to the gas phase just after the acceleration of the outflow. Another explanation is that MgS is not the carrier of the 30 μm band. A critical assessment of all previous MgS studies was made by Zhang et al. (2009), who pointed out that the mass of MgS derived from the emission at 30 μm violated the available abundances of Mg and S in the stellar atmospheres, owing to the use of improper optical constants for MgS in the optical and UV wavelength domains. Alternative solids have been proposed (e.g., hydrogenated amorphous carbon, HAC; Grishko et al. 2001). A fresh reinvestigation of the carrier of the band in IRC+10216, coupled with observations of high-energy transitions of sulphur-bearing species to constrain the sulphur reservoir in the dust formation zone, would be extremely useful.
Finally, the results presented in this study are not unique to IRC+10216, and similar trends should apply to other carbon stars as well. For the specific case of water, H₂O has now been detected in several carbon stars (Neufeld et al. 2011b) where formation processes similar to those described in Cherchneff (2011a) and in the present study take place. However, the H₂O abundance certainly varies from source to source depending on the various parameters that are entangled with its formation in a complex way. For example, the shock strength affects both the destruction of molecules and the creation of free atomic oxygen. Therefore, one would expect lower water abundances to be created by a mild shock than a strong shock, but the opposite actually occurs. We have modelled the chemistry induced by a 10 km s⁻¹ shock at r and compared our results with those of the 20 km s⁻¹ shock, finding that more SiO and H₂O molecules were produced. Because less CO is destroyed by a mild shock, the SiO formation depends on the reaction of Si with CO rather than with OH. Therefore, the OH radical is free to form H₂O, and combined with the less efficient destruction of molecules in the post-shock gas, more water is formed. Hence, the many parameters affecting the formation of water in carbon stars include the shock strength, the chemical composition, the photospheric gas density and temperature, the gas-phase chemistry of the Si and S chemical families, and the amount and type of dust that forms. The water abundance is thus expected to vary greatly among carbon stars, despite its synthesis in these objects having been proven both observationally and theoretically. A similar conclusion may be drawn for other species such as SiO. 
Combined observations of several high-excitation transitions of H₂O and SiO molecules would be very instructive in this regard, both to understand more clearly the chemical processes responsible for the formation of O-bearing species and to characterise the water content of carbon stars on a global scale.
###### Acknowledgements.
The author thanks the two anonymous referees for their useful comments that helped to improve the manuscript, A. Tielens for constructive remarks, and D. Gobrecht for providing the TE calculation estimates.
## References
• () Agúndez, M., & Cernicharo, J. 2006, ApJ 650, 374
• () Agúndez, M., Cernicharo, J. & Guélin, M. 2007, A&A 662, L91
• () Agúndez, M. 2009, PhDT 98.
• () Agúndez, M., Cernicharo, J. & Guélin, M. 2010, ApJ 724, L133
• () Agúndez, M. Cernicharo, J., Waters, L.B.F.M. et al. 2011, A&A 533, L6
• () Akhmadov, U.S., Zaslomko, I.S. & Smirnov, V.N. 1988, Kin. Cata. 29, 251
• () Asplund, M., Grevesse, N., Sauval, A.J. & Scott, P. 2009, ARA&A 47, 481
• () Begemann, B., Dorschner, T., Henning, T. et al. 1994, ApJ 423, L71
• () Bertschinger, E. & Chevalier, R.A. 1985, ApJ 299, 167
• () Bieging, J., Shaked, S. & Gensheimer, P. D. 2000, ApJ 543, 897
• () Boersma, C., Hony, S. & Tielens, A.G.G.M. 2006, A&A 447, 213
• () Bowen, G.H. 1988, ApJ 329, 299
• () Bulgakov, A.V., Bobrenok, O.F., Kosyakov, V.I. et al. 2002, Physics of the Solid State, 44, 617
• () Cernicharo, J., Waters, L.B.F.M., Decin, L. et al. 2010a, A&A 521, L10
• () Cernicharo, J., Decin L., Barlow, M. J., et al. 2010b, A&A 518, L136
• () Cernicharo, J., Agúndez, M., Kahane, C. et al. 2011, A&A 529, L3
• () Cherchneff, I., Barker, J.R. & Tielens, A.G.G.M. 1991, ApJ 377, 541
• () Cherchneff, I., Barker, J.R. & Tielens, A.G.G.M. 1992, ApJ 401, 269
• () Cherchneff, I., 1996, in E. van Dishoeck, ed., Proc. IAU Symp. 178, Molecules in astrophysics: probes & processes, p. 469.
• () Cherchneff, I., 2006, A&A 456, 1001
• () Cherchneff, I., 2011a, A&A 526, L11
• () Cherchneff, I., 2011b, EAS 46, 177
• () Cohen, N. & Westberg, K.R. 1983, J. Phys. Chem. Ref. Data 12, 531
• () Decin, L., Cherchneff, I., Hony, S. et al. 2008, A&A 480, 431
• () Decin, L., Cernicharo, J., Barlow, M.J. et al. 2010a, A&A 518, L143
• () Decin, L., Agúndez, M., Barlow, M.J. et al. 2010b, Nature 467, 64
• () Deguchi, S. & Goldsmith, P.F. 1985, Nature 317, 336
• () Duari, D., Cherchneff, I. & Willacy, K. 1999, A&A 341, L47
• () Erhart, P. & Albe, K. 2005, Advanced Engineering Materials 7, 937
• () Fantoni, R., Bijnen, F., Djuric, N. & Piccirillo, S. 1991, Appl. Phys. B 52, 176
• () Fonfría, J.P., Cernicharo, J., Richter, M.J. & Lacy, J. 2008, ApJ 673, 445
• () Ford, K.S.E., Neufeld, D.A., Schilke, P. & Melnick, G.J. 2004, ApJ 614, 990
• () Fox, M.W. & Wood, P.R. 1985, ApJ 297, 455
• () Frenklach, M., Clary, D.W., Gardiner, W.C. Jr, & Stein, S.E., 1984, 20th Symp. (Int.) on Combustion, The Combustion Institute, 887
• () Gensheimer, P.D., Likkel, L. & Snyder, L.E. 1995, ApJ 439, 445
• () Giesen, A., Herzler, J. & Roth, P. 2003, J. Phys. Chem. 107, 5202
• () Gilman, R.C. 1974, ApJ 268, 397
• () Goebel, J.H. & Moseley, S.H. 1985, ApJ 290, L35
• () Grishko, V.I., Tereszchuk, K., Duley, W.W. & Bernath, P. 2001, ApJ, 558, L129
• () Groenewegen, M. 1998, A&A 338, 491
• () Guélin, M., Cernicharo, J., Paubert, G. & Turner, B.E. 1990, A&A 230, L9
• () Halfen, D.T., Clouthier, D.J. & Ziurys, L.M. 2008, ApJ 677, L101
• () He, J.H., Dinh-V-Trung, Kwok, S. et al. 2008, ApJS 177, 275
• () Hony, S., Waters, L.B.F.M. & Tielens, A.G.G.M. 2002a, A&A 390, 533
• () Hony, S., Bouwman, J., Keller, L.P. & Waters, L.B.F.M. 2002b, A&A 393, L103
• () Hony, S. & Bouwman, J. 2004, A&A 413, 981
• () Huisken, F. et al. 1999, J. Nanoparticle Res. 1, 293
• () Husain, D. & Marshall, P. 1986, Int. J. Chem. Kin. 18, 83
• () Hynes, K.M., Croat, T.K. & Bernatowicz, T.J. 2007, LPI 38, 1693
• () Ivezić, Ž. & Elitzur, M. 1996, MNRAS 279, 1011
• () Kaito, C. et al. 1995, Planet Space Sci. 43, 1271
• () Keller, L.P., Hony, S. Bradley, J.P. et al. 2002, Nature 417, 158
• () Kemper, F. et al. 2002, ApJ 384, 585
• () Kimura, Y., Kurumada, M., Tamura, K. et al. 2005a, A&A 442, 507
• () Kimura, Y., Tamura, K. Koike, C. et al. 2005b, Icarus 177, 280
• () Krestinin, A.V. 2000, Combustion & Flame 121, 513
• () Lattimer, J.M., Schramm, D.N. & Grossman, L. 1978, ApJ 219, 230
• () Justtanont, K., de Jong, T., Helmich, F.P. et al. 1996, A&A 315, L217
• () Justtanont, K., Decin, L., Schöier, F. L. et al. 2010, A&A 521, L6
• () Lodders, K. & Fegley, B. 1999, in Asymptotic Giant Branch Stars IAU Symp. 191, 279
• () Lucas, R., Guélin, M., Kahane, C. et al. 1995, Ap&SS 224, 293
• () Mauron, N. & Huggins, P. 2010, A&A 513, 31
• () McCabe, E.M., Smith, R.C. & Clegg, R.E.S. 1979, Nature 281, 263
• () McCabe, E.M. 1982, MNRAS 200, 71
• () Melnick, G. J., Neufeld, D.A., Ford, K.E.S. et al. 2001, Nature 412, 160
• () Men’shchikov, A. B., Balega, Y., Blöcker, T. et al. 2001, A&A 368, 497
• () Milam, S.N., Apponi, A.J., Woolf, N.J. & Ziurys, L.M. 2007, ApJ 668, L131
• () Milam, S.N., Halfen, D.T., Tenenbaum, E.D. et al. 2008, ApJ 684, 618
• () Nejad, L.A.M. & Millar, T.J. 1988, MNRAS 230, 79
• () Neufeld, D., González-Alfonso, E., Melnick, G. et al. 2010, A&A 521, L5
• () Neufeld, D., González-Alfonso, E., Melnick, G. et al. 2011a, ApJ 727, L28
• () Neufeld, D., González-Alfonso, E., Melnick, G. et al. 2011b, ApJ 727, L29
• () Olofsson, H., Johansson, L.E.B., Hjalmarson, Å. & Nguyen-Quang-Rieu 1982, A&A 107, 128
• () Olofsson, H. 2008, Physica Scripta 133, 014028
• () Ozin, G.A. & McCaffrey, J.G. 1984, J. Phys. Chem 88, 645
• () Patel, N., Young, K.H., Brünken, S. et al. 2009, ApJ 692, 1205
• () Perrin, G., Ridgway, S.T., Mennesson, B. et al. 2004, A&A 426, 279
• () Ridgway, S. T., & Keady, J. J. 1981, Phil.Trans. R. Soc. Lond. A, 303, 497
• () Ridgway, S.T. & Keady, J.J. 1988, ApJ 326, 843
• () Schöier, F.L., Olofsson, H. & Lundgren, A.A. 2006, A&A 454, 247
• () Schöier, F.L., Bast, J., Olofsson, H. & Lindqvist, M. 2007, A&A 473, 871
• () Shinnaga, H., Young, K.H., Tilanus, R.P.J. et al. 2009, ApJ 698, 1924
• () Speck, A., & Barlow, M.J. 1997, Ap&SS 251, 115
• () Speck, A., Barlow, M.J. & Skinner, C.J. 1997, MNRAS 288, 431
• () Szczerba, R., Henning, T., Volk, K. et al. 1999, A&A 345, L39
• () Tenenbaum, E. D. & Ziurys, L.M. 2010, ApJ 712, L93
• () Tenenbaum, E. D., Dodd, J. L., Milam, S. N., Woolf, N. J. & Ziurys, L. M., 2010, ApJS 190, 348
• () Treffers, R. & Cohen, M. 1974, ApJ 188, 545
• () Tsuji, T. 1973, A&A 23, 411
• () Tsuji, T., Ohnaka, K., Aoki, W. & Yamamura, I. 1997, A&A 320, L1
• () Witteborn, F.C., Strecker, D.W., Erickson, E.F. et al. 1980, ApJ 238, 577
• () Willacy, K. & Cherchneff, I. 1998, A&A 330, 676
• () Willacy, K. 2004, ApJ 600, L87
• () Winters, J. M., Dominik, C., & Sedlmayr, E. 1994, A&A, 288, 255
• () Woitke, P. 2006, A&A 460, L9
• () Yasuda, Y. & Kozasa, T. 2011, arXiv:1109.6386
• () Zhang, K., Jiang, B.W. & Li A. 2009, ApJ 702, 680
• () Zinner, E. 2007, In Treatise on Geochemistry (eds. H. D. Holland and K. K. Turekian), Elsevier Ltd., Oxford, Vol. 1.02, 1
• () Ziurys, L. M., Milam, S. N. , Apponi, A.J. & Woolf, N. J. 2007, Nature 447, 1094
• () Ziurys, L. M., Tenenbaum, E. D., Pulliam, R. L., Woolf, N. J., & Milam, S. N. 2009, ApJ 695, 1604
http://physics.stackexchange.com/questions/111546/can-i-measure-a-journey-time-100-years-on-a-100-light-year-voyage | Can I measure a journey time < 100 years on a 100 light year voyage? [duplicate]
So, I'm traveling to another star 100 light years away in my spaceship. This ship has a solar sail pushed by a laser beamed from my home star system, so can achieve a velocity close to c. It's also got a robust parachute to slow down with.
I understand that if I measure light coming from my origin star, it will always still seem to be streaming past me at light speed (but will red shift as my speed increases). Light coming from my destination star also travels past me at light speed, and will become increasingly blue shifted as I gain speed.
I also understand that an observer checking on my speed at my origin or destination will always find it to be less than c.
However, will I perceive that in terms of the time it apparently takes me to reach my destination, my speed was greater than c? In other words, will it seem to take less than 100 years to reach the destination? 10 years on my watch, say. Or 1 year. Or a week?
I.e., as far as I'm concerned, while light keeps zipping past me at light speed, do I continue to accelerate unabated to an arbitrary apparent speed?
If not, how do I notice my continued acceleration being prevented?
marked as duplicate by John Rennie, DavePhD, Kyle Kanos, Valter Moretti, Brandon Enright May 7 '14 at 17:30
Hi Ben. Because we get so many questions like this I wrote the Q/A I've linked to try and produce the definitive article on the subject. Have a look at the linked article and if there are any points still unclear please come back to us with a new question or edit this one. – John Rennie May 7 '14 at 9:34
@JohnRennie Thank you! I see that I can simply delete my question. I wonder if the title I used is useful though as that's something I didn't know. I remain confused about the experience I'd have in my ship (instant death from hitting gas molecules or dust, aside)… Does the universe I see apparently flatten in the direction I'm travelling? You say that I feel a constant acceleration of 1g – but I must start to notice that my speed doesn't seem to be increasing, just the universe shrinking? – Benjohn May 7 '14 at 10:03
If you're on the rocket then from your perspective you're stationary and it's the rest of the universe that's moving towards you. You would indeed see the rest of the universe Lorentz contracted, and that's the point I make in the last section of my answer where the distance to the star decreases because of the Lorentz contraction. I'm sure there's a question on the site about the effect of interstellar dust at high speed - I'll have a search ... – John Rennie May 7 '14 at 10:28
@JohnRennie I've just scanned a link from you to The Relativistic Rocket. It mentions: "As you approach the speed of light you will be heading into an increasingly energetic and intense bombardment of cosmic rays and other particles. After only a few years of 1g acceleration even the cosmic background radiation is Doppler shifted into a lethal heat bath hot enough to melt all known materials." – Benjohn May 7 '14 at 10:32
Aha! The question I was thinking of is Would a fast inter-stellar spaceship benefit from an aerodynamic shape?. This doesn't actually calculate heating, but it does show the effect is small up to 0.999c. – John Rennie May 7 '14 at 10:33
The following assumes that the distance to the star (100 light year) was measured before you got in the spaceship and started moving.
When moving close to $c$, in your frame of reference the space around you will be contracted relative to what someone on earth will measure. Thus, your 100 light year journey will actually be shorter in your frame of reference, and so will take less than 100 years for you to make it to the star.
If you are ever able to report to someone back on earth that the journey took you less than 100 years in your frame of reference they will agree with you, since from the reference frame of earth your spaceship and all its inhabitants underwent time dilation, the slowing down of time relative to another frame of reference. Thus, you would both agree on the time the journey took in your frame of reference, but the person back on earth will say that according to their clocks in the earth's frame your journey took 100 years.
In summation, you will not conclude that in your frame of reference you traveled faster than $c$ because while in transit, due to length contraction, the journey you traveled was actually shorter than 100 light years.
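To put numbers on this, here is a short Python sketch in units where $c = 1$ (distances in light years, times in years). The speed of 0.995c is an illustrative choice of mine, not a figure from the question:

```python
import math

def journey(v, distance_ly=100.0):
    # v is the cruise speed as a fraction of c; returns times in years
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    earth_time = distance_ly / v        # elapsed time in Earth's frame
    contracted = distance_ly / gamma    # length-contracted distance in the ship's frame
    ship_time = contracted / v          # proper time on the traveller's clock
    return earth_time, ship_time

earth_t, ship_t = journey(0.995)
# earth_t is just over 100 years, while ship_t is roughly 10 years
```

So at 0.995c the ship's clock records roughly a decade for the nominally 100-light-year trip, while Earth clocks record just over a century, and nothing ever locally exceeds $c$.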
This is an answer to your bolded question, which is a different question than the one you posed at the end about whether or not your spaceship will accelerate forever.
@Joshua: Well, I'm not talking about time dilatation, but about distance contraction. You didn't say that the journey will take less than 100 years, which would be about time. You said "100 light years" which means distance. If you say that length contraction has something to do here (which I agree with) then the contraction will be observed from Earth, and therefore for the traveler the distance must be longer than for Earth observers. – bright magus May 7 '14 at 17:52
http://astar.flyingcoloursmaths.co.uk/blog/ | # The Flying Colours Maths Blog: Latest posts
## Ask Uncle Colin: A Complex Conundrum
Dear Uncle Colin, I'm told that $z=i$ is a solution to the complex quadratic $z^2 + wz + (1+i)=0$, and need to find $w$. I've tried the quadratic formula and completing the square, but neither of those seem to work! How do I solve it? – Don't Even Start Contemplating
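Neither the quadratic formula nor completing the square is needed here: substituting the known root $z=i$ turns the equation into a linear one in $w$, which gives $w=-1$. This is my own working, not necessarily how the full column solves it; a quick numerical check:

```python
# z = i gives z^2 = -1, so the equation becomes -1 + w*i + (1 + i) = 0,
# i.e. i*(w + 1) = 0, hence w = -1.
w = -1
z = 1j
assert abs(z ** 2 + w * z + (1 + 1j)) < 1e-12  # z = i really does satisfy the quadratic
```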
## Mr Penberthy’s Problem
It turns out I was wrong: there is something worse than spurious pseudocontext. It's pseudocontext so creepy it made me throw up a little bit: This is from 1779: a time when puzzles were written in poetry, solutions were assumed to be integers and answers could be a bit creepy…
## Ask Uncle Colin: My partial fractions decompose funny
Dear Uncle Colin, I recently had to decompose $\frac{3+4p}{9p^2 - 16}$ into partial fractions, and ended up with $\frac{\frac{25}{8}}{p-\frac{4}{3}} + \frac{\frac{7}{8}}{p+\frac{4}{3}}$. Apparently, that's wrong, but I don't see why! — Drat! Everything Came Out Messy. Perhaps Other Solution Essential. Hi, there, DECOMPOSE, and thanks for your message – and your
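For what it's worth, a decomposition can always be sanity-checked by comparing it with the original expression at a few sample points. The coefficients below come from my own cover-up working ($9p^2-16 = (3p-4)(3p+4)$), so treat this as a sketch rather than the column's answer:

```python
from fractions import Fraction

def original(p):
    return Fraction(3 + 4 * p, 9 * p * p - 16)

def decomposed(p):
    # cover-up method gives (3+4p)/((3p-4)(3p+4)) = 25/(24(3p-4)) + 7/(24(3p+4))
    return Fraction(25, 24 * (3 * p - 4)) + Fraction(7, 24 * (3 * p + 4))

# exact agreement at several integer points (away from the poles p = ±4/3)
assert all(original(p) == decomposed(p) for p in (0, 1, 2, 5, -3))
```

Any disagreement at a single sample point is enough to show a decomposition is wrong, which makes this a cheap check for DECOMPOSE's attempt too.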
## Wrong, But Useful: Episode 44
In this month's episode of Wrong, But Useful, @reflectivemaths and I are joined by consultant and lapsed mathematician @freezingsheep. We discuss: Mel's career trajectory into 'maths-enabled type things that are not actually maths', although she gets to wave her hands a lot. What you can do with a maths degree,
## Review: The Mathematics Lover’s Companion, by Edward Scheinerman
There is a danger, when your book comes plastered in praise from people like Art Benjamin and Ron Graham, that reviewers will hold it to a higher standard than a book that doesn't. That would be unfair, and I'll try to avoid that. What it does well This is a
## Ask Uncle Colin: an arctangent mystery
Dear Uncle Colin, In an answer sheet, they've made a leap from $\arctan\left(\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}\right)$ to $x + \frac{\pi}{4}$ and I don't understand where it's come from. Can you help? — Awful Ratio Converted To A Number Hello, ARCTAN, and thank you for your message! There's a principle I want to introduce
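The leap is the tangent addition formula in disguise: dividing the top and bottom of $\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}$ by $\cos(x)$ gives $\frac{1+\tan(x)}{1-\tan(x)} = \tan\left(x+\frac{\pi}{4}\right)$, so the arctangent collapses to $x+\frac{\pi}{4}$ (up to the usual branch caveats). A quick numerical check of the identity:

```python
import math

# spot-check arctan((cos x + sin x)/(cos x - sin x)) == x + pi/4
# at points where x + pi/4 lies inside arctan's principal branch (-pi/2, pi/2)
for x in (-0.7, -0.2, 0.0, 0.3, 0.6):
    ratio = (math.cos(x) + math.sin(x)) / (math.cos(x) - math.sin(x))
    assert abs(math.atan(ratio) - (x + math.pi / 4)) < 1e-12
```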
Last week, I wrote about the volume and outer surface area of a spherical cap using different methods, both of which gave the volume as $V = \frac{\pi}{3}R^3 (1-\cos(\alpha))^2(2+\cos(\alpha))$ and the surface area as $A_o = 2\pi R^2 (1-\cos(\alpha))$. All very nice; however, one of my most beloved heuristics fails
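The closed form can also be checked by brute force, integrating thin horizontal disks from $z=R\cos(\alpha)$ up to $z=R$. The sketch below uses the standard cap-volume formula $V = \frac{\pi}{3}R^3(1-\cos(\alpha))^2(2+\cos(\alpha))$ (note the plus sign in the last bracket):

```python
import math

def cap_volume_exact(R, alpha):
    # standard spherical-cap volume for half-angle alpha
    return math.pi / 3 * R ** 3 * (1 - math.cos(alpha)) ** 2 * (2 + math.cos(alpha))

def cap_volume_numeric(R, alpha, n=100_000):
    # midpoint rule over disk slices of area pi*(R^2 - z^2)
    z0 = R * math.cos(alpha)
    dz = (R - z0) / n
    return sum(math.pi * (R * R - (z0 + (i + 0.5) * dz) ** 2) * dz for i in range(n))

assert abs(cap_volume_exact(15.0, 1.0) - cap_volume_numeric(15.0, 1.0)) < 1e-3
```

At $\alpha=\pi$ the formula collapses to the full sphere $\frac{4}{3}\pi R^3$, and at $\alpha=\frac{\pi}{2}$ to the hemisphere, which is a reassuring sanity check.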
## Ask Uncle Colin: how big do the patches on a football need to be?
Dear Uncle Colin, I’m trying to sew a traditional football in the form of a truncated icosahedron. If I want a radius of 15cm, how big do the polygons need to be? — Plugging In Euler Characteristic’s Excessive Hello, PIECE, and thank you for your message! Getting an exact answer
https://www.lesswrong.com/users/lsusr/replies | # All of lsusr's Comments + Replies
Omicron Post #4
This is my periodic "thank you" for all the work that goes into these things.
Shulman and Yudkowsky on AI progress
Whenever birds are an outlier I ask myself "is it because birds fly?" Bird cells (especially bird mitochondria) are intensely optimized for power output because flying demands a high power-to-weight ratio. I think bird cells' individually high power output produces brains that can perform more (or better) calculations per unit volume/mass.
Second-order selection against the immortal
Doesn't matter. Taking the immortality pill grants a strict competitive advantage over people who don't take it.
2 · M. Y. Zuo · 3d: It seems that this post is describing the regime beyond the threshold where group advantages outweigh individual advantages.
4 · Gunnar_Zarncke · 3d: The OP is not arguing on the individual level but on the population level. It is not uncommon that populations evolve to extinction.
Second-order selection against the immortal
The winning strategy is to take the immortality pill and reproduce. Voluntarily stopping having children to prevent over-crowding only works if everybody does it.
3 · Gunnar_Zarncke · 4d: He addresses this in the section "If the Immortals do continue to have babies, their second-order fitness is still pretty bad".
The Best Virtual Worlds for "Hanging Out"
I think this post is interesting as a historical document. I would like to look back at this post in 2050 with the benefits of hindsight.
Why Artists Study Anatomy
I like that this post addresses a topic that is underrepresented on Less Wrong and does so in a concise technical manner approachable to non-specialists. It makes accurate claims. The author understands how drawing (and drawing pedagogy) works.
100 Tips for a Better Life
I like this post because it following its advice has improved my quality of life.
The 2020 Review [Updated Review Dashboard]
Thank you for the link to my 2020 upvotes. I didn't know that was a thing. It brings the preliminary voting up from "super inconvenient" to "convenient".
It's weird looking at my list of strong upvotes, given that a lot of them are posts I have no memory of. "I guess I really liked this post since I strong-upvoted it, also I guess it was forgettable since if you'd told me I'd never seen it, I might have believed you."
(Possibly this says more about my memory than about the posts.)
It is a thing as of today. :)
Visible Thoughts Project and Bounty Announcement
It seems to me that their priority is to find a pipeline that scales. Scaling competitions are frequently long-tailed, which makes them winner-take-all. A winner-take-all system has the bonus benefit of centralized control. They only have to talk to a small number of people. Working through a single distributor is easier than wrangling a hundred different authors directly.
Visible Thoughts Project and Bounty Announcement
Does your offer include annotating your thoughts too or does it only include writing the prompts?
7Brangus4dAfter trying it, I've decided that I am going to charge more like five dollars per step, but yes, thoughts included.
Coordinating the Unequal Treaties
That's a good question. I think the answer is "no" because each Western power had lots of rivals.
The Cold War was a different story. In the Cold War, there were (in theory) only two opposing sides. The USA would fund basically anyone who opposed the USSR (and vice versa).
First Strike and Second Strike
You're not wrong. Context does indeed matter. Few systems fall perfectly into first-strike vs second-strike.
[Book Review] "Sorceror's Apprentice" by Tahir Shah
I wanted to give readers the experience of what it was like for me to read the book.
2Pattern12dBy the way, this was a cool book review. (The table of contents on the left, didn't really follow the structure, but as per usual I just read the whole thing without looking at that until afterward.) Does the review start with how you found the book in order to give the reader a taste of what reading that book is like, or just because how you find a book affects things like whether you finish it, and how you understand it?
Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
I agree that GPT-3 sounds like a person on autopilot.
Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
The 1940's would like to remind you that one does not need nanobots to refine uranium.
I'm confused. Nobody has ever used nanobots to refine uranium.
I'm pretty sure if I had $1 trillion and a functional design for a nuclear ICBM I could work out how to take over the world without any further help from the AI. Really? How would you do it? The Supreme Leader of North Korea has basically those resources and has utterly failed to conquer South Korea, much less the whole world. Israel and Iran are in similar situations and they're mere regional powers. Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer Designing nuclear weapons isn't any use. The limiting factor in manufacturing nuclear weapons is uranium and industrial capacity, not technical know-how. That (I presume) is why Eliezer cares about nanobots. Self-replicating nanobots can plausibly create a greater power differential at a lower physical capital investment. Do I think that the simplest AI capable of taking over the world (for practical purposes) can't be boxed if it doesn't want to be boxed? I'm not sure. I think that is a slightly different from whether an AI fooms straight from 1 to 2. I th... (read more) 1Logan Zoellner21dThe 1940's would like to remind you that one does not need nanobots to refine uranium. I'm pretty sure if I had$1 trillion and a functional design for a nuclear ICBM I could work out how to take over the world without any further help from the AI. If you agree that: 1. it is possible to build a boxed AI that allows you to take over the world 2. taking over the world is a pivotal act then maybe we should just do that instead of building a much more dangerous AI that designs nanobots and unboxes itself? (assuming of course you accept Yudkowski's "pivotal-act framework of course).
Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
Thank you for the quality feedback. As you know, I have a high opinion of your work.
I have replaced "outer alignment" with "bad actor risk". Thank you for the correction.
Re: Attempted Gears Analysis of AGI Intervention Discussion With Eliezer
The way I look at things, an AGI fooms straight from 1 to 2. At that point it has subdued all competing intelligences and can take its time getting to 3. I don't think 2 can plausibly be boxed.
1Logan Zoellner21dYou don't think the simplest AI capable of taking over the world can be boxed? What if I build an AI and the only 2 things it is trained to do are: 1. pick stocks 2. design nuclear weapons Is your belief that: a) this AI would not allow me to take over the world or b) this AI could not be boxed ?
Education on My Homeworld
I played American football for two years. It was a lot of fun.
I had a online friend I made through foreign language learning provide a source of KN95 masks at the height of the COVID-19 shortage. He lives under an authoritarian government. Long-term relationships are one way how you avoid scams over there.
2Jiro20dOkay, then change it to "you like American football less than the people who that statement was addressing like it".
6Jiro21d"I did this and it was great" is pretty much a subset of typical minding. Your own experiences are always going to include a combination of things that actually work in general, things that occasionally work if you get lucky, and things that work for people like you but don't generalize.
Open & Welcome Thread November 2021
Welcome!
[T]he many-worlds interpretation of quantum mechanics. Such a view would completely destroy the idea that this world is the special creation of an Omni-Max God who has carefully been steering Earth history as part of His Grand Design.
One planet. A hundred billion souls. Four thousand years. Such small ambitions for an ultimate being of infinite power like Vishnu, Shiva or Yahweh. It seems more appropriately scoped for a minor deity.
3Jon Garcia21dWell, at the time I had assumed that Earth history was a special case, a small stage temporarily under quarantine from the rest of the universe where the problem of evil could play itself out. I hoped that God had created the rest of the universe to contain innumerable inhabited worlds, all of which would learn the lesson of just how good the Creator's system of justice is after contrasting against a world that He had allowed to take matters into its own hands. However, now that I'm out of that mindset, I realize that even a small Type-I ASI could easily do a much better job instilling such a lesson into all sentient minds than Yahweh has purportedly done (i.e., without all the blood sacrifices and genocides).
Why do you believe AI alignment is possible?
Definition implies equality. Equality is commutative. If "human values" equals "whatever vague cluster of things human brains are pointing at" then "whatever vague cluster of things human brains are pointing at" equals "human values".
2Samuel Shadrach22dAgreed but that doesn't help. If you tell me that A aligns with B and B is defined as the thing that A aligns to, these statements are consistent but give zero information. And more specifically, zero information about whether some C in Set S can also align with B.
What the future will look like
• I hope the 10 in cryptocurrency I get for saving energy is proof of work. I am ideologically opposed to proof of stake. • I appreciate the charity for machine rights. Machines are people too. • I want to hack someone else's neuro-pellets and Rickroll them. Why do you believe AI alignment is possible? Answer by lsusrNov 15, 202110 Human brains are a priori aligned with human values. Human brains are proof positive that a general intelligence can be aligned with human values. Wetware is an awful computational substrate. Silicon ought to work better. 6Raven21dHumans aren't aligned once you break abstraction of "humans" down. There's nobody I would trust to be a singleton with absolute power over me (though if I had to take my chances, I'd rather have a human than a random AI). Arguments by definition don't work. If by "human values" you mean "whatever humans end up maximizing", then sure, but we are unstable and can be manipulated, which isn't we want in an AI. And if you mean "what humans deeply want or need", then human actions don't seem very aligned with that, so we're back at square one. 2Samuel Shadrach22dI see but isn't this reversed? "Human values" are defined by whatever vague cluster of things human brains are pointing at. Education on My Homeworld I read your Hacker News post. What don't you like about the curriculum? If the answer is "it's too easy" or "I hate Java" then you should take seriously the idea of dropping out (or if you're a freshman then consider changing your major to something harder like math or physics). If the classes aren't hard enough then the biggest thing you (personally) will lose if you drop out of college is an easy entry ticket into the big tech firms like Amazon, Facebook, etcetera. Try to arrange for a company to hire you early, before you graduate. If you succeed then y... (read more) Education on My Homeworld There is no legal obligation to prevent other people from hurting themselves. 
If someone uses your stuff without permission then it's basically impossible for them to sue you for negligence. Consequently, much workshop-like trespassing is done with a wink and a nod rather than explicit permission. 1M. Y. Zuo21dInteresting, how have the forces promoting greater regulations, liability, etc., been kept quiescent on your homeworld? Improving on the Karma System In my personal experience, a single post's karma already operates as a logarithmic measure of quality. It takes more than twice as much effort to write a 100 karma post compared to a 50 karma post. Improving on the Karma System Nitpick. Accumulating karma is useful in one respect: high karma users get more automatic karma on our posts, which draws more attention to them. I agree with the do-nothing proposal, by the way. The current system, while imperfect, is simple and effective. Education on My Homeworld In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy. I'm skeptical that prosociability and the ability to manage your own boredom are taught at school in a way that would not be learned otherwise. Managing your own boredom requires freedom, which is the opposite of compulsion. Sociability requires permission to speak, which is forbidden by default in classroom-style schooling. Algebra and calculus seem the most ... (read more) 4Zolmeister22dMy reading is that he claims compulsory education had little effect in Britain and the US, where literacy was already widespread. There's an interesting footnote where he references a paper on economic returns of compulsory education [https://www.nber.org/system/files/working_papers/w19369/w19369.pdf], which cites many sources (p14) finding little to no economic return from schooling reform (though limited to Europe).
In The Case against Education: Why the Education System Is a Waste of Time and Money, Bryan Caplan uses Earth data to make the case that compulsory education does not significantly increase literacy. Compulsory education increases literacy, see the Likbez in the USSR. Managing your own boredom requires freedom, which is the opposite of compulsion. One can make the opposite assertion, that it's fastest learned through discipline, and point to Chinese or South Korean schools. I don’t doubt that it’s useful to have the whole population learn reading and ... (read more) [Book Review] "The Bell Curve" by Charles Murray There's nothing to worry about, but thanks. I didn't even lose my phone. Vim It shouldn't be that way at all. The normal way to save progress while you're editing a file is to type :w followed by the Enter key. If you do this, Vim should write (or overwrite) the file on disk, resulting in a maximum of one file. (I'm ignoring the hidden temporary file.) Vim • Escape is too far from homerow compared to Ctrl+[. It's better to use Ctrl+[. I wrote about the i key in the "Insert Mode" section. • I'm not sure I understand the question. I take it you mean you save various versions of the same file? For version control, I use Git. • If you're using Vim via the terminal, you can often paste via Ctrl+Shift+v. 1Crackatook1moO I like these keys. Thank you Each time I save progress, vim creates another file. At the end, I have multiple files in addition to the original one. But it seems like it is not supposed to work that way? We Live in a Post-Scarcity Society Attention. Bitcoin. Military superiority. Being the prettiest person in the room. Anything where value is defined as winning a competition against other people. Are there any essays on what scares us? A study of fear, so-to-speak. Not that I know of—at least on this website. That being the case, here are my thoughts. Fear is an evolutionary adaptation to avoid danger. 
Some things like snakes, spiders, heights, darkness, the unknown, social exclusion, people who are a little off and large charging animals are scary because evolution has had plenty of time to evolve mechanisms to recognize them. You can also learn fears. For example, guns are scary even though there is no evolutionarily programmed fear of guns. We learn to fear guns. You can unlearn fears too, via (de)conditioning. The ... (read more) 1sunokthinks1moAwesome! I think I'll write up a draft. Thanks! Tell the Truth [T]his is one point where you should explain more. I will explain more. Total heritability of intelligence (in the US) might be as low as .40 (but probably isn't). Heritability of intelligence due to being in one particular genetic bucket must be strictly lower than total heritability of intelligence. "Significant" can be below 50%. An example from that article is that wearing earrings used to be highly heritable because you just had to look at whether they were female or male. As more people have started wearing earrings, the earring wearing trait has ... (read more) 1mysticRobot1moThanks for the thoughtful reply, and the interesting read! I'm not claiming that IQ has zero genetic component, but I am saying that it's not straightforward to conclude there are significant ethnic differences in IQ that are determined by genes. To be specific, I'm arguing that IQ between ethnic groups in the US is likely much less than 50% determined by genes. Finding genes correlated with IQ doesn't imply genes play a direct causal role, and there are very strong explanations that don't involve genes, such as socioeconomic status for example. I'd wager around 0 to 10% of the variation within normal IQ ranges is determined by genes for some cases, although that's speculation based on evidence. I can't find any rigorous scientific study of genes changing IQ (within normal ranges, as you can have genes that make the brain dysfunctional).
Do you claim that heritability of intelligence due to being in one particular genetic bucket is closer to 50%? Or how much lower would you put it? Are there any essays on what scares us? A study of fear, so-to-speak. I interpret this question as seeking a list of scary things like snakes, spiders and heights. Is that what you're looking for? 1sunokthinks1moHey there! Thanks for your reply! I was actually wondering what fundamentally makes things scary, not things that are already scary. I take it that there is none? Contact Us The LW moderation team has always responded quickly and helpfully to my inquiries. I expect they will behave similarly to any other reasonable person who contacts them in good faith. [Book Review] "The Bell Curve" by Charles Murray In retrospect, I wish I had titled this [Book Review] "The Bell Curve" by Richard Herrnstein instead. That would have been funny. I have read two other books by Charles Murray and zero other books by Richard Herrnstein. In my head, I think of all of them as "Charles Murray books", which is unfair to Richard Herrnstein. 5Ben Pace1mo+1 it would have been funny, especially if you'd opened by lampshading it. [Book Review] "The Bell Curve" by Charles Murray You have my sympathy. I hope you are personally OK. Also, I hope, for the sake of that whole neighborhood, that the criminal is swiftly captured and justly punished. I fear there is little I can do to help you or your neighborhood from my own distant location, but if you think of something, please let me know. I'm totally unharmed. I didn't even lose my phone. There is absolutely nothing you can do but appreciate the offer and the well wishes. 2JenniferRM1moI'm glad you are unharmed and that my well wishes were welcome :-) The Opt-Out Clause I know why you're here, Neo. I know what you've been doing... why you hardly sleep, why you live alone, and why night after night, you sit by your computer. You're looking for him. 
I know because I was once looking for the same thing. And when he found me, he told me I wasn't really looking for him. I was looking for an answer. It's the question, Neo. It's the question that drives us. It's the question that brought you here. You know the question, just as I did. The Matrix 2Dojan1moHow many roads must a man walk down? 7Eliezer Yudkowsky1moHow much wood would a woodchuck chuck if a woodchuck could chuck wood? The Opt-Out Clause There's not just one. We default into several overlapping simulations. Each simulation requires a different method of getting out. One of them is to just stare at a blank wall for long enough. The Opt-Out Clause This isn't a thought experiment. It's real, except the opt-out procedure is more complicated than a simple passphrase. The problem is that this other procedure has side effects in worlds that are not simulations. 1Raymond D1moWhat's the procedure? Tell the Truth How many people answered the poll? 3tailcalled1mo26 Vaccine Requirements, Age, and Fairness My local ballroom came close to closing permanently due to lack of revenue. Forcing dance spaces to keep closed for several additional months would drive many of them out of business permanently. What is the most evil AI that we could build, today? Consider that a detailed answer to this question might constitute an information hazard. I don't think this is dangerous to talk about. If anything, talking publicly about my preferred attack vectors helps the world better triage them and (if necessary) deploy countermeasures. It's not like anybody is really going to throw away1 billion for the sake of evil.
3Zac Hatfield Dodds1moI agree; open discussion and red-teaming are valuable and I'm not concerned by your proposed (anti-?) financial attack vector. To quote Bostrom:
What is the most evil AI that we could build, today?
"[W]hat is the most infectious lethal virus which could be engineered and released today"?
Off the top of my head, my first impulse is to upgrade an influenza virus via gain-of-function research. Influenza spreads easily and used to kill lots of people. Plus, you can infect ferrets with it. (Ferrets have similar respiratory systems to human beings.) I don't think it's dangerous to talk about weaponized influenza because these facts are already public knowledge among biologists.
What is the most evil AI that we could build, today?
Yes and yes. However, pyramid schemes are created to maximize personal wealth, not to destroy collective value. Those are not quite the same thing. I think a supervillain could cause more harm to the world by setting out with the explicit aim of crashing the market. It's the difference between an accidental reactor meltdown and a nuclear weapon. If LTCM achieved 95% leverage acting with noble aims, imagine what would be possible for someone with ignoble motivations.
What is the most evil AI that we could build, today?
How exactly would you do this?…Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis? Pyramid scheme. I'd take up as much risk, debt and leverage as I can. Then I'd suddenly default on all of it. There are few defenses against this because rich agents in the financial system have always acted out of self-interest. Nobody has ever intentionally thrown away $10 billion and their reputation just to harm strangers indiscriminately. The attack would be unexpected and unprecedented.
4ThomasJ1moDidn't this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The US government had to force coordination of the major banks to avoid blowing up the financial markets, but meltdown was avoided. Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.
https://msp.org/agt/2017/17-1/agt-v17-n1-p03-s.pdf | Volume 17, issue 1 (2017)
On the cohomology equivalences between bundle-type quasitoric manifolds over a cube
Sho Hasui
Algebraic & Geometric Topology 17 (2017) 25–64
Abstract
The aim of this article is to establish the notion of bundle-type quasitoric manifolds and provide two classification results on them: (i) $(\mathbb{C}P^2 \# \mathbb{C}P^2)$-bundle type quasitoric manifolds are weakly equivariantly homeomorphic if their cohomology rings are isomorphic, and (ii) quasitoric manifolds over $I^3$ are homeomorphic if their cohomology rings are isomorphic. In the latter case, there are only four quasitoric manifolds up to weakly equivariant homeomorphism which are not bundle-type.
https://www.nature.com/articles/s41537-021-00191-y
# A class-contrastive human-interpretable machine learning approach to predict mortality in severe mental illness
## Abstract
Machine learning (ML), one aspect of artificial intelligence (AI), involves computer algorithms that train themselves. They have been widely applied in the healthcare domain. However, many trained ML algorithms operate as ‘black boxes’, producing a prediction from input data without a clear explanation of their workings. Non-transparent predictions are of limited utility in many clinical domains, where decisions must be justifiable. Here, we apply class-contrastive counterfactual reasoning to ML to demonstrate how specific changes in inputs lead to different predictions of mortality in people with severe mental illness (SMI), a major public health challenge. We produce predictions accompanied by visual and textual explanations as to how the prediction would have differed given specific changes to the input. We apply it to routinely collected data from a mental health secondary care provider in patients with schizophrenia. Using a data structuring framework informed by clinical knowledge, we captured information on physical health, mental health, and social predisposing factors. We then trained an ML algorithm and other statistical learning techniques to predict the risk of death. The ML algorithm predicted mortality with an area under receiver operating characteristic curve (AUROC) of 0.80 (95% confidence intervals [0.78, 0.82]). We used class-contrastive analysis to produce explanations for the model predictions. We outline the scenarios in which class-contrastive analysis is likely to be successful in producing explanations for model predictions. Our aim is not to advocate for a particular model but show an application of the class-contrastive analysis technique to electronic healthcare record data for a disease of public health significance. In patients with schizophrenia, our work suggests that use or prescription of medications like antidepressants was associated with lower risk of death. 
Abuse of alcohol/drugs and a diagnosis of delirium were associated with higher risk of death. Our ML models highlight the role of co-morbidities in determining mortality in patients with schizophrenia and the need to manage co-morbidities in these patients. We hope that some of these bio-social factors can be targeted therapeutically by either patient-level or service-level interventions. Our approach combines clinical knowledge, health data, and statistical learning, to make predictions interpretable to clinicians using class-contrastive reasoning. This is a step towards interpretable AI in the management of patients with schizophrenia and potentially other diseases.
## Introduction
In this article we apply a recent development in machine learning, termed class-contrastive analysis, to the major public health problem of premature mortality in schizophrenia. Schizophrenia affects approximately 0.5% of the population1. It is a severe mental illness (SMI), along with bipolar affective disorder, personality disorders, and recurrent depressive disorder. Patients with schizophrenia or other SMI more broadly have substantially increased mortality and reduced life expectancy, often due to physical co-morbidities2,3,4,5. A challenge for clinical practice is therefore to identify patients at particularly high risk of adverse events (including premature death) and seek to intervene early.
Machine learning (ML) offers a potential route to risk prediction in the field of SMI as elsewhere. ML, a sub-field of artificial intelligence (AI), involves computer algorithms that automatically adjust in response to training data (‘train themselves’). Their attractiveness relates to their ability to create predictive models from complex data sets with minimal human intervention, to a degree that may exceed the accuracy of classical statistical models such as logistic regression. ML achieves this through a variety of techniques. In a basic technique such as a multi-layer artificial neural network, for example, a layer of hidden nodes is trained by the algorithm to respond to weighted combinations of inputs; there may be further such layers responding to weighted combinations of the first layer, and these layers may interact recurrently. The “output” layer, giving the prediction or classification, responds to weighted combinations of preceding nodes. The algorithm seeks to minimize output prediction error. As a result, the output may predict accurately (if validated on independent data to avoid overfitting), but it may be very hard for a human to discern how the decision was reached. Clinically, it may be impractical to rely on such a black box predictor.
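The forward pass of such a network can be sketched in a few lines. The weights below are illustrative placeholders, not trained values from the paper's model:

```python
import numpy as np

# Minimal multi-layer network forward pass: a hidden layer responds to
# weighted combinations of the inputs, and the output node to weighted
# combinations of the hidden layer.
def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer (ReLU activation)
    z = W2 @ h + b2                    # weighted combination -> output logit
    return 1.0 / (1.0 + np.exp(-z))    # sigmoid -> predicted probability

# With zero inputs and zero biases, the network outputs probability 0.5.
W1, b1 = np.eye(2), np.zeros(2)
W2, b2 = np.ones((1, 2)), np.zeros(1)
p = forward(np.zeros(2), W1, b1, W2, b2)
```

Training adjusts `W1`, `b1`, `W2`, `b2` to minimize prediction error; the difficulty of tracing a prediction back through these weighted combinations is what makes the model a black box.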
Here, we develop ML models of mortality in schizophrenia and apply the technique of class-contrastive reasoning to improve their explicability. Class-contrastive reasoning is a technique from the social sciences6,7: the contrast is to an alternative class of exemplars. An example of a class-contrastive explanation is: ‘The selected patient is at high risk of mortality because the patient has dementia in Alzheimer’s disease and has cardiovascular disease. If the patient did not have both of these characteristics, the predicted risk would be much lower.’
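Statements of this kind can be generated mechanically: toggle the named features off, re-run the model, and report the difference in predicted risk. A minimal sketch, in which the feature names and the toy risk model are illustrative stand-ins for the paper's trained model:

```python
def class_contrast(predict, patient, features):
    """Change in predicted risk when each binary feature is toggled off."""
    baseline = predict(patient)
    deltas = {}
    for f in features:
        modified = dict(patient)
        modified[f] = 0                      # counterfactual: feature absent
        deltas[f] = baseline - predict(modified)
    return baseline, deltas

# Toy stand-in risk model with illustrative weights (not the paper's model).
def toy_predict(p):
    return 0.05 + 0.30 * p["dementia"] + 0.25 * p["cardiovascular"]

patient = {"dementia": 1, "cardiovascular": 1}
baseline, deltas = class_contrast(toy_predict, patient, list(patient))
# deltas shows how much each diagnosis contributes to this patient's risk.
```

The class-contrastive statement is then read directly off `deltas`: the features whose removal would most reduce the predicted risk are the ones cited in the explanation.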
We apply an ML and class-contrastive framework to data on clinically relevant bio-social variables spanning physical health, mental health, personal history, and social predisposing factors. We collate this information from an electronic clinical records system and use clinician knowledge to transform them into features that are used to train an ML system (Fig. 1). We use the ML model to predict mortality and apply class-contrastive reasoning to explain the model.
We also use a visualization technique for machine learning models (class-contrastive heatmaps) that allows us to map the effect of changing a set of features.
Our approach can be helpful when explicit causal structure is modelled, and when there are a few features that are binary (categorical) in nature. Our approach may not be successful when there are continuous features or many features (hundreds). If features are hard to define and have to be discovered (for example, by semi-supervised techniques), our approach may not be helpful. Most of our features are binary (categorical) in nature and hence the class-contrastive approach could be applied successfully.
The class-contrastive approach may also be used to evaluate the practical limits of explainability of some models. For example, if a model has hundreds or thousands of features, it may be computationally intractable to exhaustively explore how changing combinations of these features affects the model output.
Our aim is not to advocate for a particular statistical model or black box model. Our objective is to give an example of how class contrastive reasoning can be used to explain black box models with binary categorical features in real-world electronic healthcare record data, in a disease of public health significance.
Our aim is not to compare all possible statistical models exhaustively, but merely to survey and analyse some techniques briefly. Nor is it to demonstrate that some machine learning models can perform better than others.
We show a practical demonstration on a clinical dataset in a disease of public health relevance. We also outline the instances in which class-contrastive reasoning can be successfully applied to electronic health care record data. We suggest class-contrastive reasoning as a method to begin understanding ML and statistical models that have non-linearities. To the best of our knowledge this is the first application of this technique to real world electronic healthcare record data.
Our work is a step towards personalised medicine and interpretable AI in mental health and has the potential to be applicable more broadly in healthcare.
## Results
### Summary of results
We used a range of statistical and ML techniques to predict mortality in patients with schizophrenia. Class-contrastive reasoning and class-contrastive heatmaps were used to generate human-oriented explanations of statistical and ML model predictions.
Abuse of alcohol and drugs, and a diagnosis of delirium were risk factors for mortality (across all our techniques). The use of antidepressants was associated with lower risk of death via all our techniques.
The machine learning models emphasized combinations of features (like Alzheimer’s disease) with other co-morbidities. This highlights the role of co-morbidities in determining mortality in patients with SMI and the need to manage these co-morbidities.
### Survival analysis and standardized mortality ratios
Our explainable machine learning techniques complement classical statistical analysis like survival models and standardised mortality ratios. In this section, we outline these approaches.
For survival analysis, we use age (feature scaled) and the bio-social features (as outlined in Subsection Data input to statistical algorithms) as input features for patients with schizophrenia. For patients with schizophrenia, we show the hazard ratios associated with each feature in Fig. 2 using a Cox proportional hazards model. The use of second-generation antipsychotics (SGA) and antidepressants was associated with reduced risk of death in patients with schizophrenia. Alcohol/substance abuse was associated with an elevated risk of death consistent with a previous study8. A diagnosis of delirium was similarly associated with increased mortality.
The standardized mortality ratio (SMR) for patients with schizophrenia was 7.4 (95% confidence interval: [5.5, 9.2]). This is consistent with SMRs reported in the UK8.
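An SMR is the ratio of observed to expected deaths; a common approximate 95% CI treats the observed count as Poisson. A sketch with illustrative counts (chosen for the example, not the study's raw numbers):

```python
import math

def smr(observed, expected):
    """Standardized mortality ratio with an approximate 95% CI
    (normal approximation to the Poisson count of observed deaths)."""
    ratio = observed / expected
    half_width = 1.96 * math.sqrt(observed) / expected
    return ratio, (ratio - half_width, ratio + half_width)

ratio, (lo, hi) = smr(observed=74, expected=10.0)  # illustrative counts
```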
### Logistic regression models
We used a logistic regression model to predict mortality in patients with schizophrenia. We show the odds ratios and their confidence intervals in Fig. 3. Age, diagnosis of delirium and alcohol/substance abuse were associated with a high risk of death. Use of second-generation antipsychotics and antidepressants were associated with a reduced risk of death.
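Odds ratios and their confidence intervals come directly from the fitted coefficients: exponentiate the coefficient and the endpoints of its Wald interval. A sketch with an illustrative coefficient and standard error (not estimates from this study):

```python
import math

def odds_ratio(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

or_, (lo, hi) = odds_ratio(beta=0.7, se=0.2)  # illustrative values
# An OR above 1 with a CI excluding 1 indicates an association with
# higher risk of the outcome; below 1, with lower risk.
```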
### Class-contrastive heatmaps and counter-factual statements for logistic regression
The class-contrastive explanatory technique is applicable to machine learning models and statistical models such as logistic regression. We first demonstrate our approach by using class-contrastive reasoning on the logistic regression model for predicting mortality. We show the amount of change (predicted by the trained logistic regression model on the test set) in the probability of death by changing one particular feature from 0 to 1 (in the test set). We visualize this using a heatmap (Fig. 4) where rows represent patients and columns represent features in the test set that have been changed. Predictions are made using the trained logistic regression model on the test set.
The class-contrastive heatmap shows patient-specific predictions. Predictions for individual patients are made in the following way: the trained logistic regression model makes a prediction for the probability of death based on the modified features as input. This process is repeated for each patient and each feature.
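The patient-by-feature procedure described here amounts to a small loop over features; the resulting matrix is what the heatmap displays. A sketch of the one-feature-at-a-time computation, where the toy model stands in for the trained logistic regression:

```python
import numpy as np

def class_contrastive_matrix(predict_proba, X):
    """delta[i, j] = change in predicted probability for patient i when
    feature j is set to 1 versus 0, all other features held fixed."""
    n_patients, n_features = X.shape
    delta = np.zeros((n_patients, n_features))
    for j in range(n_features):
        X_on, X_off = X.copy(), X.copy()
        X_on[:, j], X_off[:, j] = 1.0, 0.0
        delta[:, j] = predict_proba(X_on) - predict_proba(X_off)
    return delta  # rows: patients, columns: features -> plot as a heatmap

# Toy stand-in model: predicted risk is the mean of the binary features.
toy_model = lambda X: X.mean(axis=1)
delta = class_contrastive_matrix(toy_model, np.array([[0.0, 1.0], [1.0, 0.0]]))
```

The matrix `delta` can then be clustered and rendered as a heatmap (e.g. with a clustered-heatmap plotting routine) to reveal patient sub-groups with similar feature effects.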
We observe that a diagnosis of delirium or dementia predisposes a group of patients towards a higher probability of predicted mortality (Fig. 4). Patients (with schizophrenia) who were taking antidepressants were less likely to die during the period observed (Fig. 4). The class-contrastive and counterfactual analysis suggests that antidepressants may be associated with lower mortality in a group of patients (Fig. 4).
The heatmap also highlights counter-intuitive predictions. For example, the heatmap suggests that there is a small sub-group of patients (Fig. 4: top left-hand corner, indicated with an arrow) who have diabetes and have a lower risk of death. The use of a probability scale illustrates that each predictor's effect on probability varies according to the baseline probability determined by the other variables; in log-odds terms, of course, changes in a given predictor have a constant effect across all subjects.
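The point about scales is easy to verify numerically: a coefficient shifts the log-odds by a constant amount, but the induced change in probability depends on where the patient sits on the logistic curve. A sketch with an illustrative coefficient:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

beta = 1.0  # illustrative log-odds shift from setting one predictor to 1
# Probability change for patients at low, medium and high baseline log-odds:
changes = {base: sigmoid(base + beta) - sigmoid(base)
           for base in (-3.0, 0.0, 3.0)}
# The same log-odds shift moves mid-risk patients most on the probability scale.
```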
We note that the counter-intuitive observations we observe in the class-contrastive heatmaps (on the test set) may also be as a result of imbalances in the training set. For example, a particular binary feature may be 0 for 100 patients and 1 for 10 patients.
In order to address this, we can add synthetic training data with these imbalances and visualize the class-contrastive predictions on the test set. We can artificially introduce an imbalance (for example, add more zeros than ones to a binary feature) in the test set and training set, and then observe the class contrastive heatmaps.
We note that age is a predictor in all models that we use. However, the class-contrastive heatmaps do not include age. This is because the class-contrastive analysis changes features one at a time (or pairwise), and this can be achieved only for binary categorical features. Hence, the class-contrastive heatmaps show the effect of changing predictors on the model predicted probability of mortality, over and above the contribution of age.
### Class-contrastive analysis for machine learning models
We used artificial neural networks to predict mortality in patients with schizophrenia. We performed class-contrastive analysis for this machine learning model (Fig. 5) to make it explainable.
We first show a heatmap for a simple version of class-contrastive reasoning where we mutate only one feature at a time on the test set (Fig. 6). We show the amount of change (predicted by the trained model on the test set) in the probability of death by setting one particular feature to 1 versus 0. We visualize this using a heatmap as before, where rows represent patients and columns represent features. We note that even though we cluster the features, our aim is not to demonstrate any similarity between them.
The heatmap suggests there is a subgroup of patients in whom use or prescription of medications like second-generation antipsychotics (SGA) and antidepressants is associated with a lower risk of death (Fig. 6). There is another subgroup of patients in whom personal risk factors (ICD-10 coded diagnosis; see ‘Methods’) are associated with increased risk of mortality.
The class-contrastive heatmaps also reveal counter-intuitive aspects of the data and model. Looking at the effect of individual features in isolation in Fig. 6, we observe small sub-groups of patients in whom having respiratory diseases or having Alzheimer’s disease is associated with a lower risk of death (indicated with arrows in Fig. 6).
These counter-intuitive results may be due to the fact that the class-contrastive approach is sensitive to the training data and any imbalances in features. For example, a binary feature may have mostly zeros in the training set. This can lead to a counter-intuitive result on the test set. Correlations across features may also help explain these counter-intuitive results.
We show an additional representative class-contrastive heatmap for the ML model in the Supplementary section (Supplementary Fig. 1). This ML model was run using a different split of the training and test data. This heatmap is consistent with previous results (Fig. 6), with the exception that it shows SGA are associated with an increased probability of mortality (Supplementary Fig. 1, bottom left arrow). This is not consistent with previous results from the logistic regression model and survival analysis for the effect of SGA (Figs. 2 and 3).
Deep learning models combine input features to create higher-order representations using hidden layers. Features are also often correlated and there are non-linearities involved. To account for some higher-order (non-linear) correlations and to better highlight the combinations of features, we simultaneously change all possible combinations of two features from 0 to 1 (in the test set). Specifically, we set a particular combination of two features to 1 simultaneously (versus 0) in the test set. We then repeat this for all possible pairs of features in the test set. We visualize the change in model output on the test set in Fig. 7. This technique can be used to investigate the role of combinations of different features that deep learning models exploit to build higher-order representations.
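The pairwise mutation can be implemented by looping over all feature pairs; for p features this requires p(p−1)/2 pairs of model evaluations. A sketch that reports the mean change over patients (a simplification of the per-patient heatmap described above; the toy model stands in for the trained network):

```python
import itertools
import numpy as np

def pairwise_class_contrast(predict_proba, X):
    """Mean change in predicted probability when each pair of binary
    features is set to 1 simultaneously versus 0."""
    n_patients, n_features = X.shape
    effects = {}
    for i, j in itertools.combinations(range(n_features), 2):
        X_on, X_off = X.copy(), X.copy()
        X_on[:, [i, j]], X_off[:, [i, j]] = 1.0, 0.0
        effects[(i, j)] = float(np.mean(predict_proba(X_on) - predict_proba(X_off)))
    return effects

# Toy stand-in model: risk is one tenth of the number of active features.
toy_model = lambda X: X.sum(axis=1) / 10.0
effects = pairwise_class_contrast(toy_model, np.zeros((2, 3)))
```

For a non-linear model, a pair's effect can exceed the sum of the two single-feature effects; those pairs are the feature combinations the heatmap in Fig. 7 is designed to surface.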
One notable combination was cardiovascular disease with use of diuretics: diuretic use was associated with a lower risk of mortality in a group of patients with cardiovascular disease (the blue region in the lower right-hand corner of the heatmap, the region of greatest decrease in predicted probability of death) (Fig. 7). There are also combinations of delirium and dementia in Alzheimer’s disease that predispose some patients towards greater risk of mortality (the red region in the lower left of the heatmap) (Fig. 7).
Other co-morbidities that are together associated with greater mortality in a sub-group of patients (Fig. 7) included: dementia in Alzheimer’s disease with an additional coded diagnosis of cardiovascular disease, and dementia in Alzheimer’s disease with a coded history of abuse of alcohol and drugs.
This highlights the role of co-morbidities in determining mortality in a sub-group of patients with SMI and the need for multiple conditions to be managed simultaneously in patients. A class-contrastive statement for one of these patients in this sub-group (Fig. 7) is: ‘The selected patient is at high risk of mortality because the patient has dementia in Alzheimer’s disease and has cardiovascular disease. If the patient did not have both of these characteristics, the predicted risk would be much lower.’
Our deep learning models emphasize combinations of different features. Therefore, as a very simple approximation, we also fit a more complex logistic regression model with interaction effects. We fit a logistic regression model with main effects and an interaction term between dementia in Alzheimer’s disease and cardiovascular disease (Supplementary Fig. 2). The log-odds ratio for this interaction term is greater than 0 although it is not statistically significant. This may suggest that there is only a small sub-group of patients in whom dementia and cardiovascular disease co-occur and predispose towards an increased risk of death. Additional details are available in the Supplementary Section.
### Performance
We show the predictive performance of each model in this section. The models we used to predict mortality are:
1. A logistic regression model with the bio-social features as input. The area under the receiver operating characteristic curve (AUC) from the logistic regression model was 0.68 (95% confidence interval [0.65, 0.70]).
2. An autoencoder with the bio-social features as input. We then used the reduced dimensions from the autoencoder as input features to a random forest model. The predicted AUC from random forests built on top of the autoencoder-reduced dimensions was 0.80 (95% confidence interval [0.78, 0.82]).
We also use other statistical learning techniques to predict mortality; these are discussed in the Supplementary section (Section Additional analysis). We do not aim to exhaustively compare all possible statistical models, but briefly survey some techniques. Our aim is to apply class-contrastive analysis to a machine learning model and show that in some scenarios the model predictions can be explained, not to demonstrate that some machine learning models perform better than others.
## Discussion
Mortality among patients with severe mental illnesses (SMI) is too often premature3,4. Routinely collected clinical data can help generate insights that can result in more effective treatment of these patients.
We used routinely collected clinical data in an observational study to answer questions of mortality in patients with schizophrenia. We implemented an interpretable computational framework for integrating clinical data in mental health and interrogating it with statistical and machine learning techniques.
Our framework starts with a database that is a knowledge repository of expertise. This database was created based on consultations with clinicians and maps low-level features (for example, medications such as simvastatin) to broader categories (for example, cardiovascular medication). These features are relevant for patients with schizophrenia and were used to predict mortality.
Our architecture captures clinical information on physical health, mental health, personal history and social predisposing factors to create a profile for a patient. We then used a number of statistical and machine learning techniques to predict mortality using these features.
We make our predictions interpretable by using class-contrastive reasoning6,7. Our approach has similarities to case-based reasoning9 and analogy-based reasoning10, where predictions are made based on similar patient cases. The approach presented here complements other techniques like Shapley explanations that are used to improve the interpretability of machine learning models. Further work is required to ensure the findings from schizophrenia generalise to other types of SMI.
We used a range of statistical and machine learning techniques to predict mortality in patients with schizophrenia. Since machine learning models may also be difficult to explain, we make them explainable using class-contrastive reasoning and class-contrastive heatmaps.
In patients with schizophrenia, abuse of alcohol and drugs, and a diagnosis of delirium were risk factors for mortality (across all techniques). Use or prescription of antidepressants and second-generation antipsychotics (SGA) were associated with lower mortality in our logistic regression and survival models. However, use or prescription of SGA was associated with an increased probability of mortality in one of our ML models.
The logistic regression model predicted that Alzheimer’s disease is a risk factor for mortality. The deep learning model emphasized Alzheimer’s disease in combination with other co-morbidities. This highlights the role of co-morbidities in determining mortality in patients with SMI and the need to manage them.
The class-contrastive and survival analysis suggest that antidepressants are associated with lower mortality in a group of patients with schizophrenia. Alcohol/substance misuse was consistently associated with elevated mortality, suggesting the requirement to address the needs of so-called "dual diagnosis" patients (with SMI and comorbid substance misuse) as part of a strategy to improve life expectancy in patients with SMI.
The association between delirium and excess mortality is notable but not unexpected2,3,4. A weakness of the current family of models is their lack of temporal structure (for example, consideration of the time between delirium and death) but this finding serves to emphasize that delirium should not be taken lightly.
The association of antidepressant use with reduced mortality was unexpected but consistent across analytical methods. Our data do not support a mechanistic interpretation (for example, mode of death is not recorded in these structured clinical records) but this question would bear further investigation.
Illicit substance abuse and lack of family involvement were associated with increased risk of mortality8. Alcohol/substance abuse was also pointed out as a critical factor in our class-contrastive reasoning analysis and survival analysis. Provisioning of family support and involving family members and carers could be part of health management plans11.
We hope that some of these bio-social factors can be targeted therapeutically by either patient-level interventions (like provisioning of family support11) or service-level improvements12.
Overall, we observed that abuse of alcohol and drugs and a diagnosis of delirium are risk factors for mortality (in both logistic regression models and survival models). The use of SGA and antidepressants were associated with lower mortality from both our logistic regression models and survival models. This may be important given that some clinicians may hesitate to prescribe given what is known about short- to medium-term side effects of these drugs that include adverse impact on cardiovascular risk profiles. While our findings on this and other points are not conclusive evidence of causality, they are in accord with observational clinical data at the national level13.
The machine learning model emphasized (for example) Alzheimer’s disease along with other co-morbidities (Fig. 7). This highlights the role of co-morbidities in determining mortality in patients with SMI and points to the need for multiple conditions to be treated simultaneously in patients. This also suggests that a pragmatic trial of robust management of co-morbidities may be justified.
Interpretability is a major design consideration of machine learning algorithms applied in healthcare. We made our predictions interpretable by using class-contrastive reasoning and counterfactual statements6.
This approach has the capability to make some black-box models explainable, which might be very useful for clinical decision support systems. We demonstrate the approach here using logistic regression and artificial neural networks. These techniques could ultimately be used to build a conversational AI that could explain its predictions to a clinician.
Our work can also be used to build clinical decision support systems. This may lead to automated alerts in electronic healthcare record systems, after thorough validation in follow-up studies.
Our study is observational in nature and we do not imply causation. Our data is a naturalistic sample from clinics and should not be used to alter clinical practice. Our aim is to raise hypotheses that will need to be tested in randomized controlled trials.
There may also be other unknown confounds and hence causal conclusions cannot be drawn from an observational study. For example, a drug that is associated with better outcomes may be preferred by clinical teams, and a drug that is associated with poorer outcomes may still be prescribed for severely ill patients because it is perceived to be effective.
There may also be under-coding of schizophrenia diagnoses. The data were from a secondary care mental health service provider and may miss important risk factors coded in primary care.
Psychiatric diagnoses are challenging and there can be potential issues related to the reliability of diagnostic categories in SMI. Sampling bias is another issue in real world electronic healthcare record data. For example, it is possible that only the most severely ill patients seek clinical help and/or get referred to secondary care. Hence the data may reflect a category of patients who are more severely ill.
Our data also lacks temporal structure, which is likely to be important in determining progression of disease. The current work relates observable features to the risk of death within the observation period. Clearly this is not as satisfactory as a model that predicts a time-based risk. It would be expected that the temporal risk conferred by different features would vary—for example, diabetes increases cardiovascular risk over decades, whereas delirium is often associated with critical illness and may be associated with an elevated mortality risk that is very immediate or proximal. A comprehensive model might involve autodiscovery of those temporal risk factors, at the price of a considerable increase in model complexity. This will require building more complex recurrent neural network models like long short-term memory models (LSTM), which will require even more data.
Important readouts like statistical significance cannot be judged from the class-contrastive heatmaps. For example, lack of family support appeared to be associated with higher mortality in the class-contrastive heatmap for the logistic regression model (Fig. 4). However, this association was not statistically significant, even though the odds ratio for lack of family support was greater than 1 in a logistic regression model (Fig. 3).
We combined several medications into the category of second-generation antipsychotics, which itself consists of a heterogenous group of medications13, and made other simplifications in our treatment of medications.
Because of heterogeneity in training data and correlations across features, reproducibility of heatmaps is a limitation. We show an additional representative example in Supplementary Fig. 1. There are a few differences between the two heatmaps (Supplementary Fig. 1 and Fig. 6): most notably, Supplementary Fig. 1 shows SGA associated with an increased probability of mortality (bottom left arrow), which is inconsistent with the logistic regression model and survival analysis for the effect of SGA (Figs. 2 and 3). Reconciling these results will require additional analysis and validation in an independent cohort with more patients.
Our results suggest that the class-contrastive approach is sensitive to the training data and any imbalances in features. For example, a particular binary feature may be 0 for 100 patients and 1 for 10 patients. One way to determine this sensitivity is to artificially introduce more zeros and then observe the class contrastive heatmaps.
It is possible that the counter-intuitive observations we see in the class-contrastive heatmaps (on the test set) result from such imbalances in the training set. Because of this, reproducibility of heatmaps is a limitation of our approach.
Our approach is most helpful when explicit causal structure is modelled, when features are binary (categorical), and when only a few features are modified at a time. We account for correlations between features by modifying all pairs of features at a time and then observing the effect on model predictions (Fig. 7). However, this approach becomes computationally challenging for higher-order combinations of features (all triples, quadruples, or all possible combinations), or as the number of features increases.
In conclusion, our framework combines bio-social factors relevant for SMI with statistical learning, and makes them interpretable using class-contrastive techniques. Our work suggests that medications like antidepressants were associated with a reduced risk of death in a group of patients with schizophrenia. Abuse of alcohol and drugs, and a diagnosis of delirium were risk factors for death.
Our machine learning models highlight the role of co-morbidities in determining mortality in patients with SMI and the need to manage them. We hope that some of these bio-social factors can be targeted therapeutically by either patient-level or service-level interventions.
We complement explainable machine learning techniques with classical statistical analysis like logistic regression, survival models, and standardised mortality ratios. This may be a prudent and pragmatic approach for building explainable models in healthcare. We admit that the distinction between ML models and classical statistical models (like logistic regression) is artificial. Models lie on a continuum and a pragmatic approach towards explainable AI would combine and contrast all of these techniques.
The approach of combining explainable techniques and clinical knowledge with machine learning approaches may be more broadly applicable when data scientists need to work closely with domain experts (clinicians and patients).
Our approach combines clinical knowledge, health data, and statistical learning, to make predictions interpretable to clinicians using class-contrastive reasoning. We view our work as a step towards interpretable AI and personalized medicine for patients with SMI and potentially other diseases.
## Methods
### Overview of Methods
We give a brief overview of our approach in this section. Our approach is summarised in Fig. 1.
1. We take de-identified data from an electronic patient record system for mental health.
2. We define a set of high-level features that are in this example time independent. These include age, diagnostic categories (time-independent coded diagnosis at any point during the study period), and medication categories (time-independent prescription of or use of medications). We also include bio-social factors that are important in SMI like information on mental health diagnosis, relevant risk history such as a prior suicide attempt, substance abuse, and social factors such as lack of family support.
3. We use these features to predict death during the time of observation.
4. We use classical statistical models including logistic regression, survival models, and standardised mortality ratios.
5. We then fit machine learning models, comparing predictive accuracy to the classical statistical models.
6. Class-contrastive heatmaps are used to visualize the explanations of the statistical models and machine learning predictions. The corresponding class-contrastive statements also aid human interpretation.
### Mental health clinical record database
We used data from the Cambridgeshire and Peterborough NHS Foundation Trust (CPFT) Research Database. This comprises electronic healthcare records from CPFT, the single provider of secondary care mental health services for Cambridgeshire and Peterborough, UK, an area in which ~856,000 people reside. The records are de-identified using the CRATE software14. The CPFT Research Database operates under UK NHS Research Ethics approvals (REC references 12/EE/0407, 17/EE/0442; IRAS project ID 237953).
Data included patient demographics, mental health and physical co-morbidity diagnoses: these were derived from coded ICD-10 diagnoses and analysis of free text through natural language processing (NLP) tools15,16.
Dates of death are automatically updated via the National Health Service (NHS) Spine. We considered all patients with coded diagnoses of schizophrenia who had records in the electronic healthcare system from 2013 onwards. There were a total of 1706 patients diagnosed with schizophrenia defined by coded ICD-10 diagnosis (diagnosis code F20). We note there is under-coding of schizophrenia.
### Medicine information on prescribed drugs
We extracted medicine information for each patient by using natural language processing on clinical free text data using the GATE software15,17.
### Population mortality data
Population mortality data for England and Wales were used from the Office for National Statistics (ONS)18.
### Data input to statistical algorithms
The features fed into our statistical and machine learning algorithms included age, gender, high-level diagnosis categories, and medication categories. We also included other bio-social factors important in SMI. All these features were used to predict mortality. The full list of features was as follows:
1. High-level medication categories were created based on domain-specific knowledge from a clinician [RNC]. These medication categories are:
second-generation antipsychotics (SGA: clozapine, olanzapine, risperidone, quetiapine, aripiprazole, asenapine, amisulpride, iloperidone, lurasidone, paliperidone, sertindole, sulpiride, ziprasidone, zotepine); first-generation antipsychotics (FGA: haloperidol, benperidol, chlorpromazine, flupentixol, fluphenazine, levomepromazine, pericyazine, perphenazine, pimozide, pipotiazine, prochlorperazine, promazine, trifluoperazine, zuclopenthixol); antidepressants (agomelatine, amitriptyline, bupropion, clomipramine, dosulepin, doxepin, duloxetine, imipramine, isocarboxazid, lofepramine, maprotiline, mianserin, mirtazapine, moclobemide, nefazodone, nortriptyline, phenelzine, reboxetine, tranylcypromine, trazodone, trimipramine, tryptophan, sertraline, citalopram, escitalopram, fluoxetine, fluvoxamine, paroxetine, vortioxetine and venlafaxine); diuretics (furosemide); thyroid medication (drug mention of levothyroxine); antimanic drugs (lithium) and medications for dementia (memantine and donepezil).
2. Relevant co-morbidities we included were diabetes (inferred from ICD-10 codes E10, E11, E12, E13 and E14 and any mentions of the drugs metformin and insulin), cardiovascular diseases (inferred from ICD-10 diagnoses codes I10, I11, I26, I82, G45 and drug mentions of atorvastatin, simvastatin and aspirin), respiratory illnesses (J44 and J45) and anti-hypertensives (mentions of the drugs bisoprolol and amlodipine).
3. We included all patients with a coded diagnosis of schizophrenia (F20). For these patients with schizophrenia, we also included any additional coded diagnosis from the following broad diagnostic categories: dementia in Alzheimer’s disease (ICD-10 code starting with F00), delirium (F05), mild cognitive disorder (F06.7), depressive disorders (F32, F33) and personality disorders (F60).
4. We also included relevant social factors: lack of family support (ICD-10 chapter code Z63) and personal risk factors (Z91: a code encompassing allergies other than to drugs and biological substances, medication noncompliance, a history of psychological trauma, and unspecified personal risk factors); alcohol and substance abuse (this was inferred from ICD-10 coded diagnoses of Z86.4, F10, F12, F17, F19 and references to thiamine, which is prescribed for alcohol abuse). Other features included are self-harm (ICD-10 codes T39, T50, X60, X61, X62, X63, X64, X78 and Z91.5), non-compliance and personal risk factors (Z91.1), referral to a crisis team at CPFT (recorded in the electronic healthcare record system) and any prior suicide attempt (in the last 6 months or any time in the past) coded in structured risk assessments.
These broad categories constituted our representation of simplified clinician-based knowledge. We used these features (including age of the patient) to predict whether a patient died any time during the time period observed (from first referral to CPFT to the present day). We did not attempt to predict the risk of dying, for instance, 1 year after first referral to CPFT. The features we used to predict mortality were also time-independent. This represents a simplified time-independent model. More detailed modelling would include temporal effects of such predictors.
Age was a predictor in all our models, including survival models, and we consider both time of death and time of feature collection. The observed outcome (death) was binary, and this is the outcome the models predict; they do so via a continuous variable related to risk, so a probability is simultaneously predicted. Our model predictions, if independently validated in another clinical setting, could be converted into a risk or probability.
All our models, including the machine learning model, include age as a predictor. However, the class-contrastive analysis and the class-contrastive heatmaps do not include age since the feature changes (one at a time or pairwise) can be achieved only for binary (categorical) features. Hence, the class-contrastive heatmaps show the effect of changing predictors on the model prediction, over and above the contribution of age.
### Data pre-processing
Diagnostic codes were based on the International Classification of Diseases (ICD-10) coding system19. Age of patients was normalised (feature scaled) by subtracting the mean age from the age of each patient and then dividing by the standard deviation. All categorical variables, such as diagnosis and medications (described above), were converted using a one-hot encoding scheme. This is explained in detail in the Supplementary section.
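The age normalisation described above is a standard z-score; a minimal sketch with made-up ages:

```python
import numpy as np

ages = np.array([25.0, 40.0, 55.0, 70.0])  # hypothetical ages in years

# Subtract the mean and divide by the standard deviation (feature scaling).
scaled_age = (ages - ages.mean()) / ages.std()
```

After scaling, the feature has zero mean and unit variance, so age is on a comparable scale to the binary indicator features.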
### Machine learning and statistical techniques
We performed logistic regression using generalized linear models20,21. We used age (feature scaled) as a continuous predictor. There are categorical features (medications, co-morbidities and other social and personal predisposing factors) that were encoded using a one-hot representation.
For our machine learning approach, we used artificial neural networks (autoencoders) to integrate data from different sources giving a holistic picture of mental health, physical health and social factors contributing to mortality in SMI. We use the same set of features for all algorithms.
Artificial neural networks are composed of computational nodes (artificial neurons) that are connected to form a network. Each artificial neuron performs a simple computation (much like logistic regression). The neurons are organised in layers. The input layer takes in the input features, transforms them, and passes them to one or more intermediate layers called hidden layers. The hidden layers perform further transformations and pass the result to the output layer, which is used to make a prediction (in this case, about mortality).
The autoencoder is a type of artificial neural network that also performs dimensionality reduction since the hidden layer has fewer neurons than the input layer22. In our framework, the reduced dimensions of the autoencoder (output of the hidden layer) were used as input to a random forest model to predict mortality (Fig. 5). Random forests are machine learning models that build collections of decision trees23. Each decision tree makes a prediction after making a series of choices based on the input data. These decision trees are combined to build a collection (forest) that together has better predictive ability than a single tree.
We split the data into a training set (50%), validation set (25%) and test set (25%). We performed 10-fold cross-validation and regularization to penalize for model complexity. The architecture is summarised in Fig. 5. We used the following models to predict mortality:
1. Logistic regression model with all the original input features;
2. An autoencoder with the bio-social features as input. We then use the reduced dimensions from the autoencoder as input features to a random forest model (Fig. 5).
#### Machine learning methods
We used an artificial neural network, called an autoencoder, to integrate data from different sources and predict mortality. The input features are age (normalised), gender, diagnosis categories, lifestyle risk factors, social factors and medication categories. We used the same set of features for all algorithms.
Categorical features (such as medication categories) are encoded using a one-hot representation. This involves taking a vector that is as long as the number of unique values of the feature. Each position on this vector corresponds to a unique value that the categorical feature can take. Whenever a categorical feature (say, did a patient take cardiovascular medication) takes on a particular value (say True), we place a 1 (‘hot’) corresponding to that position on the vector and 0 everywhere else.
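The one-hot scheme described above can be sketched as follows; the feature values are invented for illustration:

```python
import numpy as np

def one_hot(values, categories):
    """Encode a categorical column as a binary indicator matrix:
    one column per unique value, with a 1 ('hot') at the matching position."""
    encoded = np.zeros((len(values), len(categories)), dtype=int)
    for row, value in enumerate(values):
        encoded[row, categories.index(value)] = 1
    return encoded

# e.g. did a patient take cardiovascular medication? (True/False per patient)
codes = one_hot([True, False, True], categories=[False, True])
```

Each row contains exactly one 1, at the position corresponding to the patient's value for that feature.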
We show the architecture of the autoencoder in Fig. 5. The autoencoder is an artificial neural network with an input layer, hidden layer and an output layer. The input layer takes in the bio-social features. The output layer is used to reconstruct the input. The hidden layer of the autoencoder is used for dimensionality reduction.
The autoencoder had one hidden layer of 10 neurons. We used the hidden layer as input to a random forest model to predict mortality. A similar architecture was applied previously to electronic healthcare record data24. The choice of an autoencoder allows reduction of the feature space.
An artificial neural network has an input layer, hidden layer(s) and output layer. An activation function is used to project the input data (X) into another feature space using weights (W).
$$f(W\cdot X)\qquad(1)$$
The weights W are determined from data using a technique called backpropagation25.
We used a ReLU (Rectified Linear Unit) activation function for the hidden layer. The form of the ReLU function is shown below:
$$f(x)=\max(0,x)\qquad(2)$$
We used a sigmoid activation function for the final layer:
$$f(x)=\frac{1}{1+e^{-x}}\qquad(3)$$
The output of the sigmoid function is positive even for negative input.
We also experimented with a hyperbolic tangent (tanh) function shown below:
$$f(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\qquad(4)$$
However, the cross-validation results (see discussion later) were inferior to that of the ReLU activation function.
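The three activation functions above (Equations (2), (3) and (4)) can be written directly in code; a minimal numpy sketch:

```python
import numpy as np

def relu(x):
    """Rectified linear unit, Equation (2): zero for negative input."""
    return np.maximum(0.0, x)

def sigmoid(x):
    """Sigmoid, Equation (3): output in (0, 1), positive even for negative input."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent, Equation (4): output in (-1, 1)."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
```

In our architecture, ReLU is applied at the hidden layer and the sigmoid at the final layer, whose output can be read as a probability of mortality.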
An artificial feed-forward neural network optimizes a loss function of the form:
$$-\sum_{i=1}^{d}\log P(Y=y_{i}\mid x_{i},\theta)+\lambda\lVert\theta\rVert_{1}\qquad(5)$$
This is the negative log-likelihood. There are d data points. The i th data point has a label denoted by yi and input feature vector represented by xi. The weights of the artificial neural network are represented by a vector θ. λ is a regularization parameter to prevent overfitting and reduce model complexity. λ is usually determined by cross-validation. Shown here is the L1 norm of the parameter vector (θ). The weights of the artificial neural network are determined using a technique called backpropagation25.
The autoencoder used a cross-entropy loss function, which is a measure of discrepancy between the input layer and the reconstructed hidden layer. The cross-entropy loss function used for the autoencoder had the following form:
$$\sum_{k=1}^{m}\left[u_{k}\log v_{k}+(1-u_{k})\log(1-v_{k})\right]\qquad(6)$$
where there are m features in the input layer. u represents the input layer and v represents the hidden layer. The layers are computed by applying the appropriate activation functions (Equations (1), (2) and (3)).
The final cost function is given below:
$$\sum_{k=1}^{m}\left[u_{k}\log v_{k}+(1-u_{k})\log(1-v_{k})\right]+\lambda\lVert\theta\rVert_{1}\qquad(7)$$
where the vector θ represents all the weights of the artificial neural network. There are m features in the input layer; u represents the input layer and v represents the hidden layer. We added an L1 penalty term on the weights, denoted λ||θ||, to perform regularization and prevent overfitting. λ is a regularization parameter that we determined by 10-fold cross-validation.
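Equation (7) can be computed directly; a minimal numpy sketch with made-up layer values and weights:

```python
import numpy as np

def cost(u, v, theta, lam):
    """Cross-entropy between input layer u and reconstruction v (Equation (6)),
    plus an L1 penalty on the network weights theta."""
    cross_entropy = np.sum(u * np.log(v) + (1.0 - u) * np.log(1.0 - v))
    return cross_entropy + lam * np.sum(np.abs(theta))

u = np.array([1.0, 0.0, 1.0])    # input layer (hypothetical binary features)
v = np.array([0.9, 0.1, 0.8])    # reconstructed values from the hidden layer
theta = np.array([0.5, -0.3])    # network weights
value = cost(u, v, theta, lam=0.01)
```

The cross-entropy term is largest (closest to zero) when the reconstruction v matches the input u, while the λ term shrinks the weights.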
We performed a 50%–25%–25% training-validation-test split of the data. We used the keras package26 with the Tensorflow backend27.
The artificial neural network was trained on the training data for a number of epochs; in one epoch, the network passes once over the entire training dataset. The model fit is then refined over subsequent epochs. Our neural network was trained for 1000 epochs, which was assessed as being sufficient to reach convergence. We used the Adadelta method of optimization28.
We selected all hyperparameters, including the number of neurons in a hidden layer and activation functions, based on a uniform search and 10-fold cross-validation. We split the data into a training set (50%), validation set (25%), and test set (25%). We trained the model on the training set. We carried out cross-validation on the validation set. The architectural parameters and regularization parameters were then selected. This final model was then evaluated on the test set. This process of splitting the data (into training, validation and test sets), training the model and performing cross-validation was repeated 10 times.
We varied the number of neurons in the hidden layer from 2 to 20. For activation functions, we tried sigmoid, rectified linear unit (ReLU) and hyperbolic tangent (tanh). We do not use dropout regularization to keep a simple architecture and simplify the process of model selection. A hidden layer of 10 neurons and ReLU and sigmoid activation functions (for the first and second layers, respectively), were found to have the least cross-validation error.
We repeated the stochastic process of splitting the data into training and test sets and performing cross-validation 10 times. This yielded a mean AUC of 0.80 (95% confidence intervals [0.78, 0.82]).
### Class-contrastive reasoning
We explain our models using class-contrastive reasoning and class-contrastive heatmaps. The technique works as follows. The model is trained on the training set. For each patient in the test set, we independently mutate (change from 0 to 1, or 1 to 0) each categorical feature. For each patient in the test set, we use the trained model to compute the change in the predicted probability of death.
We repeat this procedure independently for each feature and each patient in the test set. We do not retrain the model when we mutate the features. The predictions are made using the trained machine learning model on the test set.
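The procedure above can be sketched as follows. The simple logistic scorer here is a stand-in for the trained neural network (an assumption for illustration), and the weights are made up; only the mutation loop mirrors the described technique.

```python
import math

def model_probability(features, weights, bias):
    # Stand-in for the trained model's predicted probability of death.
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def class_contrastive_deltas(patient, weights, bias):
    """For one patient, flip each binary feature in turn (0 -> 1 or 1 -> 0)
    and record the change in the predicted probability of death
    (mutated minus baseline). No retraining is involved."""
    baseline = model_probability(patient, weights, bias)
    deltas = []
    for i in range(len(patient)):
        mutated = list(patient)
        mutated[i] = 1 - mutated[i]  # flip this feature only
        deltas.append(model_probability(mutated, weights, bias) - baseline)
    return deltas

# Illustrative example: one risk-increasing and one risk-decreasing feature.
weights, bias = [2.0, -1.0], 0.0
patient = [0, 0]
deltas = class_contrastive_deltas(patient, weights, bias)
```

Collecting one such row of deltas per test-set patient gives exactly the matrix visualized in the class-contrastive heatmap.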
We visualize the amount of change in the model-predicted probability of mortality, achieved by setting a particular feature to 1 versus 0, using a class-contrastive heatmap. The rows represent patients and columns represent the feature that has been changed from 0 to 1. The heatmaps also show a hierarchical clustering dendrogram, computed using a Euclidean distance metric and complete linkage[23].
In another variant, we also simultaneously change all pairs of features in the test set from 0 and 0 to 1 and 1. As before, for each patient in the test set, we use the trained model to compute the change in the predicted probability of death. In this case, the class-contrastive heatmap shows the amount of change in the predicted probability of mortality, achieved by setting a particular combination of features to 1 versus 0. The rows represent patients and columns represent the combination of features that are changed simultaneously.
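A minimal sketch of the pairwise variant, assuming a `predict` callable standing in for the trained model (the toy scorer and its weights are illustrative, not from the paper):

```python
import math
from itertools import combinations

def pairwise_contrastive_deltas(patient, predict):
    """For each pair of binary features, set both to 1 and record the
    change in the predicted probability of death relative to the
    unmodified baseline."""
    baseline = predict(patient)
    deltas = {}
    for i, j in combinations(range(len(patient)), 2):
        mutated = list(patient)
        mutated[i], mutated[j] = 1, 1
        deltas[(i, j)] = predict(mutated) - baseline
    return deltas

# Illustrative stand-in for the trained model: a fixed logistic scorer.
def toy_predict(features, weights=(1.5, -0.5, 0.75), bias=-1.0):
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

pair_deltas = pairwise_contrastive_deltas([0, 0, 0], toy_predict)
```

Each `(i, j)` key becomes one column of the pairwise class-contrastive heatmap, with one row per patient.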
The class-contrastive heatmap shows patient-specific predictions. Predictions for individual patients are made in the following way: the trained model makes a prediction for the probability of death based on the modified features as input. This process is repeated for each patient and each feature (or feature combination).
### Survival analysis and standardized mortality ratios
For survival analysis, we used the date of referral as the entry date (start of exposure). In cases where a patient had multiple referrals, we used the earliest date. If this date was earlier than the start date of our mental health clinical database (called RiO), we set it to the start date of RiO (1 December 2012). The event was death; the date of death was derived from the National Health Service (NHS) Spine.
We used a Cox proportional hazards model for patients with schizophrenia, using age (feature scaled) and the bio-social features (as outlined before) as input features.
Standardized mortality ratios (SMRs) are a method to standardize and control for age and population structure[29], and we calculated age-standardized SMRs accordingly. For calculating SMRs, we defined five-year age groups (0–4, 5–9, ..., 85–90, and >90 years). Population mortality data were taken from the Office for National Statistics (ONS)[18].
We calculated SMRs using the indirect method of standardization[29]. The denominator is the expected number of deaths in the study population and the numerator is the number of observed deaths in the study population.
Hence the indirectly standardized SMR is the ratio of the number of deaths observed in a study population to the number expected if the age-specific rates of a standard population had applied:
$$\mathrm{SMR}=\frac{d}{\sum_{i=1}^{k}n_{i}R_{i}}$$
(8)
where d is the number of deaths in the study population and there are k age groups in the study and standard populations; n_i is the number of people in the ith group of the study population and R_i is the crude death rate in the ith group of the standard population. The 95% confidence interval is SMR ± 1.96 SE(SMR)[29], where SE(SMR) is given by:
$$\mathrm{SE(SMR)}=\frac{\sqrt{O}}{E}$$
(9)
Here O is the observed number of deaths in the study population and E is the expected number of deaths in the study population.
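Equations (8) and (9) can be computed directly; in this sketch the observed deaths, group sizes, and standard rates are made-up numbers for illustration.

```python
import math

def smr_with_ci(observed_deaths, study_counts, standard_rates):
    """Indirect standardization: SMR = O / E, where E is the number of
    deaths expected if the standard population's age-specific rates
    applied to the study population. Returns (SMR, 95% CI)."""
    expected = sum(n * r for n, r in zip(study_counts, standard_rates))
    smr = observed_deaths / expected
    se = math.sqrt(observed_deaths) / expected  # SE(SMR) = sqrt(O) / E
    return smr, (smr - 1.96 * se, smr + 1.96 * se)

# Two illustrative age groups: n_i people in the study population,
# R_i crude death rates in the standard population.
smr, (ci_low, ci_high) = smr_with_ci(observed_deaths=30,
                                     study_counts=[1000, 500],
                                     standard_rates=[0.01, 0.02])
```

Here the expected deaths are 1000 × 0.01 + 500 × 0.02 = 20, so the SMR is 30 / 20 = 1.5, with a confidence interval that just includes 1.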
### Logistic regression models
We used a logistic regression model to predict mortality in patients with schizophrenia. Age (feature scaled) and the bio-social factors were used as input. The model, in R notation, was as follows:
Death ~ age + dementia + delirium + abuse_alcohol_drugs + specific_personality_disorder + respiratory + cardiovascular + diabetes + self_harm + lack_family_support + personal_risk_factors + SGA + antidepressant + suicide_attempt + dementia_drug + antimanic_drug + thyroid + FGA + diuretic + anti_hypertensive + aspirin.
This same model was also fitted using an L1 regularized logistic regression model (details are available in the Supplementary section, subsection Sensitivity analysis).
We also fitted a logistic regression model with main effects and an interaction term between dementia in Alzheimer’s disease and cardiovascular disease. The model, in R notation, was as follows:
Death ~ dementia * cardiovascular + age + dementia + delirium + abuse_alcohol_drugs + specific_personality_disorder + respiratory + cardiovascular + diabetes + self_harm + lack_family_support + personal_risk_factors + SGA + antidepressant + suicide_attempt + dementia_drug + antimanic_drug + thyroid + FGA + diuretic + anti_hypertensive + aspirin.
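For readers less familiar with R's formula notation, `dementia * cardiovascular` expands to both main effects plus their product (the interaction term). A minimal hand-built design-matrix row illustrating this; the helper name `design_row` is ours, not from the paper.

```python
def design_row(age, dementia, cardiovascular, other_features=()):
    """Expand the R term `dementia * cardiovascular` by hand: both main
    effects plus their product (the interaction term), alongside age and
    any remaining bio-social features."""
    return [age, dementia, cardiovascular,
            dementia * cardiovascular, *other_features]

# The interaction column is 1 only when both conditions are present.
row_both = design_row(age=0.5, dementia=1, cardiovascular=1)
row_one = design_row(age=0.5, dementia=1, cardiovascular=0)
```

The interaction column lets the model assign an extra effect to the co-occurrence of the two conditions beyond the sum of their individual effects.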
### Software
All software was written in the R[30] and Python programming languages. Generalized linear model (GLM) regression was performed using the glm function in R[21,31]. Hierarchical clustering and visualization were performed using heatmaps from the pheatmap package[32]. Survival analysis was conducted using the survminer package in R[33]. L1 regularized logistic regression was performed using the glmnet package[34].
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
This study reports on human clinical data which cannot be published directly due to reasonable privacy concerns, as per NHS research ethics approvals and information governance rules. Qualified researchers can apply for data access by submitting an application to the Cambridgeshire and Peterborough NHS Foundation Trust (CPFT).
## Code availability
All software was written in the R[30] and Python programming languages. Generalized linear model (GLM) regression was performed using the glm function in R[21,31]. Hierarchical clustering and visualization were performed using heatmaps from the pheatmap package[32]. Survival analysis was conducted using the survminer package in R[33]. L1 regularized logistic regression was performed using the glmnet package[34]. The deep learning model was built in the Python programming language using the keras package[26] with the TensorFlow backend[27]. The code used in this study is available from the corresponding author upon reasonable request.
## References
1. Goldner, E. M., Hsu, L., Waraich, P. & Somers, J. M. Prevalence and incidence studies of schizophrenic disorders: a systematic review of the literature. Can. J. Psychiatry 47, 833–843 (2002).
2. Hayes, J. F., Marston, L., Walters, K., King, M. B. & Osborn, D. P. Mortality gap for people with bipolar disorder and schizophrenia: UK-based cohort study 2000-2014. Br. J. Psychiatry 211, 175–181 (2017).
3. Chang, C. K. et al. Life expectancy at birth for people with serious mental illness and other major disorders from a secondary mental health care case register in London. PLoS ONE 6, e19590 (2011).
4. Olfson, M., Gerhard, T., Huang, C., Crystal, S. & Stroup, T. S. Premature mortality among adults with schizophrenia in the United States. JAMA Psychiatry 72, 1172–1181 (2015).
5. Pedersen, C. B., Mors, O., Bertelsen, A., Waltoft, B. L. & Agerbo, E. A comprehensive nationwide study of the incidence rate and lifetime risk for treated mental disorders. JAMA Psychiatry 71, 573–581 (2014).
6. Sokol, K. & Flach, P. Conversational explanations of machine learning predictions through class-contrastive counterfactual statements. In: Proc. Twenty-Seventh Int. Jt. Conf. Artif. Intell., 5785–5786. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2018/836 (2018).
7. Miller, T. Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2019).
8. Reininghaus, U., Dutta, R., Dazzan, P., Doody, G. A. & Fearon, P. Mortality in schizophrenia and other psychoses: a 10-year follow-up of the AESOP first-episode cohort. Schizophr. Bull. 41, 664–673 (2015).
9. Kolodner, J. Case-Based Reasoning. 687. https://doi.org/10.1016/C2009-0-27670-7 (Elsevier Science, 2014).
10. Gentner, D. & Forbus, K. D. Computational models of analogy. Wiley Interdiscip. Rev. Cogn. Sci. 2, 266–276 (2011).
11. Power, P. J., Bell, R. J., Mills, R., Herman-Doig, T. & Davern, M. Suicide prevention in first episode psychosis: the development of a randomised controlled trial of cognitive therapy for acutely suicidal patients with early psychosis. Aust. N Z J. Psychiatry 37, 414–420 (2003).
12. Sahakian, B. J., Bruhl, A. B., Cook, J., Killikelly, C. & Savulich, G. The impact of neuroscience on society: cognitive enhancement in neuropsychiatric disorders and in healthy people. Philos. Trans. R. Soc. B Biol. Sci. 370, 20140214 (2015).
13. Tiihonen, J., Lonnqvist, J., Wahlbeck, K., Klaukka, T. & Niskanen, L. 11-year follow-up of mortality in patients with schizophrenia: a population-based cohort study (FIN11 study). Lancet 374, 620–627 (2009).
14. Cardinal, R. N. Clinical records anonymisation and text extraction (CRATE): an open-source software system. BMC Med. Inform. Decis. Mak. 17, 50 (2017).
15. Cunningham, H., Tablan, V., Roberts, A. & Bontcheva, K. Getting more out of biomedical documents with GATE's full lifecycle open source text analytics. PLoS Comput. Biol. 9, e1002854 (2013).
16. Wang, T. et al. Implementation of a real-time psychosis risk detection and alerting system based on electronic health records using CogStack. J. Vis. Exp. https://doi.org/10.3791/60794 (2020).
17. Sultana, J., Chang, C. K., Hayes, R. D., Broadbent, M. & Stewart, R. Associations between risk of mortality and atypical antipsychotic use in vascular dementia: a clinical cohort study. Int. J. Geriatr. Psychiatry 29, 1249–1254 (2014).
18. ONS. Death registrations summary tables—England and Wales—Office for National Statistics. https://www.ons.gov.uk/peoplepopulationandcommunity/birthsdeathsandmarriages/deaths/datasets (2017).
19. WHO. The ICD-10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. Technical report, World Health Organization. https://apps.who.int/iris/handle/10665/37958 (1992).
20. Winter, B. Linear models and linear mixed effects models in R with linguistic applications. Preprint at https://arxiv.org/abs/1308.5499 (2013).
21. Bates, D., Machler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48 (2015).
22. Bourlard, H. & Kamp, Y. Auto-association by multilayer perceptrons and singular value decomposition. Biol. Cybern. 59, 291–294 (1988).
23. Gareth, J., Daniela, W., Trevor, H. & Robert, T. Introduction to Statistical Learning with Applications in R. Springer. https://www.statlearning.com/ (2017).
24. Beaulieu-Jones, B. K. & Greene, C. S. Semi-supervised learning of the electronic health record for phenotype stratification. J. Biomed. Inform. 64, 168–178 (2016).
25. Linnainmaa, S. Taylor expansion of the accumulated rounding error. BIT Numer. Math. 16, 146–160 (1976).
26. Chollet, F. keras. https://github.com/keras-team/keras (2015).
27. Abadi, M. et al. TensorFlow: a system for large-scale machine learning. In: Proc. 12th USENIX Conf. Oper. Syst. Des. Implement. (2016).
28.
29. Higham, J., Flowers, J. & Hall, P. Standardisation. Technical report, Eastern Region Public Health Observatory. https://www.scotpho.org.uk/media/1403/inphorm-6-final.pdf (2005).
30. R Core Team. R: a language and environment for statistical computing. https://www.r-project.org/ (2017).
31. Kuznetsova, A., Brockhoff, P. B. & Christensen, R. H. B. lmerTest package: tests in linear mixed effects models. J. Stat. Softw. 82, 1–26 (2017).
32. Kolde, R. pheatmap: pretty heatmaps. https://cran.r-project.org/package=pheatmap (2018).
33. Kassambara, A. survminer: survival analysis and visualization. https://github.com/kassambara/survminer (2019).
34. Friedman, J., Hastie, T. & Tibshirani, R. Regularization paths for generalized linear models via coordinate descent. J. Stat. Softw. 33, 1–22 (2010).
## Acknowledgements
This work was funded by an MRC Mental Health Data Pathfinder grant (MC_PC_17213). P.B.J. is supported by the NIHR Applied Research Collaboration East of England. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. This research was supported in part by the NIHR Cambridge Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the MRC, the NHS, the NIHR, or the Department of Health and Social Care. We thank Jenny Nelder and Jonathan Lewis for all their support during this project and Irene Egli for inspiring S.B. to think about patients with schizophrenia. This work is dedicated to the memory of Patrick Winston.
## Author information
### Contributions
S.B., P.L., P.B.J., and R.N.C. designed the study. S.B. and R.N.C. verified the underlying data. S.B. conducted the analyses and wrote the original draft of the manuscript. All authors edited the manuscript and gave final approval for publication.
### Corresponding author
Correspondence to Soumya Banerjee.
## Ethics declarations
### Competing interests
R.N.C. consults for Campden Instruments Ltd and receives royalties from Cambridge University Press, Cambridge Enterprise, and Routledge. S.B., P.L., and P.B.J. declare they have no conflicts of interest to disclose.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Banerjee, S., Lio, P., Jones, P.B. et al. A class-contrastive human-interpretable machine learning approach to predict mortality in severe mental illness. npj Schizophr 7, 60 (2021). https://doi.org/10.1038/s41537-021-00191-y
• DOI: https://doi.org/10.1038/s41537-021-00191-y
Talk:1.9
This page is only for discussing the 1.9 page. Below are some common links to help you before you post.
Counter-edit warring
There has been some counter-edit warring (whatever it's called) on the page about the release date box. If you could look onto that, that would be great. MarioProtIV (talk) 12:24, 31 July 2014 (UTC)
My opinion, if it helps, is that there is no reason to declare the date is not set when an unknown release date is already good. We never set a parameter just to remove the "?", especially if we do not know what is correct. Mojang may have even set the release date, and just not told anyone. Also, by setting it to "Date not set", it causes much more editing with speculation trying to guess the year. --KnightMiner (t|c) 13:54, 31 July 2014 (UTC)
And setting it to "date not set" is an oxymoron. Majr (Talk | Contribs) 13:58, 31 July 2014 (UTC)
Protect the page until the first 1.9 snapshot gets released
I got tired of unsourced information being added to the page, so I decided to put an editor warning visible only when editing the page, however, people still didn't stop adding unsourced information. Please protect this page until the first 1.9 snapshot gets released. --ToonLucas22 (talk) 14:34, 21 December 2014 (UTC)
In this case, I do not think protection is necessary, as no recent false information has been added, and in the past it was mainly the doing of a single user. If it becomes excessive, I would agree to semi-protection. KnightMiner (t·c) 20:37, 21 December 2014 (UTC)
Wither
Currently, the upcoming changes list the wither has "Planned Additions". That seems very useless, as it states nothing more than the wither is being changed, maybe even simply to include bug fixes. Can we require that the feature actually has some description? KnightMiner (t·c) 18:30, 19 January 2015 (UTC)
That's not what it says, it says "A new bar for when there are two withers". Have you tried purging the cache, perhaps? --ToonLucas22 (talk) 18:44, 19 January 2015 (UTC)
Compare the time of my post with the time of your edit.
My main point is the additions of "Secret feature", "Changes involving x" and other similar things that have been being added. We have no rule in place against undescriptive "Upcoming features", causing people to think it is fine. KnightMiner (t·c) 19:50, 19 January 2015 (UTC)
The tweet doesn't even say it's for 1.9— TheWombatGuru t | c NL Admin 20:20, 19 January 2015 (UTC)
The Wither boss bar is a darker purple. Does that help you? Fyreboy5 (talk) 14:09, 26 October 2015 (UTC)
Consistency
How to add consistency to the one-bullet-point additions? Is it better as separate bullet points (like I just made it) or empty headings? FM22 (talk) 09:43, 8 March 2015 (UTC)
I would keep it consistent with the style of 1.8 and alike. Since there are not a lot of features yet, the category headers are not needed, but individual items should have their title bold, and information as bullet points (for example, the captions would state captions as the title, and the example as a bullet point, while the new commands would go under the header of "commands" or something similar). KnightMiner t/c 19:48, 8 March 2015 (UTC)
Swimming bird explained how Tomasso is working on recoding boats: https://www.youtube.com/watch?v=gsf-iBzLT9c at 2:13 he shows tweets from the developers. Should this be included in this page? --Kkkllleee (talk) 02:13, 24 March 2015 (UTC)
That's referring to Pocket Edition. Skylinerw (talk) 02:24, 24 March 2015 (UTC)
I highly doubt that refers to the Pocket Edition, as the context implies "fixing boat" while the PE boats are not in development version yet. Even so, usually we do not state upcoming fixes without bug tracker links. KnightMiner t/c 02:30, 24 March 2015 (UTC)
But Tomasso also promised boats that can hold multiple entities and the different colored wood types, those are new features, not bug-fixes. --Kkkllleee (talk) 03:31, 24 March 2015 (UTC)
The only reference in that video we can guarantee is the PC edition is the first one about fixing boats, the rest are all promised features for the Pocket Edition, and the tweets are from a the Pocket Edition dev, but never stated for PC. KnightMiner t/c 03:34, 24 March 2015 (UTC)
There are more news https://www.youtube.com/watch?v=CRCwo5Bnjfk here the tweets from the developers imply that not only do new kinds of boats would be added, but that version exclusive features in general are gonna have more notoriety across platforms. --Kkkllleee (talk) 04:31, 9 April 2015 (UTC)
I don't see anything about new kinds of boats, just changes to how existing ones behave. I'm not sure what you mean by "notoriety", but Jeb's tweet here says their goal is to get rid of version exclusive features, making the game the same on all platforms. -- Orthotopetalk 04:58, 9 April 2015 (UTC)
At 1:35, he says he made a boat out of birch, but a bug made it transform into oak when broken; this implies that it is now considered proper for a birch boat to drop birch planks, but that makes no sense unless he is saying that new boat kinds are going to be added. Since he is against version exclusive features, it is pretty much confirmed. --Kkkllleee (talk) 20:35, 9 April 2015 (UTC)
Pretty much confirmed is not confirmed, it is still speculation. Stating he wants to get rid of version exclusive features does not mean all Pocket edition exclusive features are coming in 1.9, nor that there are any plans to add any of those features yet. It just means exclusive features are not desired.
As for the tweet you referenced (this one, right? it would be nice for you to provide that link, rather than me needing to find it), I would not consider that as enough proof as of yet, since jeb_ is also working on the pocket edition at this time (where colored boats are confirmed). Even if referring to the PC edition, that tweet could easily refer to current behavior, as it only mentions the recipe (built from birch) and the outcome (oak planks drop).
So in summary, while I would not doubt colored boats are planned for 1.9, there is no source yet as to them being added in that update. You could try tweeting one of the developers to ask if it is true if you want though. KnightMiner t/c 20:59, 9 April 2015 (UTC)
Thank you for clearing it up. Can you teach me how to search for tweets? --Kkkllleee (talk) 04:35, 11 April 2015 (UTC)
┌──────────────────────────┘
One of the easiest ways to find tweets is to follow the developers on twitter (a list of developers is listed on Minecraft, and their twitters are listed via their articles). Clicking the ... button on the tweet gives an option to copy the URL.
Otherwise, the Minecraft Subreddit tends to contain most tweets relating to new features.
Lastly, if you remember reading a tweet, but cannot find it, google is the easiest way to find it (just type keywords you remember, who tweeted it helps the most). KnightMiner t/c 04:45, 11 April 2015 (UTC)
Changed "hearing impaired" to "hard of hearing"
"Hearing impaired" is a rather rude term and "hard of hearing" would be preferred. 98.203.219.61 17:46, 24 March 2015 (UTC)
Posting to the talk page was not really necessary; a properly-written edit summary is sufficient. — NickTheRed37 t/c (f.k.a. Naista2002) 18:14, 24 March 2015 (UTC)
I guess he was preparing for a flame war. --Kkkllleee (talk) 03:26, 29 April 2015 (UTC)
I don't want to start an edit war
I don't want to start an edit war, BDJP007301, but the reason I made the change was because the top-level (one-indent) point was about making the boss fight more similar to the console edition, and this is one of the features that is in the console edition and is confirmed to be added. It is relevant to the boss fight specifically, as it stops you from shooting the ender crystals and you have to climb some of the pillars instead. FM22 (talk) 14:18, 3 April 2015 (UTC)
Agree — NickTheRed37 t/c (f.k.a. Naista2002) 14:22, 3 April 2015 (UTC)
Disagree - Doesn't relate to the Ender Dragon in general, which you put it under. BDJP (t|c) 15:02, 3 April 2015 (UTC)
Point taken; will correct title –Preceding unsigned comment was added by FM22 (talkcontribs) at 15:13, 03 April 2015 (UTC). Please sign your posts with ~~~~
Agree since the title had been changed to specify 'boss fight'. Skylinerw (talk) 19:20, 3 April 2015 (UTC)
Content dispute again
I changed one of the titles to match the bug tracker's, then LauraFi reverted it and BDJP007301 reverted LauraFi's revert. That was finally reverted by Sealbudsman. Should we use common grammar or use the bug tracker titles? --ToonLucas22 (talk) 12:11, 22 May 2015 (UTC)
MCT:Community portal#Bug_descriptions_controversy. Why should we use the junk tracker titles? See also: [1] — LauraFi - talk 17:07, 22 May 2015 (UTC)
There is no official policy or guideline regarding bug tracker titles still, but we should gain consensus to avoid further disputes and edit wars about this in the future. --ToonLucas22 (talk) 22:46, 22 May 2015 (UTC)
Why is something as silly as this a dispute? There is no reason we should keep terribly written titles from the tracker, as the titles are hardly an "official" resource, since they are written by users just like here the wiki, only lacking a style guide. There is also no reason to go to every page and correct the titles, as the titles don't hurt anyone even if illegible, but there is even less reason to revert the title to the original title after someone corrects errors. Really, how is the wiki benefited by having no spaces in "end portal frame"? Is this really a battle worth fighting? In summary, if the new title still describes the bug (especially if better), don't revert it to the old one. That is just disruptive. KnightMiner t/c 02:27, 23 May 2015 (UTC)
I agree with KnightMiner so here is my proposed policy:
Bug tracker issue titles should retain their original text, unless such text is unclear, then the recommended approach is to edit it enough so as to keep it essentially the same but more informative.
--Kkkllleee (talk) 22:36, 25 May 2015 (UTC)
You might want to also share your proposal at the larger discussion at Minecraft_Wiki_talk:Community_portal#Bug_descriptions_controversy. – Sealbudsman (Aaron) t/c 22:41, 25 May 2015 (UTC)
Grass path, why is it still here?
OK, so, why is the grass path block still listed here on the 1.9 PC update page if it's meant for PE? Just sayin', the PE grass path page says it's exclusive to the PE version, whereas this block is also listed for inclusion in PC's 1.9. Brickticks (talk) 20:14, 22 May 2015 (UTC)
You need to look at the references. BDJP (t|c) 20:17, 22 May 2015 (UTC)
Wow. Just. Wow. R6Games (talk) 23:22, 4 June 2015 (UTC)
To get a grass path block, you need to simply use a shovel on a piece of grass, meaning the grass block. Fyreboy5 (talk) 11:33, 28 October 2015 (UTC)
Searge said: "The "?" is unrelated to the announcement @jeb_ made earlier." And it says here that block is related to that dungeon. Also, source 22: where is that dungeon mentioned? That's just a bunch of pics, some of which show that new block. It should be deleted from this page, or at least it shouldn't be mentioned as a source for that new block being related to the dungeon. It also isn't said anywhere that they added support for mirroring or rotating generated structures. –Preceding unsigned comment was added by Blue Banana whotookthisname (talkcontribs) at 12:03, 28 May 2015 (UTC). Please sign your posts with ~~~~
If you read the tweets and their context, Searge is purposely saying the opposite of what is true. Otherwise, why say "we did not add this very specific list of features"? Source 22 (now 28) is to show searge's ?, just in case anyone is wondering if they are the same. KnightMiner t/c 16:02, 28 May 2015 (UTC)
When I click on the link which lists the issues fixed in 1.9 (far future version), I am redirected to the Mojang bug tracker site, but the page says
[Error in the JQL Query: The character '.' is a reserved JQL character. You must enclose it in a string or use the escape '\u002e' instead. (line 1, character 49)]
instead of showing the list of fixed bugs. 77.171.37.50 16:31, 9 June 2015 (UTC)
Edit: it seems like it was a problem with my NoScript, but it still gives the error message:
The value 'Minecraft Far Future Version - 1.9+' does not exist for the field 'fixVersion'.
instead of just giving the list.
77.171.37.50 16:34, 9 June 2015 (UTC)
Fixed KnightMiner t/c 16:04, 10 June 2015 (UTC)
Spectral Arrow
Dinnerbone stated that spectral arrows will be used for utility; this should be added to the page. –Preceding unsigned comment was added by Gggggminecraft (talkcontribs) at 15:16, 15 June 2015 (UTC). Please sign your posts with ~~~~
Done KnightMiner t/c 15:25, 15 June 2015 (UTC)
In the video showing the new inventory, it is shown that the spectral arrow has a gold-like appearance. Please put the new information in a sub-bullet (probably not the right term). There are also a few grammatical errors. Gggggminecraft (talk) 20:09, 15 June 2015 (UTC)
Done again. I tweaked the grammar a bit as well, but if you have any more specific ideas of what needs to be fixed, feel free to suggest that here or add it yourself once you become autocomfirmed KnightMiner t/c 20:44, 15 June 2015 (UTC)
This is how it probably should look:
- At least four new arrow types
  - One such arrow is the spectral arrow
    - Will be used as a utility rather than for combat
    - In the video showing the new inventory layout it is shown the spectral arrow has a gold-colored tip

The current info is kind of vague. Gggggminecraft (talk) 20:59, 15 June 2015 (UTC)
The reason it is vague is because we can only state what is stated, to avoid speculation. Specifically, the spectral arrow was never stated to not be used for combat, but rather not for damage (it could cause status effects with less damage or change mob behavior, and even utilities can be used for combat). We also do not need to state where the information came from other than in the references, just the information stated. KnightMiner t/c 21:09, 15 June 2015 (UTC)
This is true, but as much information as possible must be given. Maybe you should merge your idea with mine: change "not used for combat" to "not very good at hurting". Just give more information. Gggggminecraft (talk) 21:19, 15 June 2015 (UTC)
Snapshot 'Release Date': Would Adding be Speculation?
I'm unsure as to whether [2] counts as a confirmed (first snapshot) release date. Again, Searge is being overly specific like in the structure generation tweets which are apparently classed as reliable sources on this page, and Minecon seems quite a logical time to release the combat changes... –Preceding unsigned comment was added by FM22 (talkcontribs) at 20:49, 15 June 2015 (UTC). Please sign your posts with ~~~~
While he does say "for the next 2.5 weeks", I would go with no for a first snapshot release date there, as there is nothing stating that after the 2.5 weeks there is a snapshot. They could easily release an early combat version before then, or wait until after Minecon for snapshots. He is more likely saying he won't be working on his mysterious block for the next 2.5 weeks. KnightMiner t/c 20:55, 15 June 2015 (UTC)
Ok, I see, won't add then
Snapshot Gallery Idea
A few snapshots have been released but are not shown on the page. I think that a snapshot gallery should be added. Gggggminecraft (talk) 21:57, 15 June 2015 (UTC)
You could probably do that, try it and see if it gets removed. PancakeMan77 (talk) 15:32, 7 July 2015 (UTC)
There are no 1.9 snapshots, and in any case, a link is better, not a gallery. KnightMiner t/c 15:49, 7 July 2015 (UTC)
I think he means a screenshot gallery. I make the same mistake all the time. PancakeMan77 (talk) 16:17, 14 July 2015 (UTC)
Offhand slot not shield slot
In the changed features section under inventory, the offhand slot is called the shield slot, which is wrong; it's not just for shields. Gggggminecraft (talk) 16:26, 16 June 2015 (UTC)
Grass Path Exclusive to PE
So, on the Grass Path page, it states "Grass paths[1] are a decorative block currently exclusive to Pocket Edition." I feel we should remove this from the page since it isn't going to be added in 1.9, unless it means that it isn't added "yet". In that case, the Grass Path page should be re-worded --vanasten1 (talk) 07:56, 23 June 2015 (UTC)
The Grass Path is worded such to state the current status of the block - it is currently only available for Pocket Edition, and until there is a 1.9 snapshot with the Grass Path block, the paragraph should remain such. See MCW:FUTURE for more info. 09:17, 23 June 2015 (UTC)
Shouldn't... that be a bug?
Strongholds Doors in strongholds are no longer mis-placed.[49] Shouldn't that be a bug, rather than a changed feature? 101.174.180.52 04:54, 24 June 2015 (UTC)
Yes, you are right. It was also covered in the planned fixes section, which used the same source to declare it as a planned fix. I removed it from the changes section KnightMiner t/c 14:59, 24 June 2015 (UTC)
Viewing Livestream Source
The YouTube livestream that a fair amount of the content of this page is now sourced by only lets you see the past 2 hours of video. I don't get the point of sourcing this material if the source video footage is now inaccessible anyway. I'm probably missing something obvious as I don't really know much about how YouTube Live works, but just a thought... FM22 (talk) 14:19, 5 July 2015 (UTC)
You can watch the full stream on Twitch as well - http://www.twitch.tv/mojang/v/6949826?t=1h02m44s. 14:28, 5 July 2015 (UTC)
OK, thanks! I don't know too much about livestream stuff as I said before. FM22 (talk) 15:10, 5 July 2015 (UTC)
I don't know how easy it is to do, but it might be good if someone could save a copy of significant streams such as this one. Twitch has an option to save streams for people to watch later, but deletes the video after either 14 or 60 days, depending on the streamer's account status. I'm not finding a clear answer on if Youtube lets you do the same thing or not. Either way, there's no guarantee that any video will be available after the stream ends. -- Orthotopetalk 17:10, 5 July 2015 (UTC)
After the 2013 minecon Mojang uploaded every single panel video to youtube. This gives me hope that this year's panels will be archived to youtube. FM22 (talk) 17:38, 5 July 2015 (UTC)
Marc says they will be YouTube videos, he is just not sure exactly how long until they will be uploaded. (source) KnightMiner t/c 20:59, 5 July 2015 (UTC)
Shulker disguises as Blocks
I believe that Jeb said that he tried to make them able to camouflage into other blocks, but he ended up not doing that because it was too hard to code. Can someone clarify? PancakeMan77 (talk) 15:30, 7 July 2015 (UTC)
That part is noted on mentioned features, as it was specifically stated that he could not do it at this time, but would like to in the future.
Also, please use the "add section" button to make a new topic. Do not just randomly place a new section in the middle of the page. KnightMiner t/c 15:49, 7 July 2015 (UTC)
Okay, I am sorry. But what i was wondering, is if it is not being put into 1.9, why does it say under Shulker that they camouflage? That is saying that it will be in 1.9, but it will not. PancakeMan77 (talk) 17:12, 8 July 2015 (UTC)
It doesn't, it says it disguises as a block, or its shell closes making it look like a block. KnightMiner t/c 03:43, 9 July 2015 (UTC)
snapshot
Wouldn't the April Fools snapshot 15w14a be the first snapshot for 1.9? The name 'combat update' could be found in it. Or is that not enough to count as an official snapshot?
It was an April Fools "snapshot", not an actual release of 1.9. Every April 1st, the developers do something for April Fools. That snapshot was never confirmed to be 1.9, and little to none of the features in it have been talked about for 1.9. PancakeMan77 (talk) 15:52, 13 July 2015 (UTC)
The only thing that wasn't a joke was the QR code, which when scanned would reveal the name for the 1.9 update. --MarioProtIV (talk) 16:12, 13 July 2015 (UTC)
The new Enderdragon Boss fight
I've heard that you could fight the dragon once again without resetting the end, but will the ender crystals stay? Just waiting for my survival to get harder. Xtremewolves (talk) 13:51, 23 July 2015 (UTC)
If I recall, yes, they can be used in summoning a second dragon once the first is defeated. --MarioProtIV (talk) 13:54, 23 July 2015 (UTC)
Arrow
Wouldn't the "Arrows no longer collide with an invisible wall" portion be a bug fix? PancakeMan77 (talk) 21:46, 27 July 2015 (UTC)
It would be, yes. – Sealbudsman (Aaron) T/C 21:54, 27 July 2015 (UTC)
End Ships?
I came across an "end ship" that had a colored beacon (with no effect assigned), obsidian on the bottom, a brewing stand (with two Health II potions) and two chests (with diamond tools).
File:End ship.png
An end ship.
I'm no writer, so I'll just leave this info here. --FargoGoosey (talk) 14:43, 29 July 2015 (UTC)
Known new blocks for 1.9
To the best of my knowledge, here is what we know about the new blocks added in 1.9:
198:00 End Rod (Upright)
198:04 End Rod (East-West)
198:08 End Rod (North-South)
199:00 Chorus Plant
200:00 Chorus Flower
201:00 Purpur Block
202:00 Purpur Pillar
206:00 End Stone Bricks
208:00 Grass Path
We also know about the Purpur stairs and slabs:
???:00 Purpur Stairs (Ascending East, normal)
???:01 Purpur Stairs (Ascending West, normal)
???:02 Purpur Stairs (Ascending North, normal)
???:03 Purpur Stairs (Ascending South, normal)
???:04 Purpur Stairs (Ascending East, upside-down)
???:05 Purpur Stairs (Ascending West, upside-down)
???:06 Purpur Stairs (Ascending North, upside-down)
???:07 Purpur Stairs (Ascending South, upside-down)
???:?? Purpur Double Slab
???:?? Purpur Double Slab (seamless) <--- speculative
???:?? Purpur Slab (lower half)
???:?? Purpur Slab (upper half)
This leaves at least four unknown blocks (203, 204, 205, 207). I suspect Purpur Stairs is one of those four; I furthermore suspect that Purpur Double Slab and Purpur Slab are two of the remaining three, rather than being lumped in with Red Sandstone in 181 & 182.
P.S. Oh yeah and the Dragon Head. Blah
–Preceding unsigned comment was added by 24.138.38.82 (talk) at 21:48, July 29, 2015 (UTC). Please sign your posts with ~~~~
To the extent that the numeric IDs are being used (are they anymore?), you can infer what they would be by looking in the debug mode world.
They are (as of 15w31b): 198 = End Rod, 199 = Chorus Plant, 200 = Chorus Flower, 201 = Purpur Block, 202 = Purpur Pillar, 203 = Purpur Stairs, 204 = Purpur Double Slab, 205 = Purpur Slab, 206 = End Stone Bricks, 207 = Beetroot Seeds, 208 = Grass Path, 209 = End Gateway Portal block, and 210 = Structure Block.
The dragon head isn't a separate name-id or block state (ID or DV) from the regular mob head; it's only different in its block entity values.
There is no seamless purpur double slab. – Sealbudsman (Aaron) T/C 15:37, 31 July 2015 (UTC)
"1.9 is the first non-development version of the Combat Update"
In response to the statement "incorrect and confusing, mojang specifically stated 1.9 would be named the Combat Update, writing it like that would make it seem like it was split": the problem here is that the name "1.9" is ambiguous and can refer either to the entirety of the Combat Update, or to the specific version with the version number "1.9". This, however, is true for all such version releases: 1.8, 1.7(.2), etc. This has been previously discussed on the wiki, though I can't recall where it might have been (if anyone else can, please link it, both for context and because I would like to be able to reread the discussion =D ).
Ignoring that, though, there are two major arguments against treating these version numbers as synonymous with the named updates they're a part of: first is that if we do, there is suddenly no reason to have two separate articles, since both articles would have the exact same scope and the same content, and second is that this would break the pattern established by the other version number articles, namely that each version number is covered in its own dedicated article (excepting pre-Alpha versions (at least currently) due to their age and the sparseness of available information on any single version).
"1.9" is used by Mojang (and, inevitably, others) as a shorthand for the Combat Update only because it's shorter - it's convenient but not perfectly accurate, like saying the sun rises in the morning and sets at night (whereas in reality the sun isn't doing anything, and its apparent motion through the sky is instead due to the Earth's rotation about its axis). We do not have to, and indeed should not, follow such conventions, even when they are established by Mojang, when there are compelling reasons to break from them, and I have provided several compelling reasons here to ignore this particular one.
As a final point, I will note that the other articles for version numbers like this for the other named releases all have (or should have, at least; some of them may have been changed back at some point, and some may never have had the correct language) this language in them as well. 06:14, 31 July 2015 (UTC)
This discussion, while not on the exact same topic as the current one, is still relevant, and much of the reasoning in my comment there can translate, at least indirectly, to here. 06:26, 31 July 2015 (UTC)
I know you are addressing MarioProtIV's point, though I have a different concern to raise.
I agree with everything you say about the version and the update being two separate animals – yet as precise as it is, I still feel the exact phrase "first non-development version" to be clumsy. And I think it's at least partially because of this: I am not convinced the snapshots are, strictly speaking, a part of the Update, so much as they represent just the development of the Update. It seems this way to me because of the way Mojang talks about the release versus the snapshots; they don't ever announce the arrival of the Update on the blog or the tumblr until after all the snapshots and pre-releases have been exhausted, and the full version is ready. So the word "non-development" here feels redundant. Tell me if I'm far off, on this.
Anyway, if you omit that word, that leaves the phrase "1.x, the first version of the X Update, ...".
Or switch it up a little to say "1.x, the version ushering in / kicking off / introducing / launching / [other synonym] the X Update, ...".
Sealbudsman (Aaron) T/C 15:25, 31 July 2015 (UTC)
I see your point here, but in this case I think reader comprehension is more important: even if the development snapshots and prereleases aren't considered part of the current named update by Mojang, it's likely that most people will consider them to be simply because that's what makes sense. Of course, I could be mistaken about this, since I don't follow the fandom terribly closely. Other than that, I do agree that I'm not terribly fond of "non-development", but it's the best thing I've come up with that still avoids the ambiguity I pointed out here. But this is a fairly minor point overall, and I wouldn't be terribly broken up regardless of what's decided for it. 15:48, 31 July 2015 (UTC)
"... it's likely that most people will consider them to be simply because that's what makes sense."
I think we might just have different ideas about what makes sense in this case. Like, right now, to me, we're still in 1.8 territory, with sneak peeks at what's upcoming. It could be just me. But anyway ... if it comes down to just using a simpler phrase like "first release", I could get along with that. – Sealbudsman (Aaron) T/C 17:17, 31 July 2015 (UTC)
Absolutely agree with Sealbudsman. -BDJP (t|c) 15:29, 31 July 2015 (UTC)
While I agree with Sealbudsman that "non-development" reads oddly, I do still feel development versions are part of the overall update, so maybe rewording that to "the first release of X update" (which would be consistent with both the term used in the launcher, and the term "pre-release" being before the release) KnightMiner t/c 15:49, 31 July 2015 (UTC)
Oh, "release" instead of "version"... that could work, yeah. 15:56, 31 July 2015 (UTC)
As a general note here, if this proposal is successful, the articles for named updates will also have to be updated, since currently they all equate the named update and the version number update as well. 15:56, 31 July 2015 (UTC)
Changed my mind. Unfortunately, I've now decided to Oppose. I agree with Mario on this one. It does seem like it was split. -BDJP (t|c) 13:08, 3 August 2015 (UTC)
Oppose - It's already stated in the infobox that the official name of 1.9 is The Combat Update. That piece of prose saying "1.9, the first release of The Combat Update..." makes it look confusing. Also, I thought it was already obvious that wording it like that makes it look like separate things. --ToonLucas22 (talk) 17:31, 28 December 2015 (UTC)
Need any help with 1.9 wiki pics?
I can help you with that. I'm actually almost done ripping the 1.9 textures; I just need to get the Shulker stuff on the wiki. Does anyone know how to do the renders like the ones on the wiki? RosalinaFan573 (talk) 18:27, 31 July 2015 (UTC)
Those are usually made using a mod by BarracudaATA called MineShot, which is unlikely to be updated to 1.9 until the snapshots are done. Until then, the only way a render will exist is if someone makes one using something like blender (which would be a pain). KnightMiner t/c 01:46, 1 August 2015 (UTC)
Splash
I was playing minecraft with my friend one day in 15w31c, but he was laggy and had to restart his computer. When I quit to title, I saw a splash. It said: Where there is not light, there can spider!
I am pretty sure that splash wasn't there before. I looked at this page and I couldn't find any new splashes. –Preceding unsigned comment was added by 121.218.128.66 (talk) at 5:18, 01 August 2015 (UTC). Please sign your posts with ~~~~
According to Splash, that one was added in 1.8.2 . -- Orthotopetalk 05:39, 1 August 2015 (UTC)
Armor Stands
Under blocks, it says something about an armor stand. But aren't armor stands entities? PancakeMan77 (talk) 21:10, 2 August 2015 (UTC)
Yes, they are. I had already fixed that on 15w31a, but it seems I forgot to port the fix here. KnightMiner t/c 00:26, 3 August 2015 (UTC)
Beacons on End Ships will not stay
Searge tweeted at https://twitter.com/seargedp on July 30th, "The beacon in the end city ships will not stay, @jeb_ told me it was just for testing but I forgot to remove it before we made the snapshot." It's reasonable to assume that the beacon will be removed and therefore reasonable to remove the beacon addition from the wiki page. IDK how the wiki works exactly with upcoming features. It is quite possible the wiki includes all features of snapshots. Simply put, there is no need for the wiki to report an upcoming feature to 1.9 if that feature is already proven not to be an upcoming feature. There is a possibility for the beacon addition to be kept, but as of writing this, one couldn't assume so as it isn't said to be so. 73.41.130.151 21:24, 2 August 2015 (UTC)
The wiki only supports snapshotted/released versions, except in sections marked as planned. The thing you mentioned is already noted in Planned changes, and will not be moved to the main section until it actually happens in a snapshot. KnightMiner t/c 00:24, 3 August 2015 (UTC)
I see. I assumed that if something was noted under 1.9, it meant that it was planned for 1.9. I assumed this because 1.9 isn't out yet, so everything is technically a "planned feature" (snapshots aren't official). I don't expect change, but my suggestion is this: the 1.9 page consists of things that are planned for 1.9. If they are added in a snapshot, they are added to the 1.9 page and the snapshot page, because they are then planned for 1.9. But if something is no longer planned, you would remove it from the 1.9 page, because the 1.9 page is only for things that are planned for 1.9 (which was stated in the first sentence). End ship beacons aren't, for example. You would keep the end ship beacon addition on the snapshot page, though, because it was in the snapshot. This way, you can remove "planned features" from the 1.9 page, because all 1.9 additions are technically "planned features". I understand that snapshot additions can also be planned additions for 1.9, but, as end ship beacons show, not all snapshot additions are 1.9 planned features. I'm suggesting this way of organizing the 1.9 page because it makes more sense to me and would be easier for other people to understand. TY 73.41.130.151 01:22, 5 August 2015 (UTC)
We have in the past had problems with doing things that way. Features would get stated as planned for the update (and organized among the other text), then a developer would forget or put it off for a later update, but the page would never get updated as it gets lost among the text. This would leave users unsure of what actually happened in the update, and lots of "why does this feature not work" comments.
Instead, the most orderly way to have it is the page describe all features that are currently in 1.9 (as in if the latest snapshot was released as the full update), and the "Planned" sections describe what might change or be added before the full release. KnightMiner t/c 01:41, 5 August 2015 (UTC)
Maybe there should be a kind of asterisk or note next to 1.9 things which is a link. They link to the planned changes part of the page where it says that the specific thing is planned to be changed. It lets people know that an addition to 1.9 might not stay. 73.41.130.151 02:46, 9 August 2015 (UTC)
/clone
What happened to that bit where it said that the /clone command would gain support to rotate/mirror structures? Was it announced that wasn't happening? PancakeMan77 (talk) 01:47, 11 August 2015 (UTC)
According to Searge, it's not implemented yet but will be eventually. Skylinerw (talk) 02:28, 11 August 2015 (UTC)
Broken bullets
I have noticed that on my phone, any bullet points that come after a </ref> tag are not on a new line. Is anyone else having this problem (look at the 'planned' sections of this page)? Is this a gamepedia bug on my phone or a syntax error? FM22 (talk) 11:42, 11 August 2015 (UTC)
Ok, I fixed the spaces after the semicolons so it shows up fine in the preview but it still displays wrong! FM22 (talk) 11:57, 11 August 2015 (UTC)
I've undone your removal of the spaces because their presence or absence has no impact on the rendered page whatsoever; the effect you saw on preview likely would have happened if you simply clicked to edit the page and then previewed without making any changes (though I'd appreciate if you could actually try that and report back).
Can you get a screenshot of the problem and provide some details of your device (type of phone, OS and browser version)? 23:41, 11 August 2015 (UTC)
Yes, that's probably what happened. I have no idea how to send the screenshot but I have a better description of the problem:
The page looks fine in preview mode, but in normal mode (reading the page) all level 1 bullets (*) are not on a new line. All other bullets work and this does not depend on <ref> tags.
I have a Moto G running Lollipop 5.0.2 and am getting the problem on Chrome (mobile version) 43.0.2357.93 (latest version)
You can upload the screenshot here or on your image sharing service of choice (I personally prefer imgur, if you're looking for a recommendation =) ). 08:26, 12 August 2015 (UTC)
http://imgur.com/a/VfXix should work FM22 (talk) 10:35, 12 August 2015 (UTC)
Can you try changing chrome://flags/#enable-gpu-rasterization to disabled and seeing what happens? 11:31, 12 August 2015 (UTC)
Ignore me, I have no idea what I'm talking about. Thanks for pointing out the actual cause here, Majr! =) 11:40, 12 August 2015 (UTC)
This is caused by some mobile view styling for profiles being applied to the 3rd section of every page rather than just the profile page, so yes, a gamepedia bug. MajrTalk|Contribs 11:38, 12 August 2015 (UTC)
Dual Wielding
So I was wondering if I should create a page about dual wielding, or what we should do with it. I think it should go somewhere in a page other than the 1.9 and 1.9 snapshot pages. PancakeMan77 (talk) 00:56, 12 August 2015 (UTC)
I would say the 'f' key mentioned in Controls, the features mentioned in Inventory and the practical uses in Tutorials/Dual Wielding FM22 (talk) 10:46, 12 August 2015 (UTC)
Also HUD if such a page exists FM22 (talk) 11:49, 12 August 2015 (UTC)
Can you make the tutorial page? I will write it up and everything, I just need the page to be there, even if it's blank PancakeMan77 (talk) 15:45, 12 August 2015 (UTC)
I don't see any reason why you wouldn't be able to create the page yourself. -- Orthotopetalk 15:53, 12 August 2015 (UTC)
I'm not sure how to create a tutorial page PancakeMan77 (talk) 16:39, 12 August 2015 (UTC)
I have already added the relevant information to inventory, controls, and heads-up display, but I do agree a tutorial would be a good idea for specific uses. KnightMiner t/c 18:30, 12 August 2015 (UTC)
Just go to: http://minecraft.gamepedia.com/Tutorials/Dual_Weilding and edit it into existence. It would probably be good to add {{Tutorials}} to the bottom, and create the relevant redirect pages. Cultist O (talk) 18:45, 12 August 2015 (UTC)
Thanks! In a little bit anyone will be able to edit. This is going to need some help from everyone PancakeMan77 (talk) 19:43, 12 August 2015 (UTC)
Okay, I made it. It will still need major editing/additions from everyone PancakeMan77 (talk) 20:04, 12 August 2015 (UTC)
I forget how we did this in history sections
Are history sections supposed to pass {{1.9}}, or was {{history}} supposed to take care of 1.9 internally? – Sealbudsman (Aaron) T/C 21:06, 13 August 2015 (UTC)
Most any template that uses internal version links (usually via {{version link}}) automatically adds {{release version}} internally. {{1.9}} is only really required for direct text references to the update. So yes, {{history}} takes care of {{1.9}} internally. KnightMiner t/c 22:33, 13 August 2015 (UTC)
Excellent, thanks, I didn't remember. – Sealbudsman (Aaron) T/C 23:52, 13 August 2015 (UTC)
9 new command blocks??
You can read here that 1.9 will have 9 new command blocks. That's because Searge made 9 new textures, he said.
But this is not true. 1.9 contains 3 new command blocks, because he created 3 textures for each command block, remember? :P I don't know how to change it, so I'm just gonna say it here.
~Anoniem --212.187.109.211 16:11, 19 August 2015 (UTC)
HELP!
When I added fixes from 1.9 snapshots, I accidentally removed other fixes. What should I do? :'c WillMacViking (talk) 06:07, 21 August 2015 (UTC)
I think it's been reverted now. FM22 (talk) 12:38, 21 August 2015 (UTC)
Snapshots
I have something i discovered in 15w34d. I am not sure which "34" snapshot it is in. I have no way of testing it, as the others were removed. What should I do? PancakeMan77 (talk) 22:29, 25 August 2015 (UTC)
We have links to the download for the versions which are no longer included in the launcher on all the 34 snapshot pages. They were just removed from the launcher. KnightMiner t/c 23:04, 25 August 2015 (UTC)
How do I install them? I'm on a Mac. PancakeMan77 (talk) 00:27, 26 August 2015 (UTC)
Create a new folder in .minecraft/versions under the version number (eg, 15w34b), then paste the .jar and .json files in that folder. It will then appear in the launcher if you enable development versions. KnightMiner t/c 01:10, 26 August 2015 (UTC)
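For anyone who'd rather script those steps, here's a rough Python sketch (the folder layout matches KnightMiner's description; the version number, file names, and the throwaway demo directory below are purely illustrative):

```python
import os
import shutil
import tempfile

def install_snapshot(minecraft_dir, version, jar_path, json_path):
    """Copy a snapshot's .jar and .json into .minecraft/versions/<version>/
    so the launcher can pick it up once development versions are enabled."""
    target = os.path.join(minecraft_dir, "versions", version)
    os.makedirs(target, exist_ok=True)
    shutil.copy(jar_path, os.path.join(target, version + ".jar"))
    shutil.copy(json_path, os.path.join(target, version + ".json"))
    return target

# Demonstrate against a throwaway directory instead of a real .minecraft:
demo = tempfile.mkdtemp()
for name in ("15w34b.jar", "15w34b.json"):
    open(os.path.join(demo, name), "w").close()  # stand-ins for the downloads
installed = install_snapshot(demo, "15w34b",
                             os.path.join(demo, "15w34b.jar"),
                             os.path.join(demo, "15w34b.json"))
print(sorted(os.listdir(installed)))  # ['15w34b.jar', '15w34b.json']
```

On a Mac, point minecraft_dir at the real .minecraft folder (under the user's Application Support directory) instead of the demo directory.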
Swords doing the same damage as a fist? (Bug)
So, I'm playing with the snapshots, and I notice it takes longer to kill a mob with a sword. I hadn't been keeping up with the additions of the snapshots, so I assumed they made the mobs stronger. Then today I was looking at the things added in snapshots, when my friend noticed something. He said that the sword does half a heart, the same damage as a fist. I didn't believe him, so I punched a zombie to death. It took 22 hits - just like it normally does. So, I realized that the sword does the same damage as half a heart. Is this a bug? Has anyone else noticed this? 07:10, 5 September 2015 (UTC) –Preceding unsigned comment was added by 121.218.5.106 (talk). Please sign your posts with ~~~~
Please read 1.9#Gameplay 2 under the subheader combat. Weapons now have a delay before you can use them a second time. KnightMiner t/c 17:25, 5 September 2015 (UTC)
This delay reduces the damage of rapid hits, so spamming is no longer useful in battle. Unless, of course, you have a, say, 20,000 Sharpness sword. Such a sword, by the way, will one-shot the Wither, even while spamming. I hope this helps. Fyreboy5 (talk) 14:34, 26 October 2015 (UTC)
Removing redundant information
I'm unsure as to what to remove in planned changes. I recently removed the "new potions" information, as new potions have been added, but it keeps getting reverted. BDJP, I thought you didn't like speculation; new potions are speculation, as new ones have been added since the source mentioned them. FM22 (talk) 12:09, 13 September 2015 (UTC)
Personally, I wouldn't think that is speculation at all. That the post was made before lingering potions were added is pointless. The post did not mention anything regarding lingering potions, it clearly said "new potions or enchantments" (not "new potion variant"), and I'm trying to get word from Dinnerbone regarding this matter. -BDJP (t|c) 12:12, 13 September 2015 (UTC)
Fair enough FM22 (talk) 12:19, 13 September 2015 (UTC)
Thoughts on 'new block type' part? http://m.imgur.com/Rg7HYOw looks like the planned block type is end stone brick and purpur... –Preceding unsigned comment was added by FM22 (talkcontribs) at 7:15, 14 September 2015 (UTC). Please sign your posts with ~~~~
It could easily be end stone bricks, as the picture is showing how good yellow sandstone/brick looks with green/purple prismarine. As such, it might be a little too vague to be worth keeping, even if they mean a different block (plus from the sound of the tweet, it's a solely decorative block, meaning there is little reason not to add it to the first snapshot) KnightMiner t/c 14:21, 14 September 2015 (UTC)
Done - removed the info. I will search for more redundancy in a section created a few months ago and untouched since. –Preceding unsigned comment was added by FM22 (talkcontribs) at 15:24, 14 September 2015 (UTC). Please sign your posts with ~~~~
Change and addition dichotomy
I have doubts about the nature of the new skeleton trap, because for all intents and purposes it is a new entity and does not represent a change to the previous horse. Also, the new command blocks are definitely new blocks, so they should go under additions, but that would mean separating out the changes to the regular command block. I'm also curious if the description of the health tag is accurate. --Kkkllleee (talk) 06:25, 21 September 2015 (UTC)
A new entity would imply a new ID, when there is none. It's still the same old horse but with additional functionality. But keep in mind that's from a technical point of view; I agree that it blurs the line between "change" and "addition", being a new gameplay mechanic but not being a new entity.
The "Health" Short tag was replaced by the "Health" Float tag, with "HealF" Float tag removed completely (equivalent to "HealF" being renamed to "Health", which changes the tag-type of the original "Health" tag). The description on the page is a bit awkward to read, but that's the general idea of it. Skylinerw (talk) 10:35, 21 September 2015 (UTC)
It's not "for all intents and purposes" a new entity: to create one you can just use /entitydata on any existing horse, and it behaves the same as any horse except for one AI routine. It's similar to (but much more dramatic than) some zombies or skeletons having the ability to pick up items, or a zombie having the ability to break doors, or a chicken being flagged to not lay eggs and to despawn. Even charged creepers and baby zombies are considered the same kind of entity. It is a new behavior for the existing entity.
As for the Health/HealF, the situation in code as of 15w38b is that if the entity's DataVersion is less than or equal to 109 then the value from HealF (or from Health cast to float if HealF isn't set) is set as the new float-valued Health and any existing HealF is deleted. The rest of the code then expects Health to be a float. The existing description is strange, please fix. Anomie x (talk) 11:42, 21 September 2015 (UTC)
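To make the upgrade path above easier to follow, here is a rough sketch of that logic in Python (illustrative only - not Mojang's actual code; the dict stands in for an entity's NBT compound):

```python
def upgrade_health(entity):
    """Migrate the old short-valued Health / float-valued HealF pair to a
    single float-valued Health tag, per the behavior described above:
    only old entities (DataVersion <= 109, or none at all) are touched."""
    if entity.get("DataVersion", 0) <= 109:
        if "HealF" in entity:
            # Prefer HealF, store it as the new float Health, drop HealF.
            entity["Health"] = float(entity.pop("HealF"))
        elif "Health" in entity:
            # No HealF: cast the old short Health to float.
            entity["Health"] = float(entity["Health"])
    return entity

old = {"DataVersion": 109, "Health": 20, "HealF": 19.5}
print(upgrade_health(old))  # {'DataVersion': 109, 'Health': 19.5}
```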
Guys, what about command blocks? :( --Kkkllleee (talk) 18:17, 21 September 2015 (UTC)
So as not to separate them, I would keep them in the changes. It was a change to the command block. PancakeMan77 (talk) 20:21, 13 October 2015 (UTC)
New entities
Is it true that the only new entity added is the effectCloud? What about the new projectile of the ender dragon? --Kkkllleee (talk) 05:18, 11 October 2015 (UTC)
All new entities to 1.9 are: AreaEffectCloud, DragonFireball, Shulker, ShulkerBullet, TippedArrow, and SpectralArrow. Skylinerw (talk) 05:35, 11 October 2015 (UTC)
Thanks I'll be sure to add the relevant ones. --Kkkllleee (talk) 05:57, 11 October 2015 (UTC)
Dinnerbone stated potential release date for Minecraft 1.9
On October 15th, Dinnerbone (Nathan Adams) posted the following tweet on Twitter:
We are aiming to get Minecraft 1.9 feature complete at end of this month. Feature complete doesn't mean bug free & ready to release, though!
Should this be added to this page? The end of October is a potential release date...
–Preceding unsigned comment was added by 83.84.23.67 (talk) at 19:43, 15 October 2015 (UTC). Please sign your posts with ~~~~
Per the end of the quote ("Feature complete doesn't mean bug free & ready to release, though!"), I would say it is not precise enough, rather just a general plan. It would be worth noting in the article header, though, that they plan for feature completeness by the end of October. KnightMiner t/c 20:04, 15 October 2015 (UTC)
Subtitles page?
Should there be a subtitles page? I think it should go somewhere at least. Maybe it could list all the subtitles? I don't know, I just think it should go somewhere. –Preceding unsigned comment was added by PancakeMan77 (talkcontribs) at Oct 25, 2015, 16:25 (UTC). Please sign your posts with ~~~~
Look no further than Subtitles, for all your subtitle needs. – Sealbudsman 21:30, 25 October 2015 (UTC)
AreaEffectCloud
Should there be a dedicated page to the AreaEffectCloud entity? I believe most other entities have their own page, and all the relevant NBT data could be listed there. 04:57, 1 November 2015 (UTC)
Or maybe the link that already exists on Lingering Potion is enough? 04:58, 1 November 2015 (UTC)
Since the entire effect is directly related to the potion (as opposed to having an independent use), I would just cover it on Lingering Potion, like we do with the snowball or arrow entities. KnightMiner t/c 20:46, 1 November 2015 (UTC)
1.10 or 2.0
The general convention for versioning software is to treat each number in the version separately, with the major and minor version numbers being independent. This is demonstrated very well by the MC:PE alpha, which is currently version 0.13 (not 1.3). WillMackViking, the Minecraft Alpha went from Alpha 1.2 to Beta 1.0, and the Minecraft Beta went from Beta 1.8 to release 1.0 (although there were some pre-releases for "Beta 1.9", which was then released as 1.0.0). There is absolutely no precedent anywhere, in either convention or Minecraft's version history, to go from 1.9 to 2.0, unless the devs want to completely start Minecraft from scratch. Additionally, 1.10 is listed as a version on the bug tracker and there have been tweets about this issue. FM22 (talk) 18:05, 9 November 2015 (UTC)
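The per-component comparison FM22 describes can be shown concretely (a quick illustrative sketch): compared numerically, component by component, 1.10 sorts after 1.9, even though a plain string comparison says the opposite.

```python
def parse_version(s):
    """Split a dotted version string into a tuple of ints so each
    component is compared numerically rather than character by character."""
    return tuple(int(part) for part in s.split("."))

print(parse_version("1.10") > parse_version("1.9"))  # True  (10 > 9)
print("1.10" > "1.9")                                # False (lexicographic)
```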
WillMacViking, you are edit warring and participating in bad faith. People have explained to you how versioning works; there is no excuse now for acting as hostile as this. There isn't a burden on people to provide 'evidence' why you should stop, as if this kind of hostage-taking is your right. You should stop because you should have come and discussed it here, under the rules of how edit wars are supposed to be averted.
Will https://twitter.com/Dinnerbone/status/625957736526839808 and https://bugs.mojang.com/browse/MC-90861?focusedCommentId=259787 help put this ridiculous topic to bed? – Sealbudsman 18:40, 9 November 2015 (UTC)
Answer: Thank you for giving a link and a good explanation, so I will stop editing. I just thought that it would be dumb to call it 1.10 because it's so close to 1.1 that it would be hard to search on YouTube etc. Sorry and thank you again –Preceding unsigned comment was added by WillMacViking (talkcontribs) at 19:08, 09 November 2015 (UTC). Please sign your posts with ~~~~
That's fine, but we don't make the names, we just call them as they are, dumb or not. Thanks. – Sealbudsman 19:13, 9 November 2015 (UTC)
Might I say that this is irrelevant to 1.9. This section should be on the talk page of 1.10. Fyreboy5 (talk) 13:40, 2 December 2015 (UTC)
It might seem so if you ignore the context of the discussion, which is the nextparent link field in the infobox. KnightMiner t/c 15:22, 2 December 2015 (UTC)
Boat paddling
Although mentioned by Dinnerbone and written on this page, I haven't found any evidence that pressing left and right in alternation makes you paddle any faster in a boat. I have tried "rowing" at various speeds and none of them work any better than just holding the "a" and "d" keys--in fact the boat is slower a lot of the time. Additionally I have seen on Reddit that other people could not confirm this feature existed either. Can anyone get this "paddling" feature working? FM22 (talk) 23:22, 14 November 2015 (UTC)
Firework Rocket Update
When I was playing with the fireworks, I noticed they changed it. Let me tell you how.
When crafting a rocket with no firework star, the amount of gunpowder added to the rocket will not change its height - probably a bug.
When I place a firework rocket, it spawns a rocket exactly where I place it, such as if I use it on the edge of two blocks, it will spawn exactly there. Because of this, it can go through blocks if placed on the underside.
Even though this change is small, it is a change. Fyreboy5 (talk) 12:51, 4 January 2016 (UTC)
Can you try to figure out which snapshot this was added in? PancakeMan77 (talk) 16:17, 18 January 2016 (UTC)
The Release date?
Someone wrote the release date "February 25, 2015" instead of "February 25, 2016". Please fix it. ThunderEagle14 (talk) 20:30, 17 February 2016 (UTC)
I'll do it, but you certainly can too! Thanks for noticing. – Sealbudsman 20:32, 17 February 2016 (UTC)
Release date DOW is incorrect
It should read "Monday, February 29". Also, "February" is spelled incorrectly in the sidebar. Scudobuio (talk) 11:53, 23 February 2016 (UTC)
Issue with pre-releases in navigation
The pre-releases are appearing under snapshots, builds and twice in the pre-release section on the navigation bar on the right hand side of the page. Anyone know what is causing the issue? 10:16, 28 February 2016 (UTC)
Yep, my fault, I forgot to remove some debug stuff I was using when fixing an error with {{development version list}}. KnightMiner t/c 14:48, 28 February 2016 (UTC)
1.9 out today!
Get ready to make note of it being official, it'll be noted that 1.9 MAY BE OUT. 96.237.27.238 12:31, 29 February 2016 (UTC)
What? If 1.9 comes out today, we will state it as released, otherwise we will state it was not released when planned. We already know about the planned release date here (as stated on the article), so additional warning is unneeded. KnightMiner t/c 15:14, 29 February 2016 (UTC)
Why Protect The Page?
Well guys, I don't think that you have to protect this page, I mean try to prevent vandalism but not COMPLETELY protect it! That's not fair to the people who don't vandalize pages. Who would edit it? (and don't you DARE say "An admin!" Or "A mod!" because you know what I mean and that's called being a smart alec. Thank you!) FizzyCocoaPerson t/c 15:14, 29 February 2016 (UTC)
It's only semi-protected: any registered user in the 'autoconfirmed' group can edit the page. This deters casual vandals while allowing our valued contributors to keep pages up to date. Also, looking at your edits, I would disagree with your implication that you're one of the "people who don't vandalize pages". -- Orthotopetalk 19:06, 29 February 2016 (UTC)
Well Orthotope, I didn't know that. I'm sorry about that pointless edit I just made. Also, you're correct that I did use to vandalize pages BUT I don't do it anymore; back then I was immature and kind of a troll, and I really have changed since then. FizzyCocoaPerson t/c 15:14, 29 February 2016 (UTC)
What to do with Planned Additions/Changes
What should be done about the planned additions and changes? They are no longer planned for 1.9, as it is already out. Should they just be deleted entirely? PancakeMan77 (talk) 22:02, 29 February 2016 (UTC)
I think they'd go in mentioned features. – Sealbudsman 22:23, 29 February 2016 (UTC)
Is that all the trivia we could get?
Trivia
• It took a long time.
• No, like a really long time.
• Like, so long.
• Did we mention it took a while to make?!
Come on, we can do better than that. –Preceding unsigned comment was added by DigiDuncan (talkcontribs) at 13:34, March 3, 2016 (UTC). Please sign your posts with ~~~~
If you have something else to put there, there's no-one stopping you. – Sealbudsman 19:38, 3 March 2016 (UTC)
Trivia sections are a new thing on version articles, and really besides facts on development length there is nothing else that does not already fit elsewhere on the article. KnightMiner t/c 21:21, 3 March 2016 (UTC)
Trivia sections aren't something that should be encouraged anyways. If something's worth noting in an article, there's almost certainly a better place for it to go than a trivia section, and if it's not worth noting, sticking it in a trivia section doesn't suddenly make it noteworthy. 11:34, 5 March 2016 (UTC)
Controls Changes
I noticed that there seemed to be significant changes to the controls in terms of inventory management and what left-clicking and right-clicking do in terms of no longer splitting stacks or easily dragging things around, etc. Could someone please reference these changes and how to do these things now that they are changed? Thanks. --KRaZyXmAn (talk) 01:53, 19 March 2016 (UTC)
I haven't noticed any changes along those lines in 1.9. Anomie x (talk) 11:17, 19 March 2016 (UTC)
1.9 Feature - missing from list
I am not sure what it is called but it has the effect of automatically hopping up onto one block high increases in the terrain when moving across land blocks. I cannot find this new 1.9 upgrade feature listed or being discussed anywhere. I for one have found that this new feature is a major pain when digging landscape or moving around dangerous areas. I would like to be able to TURN IT OFF but I do not think that is possible. Where would I go to request such an option be added to the game? –Preceding unsigned comment was added by 73.89.181.185 (talk) at 22:00, 28 July 2016 (UTC). Please sign your posts with ~~~~
Because it's not a 1.9 feature, it's a 1.10 feature. The Auto-jump option is available under "Controls". Skylinerw (talk) 22:02, 28 July 2016 (UTC)
If you wanted to suggest any other features to the game, I would try https://www.reddit.com/r/minecraftsuggestions. – Sealbudsman 22:06, 28 July 2016 (UTC) | 2017-04-26 11:54:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5501415133476257, "perplexity": 3000.9840405080945}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121305.61/warc/CC-MAIN-20170423031201-00140-ip-10-145-167-34.ec2.internal.warc.gz"} |
http://www.scientificlib.com/en/Mathematics/LX/WeightedProjectiveSpace.html | # .
In algebraic geometry, a weighted projective space P(a0,...,an) is the projective variety Proj(k[x0,...,xn]) associated to the graded ring k[x0,...,xn] where the variable xk has degree ak.
Properties
• If d is a positive integer then P(a0,a1,...,an) is isomorphic to P(a0,da1,...,dan) (with no factor of d in front of a0), so one can assume without loss of generality that any n of the weights ai have no common factor greater than 1. In this case the weighted projective space is called well-formed.
• The only singularities of weighted projective space are cyclic quotient singularities.
• A weighted projective space is a Fano variety and a toric variety.
• The weighted projective space P(a0,a1,...,an) is isomorphic to the quotient of projective space by the group that is the product of the groups of roots of unity of orders a0,a1,...,an acting diagonally.
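Since well-formedness is defined by common factors among the weights, it is easy to check mechanically. The sketch below is my own illustration (the function name and example weights are not from the article):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def is_well_formed(weights):
    """True if every n of the n+1 weights have no common factor > 1."""
    n = len(weights) - 1
    return all(reduce(gcd, c) == 1 for c in combinations(weights, n))

# P(1, 1, 2) is well-formed; P(2, 4, 6) is not, since every pair shares 2.
```

The isomorphism in the first property is exactly what lets you divide out such common factors to reach a well-formed representative.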
References
Dolgachev, Igor (1982), "Weighted projective varieties", Group actions and vector fields (Vancouver, B.C., 1981), Lecture Notes in Math. 956, Berlin: Springer, pp. 34–71, doi:10.1007/BFb0101508, MR 0704986 | 2021-09-21 05:49:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8645197749137878, "perplexity": 536.1675165993125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057158.19/warc/CC-MAIN-20210921041059-20210921071059-00469.warc.gz"} |
http://physics.stackexchange.com/questions/22419/equations-of-motion-in-2d?answertab=votes | # Equations of motion in 2D [closed]
I'm struggling with a seemingly simple problem in 2D motion. Basically, the question is, given accelerations in $x$ and $y$ ($a_x$ and $a_y$) as well as the angular velocity ($\omega$), how can we find the trajectory of the motion? Also, how can we report the motion like a computer mouse, i.e. in the reference frame of the sensor?
## closed as unclear what you're asking by ja72, Nathaniel, Chris White, AlanSE, Manishearth♦ Jul 5 '13 at 14:01
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it's hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
You do know that, for a mouse, $\omega r_{ball}=\sqrt{v_x^2+v_y^2}$? There's a similar relation for acceleration. My apologies if you did in fact mention this, mathjax isn't working for me atm. – Manishearth Mar 15 '12 at 17:44
Fwiw, a mouse has no accelerometer/gyroscopes. It has two wheels in contact with the roller. These wheels are attached to a slotted wheel each. Light is passed through the wheel and a detector measures it. The frequency of oscillation of the light signal is proportional to the speed. Mind you, this is for a mechanical mouse. An optical mouse uses some nifty technique (akin to barcode scanners) that I forgot. – Manishearth Mar 15 '12 at 17:50
@Manishearth thanks for your comments. My question does not really concern existing computer mice. I am just thinking about building one only with accelerometers and gyros. – Shapul Mar 15 '12 at 18:07
Why the gyroscope? Just double-integrate x,y. Also, truncate small velocities to zero, to prevent drift. – Mike Dunlavey Mar 16 '12 at 12:39
@Shapul, if you would like to determine the position from this, you would have to do some numerical integration. By the way Dunlavey has a good point, since with this method you will be bound to have (small) errors and therefore drift. – fibonatic Jul 4 '13 at 21:09
In this hypothetical situation, you can transform the unit vectors in the global reference frame $\hat{x}$ and $\hat{y}$ using the same rotation matrix, to the unit vectors in the transformed coordinate system:
$$\hat{x}' = R \hat{x}\\ \hat{y}' = R \hat{y}$$
This is what you did implicitly by transforming $x\hat{x}+y\hat{y}$
In the same way, you can do the back transformation
$$\hat{x}=R^{-1}\hat{x}'\\ \hat{y}=R^{-1}\hat{y}'$$
Which can be used for the back-transformation of the displacement in the local frame to displacement in the global frame.
A practical issue will be that errors in you integrated angle $\alpha$ will accumulate, making the mouse annoying to use.
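As a rough sketch of the scheme this answer describes (integrate $\omega$ to get the angle $\alpha$, rotate the body-frame accelerations into the global frame, then double-integrate), here is a hypothetical Euler-integration loop. All names and sample data are made up, and as noted above, real sensors would drift:

```python
import numpy as np

def integrate_pose(ax_body, ay_body, omega, dt):
    """Dead-reckon global-frame positions from body-frame accelerations
    and angular-velocity samples (simple Euler integration)."""
    alpha = 0.0            # heading angle, integrated from omega
    v = np.zeros(2)        # global-frame velocity
    p = np.zeros(2)        # global-frame position
    path = []
    for ax, ay, w in zip(ax_body, ay_body, omega):
        alpha += w * dt
        c, s = np.cos(alpha), np.sin(alpha)
        R = np.array([[c, -s], [s, c]])          # local -> global rotation
        v += (R @ np.array([ax, ay])) * dt       # integrate acceleration
        p += v * dt                              # integrate velocity
        path.append(p.copy())
    return np.array(path)
```

With omega = 0 and a constant body-frame acceleration this reduces to the familiar p ≈ ½at², up to Euler integration error.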
Ah, yes, I think you are right. It is not really complicated, but in my confusion after returning to basic kinematics after many years, I totally missed it. You are right about the accumulation of errors in integration. Real sensors will also have drift, making integration (and double integration) really difficult. – Shapul Mar 15 '12 at 19:48
@Shapul I'm glad I could help! – Bernhard Mar 15 '12 at 20:32 | 2014-08-20 16:44:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7121797800064087, "perplexity": 933.1824288211335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500811391.43/warc/CC-MAIN-20140820021331-00053-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://forum.azimuthproject.org/plugin/ViewComment/18480 | I'd never heard of \$$\textbf{Belnap4}\$$, but my knowledge of multi-valued logics is poor. Here's what this little poset looks like:
In this picture, taken from the _Stanford Encyclopedia of Philosophy_, the two intermediate truth values are called \$$\varnothing\$$, where we are _ignorant_ of whether something is true or false, and \$$\\{\bot,\top\\} \$$, where we have _contradictory information_ saying that something is both true and false! \$$\top\$$ is true and \$$\bot\$$ is false.
So, in a \$$\textbf{Belnap4}\$$-category, we can say
* yes, \$$x \leq y\$$
* no, \$$x \nleq y \$$
* I don't know whether \$$x \leq y\$$ or \$$x \nleq y \$$
* I've got contradictory information suggesting both \$$x \leq y\$$ and \$$x \nleq y \$$.
Cool!
There's a nice monoidal monotone \$$f: \textbf{Bool} \to \textbf{Belnap4} \$$ embedding ordinary Boolean logic in this 4-valued logic, so using "base change" (as explained in [comment #54](https://forum.azimuthproject.org/discussion/comment/18470/#Comment_18470)) we can turn any preorder into a \$$\textbf{Belnap4}\$$-category.
But this is more interesting: are there any monoidal monotones \$$g: \textbf{Belnap4} \to \textbf{Bool} \$$? If so, we can use them to "crush down" \$$\textbf{Belnap4}\$$-categories into preorders.
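One concrete way to play with this (my own toy encoding, not from the post): write down the truth order on the four values and check monotonicity of the embedding and of one candidate map back to Bool.

```python
# Belnap's four values: F (false), T (true), N (ignorance), B (both).
# Truth order: F below N and B, N and B below T, N and B incomparable.
LEQ = {('F', 'F'), ('F', 'N'), ('F', 'B'), ('F', 'T'),
       ('N', 'N'), ('N', 'T'),
       ('B', 'B'), ('B', 'T'),
       ('T', 'T')}

def leq(x, y):
    return (x, y) in LEQ

f = {False: 'F', True: 'T'}                           # Bool -> Belnap4
g = {'F': False, 'N': False, 'B': False, 'T': True}   # one candidate collapse

# f is monotone: a <= b in Bool implies f(a) <= f(b) in Belnap4.
f_monotone = all(leq(f[a], f[b])
                 for a in (False, True) for b in (False, True) if a <= b)

# g is monotone: x <= y in Belnap4 implies g(x) <= g(y) in Bool.
g_monotone = all(g[x] <= g[y] for (x, y) in LEQ)
```

This only checks the order; verifying a *monoidal* monotone would also require checking that conjunction and its unit are respected.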
In general it should be lots of fun to combine multi-valued logic with enriched categories as we are doing here. | 2021-12-02 16:58:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697125315666199, "perplexity": 3154.4899177180787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00035.warc.gz"} |
http://mathhelpforum.com/advanced-algebra/83507-orthonormal-basis-question.html | # Math Help - Orthonormal Basis question
1. ## Orthonormal Basis question
Okay, say you have an orthogonal basis on a weighted inner product. If you're going to try and find the corresponding orthonormal basis, should you be going off the weighted inner product or the regular inner product? In other words, if I'm trying to find ||v||, should I be finding it using the weighted inner product, or just the regular inner product thing?
I'm trying to be general with my explanation so that I don't get the finger-pointing "DO YOUR OWN HOMEWORK", and so that I can still understand the concept. If I was unclear, I can actually post the problem that I have a question in.
2. Originally Posted by Hashy
Okay, say you have an orthogonal basis on a weighted inner product. If you're going to try and find the corresponding orthonormal basis, should you be going off the weighted inner product or the regular inner product? In other words, if I'm trying to find ||v||, should I be finding it using the weighted inner product, or just the regular inner product thing?
I'm trying to be general with my explanation so that I don't get the finger-pointing "DO YOUR OWN HOMEWORK", and so that I can still understand the concept. If I was unclear, I can actually post the problem that I have a question in.
If you are given an inner product then you should use the associated norm (in other words with the weighting). | 2014-07-23 14:15:34 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013455510139465, "perplexity": 252.54963880824684}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00133-ip-10-33-131-23.ec2.internal.warc.gz"} |
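To make the answer above concrete, here is a small numpy sketch (the diagonal weight matrix and vector are made up): normalizing with the *weighted* norm gives a vector of weighted norm 1, not Euclidean norm 1.

```python
import numpy as np

W = np.diag([2.0, 3.0])              # hypothetical diagonal weight matrix
v = np.array([1.0, 1.0])

weighted_norm = np.sqrt(v @ W @ v)   # ||v|| in the weighted inner product
v_unit = v / weighted_norm           # now <v_unit, v_unit>_W == 1

# Note: the Euclidean norm of v_unit is NOT 1 -- and that's the point.
```

So an "orthonormal" basis for a weighted inner product is orthonormal with respect to that weighted norm, as the answer says.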
https://amp.en.depression.pp.ua/9558678/1/exponential-utility.html | # Exponential utility
In economics and finance, exponential utility is a specific form of the utility function, used in some contexts because of its convenience when risk is present, in which case expected utility is maximized. Formally, exponential utility is given by:
$$u(c)=\begin{cases}(1-e^{-ac})/a & a\neq 0\\ c & a=0\end{cases}$$
$c$ is a variable that the economic decision-maker prefers more of, such as consumption, and $a$ is a constant that represents the degree of risk preference ($a>0$ for risk aversion, $a=0$ for risk-neutrality, or $a<0$ for risk-seeking).
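A minimal numeric sketch of the definition (the function name is mine, not from the article):

```python
import math

def exponential_utility(c, a):
    """u(c) = (1 - exp(-a*c)) / a for a != 0, and u(c) = c for a = 0."""
    if a == 0:
        return c
    return (1.0 - math.exp(-a * c)) / a

# As a -> 0 the first branch tends to c, so the piecewise definition
# is continuous in the risk parameter a.
```

For $a > 0$ the function is concave, so $u(c) < c$ for $c > 0$, reflecting risk aversion.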
| 2021-12-07 10:23:23 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9809702634811401, "perplexity": 1638.6269583657354}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363337.27/warc/CC-MAIN-20211207075308-20211207105308-00380.warc.gz"}
https://stats.stackexchange.com/questions/161876/bayesian-neural-networks-very-multimodal-posterior | # Bayesian neural networks: very multimodal posterior?
Question:
How do Bayesian treatments of neural networks address the fact that the posterior has an exponentially large number of modes?
Background:
There seems to be a lot of interest in Bayesian treatments of neural networks, where we attempt to model the posterior distribution over network weights given the data, using e.g. Laplace approximation, Monte Carlo, or variational methods. In principle, this would allow you to integrate over model parameters to avoid overfitting and to provide well-calibrated uncertainty estimates for predictions.
For multilayer perceptrons, the posterior has an exponentially large number of symmetrical modes since the parameters are not identifiable. (As pointed out in Kevin Murphy's book "Machine Learning: A Probabilistic Perspective", Chapter 16.5.5, we can permute the identities of any of the hidden units without affecting the likelihood, leading to $H!$ equivalent settings of the parameters, where $H$ is the number of hidden units. If the neural net uses an activation function like $\tanh$ which is an odd function ($-\tanh(x)=\tanh(-x)$), there are also $2^H$ sign-flip degeneracies since we can pick a hidden unit and flip the sign of all its incoming edges as long as we also flip the sign of all its outgoing edges.)
So for even a tiny feedforward net with $H=15$, the posterior will have $>10^{12}$ posterior modes. This sounds like it could be a big problem for Monte Carlo approximations, for example, since there's no way you could draw even one sample from each of the modes. On the other hand, I guess it could be the case that since the posterior modes introduced by parameter unidentifiability are all equivalent, you're fine as long as you model at least one of them well...
Is this actually a problem? If so, how can it be addressed?
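The mode count quoted above is easy to reproduce for $H=15$:

```python
import math

H = 15                                     # hidden units, as in the example
permutation_symmetry = math.factorial(H)   # relabelings of hidden units
sign_flip_symmetry = 2 ** H                # tanh sign-flip degeneracies
equivalent_modes = permutation_symmetry * sign_flip_symmetry

# H! alone already exceeds 10^12; with sign flips the count is ~4.3e16.
```

So the figure of $>10^{12}$ in the question is conservative once sign flips are included.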
• The sign-flipping problem can be addressed by pinning one of the indeterminate parameter sets to be positive. The symmetric multimodality problem is only a concern if you care about which hidden node corresponds to what latent features -- otherwise, finding a single mode is sufficient (due to symmetry). – Sycorax Jul 17 '15 at 1:12
• Cool question. I wonder if it might be solved by some kind of ordering of the nodes? The permutations are effectively irrelevant/equivalent, right? So for a single-hidden-layer model, you could order the nodes in the first layer by their incoming weights. Of course, that's only going to work in trivial cases (e.g. one input variable), but maybe there's a way to expand it for more complex structures. – naught101 Feb 25 '16 at 5:19 | 2019-10-22 09:11:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7700626254081726, "perplexity": 338.69160375091565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987813307.73/warc/CC-MAIN-20191022081307-20191022104807-00422.warc.gz"} |
https://www.studysmarter.us/textbooks/math/precalculus-enhanced-with-graphing-utilities-6th/trigonometric-functions/q-46-find-the-exact-value-of-each-expression-do-not-use-a-ca/ | Q. 46
Found in: Page 381
### Precalculus Enhanced with Graphing Utilities
Book edition 6th
Author(s) Sullivan
Pages 1200 pages
ISBN 9780321795465
# Find the exact value of each expression. Do not use a calculator. $\mathrm{sec}\pi -\mathrm{csc}\frac{\pi }{2}$
The exact value of the given expression is $-2$.
## Step 1. Given information.
The given expression is:
$\mathrm{sec}\pi -\mathrm{csc}\frac{\pi }{2}$
## Step 2. Determine the exact value.
In the trigonometric table $\mathrm{sec}\pi =-1$ and $\mathrm{csc}\frac{\pi }{2}=1$. Substitute these values in the given expression.
$\begin{array}{rcl}\mathrm{sec}\pi -\mathrm{csc}\frac{\pi }{2}& =& \left(-1\right)-\left(1\right)\\ & =& -2\end{array}$
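As a quick numeric cross-check (not part of the textbook solution):

```python
import math

sec_pi = 1 / math.cos(math.pi)             # sec(pi)   = -1
csc_half_pi = 1 / math.sin(math.pi / 2)    # csc(pi/2) = 1
value = sec_pi - csc_half_pi               # (-1) - (1) = -2
```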
## Step 3. Conclusion.
The exact value of the given expression is $-2$. | 2023-03-24 22:29:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8428210616111755, "perplexity": 2391.9897456105223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00167.warc.gz"} |
https://www.studysmarter.us/textbooks/physics/fundamentals-of-physics-10th-edition/force-and-motion-i/q13p-figure-533-shows-an-arrangement-in-which-four-disks-are/ | Q13P
Found in: Page 117
### Fundamentals Of Physics
Book edition 10th Edition
Author(s) David Halliday
Pages 1328 pages
ISBN 9781118230718
# Figure 5.33 shows an arrangement in which four disks are suspended by cords. The longer, top cord loops over a frictionless pulley and pulls with a force of magnitude 98 N on the wall to which it is attached. The tensions in the three shorter cords are ${T}_{1}=58.8\mathrm{N}$, ${T}_{2}=49.0\mathrm{N}$ and ${T}_{3}=9.8\mathrm{N}$. (a) What is the mass of disk A, (b) what is the mass of disk B, (c) what is the mass of disk C, and (d) what is the mass of disk D?
a) Mass of Disk A is $4.0\mathrm{kg}$.
b) Mass of Disk B is $1.0\mathrm{kg}$.
c) Mass of Disk C is $4.0\mathrm{kg}$.
d) Mass of Disk D is $1.0\mathrm{kg}$.
## Step 1: The given data
1. The magnitude of the force is ${F}_{\mathrm{Pull}}=98\mathrm{N}$.
2. The tensions in the three cords are ${T}_{1}=58.8\mathrm{N}$, ${T}_{2}=49.0\mathrm{N}$, ${T}_{3}=9.8\mathrm{N}$.
3. The acceleration due to gravity is $g=9.8\mathrm{m}/{\mathrm{s}}^{2}$.
## Step 2: Understanding the concept of tension and weight
The tension will be equal to the weight of the mass attached to that string. With this, we can find the masses of all disks. Since the system is in equilibrium, the net force on all the disks is zero.
Formula:
Weight of the block, $W=mg$ (i)
where m is the mass of the disk and g is the acceleration due to gravity.
Tension in the string balances the weight of the block, $T-W=0$ (ii)
## Step 3: (d) Calculation for mass of disk D
As T3 is attached only to disk D, from equation (ii) we get
${T}_{3}={m}_{D}g$
Substitute the values in the above equation.
$9.8\mathrm{N}={m}_{D}×9.8\mathrm{m}/{\mathrm{s}}^{2}$
${m}_{D}=1.0\mathrm{kg}$
Hence, the value of mass of disk D is $1.0\mathrm{kg}$
## Step 4: (c) Calculation for mass of disk C
As T2 is attached to disks C and D, from equation (ii) we get
${T}_{2}=\left({m}_{C}+{m}_{D}\right)g$
$49\mathrm{N}=\left({m}_{C}+1\mathrm{kg}\right)×9.8\mathrm{m}/{\mathrm{s}}^{2}$ (since ${m}_{D}=1.0\mathrm{kg}$)
$\frac{49}{9.8}={m}_{C}+1$
${m}_{C}+1=5$
${m}_{C}=4.0\mathrm{kg}$
## Step 5: (b) Calculation for mass of disk B
Since ${T}_{1}$ supports disks B, C and D, equation (ii) gives
${T}_{1}=\left({m}_{B}+{m}_{C}+{m}_{D}\right)g$
$58.8\,\mathrm{N}=\left({m}_{B}+4\,\mathrm{kg}+1\,\mathrm{kg}\right)\times 9.8\,\mathrm{m}/{\mathrm{s}}^{2}$ (since ${m}_{C}=4.0\,\mathrm{kg}$ and ${m}_{D}=1.0\,\mathrm{kg}$)
${m}_{B}+5=6$
${m}_{B}=1.0\,\mathrm{kg}$
Hence, the value of mass of disk B is $1.0\mathrm{kg}$
## Step 6: (a) Calculation of mass of disk A
The pulling force of 98 N must equal the combined weight of all four disks, so
${F}_{\mathrm{Pull}}=\left({m}_{A}+{m}_{B}+{m}_{C}+{m}_{D}\right)g$
$98\,\mathrm{N}=\left({m}_{A}+1\,\mathrm{kg}+4\,\mathrm{kg}+1\,\mathrm{kg}\right)\times 9.8\,\mathrm{m}/{\mathrm{s}}^{2}$
${m}_{A}+6=10$
${m}_{A}=4.0\,\mathrm{kg}$
Hence, the mass of disk A is $4.0\,\mathrm{kg}$.
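The chain of substitutions above is easy to check numerically; here is a short sketch (variable names are mine, not from the textbook):

```python
# Numerical check of the four answers (g = 9.8 m/s^2).
g = 9.8
T1, T2, T3, F_pull = 58.8, 49.0, 9.8, 98.0

m_D = T3 / g                          # T3 supports disk D alone
m_C = T2 / g - m_D                    # T2 supports disks C and D
m_B = T1 / g - (m_C + m_D)            # T1 supports disks B, C and D
m_A = F_pull / g - (m_B + m_C + m_D)  # the top cord supports all four disks
```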
http://www.tug.org/pipermail/texhax/2010-May/014969.html | # [texhax] \cline spaces
Sam Albers tonightsthenight at gmail.com
Wed May 12 23:01:26 CEST 2010
Hello there,
I can't seem to find the answer to the following table issue I am having in
LaTeX. The example below works fine, but I would like a small space between
the \cline's so that the level headings are better distinguished from each
other. That is, I would like gaps in the rules that separate the multi-column
headers, but not in the other horizontal lines.
Is this possible? The example below should illustrate what I am talking
about.
Sam
\documentclass[12pt, pdftex]{article}
\usepackage{multirow}
\begin{document}
\begin{table}[htp]
\begin{center}
{\scriptsize
\begin{tabular}{lrrrrrrr}
\hline
& & \multicolumn{2}{c}{level1} & \multicolumn{2}{c}{level2} &
\multicolumn{2}{c}{level3} \\
\cline{3-4} \cline{5-6} \cline{7-8}
\noalign{\smallskip}
type & f1 & f2 & f3 & f4 & f5 & f6 & f7 \\
\hline
Section & 2.3 & 2.288 & 0.009 & 1.779 & 0.008 & 0.338 & 2.1\\
Period & 2.8 & 3.309 & 0.002 & 4.060 & 0.000 & 0.866 & 2.3\\
\hline
\end{tabular}
}
\end{center}
\end{table}
\end{document}
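[Editor's note: one common way to get exactly this effect — an untested sketch, assuming the booktabs package is acceptable — is to replace the \cline commands in the example above with trimmed \cmidrule rules:]

```latex
% requires \usepackage{booktabs} in the preamble;
% the (lr) modifier trims each rule on the left and right,
% leaving a small gap between adjacent partial rules
\cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8}
```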
--
*****************************************************
Sam Albers
Geography Program
University of Northern British Columbia
3333 University Way
Prince George, British Columbia
https://socratic.org/questions/how-do-you-find-the-slope-given-y-6x-5 | # How do you find the slope given y= -6x+5?
Apr 9, 2018
The gradient can be found two ways.
#### Explanation:
First the gradient can be found from the general formula $y = m x + b$ where $m$ is the gradient.
This means that $m = - 6$
The second way the gradient can be found is by using calculus and differentiating the function.
$y = - 6 x + 5$
$y ' = - 6$
The gradient is $- 6$
Apr 9, 2018
$\text{slope } = - 6$
#### Explanation:
The equation of a line in slope-intercept form is
$y = m x + b$
$\text{where m is the slope and b the y-intercept}$
$y = - 6 x + 5 \text{ is in this form}$
$\text{with slope m } = - 6$
Apr 9, 2018
Gradient $= - 6$
#### Explanation:
The 'Slope' can also be known as the 'gradient' which is how steep the line is.
As this is in the form:
$y = m x + c$
$m$ is the gradient and $c$ is the $y$-intercept.
As the line is $y = - 6 x + 5$
$\to$ Gradient $= - 6$
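As a quick numerical sanity check (my own addition, not from the answers above): for a linear function the slope is the same rise-over-run between any two points.

```python
# The line from the question.
def y(x):
    return -6 * x + 5

# Rise over run between the sample points x = 0 and x = 1.
slope = (y(1) - y(0)) / (1 - 0)
```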
http://mathhelpforum.com/algebra/110388-logarithmic-half-life-question-print.html | # logarithmic half life question
• Oct 25th 2009, 01:23 PM
Jarsht
logarithmic half life question
Hi I'm new to this site so sorry if I've posted this in the wrong area, although this is part of my math12 curriculum.
If a 200g substance decays to 17g in 28 days, determine the half life of this substance.
17=200(1/2)^(x/28)
please tell me if this is correct or not.
Thank you very much.
• Oct 25th 2009, 01:31 PM
e^(i*pi)
Quote:
Originally Posted by Jarsht
Hi I'm new to this site so sorry if I've posted this in the wrong area, although this is part of my math12 curriculum.
If a 200g substance decays to 17g in 28 days, determine the half life of this substance.
17=200(1/2)^(x/28)
please tell me if this is correct or not.
Thank you very much.
$A(t) = A_0e^{-\lambda t}$
$\lambda = \frac{ln(2)}{t_{1/2}}$
$A(t) = A_0e^{-\frac{t\,ln(2)}{t_{1/2}}}$
$17 = 200e^{-\frac{28ln(2)}{t_{1/2}}}$
$-\frac{28ln(2)}{t_{1/2}} = ln \left(\frac{17}{200}\right)$
$t_{1/2} = -\frac{28ln(2)}{ln \left(\frac{17}{200}\right)}$
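Evaluating this closed form numerically (a quick check, not part of the original reply) gives a half-life of about 7.87 days, and substituting it back reproduces the 17 g after 28 days:

```python
import math

# t_half = -28 ln(2) / ln(17/200), from the derivation above
t_half = -28 * math.log(2) / math.log(17 / 200)

# round-trip: 200 g decaying for 28 days with this half-life
remaining = 200 * 0.5 ** (28 / t_half)
```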
• Oct 25th 2009, 01:33 PM
Jarsht
Thanks that's what I thought, props to you my fair scholar.
https://iq.opengenus.org/travelling-salesman-problem-brute-force/ | # Travelling Salesman Problem (Basics + Brute force approach)
#### Algorithms Graph Algorithms
In this article we will start by understanding the problem statement of the Travelling Salesman Problem precisely, and then go through the naive brute-force approach for solving it using a mathematical concept known as "permutation".
## What is the problem statement ?
Travelling Salesman Problem is based on a real life scenario, where a salesman from a company has to start from his own city and visit all the assigned cities exactly once and return to his home till the end of the day. The exact problem statement goes like this,
"Given a set of cities and distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point."
There are two important things to be cleared about in this problem statement,
• Visit every city exactly once
• Cover the shortest path
## Visualizing the problem
We can visualize the problem by creating a graph data structure having some nodes and weighted edges as path lengths. For example have a look at the following image,
For example - Node 2 to Node 3 takes a weighted edge of 17.
We need to find the shortest path covering all the nodes exactly once, which is highlighted in the figure below for the above graph.
## Steps To Solve the Problem
There are a few classical and easy steps that we must follow to solve the TSP problem:
• Finding the adjacency matrix of the graph, which will act as an input.
• Performing the shortest_path algorithm by coding out a function.
• Understanding the C++ STL function next_permutation.
### Step-1 - Finding the Adjacency Matrix of the Graph
You will need a two-dimensional array to hold the adjacency matrix of the given graph. Here are the steps:
• Get the total number of nodes and total number of edges in two variables, namely num_nodes and num_edges.
• Create a multidimensional array edges_list having dimensions num_nodes * num_nodes.
• Run a loop num_edges times, each time reading two nodes first_node and second_node that share an edge, together with the edge's weight, and set edges_list[first_node][second_node] (and the symmetric entry) to that weight.
• After the loop executes we have the adjacency matrix available, i.e. edges_list.
/// Getting the number of nodes and number of edges as input
int num_nodes,num_edges;
cin >> num_nodes >> num_edges;
/// creating a multi-dimensional array
int** edges_list = new int*[num_nodes];
for(int i=0;i<num_nodes;i++)
{
edges_list[i] = new int[num_nodes];
for(int j=0;j<num_nodes;j++)
{
edges_list[i][j] = 0;
}
}
for(int i=0;i<num_edges;i++)
{
int first_node,second_node,weight;
cin >> first_node >> second_node >> weight;
edges_list[first_node][second_node] = weight;
edges_list[second_node][first_node] = weight;
}
Time Complexity - O(V^2), space complexity - O(V^2), where V is the number of nodes
### Step - 2 - Performing The Shortest Path Algorithm
The most important step in designing the core algorithm is this one, let's have a look at the pseudocode of the algorithm below.
• Considering a starting source city, from where the salesman will start. We can consider any city as the starting point; by default we have considered 0 here.
• Generating the permutations of the remaining cities. Suppose we have N nodes in total and one is fixed as the source; then we need to generate the remaining (N-1)! (factorial of N-1) permutations.
• We need to calculate the edge sum (path sum) for every permutation and keep track of the minimum path sum over all of them.
• Return the minimum edge cost.
class brute_force
{
public:
int shortest_path_sum(int** edges_list, int num_nodes)
{
/// Picking a source city
int source = 0;
vector<int> nodes;
/// pushing the rest num_nodes-1 cities into a bundle
for(int i=0;i<num_nodes;i++)
{
if(i != source)
{
nodes.push_back(i);
}
}
int n = nodes.size();
int shortest_path = INT_MAX;
/// generating permutations and tracking the minimum cost
while(next_permutation(nodes.begin(),nodes.end()))
{
int path_weight = 0;
int j = source;
for (int i = 0; i < n; i++)
{
path_weight += edges_list[j][nodes[i]];
j = nodes[i];
}
path_weight += edges_list[j][source];
shortest_path = min(shortest_path, path_weight);
}
return shortest_path;
}
};
### Step - 3 - Understanding next_permutation in C++ STL
It's good practice to understand what the functions from the Standard Template Library take as arguments, how they work, and what they return. In this algorithm we have used a function named next_permutation(), which takes two bidirectional iterators (here vector<int>::iterator), namely nodes.begin() and nodes.end().
This functions returns a Boolean Type (i.e. either true or false).
Working Mechanism :
This function rearranges the objects in the range [nodes.begin(), nodes.end()) into the next lexicographically greater permutation. If such a greater arrangement exists, the function returns true; otherwise it resets the range to the lowest (sorted) order and returns false.
Lexicographical order is also known as dictionary order in mathematics.
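One subtlety with the while-loop idiom used below (worth keeping in mind, though harmless here): next_permutation advances the range before the loop body runs, so the initial sorted arrangement itself is never processed. For the symmetric TSP this costs nothing, because the reversal of the skipped tour is still enumerated and has the same weight; a do-while makes all n! arrangements visible. A small sketch (helper names are mine):

```cpp
#include <algorithm>
#include <vector>

// Counts how many arrangements the body of while (next_permutation(...))
// sees when starting from sorted order: the sorted arrangement is skipped.
int perms_visited_while(std::vector<int> v)
{
    std::sort(v.begin(), v.end());
    int count = 0;
    while (std::next_permutation(v.begin(), v.end()))
        count++;
    return count;  // n! - 1 arrangements
}

// The do-while idiom processes the current arrangement first,
// so all n! arrangements are seen.
int perms_visited_do_while(std::vector<int> v)
{
    std::sort(v.begin(), v.end());
    int count = 0;
    do
        count++;
    while (std::next_permutation(v.begin(), v.end()));
    return count;  // n! arrangements
}
```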
## The Main Code
#include <bits/stdc++.h>
using namespace std;
class brute_force
{
public:
int shortest_path_sum(int** edges_list, int num_nodes)
{
int source = 0;
vector<int> nodes;
for(int i=0;i<num_nodes;i++)
{
if(i != source)
{
nodes.push_back(i);
}
}
int n = nodes.size();
int shortest_path = INT_MAX;
while(next_permutation(nodes.begin(),nodes.end()))
{
int path_weight = 0;
int j = source;
for (int i = 0; i < n; i++)
{
path_weight += edges_list[j][nodes[i]];
j = nodes[i];
}
path_weight += edges_list[j][source];
shortest_path = min(shortest_path, path_weight);
}
return shortest_path;
}
};
int main()
{
/// Getting the number of nodes and number of edges as input
int num_nodes,num_edges;
cin >> num_nodes >> num_edges;
/// creating a multi-dimensional array
int** edges_list = new int*[num_nodes];
for(int i=0;i<num_nodes;i++)
{
edges_list[i] = new int[num_nodes];
for(int j=0;j<num_nodes;j++)
{
edges_list[i][j] = 0;
}
}
for(int i=0;i<num_edges;i++)
{
int first_node,second_node,weight;
cin >> first_node >> second_node >> weight;
edges_list[first_node][second_node] = weight;
edges_list[second_node][first_node] = weight;
}
for(int i=0;i<num_nodes;i++)
{
for(int j=0;j<num_nodes;j++)
{
cout << edges_list[i][j] << " ";
}
cout << endl;
}
cout << endl << endl;
brute_force approach1;
cout << approach1.shortest_path_sum(edges_list,num_nodes) << endl;
return 0;
}
### Complexity
The time complexity of the algorithm depends on the number of nodes. If the number of nodes is n, the algorithm examines (n-1)! permutations and spends O(n) work on each, so the running time grows factorially, i.e. O(n!).
Most of the space in this graph algorithm is taken by the adjacency matrix, which is an n * n two-dimensional matrix, where n is the number of nodes. Hence the space complexity is O(n^2).
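To cross-check the whole pipeline, here is a compact, self-contained variant of the same brute force (my own helper, not part of the article's class, using a do-while so the first permutation is counted too):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Brute-force TSP over a dense, symmetric distance matrix.
// Fixes city 0 as the source and tries every order of the remaining cities.
int tsp_brute_force(const std::vector<std::vector<int>>& dist)
{
    const int n = static_cast<int>(dist.size());
    std::vector<int> nodes;
    for (int i = 1; i < n; i++)
        nodes.push_back(i);

    int best = INT_MAX;
    do {
        int cost = 0;
        int prev = 0;
        for (int v : nodes) {           // walk the tour in this order
            cost += dist[prev][v];
            prev = v;
        }
        cost += dist[prev][0];          // close the cycle back at the source
        best = std::min(best, cost);
    } while (std::next_permutation(nodes.begin(), nodes.end()));
    return best;
}
```

For example, for the distance matrix {{0,10,15,20},{10,0,35,25},{15,35,0,30},{20,25,30,0}} the shortest tour is 0→1→3→2→0 with cost 80.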
#### Abhijit Tripathy
I have the attitude of a learner, the courage of an entrepreneur and the thinking of an optimist, engraved inside me. I wish to be a leader in my community of people.
http://xnerv.wang/mmap-vs-reading-blocks/ | Question
I’m working on a program that will be processing files that could potentially be 100GB or more in size. The files contain sets of variable length records. I’ve got a first implementation up and running and am now looking towards improving performance, particularly at doing I/O more efficiently since the input file gets scanned many times.
Is there a rule of thumb for using mmap() versus reading in blocks via C++'s fstream library? What I’d like to do is read large blocks from disk into a buffer, process complete records from the buffer, and then read more.
The mmap() code could potentially get very messy since mmap'd blocks need to lie on page-sized boundaries (my understanding) and records could potentially lie across page boundaries. With fstreams, I can just seek to the start of a record and begin reading again, since we're not limited to reading blocks that lie on page-sized boundaries.
How can I decide between these two options without actually writing up a complete implementation first? Any rules of thumb (e.g., mmap() is 2x faster) or simple tests?
Answer by Dietrich Epp
I was trying to find the final word on mmap / read performance on Linux and I came across a nice post (link) on the Linux kernel mailing list. It’s from 2000, so there have been many improvements to IO and virtual memory in the kernel since then, but it nicely explains the reason why mmap or read might be faster or slower.
• A call to mmap has more overhead than read (just like epoll has more overhead than poll, which has more overhead than read). Changing virtual memory mappings is a quite expensive operation on some processors for the same reasons that switching between different processes is expensive.
• The IO system can already use the disk cache, so if you read a file, you’ll hit the cache or miss it no matter what method you use.
However,
• Memory maps are generally faster for random access, especially if your access patterns are sparse and unpredictable.
• Memory maps allow you to keep using pages from the cache until you are done. This means that if you use a file heavily for a long period of time, then close it and reopen it, the pages will still be cached. With read, your file may have been flushed from the cache ages ago. This does not apply if you use a file and immediately discard it. (If you try to mlock pages just to keep them in cache, you are trying to outsmart the disk cache and this kind of foolery rarely helps system performance).
• Reading a file directly is very simple and fast.
The discussion of mmap/read reminds me of two other performance discussions:
• Some Java programmers were shocked to discover that nonblocking I/O is often slower than blocking I/O, which made perfect sense if you know that nonblocking I/O requires making more syscalls.
• Some other network programmers were shocked to learn that epoll is often slower than poll, which makes perfect sense if you know that managing epoll requires making more syscalls.
Conclusion: Use memory maps if you access data randomly, keep it around for a long time, or if you know you can share it with other processes (MAP_SHARED isn’t very interesting if there is no actual sharing). Read files normally if you access data sequentially or discard it after reading. And if either method makes your program less complex, do that. For many real world cases there’s no sure way to show one is faster without testing your actual application and NOT a benchmark.
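To make the two access styles concrete, here is a minimal sketch in Python (file name and contents are invented for the demo; the same ideas apply to mmap(2) and fstream in C++):

```python
import mmap
import os
import tempfile

# Write a small throwaway file of fixed-width "records".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"record-1\nrecord-2\nrecord-3\n")
    path = f.name

# Style 1: plain sequential read() into a buffer.
with open(path, "rb") as f:
    via_read = f.read()

# Style 2: memory map; pages are faulted in on demand, and random
# access is ordinary slicing with no seek()/read() calls.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        via_mmap = bytes(m[:])
        second_record = bytes(m[9:17])  # jump straight to record 2

os.unlink(path)
```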
(Sorry for necro'ing this question, but I was looking for an answer and this question kept coming up at the top of Google results.)
http://theoryapp.com/tag/regular-expression/ | ## Regular Languages and Finite Automata
Language An alphabet $$\Sigma$$ is a finite set of symbols, for example $$\Sigma=\{0,1\}$$. A string is a finite sequence of symbols from $$\Sigma$$. We denote the empty string by $$\epsilon$$. The set of all strings over $$\Sigma$$ is $$\Sigma^\star$$, using
Posted in Theory
## Regular Expressions in Java
Regular Expression Basics Regular expressions (regex) are an effective way of describing common patterns in strings. For example, all phone numbers in North America have 10 digits; this can be easily described by regular expressions: [0-9]{10}, which matches with 10
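As a concrete illustration of the excerpted pattern (a sketch of mine, not from the post itself):

```python
import re

# The 10-digit North American phone-number pattern quoted above.
phone = re.compile(r"[0-9]{10}")

def is_phone_number(s: str) -> bool:
    # fullmatch, so strings with extra characters are rejected
    return phone.fullmatch(s) is not None
```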
Posted in Java
https://www.physicsforums.com/threads/optics-problem.742913/ | # Optics problem
1. Mar 12, 2014
### rogeralms
1. The problem statement, all variables and given/known data
Using the results of Problem 4.70, that is, Eqs. (4.98) and (4.99), show that
$R_\parallel + T_\parallel = 1$
2. Relevant equations
$R_\parallel = \tan^2(\theta_i - \theta_t) / \tan^2(\theta_i + \theta_t)$
$T_\parallel = \sin(2\theta_i)\sin(2\theta_t) / \left[\sin^2(\theta_i + \theta_t)\cos^2(\theta_i - \theta_t)\right]$
3. The attempt at a solution
After getting this far (shown below) I took it to the math help center at my university and they couldn't solve it any further than what I had done.
First put both over a common denominator. Since
$R_\parallel = \frac{\sin^2(\theta_i-\theta_t)}{\cos^2(\theta_i-\theta_t)}\cdot\frac{\cos^2(\theta_i+\theta_t)}{\sin^2(\theta_i+\theta_t)}$,
this gives a common denominator of $\cos^2(\theta_i-\theta_t)\sin^2(\theta_i+\theta_t)$.
For brevity I will call $\theta_i = i$ and $\theta_t = t$.
Now we have $\dfrac{\sin^2(i-t)\cos^2(i+t) + \sin(2i)\sin(2t)}{\cos^2(i-t)\sin^2(i+t)}$
I tried $\dfrac{\left(1 - \cos^2(i-t)\right)\left(1-\sin^2(i+t)\right) + \sin(2i)\sin(2t)}{\cos^2(i-t)\sin^2(i+t)}$
which puts the minus on cos and the plus angle on sin, matching the denominator, but that is as far as I got, which was further than the help desk at my university.
Can someone give me a hint as to which identities I should use to work this out?
You have my undying gratitude and about a million photons of positive energy sent to you for your help!
2. Mar 14, 2014
### scurty
Here's two identities that might help:
$\sin^2(x-y) = \sin^2(x+y) - \sin(2x)\sin(2y)$
$\cos^2(x-y) = \cos^2(x+y) + \sin(2x)\sin(2y)$
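[Editor's sketch of how these identities close the proof, writing $S=\sin(2x)\sin(2y)$ with $x=\theta_i$, $y=\theta_t$; note that Hecht's Eq. (4.99) for $T_\parallel$ carries a factor $\cos^2(\theta_i-\theta_t)$ in the denominator, which is what produces the common denominator used in the attempt above:]

```latex
\begin{aligned}
R_\parallel + T_\parallel
  &= \frac{\sin^2(x-y)\cos^2(x+y) + S}{\cos^2(x-y)\,\sin^2(x+y)} \\
  &= \frac{\left[\sin^2(x+y)-S\right]\cos^2(x+y) + S}{\left[\cos^2(x+y)+S\right]\sin^2(x+y)} \\
  &= \frac{\sin^2(x+y)\cos^2(x+y) + S\left[1-\cos^2(x+y)\right]}{\sin^2(x+y)\cos^2(x+y) + S\sin^2(x+y)} \\
  &= 1 .
\end{aligned}
```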
https://aliquote.org/micro/2019-07-08-11-05-39/ | # aliquote
## < a quantity that can be divided into another a whole number of time />
TIL about the brewsci/bio tap for Homebrew. #bioinformatics
http://math.gatech.edu/seminars-and-colloquia-by-series?series_tid=40&page=1 | ## Seminars and Colloquia by Series
Friday, October 20, 2017 - 13:55 , Location: Skiles 006 , John Etnyre , Georgia Tech , Organizer: John Etnyre
Note this talk is only 1 hour (to allow for the GT MAP seminar at 3).
In this series of talks I will introduce branched coverings of manifolds and sketch proofs of most the known results in low dimensions (such as every 3 manifold is a 3-fold branched cover over a knot in the 3-sphere and the existence of universal knots). This week we will continue studying branched covers of surfaces. Among other things we should be able to see how to use branched covers to see some relations in the mapping class group of surfaces.
Friday, October 13, 2017 - 13:55 , Location: Skiles 006 , John Etnyre , Georgia Tech , Organizer: John Etnyre
In this series of talks I will introduce branched coverings of manifolds and sketch proofs of most the known results in low dimensions (such as every 3 manifold is a 3-fold branched cover over a knot in the 3-sphere and the existence of universal knots). Along the way several open problems will be discussed.
Friday, September 29, 2017 - 13:55 , Location: Skiles 006 , Peter Lambert-Cole , Georgia Institute of Technology , Organizer: Peter Lambert-Cole
In this talk, I will present Arnold's famous ADE classification of simple singularities.
Friday, September 22, 2017 - 13:55 , Location: Skiles 006 , None , None , Organizer: John Etnyre
Friday, September 15, 2017 - 13:55 , Location: Skiles 006 , Peter Lambert-Cole , Georgia Institute of Technology , Organizer: Peter Lambert-Cole
In this series of talks, I will introduce basic concepts and results in singularity theory of smooth and holomorphic maps. In the first talk, I will present a gentle introduction to the elements of singularity theory and give a proof of the well-known Morse Lemma that illustrates key geometric and algebraic principles of singularity theory.
Friday, September 1, 2017 - 13:55 , Location: Skiles 006 , None , None , Organizer: John Etnyre
Friday, August 25, 2017 - 13:55 , Location: Skiles 006 , None , None , Organizer: John Etnyre
Friday, April 14, 2017 - 14:00 , Location: Skiles 006 , None , None , Organizer: John Etnyre
Friday, March 17, 2017 - 14:00 , Location: Skiles 006 , John Etnyre , Georgia Tech , Organizer: John Etnyre
This will be a 1.5 hour (maybe slightly longer) seminar.
Following up on the previous series of talks we will show how to construct Lagrangian Floer homology and discuss its properties.
Friday, March 10, 2017 - 14:00 , Location: Skiles 006 , John Etnyre , Georgia Tech , Organizer: John Etnyre
This will be a 1.5 hour seminar.
Following up on the previous series of talks we will show how to construct Lagrangian Floer homology and discuss its properties.
https://pycache.de/fourier/ | A friend of mine used to say that the Fourier transform must have come from hell. This collection of notes covers most of my encounters with the Fourier transform - in the context of programming.
Basic NumPy functionalities
NumPy is the basic library for scientific programming in Python and it has its own implementation of the fast Fourier transform (FFT) algorithm. A summary of all Fourier-related functions is given in the NumPy docs. Let me highlight the most essential functions here:
• np.fft.fft: Compute the one-dimensional discrete Fourier Transform.
• np.fft.fftfreq: Return the Discrete Fourier Transform sample frequencies. This function is used to obtain the frequencies corresponding to the output of np.fft.fft for data visualization and postprocessing purposes.
• np.fft.fftshift: Shift the zero-frequency component to the center of the spectrum. By default, the zero-frequency component is the first element of the array returned by np.fft.fft and negative frequencies are located in the second half of the array. For data visualization, we need to have the zero-frequency component at the center of the array. This function handles odd- and even-length arrays correctly and should be used instead of manual solutions.
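As a quick numpy-only illustration of these layout conventions (a sketch with arbitrary parameters N=8, d=0.1):

```python
import numpy as np

# For N samples with spacing d, np.fft.fftfreq returns
# [0, 1, ..., N/2-1, -N/2, ..., -1] / (d*N): positive frequencies first,
# negative frequencies in the second half of the array. fftshift reorders
# this into a monotonically increasing frequency axis for plotting.
freq = np.fft.fftfreq(8, d=0.1)
print(freq)                   # 0, 1.25, 2.5, 3.75, -5, -3.75, -2.5, -1.25
print(np.fft.fftshift(freq))  # -5, -3.75, -2.5, -1.25, 0, 1.25, 2.5, 3.75
```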
Time shifting: the first pitfall
Let us attempt to perform the Fourier transform of a Gaussian signal
$$g(t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{1}{2}\left(\frac{t-\tau}{\sigma}\right)^2\right).$$
The Fourier transform of a Gaussian signal is also Gaussian, which makes it easy to check the result.
import matplotlib.pylab as plt
import numpy as np
# Gaussian signal
# (parameters are chosen such that both signal and FT plot nicely)
N = 100
time, dt = np.linspace(0, 10, N, endpoint=False, retstep=True)
sigma = .25
tau = 5
sig = 1/(sigma * np.sqrt(2*np.pi)) * np.exp(-1/2 * ((time-tau) / sigma)**2)
freq = np.fft.fftfreq(N, dt)
ft_sig = np.fft.fft(sig)
fig = plt.figure(figsize=(7, 3))
ax1 = plt.subplot(121, title="signal")
ax1.plot(time, sig)
ax1.set_xlabel("time $t$ [s]")
ax1.set_ylabel("amplitude $g$ [a.u.]")
ax2 = plt.subplot(122, title="Fourier transform")
ax2.plot(np.fft.fftshift(freq), np.fft.fftshift(ft_sig.real))
ax2.set_xlabel("frequency $f$ [Hz]")
ax2.set_ylabel("amplitude $G$ [a.u.]")
plt.tight_layout()
plt.savefig("shift_pitfall.png", dpi=120)
plt.close()
What happened? The Fourier-transformed signal rapidly changes sign, and only a Gaussian envelope can be made out. To understand what went wrong, we need to take a closer look at what numpy.fft.fft actually does.
First, let us consider the continuous Fourier transform $G(f)$ of a signal $g(t)$,
$$G(f) = \int_{-\infty}^{\infty} g(t)\, e^{-2\pi i f t}\, dt.$$
In order to discretize this equation, we replace the integral by a sum of $N$ points, forcing us to reduce the integration interval from $(-\infty, \infty)$ to $(0, N)$. Furthermore, we choose the substitutions $f \rightarrow k, k \in \Bbb N$ and $t \rightarrow n/N, n \in \Bbb N$, which leads to the normalization $dt \rightarrow 1$. The discrete signals are now described as $G_k = G(f_k)$ and $g_n = g(t_n)$. The discrete Fourier transform can thus be written as
$$G_k = \sum_{n=0}^{N-1} g_n \exp\!\left(-\frac{2\pi i\, k n}{N}\right).$$
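For small $N$, the discretized sum $G_k$ can be evaluated directly and compared with np.fft.fft (an $O(N^2)$ sketch for illustration only; the FFT computes the same quantity in $O(N \log N)$):

```python
import numpy as np

# Direct evaluation of G_k = sum_n g_n * exp(-2*pi*i*k*n/N) for a random
# signal, checked against numpy's FFT.
N = 32
g = np.random.default_rng(1).random(N)
n = np.arange(N)
G = np.array([np.sum(g * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])
assert np.allclose(G, np.fft.fft(g))
```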
Note how the definition of $t=0$ has become $n=0$, which brings us back to the original problem. The origin of $g_n$ is located at $n=0$ (not at the center of the array, $n = N/2$). Thus, in order to get the Fourier transform of our Gaussian signal right, we would have to shift $g_n$ such that its maximum is located at $n=0$ (the first element of the array). We could achieve this by means of np.fft.fftshift, but that only works as long as the center of $g$ coincides exactly with $n=N/2$. A more elegant solution is to directly correct for the temporal shift $\tau$ after the Fourier transform. Let's consider a shifted function $g(t-\tau)$ in the equation of the continuous Fourier transform:
$$G_\tau(f) = \int_{-\infty}^{\infty} g(t-\tau)\, e^{-2\pi i f t}\, dt.$$
We would like to get rid of the shift $\tau$ and thus substitute $t \rightarrow t + \tau$:
$$G_\tau(f) = \int_{-\infty}^{\infty} g(t)\, e^{-2\pi i f (t+\tau)}\, dt = e^{-2\pi i f \tau} \int_{-\infty}^{\infty} g(t)\, e^{-2\pi i f t}\, dt = e^{-2\pi i f \tau}\, G(f).$$
This step only affects the Fourier kernel and results in the additional term $\exp (- 2 \pi i f \tau)$ which can be pulled out of the integral. This oscillatory term is a simple time shift and explains the artifacts in the figure above. We can correct for this time shift by multiplying $G_k$ with $\exp (+ 2 \pi i f \tau)$:
# correct for time shift tau
ft_cor = np.fft.fft(sig) * np.exp(2*np.pi*1j*freq*tau)
fig = plt.figure(figsize=(7, 3))
ax1 = plt.subplot(121, title="signal")
ax1.plot(time, sig)
ax1.set_xlabel("time $t$ [s]")
ax1.set_ylabel("amplitude $g$ [a.u.]")
ax2 = plt.subplot(122, title="time-shift corrected Fourier transform")
ax2.plot(np.fft.fftshift(freq), np.fft.fftshift(ft_cor.real))
ax2.set_xlabel("frequency $f$ [Hz]")
ax2.set_ylabel("amplitude $G$ [a.u.]")
plt.tight_layout()
plt.savefig("shift_corrected.png", dpi=120)
plt.close()
Note that this correction works for any real-valued $\tau$ (as long as the support of $g(t)$ is within the interval $0\,\text{s} < t < 10\,\text{s}$).
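This claim is easy to verify numerically. The following standalone sketch uses an off-grid shift of $\tau = 3.73\,$s (an arbitrary choice, not a multiple of the sampling step) and checks that the corrected transform is real up to numerical precision:

```python
import numpy as np

# Same signal and conventions as above, but with a tau that does not lie
# on the sampling grid.
N = 100
time, dt = np.linspace(0, 10, N, endpoint=False, retstep=True)
freq = np.fft.fftfreq(N, dt)
sigma, tau = 0.25, 3.73
sig = 1 / (sigma * np.sqrt(2 * np.pi)) * np.exp(-1/2 * ((time - tau) / sigma)**2)
ft_cor = np.fft.fft(sig) * np.exp(2 * np.pi * 1j * freq * tau)
# the corrected transform of the (real, symmetric) Gaussian is real
assert np.max(np.abs(ft_cor.imag)) < 1e-6 * np.max(np.abs(ft_cor.real))
```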
Frequency shifting
In some cases, it can be useful to manipulate a signal such that it shows up at a predefined frequency in Fourier space. A frequency shift can be described with
$$G(f - f_0) = \int_{-\infty}^{\infty} g(t)\, e^{2\pi i f_0 t}\, e^{-2\pi i f t}\, dt.$$
In other words, the signal $g(t)$ must be multiplied by the complex exponential $\exp(2\pi i f_0 t)$ to shift its Fourier transform by $f_0$. Here is an example for a shift by 2.2 Hz.
# multiply input with complex exponential
sig_shift = sig * np.exp(2*np.pi*1j*(time-tau)*2.2)
ft_shift = np.fft.fft(sig_shift) * np.exp(2*np.pi*1j*freq*tau)
fig = plt.figure(figsize=(7, 3))
ax1 = plt.subplot(121, title="signal × complex exponential")
ax1.plot(time, sig_shift.real)
ax1.set_xlabel("time $t$ [s]")
ax1.set_ylabel("amplitude $g$ [a.u.]")
ax2 = plt.subplot(122, title="frequency-shifted Fourier transform")
ax2.plot(np.fft.fftshift(freq), np.fft.fftshift(ft_shift.real))
ax2.set_xlabel("frequency $f$ [Hz]")
ax2.set_ylabel("amplitude $G$ [a.u.]")
plt.tight_layout()
plt.savefig("shift_frequency.png", dpi=120)
plt.close()
Note again that $\tau$ must be included to correctly shift the frequency, hence the term time-tau in the complex exponential.
Time scaling
The time scaling property of the Fourier transform states that a change of the sampling frequency in the input signal is equivalent to a scaled signal in Fourier space:
$$\int_{-\infty}^{\infty} g(a t)\, e^{-2\pi i f t}\, dt = \frac{1}{|a|}\, G\!\left(\frac{f}{a}\right).$$
In this example, the time axis is scaled by a factor of two, which leads to a Fourier transform that is doubled in amplitude and narrowed to half its width.
# scale by a factor of 2
freq_sc = np.fft.fftfreq(N, dt/2)
time_sc = time / 2
tau_sc = tau / 2
sig_sc = 1/(sigma * np.sqrt(2*np.pi)) * np.exp(-1/2 *
((time_sc-tau_sc) / sigma)**2)
ft_sc = np.fft.fft(sig_sc) * np.exp(2*np.pi*1j*freq_sc*tau_sc)
ft_sc = np.fft.fftshift(ft_sc)
freq = np.fft.fftshift(freq)
freq_sc = np.fft.fftshift(freq_sc)
fig = plt.figure(figsize=(7, 3))
ax1 = plt.subplot(121, title="time-scaled signal")
ax1.plot(time, sig_sc.real)
ax1.set_xlabel("time $t$ [s]")
ax1.set_ylabel("amplitude $g$ [a.u.]")
ax2 = plt.subplot(122, title="scaled Fourier transform")
ax2.plot(freq, ft_sc.real)
ax2.set_xlabel("frequency $f$ [Hz]")
ax2.set_ylabel("amplitude $G$ [a.u.]")
plt.tight_layout()
plt.savefig("scale_time.png", dpi=120)
plt.close()
Image translation
Many applications of the Fourier transform involve image analysis. It is possible to perform the trivial task of image translation with the Fourier transform. If the image is translated by a non-integer number of pixels, then the interpolation takes place with the Fourier kernel (sine and cosine functions). For this example, we use a downscaled image of the lunar eclipse, recorded on July 27th 2018.
import matplotlib.image as mpimg
# load the grayscale moon image (same file as in the watermark section below)
moon = mpimg.imread("moon.png")
fy = np.fft.fftfreq(moon.shape[0]).reshape(-1, 1)
fx = np.fft.fftfreq(moon.shape[1]).reshape(1, -1)
ft_moon = np.fft.fft2(moon) * np.exp(2*np.pi*1j*(fx*10.5 + fy*10))
moon_tr = np.fft.ifft2(ft_moon)
fig = plt.figure(figsize=(7, 3.6))
ax1 = plt.subplot(121, title="moon")
ax1.imshow(moon, cmap="gray", interpolation="none")
ax2 = plt.subplot(122, title="translated moon")
ax2.imshow(moon_tr.real, cmap="gray", interpolation="none")
plt.tight_layout()
plt.savefig("moon_translated.png", dpi=120)
plt.close()
The image is translated by 10 pixels along the y-axis and by 10.5 pixels along the x-axis. The resulting interpolation along the x-axis leads to horizontal ringing artifacts.
Performing image translation with the Fourier transform might be fast, but for higher accuracy, other interpolation methods (e.g. splines) might be better suited, especially when sharp boundaries (dark-bright) are present.
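The ringing is easiest to see in one dimension. The following numpy-only sketch (a synthetic step signal, not the moon image) shifts a sharp edge by half a sample; the Fourier-interpolated result overshoots the original 0-1 value range, a Gibbs-type artifact:

```python
import numpy as np

# Half-sample Fourier shift of a periodic step signal.
N = 64
step = np.zeros(N)
step[N // 4:3 * N // 4] = 1.0
f = np.fft.fftfreq(N)  # frequencies in cycles per sample
shifted = np.fft.ifft(np.fft.fft(step) * np.exp(-2j * np.pi * f * 0.5)).real
# ringing: values over- and undershoot the original range
print(shifted.max() > 1.0, shifted.min() < 0.0)  # True True
```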
Image modulation: Holograms
The Fourier transform can be used for the analysis of digital holograms. In the life sciences, digital holographic imaging is used to quantify the refractive index of cells. To achieve that, a laser beam is split into two beams: one passes through the sample and the other serves as a reference. When these two beams are brought back together at a slightly tilted angle, they generate an interference pattern (periodic stripes that can be recorded with a regular camera) which is modulated by the phase delay introduced by the varying refractive index of the sample.
The example hologram shows an HL60 cell - the intensity data clearly reveals a cell, but we are after the phase data. The modulation of the phase data becomes visible when tracing the interference pattern (dark stripes) through the cell: the stripes appear deformed at the cell boundary. This modulation can be extracted with Fourier analysis. The interference pattern can be described as a cosine function, whose Fourier transform is a pair of delta functions, the so-called sidebands. Isolating one of those sidebands (see arrow in the image below) and performing an inverse Fourier transform reveals the part of the light that passed through the cell, from which the phase delay can be computed.
cell = mpimg.imread("cell_hologram.png")
ft_cell = np.fft.fft2(cell)
ft_cell_copy = np.copy(ft_cell)
# suppress central band
ft_cell[0, :] = 0
ft_cell[:, 0] = 0
# determine sideband position
# use magnitudes: argmax of a complex array is not based on the modulus
ft_abs = np.abs(ft_cell)
xmax = np.argmax(np.max(ft_abs, axis=1))
ymax = np.argmax(ft_abs[xmax])
# move sideband to zero frequency
ft_cell_rolled = np.roll(ft_cell, (-xmax, -ymax), axis=(0, 1))
# apply sideband filter
ft_cell_rolled[20:-20, :] = 0
ft_cell_rolled[:, 20:-20] = 0
# invert to get sideband modulation
modulation = np.fft.ifft2(ft_cell_rolled)
# compute phase
phase = np.angle(modulation)
fig = plt.figure(figsize=(7, 2.8))
ax1 = plt.subplot(131, title="hologram")
ax1.imshow(cell.real, cmap="gray", interpolation="bilinear")
ax2 = plt.subplot(132, title="Fourier transform")
ax2.imshow(np.fft.fftshift(np.log(1 + np.abs(ft_cell_copy))),
interpolation="none")
ax3 = plt.subplot(133, title="wrapped phase")
ax3.imshow(phase, cmap="coolwarm", interpolation="none")
for ax in [ax1, ax2, ax3]:
ax.set_xticks([])
ax.set_yticks([])
plt.savefig("cell_modulation.png", dpi=120)
plt.close()
Note that the phase returned by np.angle is wrapped in the interval $(-\pi, \pi]$, i.e. there are $2\pi$ phase jumps (from red to blue) that have to be “unwrapped” for further analysis.
Scaling images
The Fourier transform can also be used to up- or downscale images. In the above example, the inverse Fourier transform was performed for a much larger frequency space than necessary, because we actually cropped the sideband to 40 by 40 pixels. If we only take the inverse Fourier transform of the cropped sideband, we get an idea of the actual image resolution. In short, upscaling with the Fourier transform means that the image is interpolated with cosine functions. On the other hand, downscaling with the Fourier transform means that high-frequency contributions are omitted. The illustration below additionally makes use of a phase unwrapping algorithm that is part of the scikit-image library.
from skimage.restoration import unwrap_phase
ft_cell_low = np.zeros((40, 40), dtype=complex)
# the four kept corners of ft_cell_rolled map row by row onto the 40x40 array,
# preserving the FFT frequency ordering (positive, then negative frequencies)
ft_cell_low.flat[:] = ft_cell_rolled[ft_cell_rolled != 0]
modulation_low = np.fft.ifft2(ft_cell_low)
# compute phase
phase_low = unwrap_phase(np.angle(modulation_low))
phase = unwrap_phase(phase)
fig = plt.figure(figsize=(7, 3.6))
ax1 = plt.subplot(121, title="unwrapped phase")
ax1.imshow(phase, cmap="coolwarm", interpolation="none")
ax2 = plt.subplot(122, title="unwrapped phase (actual resolution)")
ax2.imshow(phase_low, cmap="coolwarm", interpolation="none")
plt.tight_layout()
plt.savefig("cell_downsampled.png", dpi=120)
plt.close()
Watermarks
A watermark is a modification of an image, often used to prevent (or track) the usage of an image by others. Watermarks are usually just image overlays, but they can also be hidden in Fourier space. Note that the modification in Fourier space results in distortions, present everywhere in the image, whose intensity depends on the number of frequencies used and the corresponding amplitudes. In this example, an image of the lunar eclipse is watermarked with a smiley.
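As a minimal, self-contained sketch of the idea (synthetic data; the mark positions (5, 9) and (12, 3) are arbitrary choices, and this is not the moon/smiley pipeline below): a watermark is embedded by boosting a few Fourier coefficients and recovered by locating the dominant peaks of the spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for the carrier image

# Embed: boost a few chosen Fourier coefficients well above the natural spectrum.
ft = np.fft.fft2(img)
mark = [(5, 9), (12, 3)]          # hypothetical watermark positions
strength = 2 * np.abs(ft).max()
for ky, kx in mark:
    ft[ky, kx] += strength
    ft[-ky, -kx] += strength      # Hermitian partner keeps the image real-valued
marked = np.fft.ifft2(ft).real    # the watermarked image

# Recover: the boosted coefficients dominate the magnitude spectrum.
ft_rec = np.abs(np.fft.fft2(marked))
ft_rec[0, 0] = 0                  # ignore the DC term
peaks = np.argsort(ft_rec, axis=None)[-4:]   # two marks plus their partners
found = {tuple(np.unravel_index(p, ft_rec.shape)) for p in peaks}
assert all(pos in found for pos in mark)
```

The image-space distortion depends on the chosen strength; a practical watermark would use weaker coefficients spread over more frequencies.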
moon = mpimg.imread("moon.png")
fy = np.fft.fftfreq(moon.shape[0]).reshape(-1, 1)
fx = np.fft.fftfreq(moon.shape[1]).reshape(1, -1)
ft_moon = np.fft.fft2(moon)
# moon_mark is the watermarked image (its construction is omitted in this excerpt)
ft_mark = np.fft.fft2(moon_mark.real)
fig = plt.figure(figsize=(7, 2.8))
ax1 = plt.subplot(131, title="moon")
ax1.imshow(moon, cmap="gray", interpolation="none")
ax2 = plt.subplot(132, title="watermarked moon")
ax2.imshow(moon_mark.real, cmap="gray", interpolation="none")
ax3 = plt.subplot(133, title="Fourier transform")
ax3.imshow(np.log(1 + np.abs(np.fft.fftshift(ft_mark))), interpolation="none") | 2019-01-19 18:34:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7821093797683716, "perplexity": 1660.8469250638736}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583680452.20/warc/CC-MAIN-20190119180834-20190119202834-00360.warc.gz"} |
https://riccardoriparazioni.it/foxit-phantompdf-business-crack-fullzip/accessori-smartphone/ | # Foxit PhantomPDF Business Crack !FULL!zip
· [New release] Crack.UFS.Explorer.Professional.Recovery.5.2 · Ultrastar 390 songs pack · Virtual Riot · . · · m3-data-recovery-52-1-li Why i want to use Dictionary? and if in future i again upgrade laptop i would not loose this dictionary in future if i use? A: You can add a placeholder.txt file to your local storage, like ~/Library/Application Support/DictionaryMaker/placeholders.txt, and write the placeholder dictionary, like {phonetic,0:1}. Q: prove $\sum_{n=1}^{\infty} \frac{1}{n^3}\sum_{k=1}^{n} \frac{1}{\sqrt{k}} = \sum_{n=1}^{\infty} \frac{1}{n^3}$ I have proven that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^3}$ diverges. I would like to prove that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^3} \sum_{k=1}^{n} \frac{1}{\sqrt{k}}$ diverges. I have tried to prove that $\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^3} \sum_{k=1}^{n} \frac{1}{\sqrt{k}} > \sum_{n=1}^{\infty} \frac{1}{n^3}$ but I am not sure that this is true. could someone give me a hint? A: Let $S_N$ denote the partial sum of the series: S_N=\sum_{n=1}^N \frac{1}{n^3} \sum_{k=1}^n \frac{1}{\sqrt{k}}=\sum_{n=1}^N \frac{1}{n^3}\sum_{k=1}^n \frac{\sqrt k}{\sqrt k}=\sum_{n=1}^N | 2022-12-07 00:48:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6122908592224121, "perplexity": 2731.111025319549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": 
{"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711121.31/warc/CC-MAIN-20221206225143-20221207015143-00288.warc.gz"} |
http://mathdl.maa.org/mathDL/?pa=content&sa=viewDocument&nodeId=3233 |
## Algebra: Introductory Surveys
Allenby, R.B.J.T. Rings, Fields and Groups: An Introduction to Abstract Algebra New York, NY: Edward Arnold, 1983.
** Artin, Michael. Algebra Englewood Cliffs, NJ: Prentice Hall, 1991.
Bhattacharya, P.B.; Jain, S.K.; and Nagpaul, S.R. Basic Abstract Algebra New York, NY: Cambridge University Press, 1986.
*** Birkhoff, Garrett and Mac Lane, Saunders. A Survey of Modern Algebra, New York, NY: Macmillan, 1965, 1977. Fourth Edition.
Burnside, William Snow and Panton, Arthur William. The Theory of Equations with an Introduction to the Theory of Binary Algebraic Forms, Mineola, NY: Dover, 1960. 2 Vols.
Burton, D. Abstract Algebra Dubuque, IA: William C. Brown, 1988.
* Childs, Lindsay. A Concrete Introduction to Higher Algebra New York, NY: Springer-Verlag, 1979.
Dean, Richard A. Classical Abstract Algebra New York, NY: Harper and Row, 1990.
* Fraleigh, John B. A First Course in Abstract Algebra, Reading, MA: Addison-Wesley, 1976, 1989. Fourth Edition.
** Gallian, Joseph A. Contemporary Abstract Algebra, Lexington, MA: D.C. Heath, 1986, 1990. Second Edition.
Goldstein, Larry J. Abstract Algebra: A First Course Englewood Cliffs, NJ: Prentice Hall, 1973.
* Herstein, I.N. Abstract Algebra, New York, NY: Macmillan, 1986, 1990. Second Edition.
Hillman, Abraham P. and Alexanderson, Gerald L. A First Undergraduate Course in Abstract Algebra, Belmont, CA: Wadsworth, 1973, 1988. Fourth Edition.
* Hungerford, Thomas W. Abstract Algebra: An Introduction Philadelphia, PA: Saunders College, 1990.
Kostrikin, A.I. Introduction to Algebra New York, NY: Springer-Verlag, 1982.
Lang, Serge. Undergraduate Algebra New York, NY: Springer-Verlag, 1987.
Marcus, Marvin. Introduction to Modern Algebra New York, NY: Marcel Dekker, 1978.
McCoy, Neal H. and Janusz, Gerald J. Introduction to Modern Algebra, Boston, MA: Allyn and Bacon, 1960, 1987. Fourth Edition.
Pinter, Charles C. A Book of Abstract Algebra New York, NY: McGraw-Hill, 1982.
## Algebra: Constructive and Computational Algebra
Barbeau, Edward J. Polynomials New York, NY: Springer-Verlag, 1989.
Connell, Ian. Modern Algebra: A Constructive Approach New York, NY: Elsevier Science, 1982.
* Dobbs, David E. and Hanks, Robert. A Modern Course on the Theory of Equations Washington, NJ: Polygonal, 1980.
* Humphreys, J.F. and Prest, M.Y. Numbers, Groups, and Codes New York, NY: Cambridge University Press, 1989.
Mines, Ray; Richman, Fred; and Ruitenburg, Wim. A Course in Constructive Algebra New York, NY: Springer-Verlag, 1988.
Sims, Charles C. Abstract Algebra: A Computational Approach New York, NY: John Wiley, 1984.
## Algebra: Applied Algebra
Birkhoff, Garrett and Bartee, Thomas C. Modern Applied Algebra New York, NY: McGraw-Hill, 1970.
* Dornhoff, Larry L. and Hohn, Franz E. Applied Modern Algebra New York, NY: Macmillan, 1978.
Laufer, Henry B. Discrete Mathematics and Applied Modern Algebra Boston, MA: Prindle, Weber and Schmidt, 1984.
* Lidl, Rudolf and Pilz, Gunter. Applied Abstract Algebra New York, NY: Springer-Verlag, 1984.
Lipson, John D. Elements of Algebra and Algebraic Computing Reading, MA: Addison-Wesley, 1981.
** Mackiw, George. Applications of Abstract Algebra New York, NY: John Wiley, 1985.
* Bourbaki, Nicolas. Elements of Mathematics: Algebra, New York, NY: Springer-Verlag, 1989, 1990. 2 Vols.
** Cohn, Paul M. Algebra, New York, NY: John Wiley, 1974, 1982. 2 Vols., Second Edition.
*** Herstein, I.N. Topics in Algebra, New York, NY: John Wiley, 1975. Second Edition.
* Hungerford, Thomas W. Algebra New York, NY: Springer-Verlag, 1974.
* Jacobson, Nathan. Lectures in Abstract Algebra, New York, NY: Springer-Verlag, 1953, 1975. 2 Vols.
*** Jacobson, Nathan. Basic Algebra I and II, New York, NY: W.H. Freeman, 1974, 1989. Second Edition.
Kostrikin, A.I. and Shafarevich, Igor R., eds. Algebra I: Basic Notions of Algebra New York, NY: Springer-Verlag, 1990.
Ledermann, Walter and Vajda, Steven, eds. Algebra New York, NY: John Wiley, 1980. Handbook of Applicable Mathematics, Volume I.
*** Mac Lane, Saunders and Birkhoff, Garrett. Algebra, New York, NY: Chelsea, 1988. Third Edition.
*** van der Waerden, B.L. Algebra, New York, NY: Springer-Verlag, 1991. (Original title: Modern Algebra.)
## Algebra: Group Theory
Aschbacher, Michael. The Finite Simple Groups and Their Classification New Haven, CT: Yale University Press, 1980.
** Budden, F.J. The Fascination of Groups New York, NY: Cambridge University Press, 1972.
* Burn, R.P. Groups: A Path to Geometry New York, NY: Cambridge University Press, 1985, 1987.
Conway, John Horton, et al. Atlas of Finite Groups: Maximal Subgroups and Ordinary Characters for Simple Groups New York, NY: Clarendon Press, 1985.
* Curtis, Charles W. and Reiner, Irving. Methods of Representation Theory with Applications to Finite Groups and Orders New York, NY: John Wiley, 1981.
* Curtis, Charles W. and Reiner, Irving. Representation Theory of Finite Groups and Associative Algebras New York, NY: John Wiley, 1962.
Dixon, J.D. Problems in Group Theory New York, NY: Blaisdell, 1967.
Feigelstock, S. Additive Groups of Rings Brooklyn, NY: Pitman, 1983.
Fuchs, L. Abelian Groups New York, NY: Academic Press, 1970.
* Gorenstein, Daniel. The Classification of Finite Simple Groups New York, NY: Plenum Press, 1983.
** Gorenstein, Daniel. Finite Simple Groups: An Introduction to Their Classification New York, NY: Plenum Press, 1982.
Grove, L.C. and Benson, C.T. Finite Reflection Groups, New York, NY: Springer-Verlag, 1985. Second Edition.
*** Hall, Marshall, Jr. The Theory of Groups, New York, NY: Chelsea, 1973. Second Edition.
Hall, Marshall, Jr. and Senior, J.K. Groups of Order $2^n$ ($n \le 6$) New York, NY: Macmillan, 1964.
Hill, Victor E. Groups, Representations, and Characters New York, NY: Hafner Press, 1975.
Johnson, D.L. Presentations of Groups New York, NY: Cambridge University Press, 1976.
* Kaplansky, Irving. Infinite Abelian Groups, Ann Arbor, MI: University of Michigan Press, 1969. Revised Edition.
* Kurosh, Alexander G. The Theory of Groups, New York, NY: Chelsea, 1960, 1970. 2 Vols., Second Edition.
** Ledermann, Walter. Introduction to Group Characters, New York, NY: Cambridge University Press, 1977, 1987. Second Edition.
*** Rotman, Joseph J. An Introduction to the Theory of Groups, Needham Heights, MA: Allyn and Bacon, 1965, 1984. Third Edition.
Scott, William R. Group Theory Mineola, NY: Dover, 1987.
Serre, Jean-Pierre. Linear Representations of Finite Groups New York, NY: Springer-Verlag, 1977.
Weinstein, Michael. Examples of Groups Washington, NJ: Polygonal, 1977.
* Weyl, Hermann. The Classical Groups: Their Invariants and Representatives Princeton, NJ: Princeton University Press, 1946.
## Algebra: Rings and Ideals
Cohn, Paul M. Free Rings and Their Relations, New York, NY: Academic Press, 1985. Second Edition.
* Goodearl, K.R. and Warfield, R.B., Jr. An Introduction to Noncommutative Noetherian Rings New York, NY: Cambridge University Press, 1989.
* Herstein, I.N. Rings with Involution Chicago, IL: University of Chicago Press, 1976.
*** Herstein, I.N. Non-Commutative Rings Washington, DC: Mathematical Association of America, 1968.
* Jacobson, Nathan. The Structure of Rings, Providence, RI: American Mathematical Society, 1964. Revised Edition.
Jans, J. Rings and Homology New York, NY: Holt, Rinehart and Winston, 1964.
** Kaplansky, Irving. Fields and Rings, Chicago, IL: University of Chicago Press, 1969, 1974. Revised Second Edition.
Kostrikin, A.I. and Shafarevich, Igor R., eds. Algebra II: Non-Commutative Rings, Identities New York, NY: Springer-Verlag, 1991.
* Lambek, Joachim. Lectures on Rings and Modules, New York, NY: Chelsea, 1976. Second Edition.
* McConnell, J.C. and Robson, J.C. Noncommutative Noetherian Rings New York, NY: John Wiley, 1988.
McCoy, Neal H. Rings and Ideals Washington, DC: Mathematical Association of America, 1948.
McCoy, Neal H. The Theory of Rings New York, NY: Chelsea, 1973.
Passman, Donald S. The Algebraic Structure of Group Rings Melbourne, FL: Robert E. Krieger, 1985.
* Robinson, Abraham. Numbers and Ideals San Francisco, CA: Holden-Day, 1965.
Rowen, Louis Halle. Polynomial Identities in Ring Theory New York, NY: Academic Press, 1980.
* Rowen, Louis Halle. Ring Theory, New York, NY: Academic Press, 1988. 2 Vols.
* Sharpe, David. Rings and Factorization New York, NY: Cambridge University Press, 1987.
Small, Lance W. Noetherian Rings and Their Applications Providence, RI: American Mathematical Society, 1987.
Stenstrom, Bo. Rings of Quotients: An Introduction to Methods of Ring Theory New York, NY: Springer-Verlag, 1975.
## Algebra: Fields and Galois Theory
* Adamson, Iain T. Introduction to Field Theory, New York, NY: Cambridge University Press, 1982. Second Edition.
* Artin, Emil. Galois Theory, Notre Dame, IN: University of Notre Dame Press, 1966. Second Revised Edition.
Brawley, Joel V. and Schnibben, George E. Infinite Algebraic Extensions of Finite Fields Providence, RI: American Mathematical Society, 1989.
* Edwards, Harold M. Galois Theory New York, NY: Springer-Verlag, 1984.
** Gaal, Lisl. Classical Galois Theory with Examples, New York, NY: Chelsea, 1973, 1988. Fourth Edition.
Garling, D.J.H. A Course in Galois Theory New York, NY: Cambridge University Press, 1986.
*** Hadlock, Charles R. Field Theory and Its Classical Problems Washington, DC: Mathematical Association of America, 1978.
Lang, Serge. Cyclotomic Fields I and II, New York, NY: Springer-Verlag, 1978--80, 1990. Second Edition.
* Lidl, Rudolf and Niederreiter, Harald. Introduction to Finite Fields and Their Applications New York, NY: Cambridge University Press, 1986.
* Lieber, Lillian R. Galois and the Theory of Groups: A Bright Star in Mathesis Brooklyn, NY: Galois Institute of Mathematics and Art, 1961.
* McCarthy, Paul J. Algebraic Extensions of Fields, New York, NY: Chelsea, 1976. Second Edition.
Rotman, Joseph J. Galois Theory New York, NY: Springer-Verlag, 1990.
*** Stewart, Ian. Galois Theory, New York, NY: Chapman and Hall, 1989. Second Edition.
## Algebra: Commutative Algebra
** Atiyah, Michael F. and MacDonald, I.G. Introduction to Commutative Algebra Reading, MA: Addison-Wesley, 1969.
* Bourbaki, Nicolas. Elements of Mathematics: Commutative Algebra, New York, NY: Springer-Verlag, 1989.
Hutchins, Harry C. Examples of Commutative Rings Washington, NJ: Polygonal, 1981.
* Kaplansky, Irving. Commutative Rings, Boston, MA: Allyn and Bacon, 1974. Revised Edition.
Kunz, E. Introduction to Commutative Algebra and Algebraic Geometry New York, NY: Birkhauser, 1985.
Matsumura, Hideyuki. Commutative Ring Theory New York, NY: Cambridge University Press, 1986.
* Nagata, Masayoshi. Local Rings New York, NY: Interscience, 1962.
*** Zariski, Oscar and Samuel, Pierre. Commutative Algebra, New York, NY: Springer-Verlag, 1975, 1976. 2 Vols.
## Algebra: Homological Algebra
Geramita, Anthony V. and Small, Charles. Introduction to Homological Methods in Commutative Rings Kingston: Queen's University Press, 1976.
* Hilton, Peter J. and Stammbach, U. A Course in Homological Algebra New York, NY: Springer-Verlag, 1971.
** Mac Lane, Saunders. Homology New York, NY: Springer-Verlag, 1963.
Northcott, D.G. A First Course of Homological Algebra New York, NY: Cambridge University Press, 1973, 1980.
** Rotman, Joseph J. An Introduction to Homological Algebra New York, NY: Academic Press, 1979.
## Algebra: Category Theory
* Barr, Michael; and Wells, Charles. Category Theory for Computing Science Hemel Hemstead, UK: Prentice Hall International, 1990.
Blyth, T.S. Categories White Plains, NY: Longman, 1986.
* Herrlich, Horst and Strecker, George E. Category Theory: An Introduction, Berlin: Heldermann Verlag, 1979. Second Edition.
Lambek, Joachim and Scott, P.J. Introduction to Higher Order Categorical Logic New York, NY: Cambridge University Press, 1986, 1988.
** Mac Lane, Saunders. Categories for the Working Mathematician New York, NY: Springer-Verlag, 1971.
Pareigis, B. Categories and Functors New York, NY: Academic Press, 1970.
## Algebra: Lie Algebras
Humphreys, James E. Introduction to Lie Algebras and Representation Theory New York, NY: Springer-Verlag, 1972.
* Jacobson, Nathan. Lie Algebras Mineola, NY: Dover, 1979.
* Kaplansky, Irving. Lie Algebras and Locally Compact Groups Chicago, IL: University of Chicago Press, 1971.
* Samelson, Hans. Notes on Lie Algebras New York, NY: Springer-Verlag, 1990.
Winter, David J. Abstract Lie Algebras Cambridge, MA: MIT Press, 1972.
## Algebra: Universal Algebra
Burris, Stanley and Sankappanavar, H.P. A Course in Universal Algebra New York, NY: Springer-Verlag, 1981.
* Cohn, Paul M. Universal Algebra Norwell, MA: D. Reidel, 1981.
* Gratzer, George. Universal Algebra, New York, NY: Springer-Verlag, 1979. Second Edition.
## Algebra: Special Topics
Artin, Emil. Geometric Algebra New York, NY: John Wiley, 1957.
* Bachmann, Friedrich; Schmidt, Eckart; and Garner, Cyril W.L. n-gons Toronto: University of Toronto Press, 1975.
Bass, Hyman. Algebraic $K$-Theory Redwood City, CA: Benjamin Cummings, 1968.
* Halmos, Paul R. Lectures on Boolean Algebras New York, NY: Springer-Verlag, 1974.
Meldrum, J.D.P. Near-rings and Their Links with Groups Brooklyn, NY: Pitman, 1985.
Monk, J. Donald and Bonnet, Robert, eds. Handbook of Boolean Algebras, Amsterdam: North-Holland, 1989. 3 Vols.
* Montgomery, Susan, et al., eds. Selected Papers on Algebra Washington, DC: Mathematical Association of America, 1977.
Sikorski, R. Boolean Algebras New York, NY: Springer-Verlag, 1960.
Silvester, John R. Introduction to Algebraic K-Theory New York, NY: Chapman and Hall, 1981. | 2013-06-20 10:11:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4178633987903595, "perplexity": 7179.000615999929}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711406217/warc/CC-MAIN-20130516133646-00060-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://people.smp.uq.edu.au/MatthewDavis/matts_arXiv/mailings/0061.html | # Matt's arXiv selection: week ending 23 February 2006.
From: Matthew Davis <mdavis_at_physics.uq.edu.au>
Date: Thu, 8 Mar 2007 16:06:46 +1000 (EST)
The following message was sent to the matts_arxiv list by Matthew Davis <mdavis_at_physics.uq.edu.au>
Hi subscribers,
My apologies for the lateness of this week's email. The semester started here
at the beginning of last week, and this year I've had some previously unfamiliar
duties that have soaked up all my time and then some. Firstly I developed a
contextualised lab on projectile motion based around frogs (for the biologists),
and then I've had to pick up supervising the proper first year physics labs on
top of running our stat mech course. It's been interesting given that I think
the last time I tried to do some experimental physics was some saturated
absorption spectroscopy in my own undergraduate degree. (Apart from being the
person looking through the IR viewer when we first got the Rb MOT running in the
UQ lab back in 2002, which doesn't really count I think!)
This week (well, almost two weeks ago) sees 18 new abstracts, and 18
replacements:
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702399
Date: Fri, 16 Feb 2007 17:43:31 GMT (589kb)
Title: Introduction to FFLO phases and collective mode in the BEC-BCS crossover
Authors: R. Combescot
Categories: cond-mat.supr-con cond-mat.soft
Comments: 20 pages, to be published in the Proceedings of the 2006 Enrico Fermi
Summer School on "Ultracold Fermi gases", organized by M. Inguscio, W.
Ketterle and C.Salomon (Varenna, Italy, June 2006)
Subj-class: Superconductivity; Soft Condensed Matter
\\
The main focus of this paper is a discussion of what might happen in a BCS
superfluid when there is an imbalance between the two populations of fermionic
particles forming Cooper pairs, having in mind the case of ultracold Fermi
gases. The last part briefly considers the evolution, across the BEC-BCS
crossover, of the collective mode arising in such a superfluid (with balanced
atomic populations).
\\ ( http://arXiv.org/abs/cond-mat/0702399 , 589kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702162
Date: Thu, 15 Feb 2007 22:07:11 GMT (453kb)
Title: Trapping and observing single atoms in the dark
Authors: T. Puppe, I. Schuster, A. Grothe, A. Kubanek, K. Murr, P.W.H. Pinkse,
and G. Rempe
Categories: quant-ph
\\
A single atom strongly coupled to a cavity mode is stored by
three-dimensional confinement in blue-detuned cavity modes of different
longitudinal and transversal order. The vanishing light intensity at the trap
center reduces the light shift of all atomic energy levels. This is exploited
to detect a single atom by means of a dispersive measurement with 90%
confidence in 0.030 ms, limited by the photon-detection efficiency. As the atom
switches resonant cavity transmission into cavity reflection, the atom can be
detected while scattering only a few spontaneous photons.
\\ ( http://arXiv.org/abs/quant-ph/0702162 , 453kb)
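The photon-counting statistics behind such a dispersive detection can be illustrated with a toy Poisson model: an atom in the cavity suppresses the transmitted count rate, and the two hypotheses are discriminated with a count threshold. All numbers below are invented round values, not the paper's:

```python
import math

def poisson_pmf(k, mean):
    """P(K = k) for a Poisson distribution with the given mean."""
    return math.exp(-mean) * mean**k / math.factorial(k)

def detection_confidence(rate_empty, rate_atom, window, threshold):
    """P(correct decision) with equal priors and a simple count threshold:
    declare 'atom present' when the photon count in the window is <= threshold."""
    mean_empty = rate_empty * window
    mean_atom = rate_atom * window
    p_atom_ok = sum(poisson_pmf(k, mean_atom) for k in range(threshold + 1))
    p_empty_ok = 1.0 - sum(poisson_pmf(k, mean_empty) for k in range(threshold + 1))
    return 0.5 * (p_atom_ok + p_empty_ok)

# Invented numbers: 500 kcounts/s with an empty cavity, 50 kcounts/s with an
# atom inside, counted over a 30 us window.
conf = detection_confidence(500e3, 50e3, 30e-6, threshold=7)
print(f"confidence of a single shot: {conf:.3f}")  # ~0.99
```

Longer windows sharpen the two Poisson distributions and push the confidence toward 1, which is why the quoted detection time is set by the photon-detection efficiency.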
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702168
Date: Fri, 16 Feb 2007 13:39:10 GMT (339kb)
Title: Enhanced Spontaneous Emission Into The Mode Of A Cavity QED System
Authors: M. L. Terraciano, R. Olson Knell, D. L. Freimund, L. A. Orozco, J. P.
Clemens, and P. R. Rice
Categories: quant-ph
Comments: 9 pages, 2 figures, to appear in May 2007 Optics Letters
\\
We study the light generated by spontaneous emission into a mode of a cavity
QED system under weak excitation of the orthogonally polarized mode. Operating
in the intermediate regime of cavity QED with comparable coherent and
decoherent coupling constants, we find an enhancement of the emission into the
undriven cavity mode by more than a factor of 18.5 over that expected by the
solid angle subtended by the mode. A model that incorporates three atomic
levels and two polarization modes quantitatively explains the observations.
\\ ( http://arXiv.org/abs/quant-ph/0702168 , 339kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702170
Date: Fri, 16 Feb 2007 17:47:47 GMT (396kb)
Title: Quantum states for Heisenberg limited interferometry
Authors: H. Uys and P. Meystre
Categories: quant-ph
\\
The phase resolution of interferometers is limited by the so-called
Heisenberg limit, which states that the optimum phase sensitivity is inversely
proportional to the number of interfering particles $N$, a $1/\sqrt{N}$
improvement over the standard quantum limit. We have used simulated annealing,
a global optimization strategy, to systematically search for quantum
interferometer input states that approach the Heisenberg limited uncertainty in
estimates of the interferometer phase shift. We compare the performance of
these states to that of other non-classical states already known to yield
Heisenberg limited uncertainty.
\\ ( http://arXiv.org/abs/quant-ph/0702170 , 396kb)
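The scaling claim here is easy to see numerically. A minimal sketch, with all prefactors set to 1 (real interferometers differ by constants of order unity):

```python
import math

# Toy illustration of the two scalings:
#   standard quantum limit (uncorrelated particles): dphi ~ 1/sqrt(N)
#   Heisenberg limit (optimally entangled input):    dphi ~ 1/N

def sql_uncertainty(n):
    """Phase uncertainty at the standard quantum limit."""
    return 1.0 / math.sqrt(n)

def heisenberg_uncertainty(n):
    """Phase uncertainty at the Heisenberg limit."""
    return 1.0 / n

for n in (100, 10_000, 1_000_000):
    gain = sql_uncertainty(n) / heisenberg_uncertainty(n)  # equals sqrt(N)
    print(f"N = {n:>9}: SQL {sql_uncertainty(n):.1e}, "
          f"HL {heisenberg_uncertainty(n):.1e}, gain {gain:.0f}x")
```

The ratio of the two grows as sqrt(N), which is the "$1/\sqrt{N}$ improvement" quoted in the abstract.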
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702175
Date: Fri, 16 Feb 2007 22:58:12 GMT (6777kb)
Title: On the Transport of Atomic Ions in Linear and Multidimensional Ion Trap
Arrays
Authors: D. Hucul, M. Yeo, S. Olmschenk, C. Monroe, W.K. Hensinger, J. Rabchuk
Categories: quant-ph
\\
Trapped atomic ions have become one of the most promising architectures for a
quantum computer, and current effort is now devoted to the transport of trapped
ions through complex segmented ion trap structures in order to scale up to much
larger numbers of trapped ion qubits. This paper covers several important
issues relevant to ion transport in any type of complex multidimensional rf
(Paul) ion trap array. We develop a general theoretical framework for the
application of time-dependent electric fields to shuttle laser-cooled ions
along any desired trajectory, and describe a method for determining the effect
of arbitrary shuttling schedules on the quantum state of trapped ion motion. In
addition to the general case of linear shuttling over short distances, we
introduce issues particular to the shuttling through multidimensional
junctions, which are required for the arbitrary control of the positions of
large arrays of trapped ions. This includes the transport of ions around a
corner, through a cross or T junction, and the swapping of positions of
multiple ions in a laser-cooled crystal. Where possible, we make connection to
recent experimental results in a multidimensional T junction trap, where
arbitrary 2-dimensional transport was realized.
\\ ( http://arXiv.org/abs/quant-ph/0702175 , 6776kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702176
Date: Sat, 17 Feb 2007 00:44:32 GMT (193kb)
Title: Quantum theory of degenerate $\chi^{(3)}$ two-photon state
Authors: Jun Chen, Kim Fook Lee, and Prem Kumar
Categories: quant-ph
\\
We developed a quantum theory for the degenerate $\chi^{(3)}$ two-photon
state generated in an optical fiber, and compared the theory's predictions
with an experimental result which exhibits a Hong-Ou-Mandel dip visibility of
around 94%. Excellent agreement between theory and experiment has been
achieved, and we attribute the missing 6% visibility mainly to spatial mode
mismatch between the signal and idler photons at the beamsplitter.
\\ ( http://arXiv.org/abs/quant-ph/0702176 , 193kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702412
Date: Sat, 17 Feb 2007 14:33:24 GMT (484kb)
Title: Mass flows and angular momentum density for $p_x+ip_y$ paired fermions
in a harmonic trap
Authors: Michael Stone, Inaki Anduaga
Categories: cond-mat.supr-con
Subj-class: Superconductivity
\\
We present a simple two-dimensional model of a $p_x+ip_y$ superfluid in which
the mass flow that gives rise to the intrinsic angular momentum is easily
calculated by numerical diagonalization of the Bogoliubov-de Gennes operator.
We find that, at zero temperature and for constant director $\bf l$, the mass
flow closely follows the Ishikawa-Mermin-Muzikar formula ${\bf j}_{\rm mass}= \frac 12 {\rm curl} (\rho \hbar {\bf l}/2)$.
\\ ( http://arXiv.org/abs/cond-mat/0702412 , 484kb)
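For a constant director ${\bf l} = \hat z$, the Ishikawa-Mermin-Muzikar expression reduces to an edge current proportional to $\nabla\rho \times \hat z$, flowing along contours of constant density. A quick numerical check of that identity for a model Gaussian density (a sketch only, with $\hbar = 1$; the density profile is made up, not the paper's):

```python
import math

hbar = 1.0  # work in units where hbar = 1

def rho(x, y):
    """Model 2D density profile (a Gaussian); stands in for the condensate density."""
    return math.exp(-(x**2 + y**2))

def mass_current(x, y, h=1e-5):
    """j = (1/2) curl(rho * hbar * l / 2) for constant director l = z-hat.
    In two dimensions this reduces to
        j_x = (hbar/4) * d(rho)/dy,   j_y = -(hbar/4) * d(rho)/dx,
    evaluated here by central finite differences."""
    drho_dx = (rho(x + h, y) - rho(x - h, y)) / (2 * h)
    drho_dy = (rho(x, y + h) - rho(x, y - h)) / (2 * h)
    return (hbar / 4) * drho_dy, -(hbar / 4) * drho_dx

# The current is perpendicular to grad(rho), i.e. it circulates along contours
# of constant density -- the edge current carrying the intrinsic angular momentum.
jx, jy = mass_current(0.5, 0.8)
gx = -2 * 0.5 * rho(0.5, 0.8)  # analytic d(rho)/dx at (0.5, 0.8)
gy = -2 * 0.8 * rho(0.5, 0.8)  # analytic d(rho)/dy at (0.5, 0.8)
print(f"j = ({jx:.4f}, {jy:.4f}),  j . grad(rho) = {jx * gx + jy * gy:.2e}")
```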
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702431
Date: Mon, 19 Feb 2007 10:34:18 GMT (77kb)
Title: Excitations in a non-equilibrium Bose-Einstein condensate of
exciton-polaritons
Authors: M. Wouters and I. Carusotto
Categories: cond-mat.other
Subj-class: Other
\\
We have developed a mean-field model to describe the dynamics of a
non-equilibrium Bose-Einstein condensate of exciton-polaritons in a
semiconductor microcavity. The spectrum of elementary excitations around the
stationary state is analytically studied in different geometries. A diffusive
behaviour of the Goldstone mode is found in the spatially homogeneous case and
new features are predicted for the Josephson effect in a two-well geometry.
\\ ( http://arXiv.org/abs/cond-mat/0702431 , 77kb)
------------------------------------------------------------------------------
\\
Paper: physics/0702146
Date: Sat, 17 Feb 2007 03:10:57 GMT (220kb)
Title: Magneto-electrostatic trapping of ground state OH molecules
Authors: Brian C. Sawyer, Benjamin L. Lev, Eric R. Hudson, Benjamin K. Stuhl,
Manuel Lara, John L. Bohn, Jun Ye
Categories: physics.atom-ph physics.chem-ph
Subj-class: Atomic Physics; Chemical Physics
\\
We report the magnetic confinement of neutral, ground state hydroxyl radicals
(OH) at a density of $\sim3\times10^{3}$ cm$^{-3}$ and temperature of $\sim$30
mK. An adjustable electric field of sufficient magnitude to polarize the OH is
superimposed on the trap in either a quadrupole or homogeneous field geometry.
The OH is confined by an overall potential established via molecular state
mixing induced by the combined electric and magnetic fields acting on the
molecule's electric dipole and magnetic dipole moments, respectively. An
effective molecular Hamiltonian including Stark and Zeeman terms has been
constructed to describe single molecule dynamics inside the trap. Monte Carlo
simulation using this Hamiltonian accurately models the observed trap dynamics
in various trap configurations. Confinement of cold polar molecules in a
magnetic trap, leaving large, adjustable electric fields for control, is an
important step towards the study of low energy dipole-dipole collisions.
\\ ( http://arXiv.org/abs/physics/0702146 , 220kb)
------------------------------------------------------------------------------
\\
Paper: physics/0702154
Date: Sun, 18 Feb 2007 23:31:06 GMT (904kb)
Title: Planar Atom Trap and Magnetic Resonance 'Lens' Designs
Authors: M. Barbic, C. P. Barrett, T. H. Emery, and A. Scherer
Categories: physics.atom-ph physics.med-ph
Comments: 19 text pages, 8 figures
Subj-class: Atomic Physics; Medical Physics
\\
We present various planar magnetic designs that create points above the plane
where the magnitude of the static magnetic field is a local minimum. Structures
with these properties are of interest in the disciplines of neutral atom
confinement, magnetic levitation, and magnetic resonance imaging. Each planar
permanent magnet design is accompanied by the equivalent planar single
non-crossing conductor design. Presented designs fall into three categories
producing: a) zero value magnetic field magnitude point minima, b) non-zero
magnetic field magnitude point minima requiring external bias magnetic field,
and c) self-biased non-zero magnetic field magnitude point minima. We also
introduce the Principle of Amperean Current Doubling in planar perpendicularly
magnetized thin films that can be used to improve the performance of each
permanent magnet design we present. Single conductor current-carrying designs
are suitable for single layer lithographic fabrication, as we experimentally
demonstrate. Finally, we present the case that nanometer scale recording of
perpendicular anisotropy thin magnetic films using presently available data
storage technology can provide the ultimate miniaturization of the presented
designs.
\\ ( http://arXiv.org/abs/physics/0702154 , 904kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702458
Date: Tue, 20 Feb 2007 11:27:27 GMT (142kb)
Title: Semiclassical quantization of the Bogoliubov spectrum
Authors: Andrey R. Kolovsky
Categories: cond-mat.stat-mech
Subj-class: Statistical Mechanics
\\
We analyze the Bogoliubov spectrum of the three-site Bose-Hubbard model with
a finite number of Bose particles by using a semiclassical approach. The
Bogoliubov spectrum is shown to be associated with the low-energy regular
component of the classical Hubbard model. We identify the full set of
integrals of motion of this regular component and, quantizing them, obtain the
energy levels of the quantum system. The critical value of the energy, above
which the regular Bogoliubov spectrum evolves into a chaotic spectrum, is
indicated as well.
\\ ( http://arXiv.org/abs/cond-mat/0702458 , 142kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702462
Date: Tue, 20 Feb 2007 14:49:36 GMT (31kb)
Title: Efimov states near a Feshbach resonance
Authors: P. Massignan and H. T. C. Stoof
Categories: cond-mat.other
Subj-class: Other
\\
We describe three-body collisions in the resonant regime close to a Feshbach
resonance by taking fully into account two-body scattering processes occurring
in both the open and closed channels.
We extract the temperature dependence of the three-body recombination rate,
and find very good agreement with the experimental results of Kraemer et al.
[Nature 440, 315 (2006)] that recently provided the first convincing
observation of Efimov physics.
In addition, we obtain the atom-dimer scattering length, which may be of
relevance in future experiments.
\\ ( http://arXiv.org/abs/cond-mat/0702462 , 31kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702466
Date: Tue, 20 Feb 2007 15:57:14 GMT (149kb)
Title: Spin Drag and Spin-Charge Separation in Cold Fermi Gases
Authors: Marco Polini and Giovanni Vignale
Categories: cond-mat.str-el
Comments: 4 pages, 4 figures, submitted
Subj-class: Strongly Correlated Electrons
\\
Low-energy spin and charge excitations of one-dimensional interacting
fermions are completely decoupled and propagate with different velocities.
These modes however can decay due to several possible mechanisms. In this paper
we expose a new facet of spin-charge separation: not only the speeds but also
the damping rates of spin and charge excitations are different. While the
propagation of long-wavelength charge excitations is essentially ballistic,
spin propagation is intrinsically damped and diffusive. We suggest that cold
Fermi gases trapped inside a tight atomic waveguide offer the opportunity to
measure the spin-drag relaxation rate that controls the broadening of a spin
packet.
\\ ( http://arXiv.org/abs/cond-mat/0702466 , 149kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702193
Date: Tue, 20 Feb 2007 12:01:00 GMT (180kb)
Title: Light scattering from ultracold atoms in optical lattices as an optical
probe of quantum statistics
Authors: Igor B. Mekhov, Christoph Maschler, Helmut Ritsch
Categories: quant-ph
\\
We study off-resonant collective light scattering from ultracold atoms
trapped in an optical lattice. Scattering from different atomic quantum states
creates different quantum states of the scattered light, which can be
distinguished by measurements of the spatial intensity distribution, quadrature
variances, photon statistics, or spectral measurements. In particular,
angle-resolved intensity measurements reflect global statistics of atoms (total
number of radiating atoms) as well as local statistical quantities (single-site
statistics even without an optical access to a single site) and pair
correlations between different sites. As a striking example we consider
scattering from transversally illuminated atoms into an optical cavity mode.
For the Mott insulator state, similar to classical diffraction, the number of
photons scattered into a cavity is zero due to destructive interference, while
for the superfluid state it is nonzero and proportional to the number of atoms.
Moreover, we demonstrate that light scattering into a standing-wave cavity has
a nontrivial angle dependence, including the appearance of narrow features at
angles, where classical diffraction predicts zero.
\\ ( http://arXiv.org/abs/quant-ph/0702193 , 180kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0702194
Date: Tue, 20 Feb 2007 14:21:36 GMT (43kb)
Title: Multiatom cooperative emission following single-photon absorption:
Dicke-state dynamics
Authors: I.E. Mazets and G. Kurizki
Categories: quant-ph
Comments: accepted for J. Phys. B as a Fast Track Communication
\\
We investigate conditions under which multiatom absorption of a single photon
leads to cooperative decay. Our analysis reveals the symmetry properties of the
multiatom Dicke states underlying the cooperative decay dynamics and their
spatio-temporal manifestations, particularly, the forward-directed spontaneous
emission investigated by Scully et al.
\\ ( http://arXiv.org/abs/quant-ph/0702194 , 43kb)
------------------------------------------------------------------------------
\\
Paper: physics/0702161
Date: Tue, 20 Feb 2007 18:41:00 GMT (109kb)
Title: The degenerate Fermi gas with renormalized density-dependent
interactions in the K harmonic approximation
Authors: Seth T. Rittenhouse and Chris H. Greene (Dept. of Physics and JILA)
Categories: physics.atom-ph
Comments: 23 pages, 8 figures, submitted to PRA
Subj-class: Atomic Physics
\\
We present a simple implementation of density-dependent, zero-range
interactions in a degenerate Fermi gas described in hyperspherical
coordinates. The method produces a 1D effective potential which accurately
describes the ground-state energy as a function of the hyperradius (the rms
radius of the two-spin-component gas) throughout the unitarity regime. In the
unitarity regime the breathing mode frequency is found to limit to the
non-interacting value. A dynamical instability, similar to the Bosenova, is
predicted to be possible in gases containing more than three spin components,
for large, negative two-body scattering lengths.
\\ ( http://arXiv.org/abs/physics/0702161 , 73kb)
------------------------------------------------------------------------------
\\
Paper: physics/0702164
Date: Tue, 20 Feb 2007 09:45:36 GMT (376kb)
Title: An experimental study of intermodulation effects in an atomic fountain
frequency standard
Authors: Jocelyne Guéna (METAS, LKB - Lhomond), Gregor Dudle (METAS),
Pierre Thomann (LTF-IMT)
Categories: physics.atom-ph physics.ins-det
Proxy: ccsd hal-00132061
Subj-class: Atomic Physics; Instrumentation and Detectors
\\
The short-term stability of passive atomic frequency standards, especially in
pulsed operation, is often limited by local oscillator noise via
intermodulation effects. We present an experimental demonstration of the
intermodulation effect on the frequency stability of a continuous atomic
fountain clock where, under normal operating conditions, it is usually too
small to observe. To achieve this, we deliberately degrade the phase stability
of the microwave field interrogating the clock transition. We measure the
frequency stability of the locked, commercial-grade local oscillator, for two
modulation schemes of the microwave field: square-wave phase modulation and
square-wave frequency modulation. We observe a degradation of the stability
whose dependence with the modulation frequency reproduces the theoretical
predictions for the intermodulation effect. In particular no observable
degradation occurs when this frequency equals the Ramsey linewidth.
The fountain's frequency stability, presently equal to 2x10^-13 at 1 s, is
limited by atomic shot-noise and therefore could be reduced were the atomic
flux increased.
\\ ( http://arXiv.org/abs/physics/0702164 , 376kb)
------------------------------------------------------------------------------
\\
Paper (*cross-listing*): gr-qc/0702118
Date: Wed, 21 Feb 2007 18:35:48 GMT (111kb)
Title: Is it possible to detect gravitational waves with atom interferometers?
Authors: G. M. Tino, F. Vetrano
Categories: gr-qc physics.atom-ph quant-ph
Subj-class: General Relativity and Quantum Cosmology; Atomic Physics
\\
We investigate the possibility to use atom interferometers to detect
gravitational waves. We discuss the interaction of gravitational waves with an
atom interferometer and analyze possible schemes.
\\ ( http://arXiv.org/abs/gr-qc/0702118 , 111kb)
------------------------------------------------------------------------------
\\
Paper: physics/0702192
Date: Thu, 22 Feb 2007 11:15:06 GMT (776kb)
Title: Ionization of Sodium and Rubidium nS, nP and nD Rydberg atoms by
blackbody radiation
Authors: I.I. Beterov, D.B. Tretyakov, I.I. Ryabtsev, A. Ekers, N.N. Bezuglov
Categories: physics.atom-ph physics.plasm-ph
Comments: 14 pages, 6 figures, 6 tables in Appendix
Subj-class: Atomic Physics; Plasma Physics
\\
Results of theoretical calculations of ionization rates of Rb and Na Rydberg
atoms by blackbody radiation (BBR) are presented. Calculations have been
performed for nS, nP and nD states of Na and Rb, which are commonly used in a
variety of experiments, at principal quantum numbers n=8-65 and at three
ambient temperatures of 77, 300 and 600 K. A peculiarity of our calculations is
that we take into account the contributions of BBR-induced redistribution of
population between Rydberg states prior to photoionization and field ionization
by extraction electric field pulses. The obtained results show that these
phenomena affect both the magnitude of measured ionization rates and shapes of
their dependencies on n. The calculated ionization rates are compared with the
results of our earlier measurements of BBR-induced ionization rates of Na nS
and nD Rydberg states with n=8-20 at 300 K. A good agreement for all states
except nS with n>15 is observed. We also present useful analytical formulae
for quick estimation of BBR ionization rates of Rydberg atoms.
\\ ( http://arXiv.org/abs/physics/0702192 , 776kb)
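As background to why blackbody radiation matters for Rydberg atoms at all: transition frequencies between neighbouring Rydberg levels sit in the microwave range, where the thermal photon occupation $\bar{n} = 1/(e^{\hbar\omega/k_B T} - 1)$ is large at room temperature. A generic illustration (the 100 GHz frequency is a made-up round number, and this is not one of the paper's formulae):

```python
import math

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J / K

def photon_occupation(freq_hz, temp_k):
    """Mean thermal photon number per mode, n = 1 / (exp(hbar*omega/kB*T) - 1)."""
    x = 2.0 * math.pi * freq_hz * HBAR / (KB * temp_k)
    return 1.0 / math.expm1(x)

# A transition at 100 GHz, evaluated at the three ambient temperatures
# considered in the paper:
for t in (77.0, 300.0, 600.0):
    print(f"T = {t:5.0f} K: mean photon number ~ {photon_occupation(100e9, t):.1f}")
```

Tens of thermal photons per mode at 300 K is why BBR-induced redistribution and ionization are significant effects for Rydberg states.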
------------------------------------------------------------------------------
The replacements:
------------------------------------------------------------------------------
\\
Paper: cond-mat/0612670
replaced with revised version Fri, 16 Feb 2007 14:12:18 GMT (29kb)
Title: Anderson Localization of Expanding Bose-Einstein Condensates in Random
Potentials
Authors: Laurent Sanchez-Palencia (LCFIO), David Clément (LCFIO), Pierre
Lugan (LCFIO), Philippe Bouyer (LCFIO), Georgy V. Shlyapnikov (LPTMS), Alain
Aspect (LCFIO)
Categories: cond-mat.other
Proxy: ccsd hal-00122278
Subj-class: Other
\\ ( http://arXiv.org/abs/cond-mat/0612670 , 29kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0603218
replaced with revised version Mon, 19 Feb 2007 17:11:28 GMT (222kb)
Title: A Stern-Gerlach experiment for slow light
Authors: Leon Karpa and Martin Weitz
Categories: quant-ph cond-mat.soft physics.atom-ph
Comments: 11 pages, 3 figures. Nature Physics 2, 332 (2006)
Subj-class: Quantum Physics; Atomic Physics; Soft Condensed Matter
\\ ( http://arXiv.org/abs/quant-ph/0603218 , 222kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0508365
replaced with revised version Sun, 18 Feb 2007 20:08:48 GMT (210kb)
Title: Raman Spectroscopy of Mott insulator states in optical lattices
Authors: P. Blair Blakie
Categories: cond-mat.other
Subj-class: Other
Journal-ref: New Journal of Physics 8, 157 (2006)
\\ ( http://arXiv.org/abs/cond-mat/0508365 , 210kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0606416
replaced with revised version Sun, 18 Feb 2007 15:34:39 GMT (654kb)
Title: Mixing of ultracold atomic clouds by merging of two magnetic traps
Authors: Jesper Fevre Bertelsen, Henrik Kjaer Andersen, Sune Mai, and Michael
Budde
Categories: cond-mat.other physics.atom-ph
Comments: 12 pages, 13 figures. Fig. 10 corrected. Fig. 13 updated with more
points and better statistics. A couple of paragraphs rephrased and typos
corrected. References updated
Subj-class: Other; Atomic Physics
Journal-ref: Phys. Rev. A 75, 013404 (2007)
DOI: 10.1103/PhysRevA.75.013404
\\ ( http://arXiv.org/abs/cond-mat/0606416 , 654kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0607179
replaced with revised version Sat, 17 Feb 2007 02:50:40 GMT (286kb)
Title: Visualization of vortex bound states in polarized Fermi gases at
unitarity
Authors: Hui Hu, Xia-Ji Liu, and Peter D. Drummond
Categories: cond-mat.supr-con cond-mat.stat-mech
Comments: 4 pages, and 4 figures; Published version in PRL
Subj-class: Superconductivity; Statistical Mechanics
Journal-ref: Phys. Rev. Lett. 98, 060406 (2007)
DOI: 10.1103/PhysRevLett.98.060406
\\ ( http://arXiv.org/abs/cond-mat/0607179 , 286kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0607405
replaced with revised version Sat, 17 Feb 2007 02:40:47 GMT (513kb)
Title: Density fingerprint of giant vortices in Fermi gases near a Feshbach
resonance
Authors: Hui Hu and Xia-Ji Liu
Categories: cond-mat.supr-con cond-mat.str-el
Comments: 4 pages and 5 figures; Published version in PRA
Subj-class: Superconductivity; Strongly Correlated Electrons
Journal-ref: Phys. Rev. A 75, 011603(R) (2007)
DOI: 10.1103/PhysRevA.75.011603
\\ ( http://arXiv.org/abs/cond-mat/0607405 , 513kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0610448
replaced with revised version Sat, 17 Feb 2007 02:56:58 GMT (228kb)
Title: Phase diagram of a strongly interacting polarized Fermi gas in one
dimension
Authors: Hui Hu, Xia-Ji Liu, and Peter D. Drummond
Categories: cond-mat.supr-con cond-mat.stat-mech
Comments: 4 pages, 5 figures; title changed; published version in PRL
Subj-class: Superconductivity; Statistical Mechanics
Journal-ref: Phys. Rev. Lett. 98, 070403 (2007)
DOI: 10.1103/PhysRevLett.98.070403
\\ ( http://arXiv.org/abs/cond-mat/0610448 , 228kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702195
replaced with revised version Sat, 17 Feb 2007 02:25:08 GMT (379kb)
Title: Mean field thermodynamics of a spin-polarized spherically trapped Fermi
gas at unitarity
Authors: Xia-Ji Liu, Hui Hu, and Peter D. Drummond
Categories: cond-mat.str-el cond-mat.stat-mech
Comments: 14 pages + 9 figures; Published version in PRA
Subj-class: Strongly Correlated Electrons; Statistical Mechanics
Journal-ref: Phys. Rev. A 75, 023614 (2007)
DOI: 10.1103/PhysRevA.75.023614
\\ ( http://arXiv.org/abs/cond-mat/0702195 , 379kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0608246
replaced with revised version Tue, 20 Feb 2007 10:46:19 GMT (53kb)
Title: Quantum phase transition in a two-dimensional system of dipoles
Authors: G.E. Astrakharchik, J. Boronat, I.L. Kurbakov, Yu.E. Lozovik
Categories: cond-mat.supr-con
Subj-class: Superconductivity
Journal-ref: Phys. Rev. Lett. 98, 060405 (2007)
DOI: 10.1103/PhysRevLett.98.060405
\\ ( http://arXiv.org/abs/cond-mat/0608246 , 53kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0605755
replaced with revised version Wed, 21 Feb 2007 15:42:11 GMT (63kb)
Title: Disorder-Induced Shift of Condensation Temperature for Dilute Trapped
Bose Gases
Authors: Matthias Timmer, Axel Pelster, and Robert Graham
Categories: cond-mat.dis-nn
http://www.theo-phys.uni-essen.de/tp/ags/pelster_dir
Subj-class: Disordered Systems and Neural Networks
Journal-ref: Europhys. Lett. 76, 760-766 (2006)
\\ ( http://arXiv.org/abs/cond-mat/0605755 , 63kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0609212
replaced with revised version Mon, 19 Feb 2007 21:04:45 GMT (229kb)
Title: Exotic Superconducting Phases of Ultracold Atom Mixtures on Triangular
Lattices
Authors: L. Mathey, S.-W. Tsai, A.H. Castro Neto
Categories: cond-mat.supr-con
Comments: 6 pages, 4 figures, extended version
Subj-class: Superconductivity
\\ ( http://arXiv.org/abs/cond-mat/0609212 , 229kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0607546
replaced with revised version Wed, 21 Feb 2007 15:36:34 GMT (13kb)
Title: Bose Condensed Gas in Strong Disorder Potential With Arbitrary
Correlation Length
Authors: Patrick Navez, Axel Pelster, Robert Graham
Categories: cond-mat.dis-nn
http://www.theo-phys.uni-essen.de/tp/ags/pelster_dir
Subj-class: Disordered Systems and Neural Networks
Journal-ref: Appl. Phys. B 86, 395-398 (2007)
\\ ( http://arXiv.org/abs/cond-mat/0607546 , 13kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0608542
replaced with revised version Wed, 21 Feb 2007 17:41:49 GMT (184kb)
Title: Three-boson recombination at ultralow temperatures
Authors: M.T. Yamashita, T. Frederico and Lauro Tomio
Categories: cond-mat.soft
Subj-class: Soft Condensed Matter
Journal-ref: Physics Letters A 363, 468 (2007)
\\ ( http://arXiv.org/abs/cond-mat/0608542 , 184kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702223
replaced with revised version Wed, 21 Feb 2007 18:33:20 GMT (112kb)
Title: Coherent state path integral and super-symmetry for condensates composed
of bosonic and fermionic atoms
Authors: Bernhard Mieck
Categories: cond-mat.stat-mech
Comments: 123 pages; a second part, particularly applicable for d=2 spatial
dimensions, is in preparation
Subj-class: Statistical Mechanics
\\ ( http://arXiv.org/abs/cond-mat/0702223 , 112kb)
------------------------------------------------------------------------------
\\
Paper: quant-ph/0610022
replaced with revised version Wed, 21 Feb 2007 18:29:52 GMT (180kb)
Title: Demonstration of a Tunable-Bandwidth White Light Interferometer using
Anomalous Dispersion in Atomic Vapor
Authors: G.S. Pati, M. Salit, K. Salit, and M.S. Shahriar
Categories: quant-ph
\\ ( http://arXiv.org/abs/quant-ph/0610022 , 180kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0702526
Date: Thu, 22 Feb 2007 16:09:46 GMT (729kb)
Title: Dynamics and superfluidity of an ultracold Fermi gas
Authors: Sandro Stringari
Categories: cond-mat.stat-mech cond-mat.supr-con
Comments: 24 pages, to be published in the Proceedings of the 2006 Enrico Fermi
Summer School on "Ultracold Fermi gases", organized by M. Inguscio, W.
Ketterle and C.Salomon (Varenna, Italy, June 2006)
Subj-class: Statistical Mechanics; Superconductivity
\\
The purpose of this paper is to review some of the dynamic and superfluid
features exhibited by ultracold Fermi gases, with special emphasis on the
effects of the external confinement, which will in most cases be assumed to
be harmonic.
\\ ( http://arXiv.org/abs/cond-mat/0702526 , 729kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0608282
replaced with revised version Thu, 22 Feb 2007 09:38:14 GMT (301kb)
Title: Thermodynamics of the BCS-BEC crossover
Authors: R. Haussmann, W. Rantner, S. Cerrito and W. Zwerger
Categories: cond-mat.stat-mech
Subj-class: Statistical Mechanics
Journal-ref: Phys. Rev. A 75, 023610 (2007)
\\ ( http://arXiv.org/abs/cond-mat/0608282 , 301kb)
------------------------------------------------------------------------------
\\
Paper: cond-mat/0610437
replaced with revised version Thu, 22 Feb 2007 16:54:49 GMT (41kb)
Title: Attractive Fermi gases with unequal spin populations in highly elongated
traps
Authors: G. Orso
Categories: cond-mat.other cond-mat.supr-con
Subj-class: Other; Superconductivity
Journal-ref: Physical Review Letters 98, 070402 (2007)
\\ ( http://arXiv.org/abs/cond-mat/0610437 , 41kb)
------------------------------------------------------------------------------
The next email shouldn't be too far away....
Matt.
--
=========================================================================
Dr M. J. Davis, Senior Lecturer in Physics
School of Physical Sciences, email: mdavis_at_physics.uq.edu.au
University of Queensland, ph : +61 7 334 69824
Brisbane, QLD 4072, fax : +61 7 336 51242
Australia. http://www.physics.uq.edu.au/people/mdavis/
=========================================================================
Matt's arXiv selection: weekly summary of cold-atom papers from arXiv.org
http://www.physics.uq.edu.au/people/mdavis/matts_arXiv/
=========================================================================
Legal stuff: Unless stated otherwise, this e-mail represents only the
views of the sender and not the views of the University of Queensland
=========================================================================
Received on Thu Mar 08 2007 - 16:06:46 EST
This archive was generated by hypermail 2.2.0 : Thu May 08 2008 - 11:51:41 EST
http://cnx.org/content/m11608/latest/ | # Connexions
# Molecular Distance Measures
Module by: Lydia E. Kavraki.
Summary: Given a set of structures of the same molecule, it is often necessary to decide which are more similar or less similar to each other. This module presents a few ways to approach that problem, including root mean squared distance (RMSD), least RMSD, and intramolecular distance measures.
## Comparing Molecular Conformations
Molecules are not rigid. On the contrary, they are highly flexible objects, capable of changing shape dramatically through the rotation of dihedral angles. Each distinct shape of a given molecule is called a conformation. We need a measure to express how much a molecule changes going from one conformation to another, or alternatively, how different two conformations are from each other. Although one could conceivably compute the volume of the intersection of the alpha shapes of two conformations (see Molecular Shapes and Surfaces for an explanation of alpha shapes) to measure the shape change, this is prohibitively expensive computationally. Simpler measures of distance between conformations have been defined, based on variables such as the Cartesian coordinates of each atom, or the bond and torsion angles within the molecule. When working with Cartesian coordinates, one can represent a molecular conformation as a vector whose components are the Cartesian coordinates of the molecule's atoms. Therefore, a conformation for a molecule with N atoms can be represented as a 3N-dimensional vector of real numbers.
## RMSD and lRMSD
One of the most widely accepted difference measures for conformations of a molecule is least root mean square deviation (lRMSD). To calculate the RMSD of a pair of structures (say x and y), each structure must be represented as a 3N-length (assuming N atoms) vector of coordinates. The RMSD is the square root of the average of the squared distances between corresponding atoms of x and y. It is a measure of the average atomic displacement between the two conformations:

RMSD(x, y) = sqrt( (1/N) * sum_{i=1..N} ||x_i - y_i||^2 )

where x_i and y_i are the positions of the i-th atom in each structure.
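As a sketch, the (unaligned) RMSD can be computed directly from two coordinate arrays. This assumes NumPy is available; `rmsd` is an illustrative name, not code from the module:

```python
import numpy as np

def rmsd(x, y):
    """Root mean square deviation between two conformations.

    x, y: arrays of shape (N, 3) holding the Cartesian coordinates of
    corresponding atoms. No alignment is performed here.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    diff = x - y                                # per-atom displacement vectors
    return np.sqrt((diff ** 2).sum() / len(x))  # sqrt of mean squared distance

# Two 3-atom "conformations": y is x shifted by 1 unit along z
x = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
y = [[0, 0, 1], [1, 0, 1], [0, 1, 1]]
print(rmsd(x, y))  # → 1.0
```

Because no alignment is done, a pure rigid-body shift like the one above already gives a nonzero RMSD, which is exactly the problem the lRMSD addresses.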
However, when molecular conformations are sampled from molecular dynamics or other forms of sampling, it is often the case that the molecule drifts away from the origin and rotates in an arbitrary way. The lRMSD distance aims at compensating for these facts by representing the minimum RMSD over all possible relative positions and orientations of the two conformations under consideration. Calculating the lRMSD consists of first finding an optimal alignment of the two structures, and then calculating their RMSD. Note that aligning two conformations may require both a translation and rotation. In other words, before computing the RMSD distance, it is necessary to remove the translation of the centroid of both conformations and to perform an "optimal alignment" or "optimal rotation" of them, since these two factors artificially increase the RMSD distance between them.
Finding the optimal rotation to minimize the RMSD between two point sets is a well-studied problem, and several algorithms exist. The Kabsch Algorithm [1][2], which is implemented in several molecular modeling packages, solves a matrix equation for the three dimensional rotation matrix corresponding to the optimal rotation. An alternative approach, discussed in detail after the matrix method, uses a compact representation of rotational transformations called quaternions [3][4]. Quaternions are currently the preferred representation for global rotation in calculating lRMSD, since they require fewer numbers to be stored and are easy to re-normalize. In contrast, re-normalization of orthonormal matrices is quite expensive and potentially numerically unstable. Both quaternions and their application to global alignment of conformations will be presented after the next section.
## Optimal Alignment for lRMSD Using Rotation Matrices
This section presents a method for computing the optimal rotation between two point sets as an orthonormal rotation matrix. As stated earlier, this approach is slightly more numerically unstable (since guaranteeing the orthonormality of a matrix is harder than guaranteeing the unit length of a quaternion) and requires taking care of the special case when the resulting matrix may not be a proper rotation, as discussed below.
As stated earlier, the optimal alignment requires both a translation and a rotation. The translational part of the alignment is easy to calculate. It can be proven that the optimal alignment is obtained by translating one set so that its centroid coincides with the other set's centroid (see section 2-C of [3] for proof). The centroid of a point set a is simply the average position of all its points:

centroid(a) = (1/N) * sum_{i=1..N} a_i
We can then redefine each point in the two sets A and B as a deviation from its centroid, a'_i = a_i - centroid(A) and b'_i = b_i - centroid(B). Given this notation relative to the centroid, we can explicitly set the centroids to be equal and proceed with the rotational part of the alignment.
One of the first references to the solution of this problem in matrix form is from Kabsch [1][2]. The Kabsch method uses Lagrange multipliers to solve a minimization problem to find the optimal rotation. Here, we present a slightly more intuitive method based on matrix algebra and properties that achieves the same result. This formulation can be found in [4] and [5]. Imagine we wish to align two conformations composed of N atoms each, whose Cartesian coordinates are given by the vectors x and y. The main idea behind this approach is to find a 3x3 orthonormal matrix U such that the application of U to the atom positions of one of the data vectors, x, aligns it as best as possible with the other data vector, y, in the sense that the quantity to minimize is the distance d(Ux, y). Here x and y are assumed to be centered, that is, both their centroids coincide at the origin (centering both conformations is the first step). Mathematically, this problem can be stated as the minimization of the following quantity:

E = (1/N) * sum_{i=1..N} ||U x_i - y_i||^2
When E is a minimum, the square root of its value becomes the least RMSD (lRMSD) between x and y. Being an orthonormal rotation matrix, U needs to satisfy the orthonormality property U U^T = I, where I is the identity matrix. The orthonormality constraint ensures that the rows and columns are mutually orthogonal, and that their length (as vectors) is one. Any orthonormal matrix represents a rigid orientation (transformation) in space. The only problem with this approach as is, is that all orthonormal matrices encode a rigid transformation, but if the rows/columns of the matrix do not constitute a right-handed system, then the rotation is said to be improper. In an improper rotation, one of the three directions may be "mirrored". Fortunately, this case can be detected easily by computing the determinant of the matrix U and, if it is negative, correcting the matrix. Denoting Ux as x', and moving the constant factor N to the left, the formula for the error becomes:

N E = sum_{i=1..N} ||x'_i - y_i||^2
An alternative way to represent the two point sets, rather than as one-dimensional vectors or as separate atom coordinates, is using two 3xN matrices (N atoms, 3 coordinates each). Using this scheme, x is represented by the matrix X and y is represented by the matrix Y. Note that column i (1 <= i <= N) in these matrices stands for point (atom) x_i and y_i, respectively. Using this new representation, we can write:

N E = sum_{i=1..N} ||x'_i - y_i||^2 = Tr( (X' - Y)^T (X' - Y) )
where X' = UX and Tr(A) stands for the trace of matrix A, the sum of its diagonal elements. It is easy to see that the trace of the matrix on the right amounts precisely to the sum on the left (simply carrying out the multiplication of the first row/column should convince the reader). The right-hand side of the equation can be expanded into:

Tr( (X' - Y)^T (X' - Y) ) = Tr(X'^T X') + Tr(Y^T Y) - 2 Tr(Y^T X')
This follows from the properties of the trace operator, namely: Tr(A+B) = Tr(A) + Tr(B), Tr(AB) = Tr(BA), Tr(A^T) = Tr(A), and Tr(kA) = k Tr(A). Furthermore, the first two terms in the expansion above represent the sum of the squares of the components of x_i and y_i, so it can be rewritten as:

N E = sum_{i=1..N} ( ||x_i||^2 + ||y_i||^2 ) - 2 Tr(Y^T X')
Note that the x components do not need to be primed (i.e., written x') since the rotation U around the origin does not change the length of x_i. Note also that the summation above does not depend on U, so minimizing E is equivalent to maximizing Tr(Y^T X'). For this reason, the rest of the discussion focuses on finding a proper rotation matrix U that maximizes Tr(Y^T X'). Remembering that X' = UX, the quantity to maximize is then Tr(Y^T U X). From the property of the trace operator, this is equivalent to Tr( (X Y^T) U ). Since X Y^T is a square 3x3 matrix, it can be decomposed through the Singular Value Decomposition technique (SVD) into X Y^T = V S W^T, where V and W^T are the matrices of left and right singular vectors (which are orthonormal matrices), respectively, and S is a diagonal 3x3 matrix containing the singular values s1, s2, s3 in decreasing order. Again from the properties of the trace operator, we obtain that:

Tr( (X Y^T) U ) = Tr( (V S W^T) U ) = Tr( S (W^T U V) )
If we introduce the 3x3 matrix T as the product T = W^T U V, we can rewrite the above expression as:

Tr( S T ) = s1 T11 + s2 T22 + s3 T33 <= s1 + s2 + s3
Since T is the product of orthonormal matrices, it is itself an orthonormal matrix and det(T) = +/-1. This means that the absolute value of each element of this matrix is no more than one, from which the last inequality follows. It is obvious that the maximum value of the left-hand side is reached when the diagonal elements of T are equal to 1, and since T is an orthonormal matrix, all other elements must then be zero. This results in T = I. Moreover, since T = W^T U V, we can write W^T U V = I, and because W and V are orthonormal, W W^T = I and V V^T = I. Multiplying W^T U V by W on the left and by V^T on the right yields a solution for U:

U = W V^T
where V and W^T are the matrices of left and right singular vectors, respectively, of the covariance matrix C = X Y^T. This formula ensures that U is orthonormal (the reader should carry out the high-level matrix multiplication and verify this fact).
The only remaining detail to take care of is to make sure that U is a proper rotation, as discussed before. It could indeed happen that det(U) = -1 if its rows/columns do not make up a right-handed system. When this happens, we need to compromise between two goals: maximizing Tr(Y^T X') and respecting the constraint det(U) = +1. Therefore, we need to settle for the second largest value of Tr(Y^T X'). It is easy to see what the second largest value is; since

Tr( S T ) = s1 T11 + s2 T22 + s3 T33, with s1 >= s2 >= s3 >= 0,
then the second largest value occurs when T11 = T22 = +1 and T33 = -1. Now T cannot be the identity matrix as before; instead it has its lower-right element set to -1. This gives a unified way to represent the solution: if det(C) > 0, T is the identity; otherwise, it has a -1 as its last diagonal element. Finally, these facts can be expressed in a single formula for the optimal rotation U by stating:

U = W T V^T, with T = diag(1, 1, d)
where d = sign(det(C)). In the light of the preceding derivation, all the facts that have been presented as a proof can be succinctly put as an algorithm for computing the optimal rotation to align two data sets x and y:
### Optimal rotation
1. Build the 3xN matrices X and Y containing, for the sets x and y respectively, the coordinates of each of the N atoms after centering them by subtracting the centroids.
2. Compute the covariance matrix C = X Y^T
3. Compute the SVD (Singular Value Decomposition) of C: C = V S W^T
4. Compute d = sign(det(C))
5. Compute the optimal rotation U as U = W diag(1, 1, d) V^T
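These five steps can be sketched in a few lines with NumPy's SVD. This is a sketch under the assumption that NumPy is available; `optimal_rotation` is an illustrative name, not code from the module:

```python
import numpy as np

def optimal_rotation(X, Y):
    """SVD solution for the proper rotation U minimizing ||U X - Y||.

    X, Y: 3xN arrays of *centered* coordinates (centroids already
    subtracted), one atom per column. Assumes C is non-singular.
    """
    C = X @ Y.T                    # 3x3 covariance matrix C = X Y^T
    V, S, Wt = np.linalg.svd(C)    # C = V S W^T (Wt is W^T)
    d = np.sign(np.linalg.det(C))  # d = -1 flags an improper rotation
    T = np.diag([1.0, 1.0, d])
    return Wt.T @ T @ V.T          # U = W T V^T

# Check on a known rotation: 90 degrees about the z axis
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
X = np.array([[1.0, 0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0, -1.0],
              [0.0, 0.0, 1.0, -1.0]])  # columns are centered atom positions
Y = R @ X
U = optimal_rotation(X, Y)
print(np.allclose(U, R))  # → True
```

Note that NumPy's `svd` returns the factors as (V, S, W^T) in the notation above, so the final product has to be assembled from the transposes accordingly.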
## Optimal Alignment for lRMSD Using Quaternions
Another way of solving the optimal rotation for the purposes of computing the lRMSD between two conformations is to use quaternions. These provide a very compact way of representing rotations (only 4 numbers as compared to 9 or 16 for a rotation matrix) and are extremely easy to normalize after performing operations on them. Next, a general introduction to quaternions is given, and then they will be used to compute the optimal rotation between two point sets.
### Introduction to Quaternions
Quaternions are an extension of complex numbers. Recall that complex numbers are numbers of the form a + bi, where a and b are real numbers and i is the canonical imaginary number, equal to the square root of -1. Quaternions add two more imaginary numbers, j and k. These numbers are related by the following set of equalities:

i^2 = j^2 = k^2 = ijk = -1
ij = k,  jk = i,  ki = j
ji = -k, kj = -i, ik = -j
These equalities give rise to some unusual properties, especially with respect to multiplication.
Given this definition of i, j, and k, we can now define a quaternion as a number of the form q = q0 + q1 i + q2 j + q3 k, where q0, q1, q2 and q3 are real numbers.
Based on the definitions of i, j and k, we can also derive rules for addition and multiplication of quaternions. Assume we have two quaternions, p and q, defined as follows:

p = p0 + p1 i + p2 j + p3 k
q = q0 + q1 i + q2 j + q3 k

Addition of p and q is fairly intuitive, done component-wise:

p + q = (p0 + q0) + (p1 + q1) i + (p2 + q2) j + (p3 + q3) k

The dot product and magnitude of a quaternion also closely resemble those operations for vectors. Note that a unit quaternion is a quaternion with magnitude 1 under this definition:

p . q = p0 q0 + p1 q1 + p2 q2 + p3 q3
|q| = sqrt(q . q)

Multiplication, however, is not, due to the definitions of i, j, and k:

p q = (p0 q0 - p1 q1 - p2 q2 - p3 q3)
    + (p0 q1 + p1 q0 + p2 q3 - p3 q2) i
    + (p0 q2 - p1 q3 + p2 q0 + p3 q1) j
    + (p0 q3 + p1 q2 - p2 q1 + p3 q0) k

Quaternion multiplication also has two equivalent matrix forms, in which one factor is expanded into a 4x4 matrix multiplying the other factor written as a 4-vector; these will become relevant later in the derivation of the alignment method. Two useful properties of quaternion multiplication, which can be derived from the matrix form or proved by carrying out the products, are:

(p q) . r = p . (r q*)
(p q) . r = q . (p* r)

where q* = q0 - q1 i - q2 j - q3 k denotes the conjugate of q.
### Quaternions and Three-Dimensional Rotations
A number of different methods exist for denoting rotations of rigid objects in three-dimensional space. These are introduced in a module on protein kinematics. Unit quaternions represent a rotation of an angle around an arbitrary axis. A rotation by the angle theta about an axis represented by the unit vector v = [x, y, z] is represented by the unit quaternion:

q = cos(theta/2) + sin(theta/2) (x i + y j + z k)
Like rotation matrices, quaternions may be composed with each other via multiplication. The major advantage of the quaternion representation is that it is more robust to numerical instability than orthonormal matrices. Numerical instability results from the fact that, because computers use a finite number of bits to represent real numbers, most real numbers are actually represented by the nearest number the computer is capable of representing. Over a series of floating point operations, the error caused by this inexact representation accumulates, quite rapidly in the case of repeated multiplications and divisions. In manipulating orthonormal transformation matrices, this can result in matrices that are no longer orthonormal, and therefore not valid rigid transformations. Finding the "nearest" orthonormal matrix to an arbitrary matrix is not a well-defined problem. Unit-length quaternions can accumulate the same kind of a numerical error as rotation matrices, but in the case of quaternions, finding the nearest unit-length quaternion to an arbitrary quaternion is well defined. Additionally, because quaternions correspond more directly to the axis-angle representation of three-dimensional rotations, it could be argued that they have a more intuitive interpretation than rotation matrices. Quaternions, with four parameters, are also more memory efficient than 3x3 matrices. For all of these reasons, quaternions are currently the preferred representation for three-dimensional rotations in most modeling applications.
Vectors can be represented as purely imaginary quaternions, that is, quaternions whose scalar component is 0. The quaternion corresponding to the vector v = [x, y, z] is q = xi + yj + zk.
We can perform rotation of a vector in quaternion notation as follows: if v is the purely imaginary quaternion corresponding to the vector and q is a unit rotation quaternion, the rotated vector is

v' = q v q*

where q* is the conjugate of q.
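A minimal pure-Python sketch of the quaternion product and of the rotation v' = q v q* (function names are illustrative):

```python
import math

def quat_mul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def quat_conj(q):
    """Conjugate: negate the imaginary (i, j, k) components."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via v' = q v q*."""
    vq = (0.0,) + tuple(v)  # embed v as a purely imaginary quaternion
    w, x, y, z = quat_mul(quat_mul(q, vq), quat_conj(q))
    return (x, y, z)

# 90-degree rotation about the z axis: q = cos(45) + sin(45) k
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotate(q, (1.0, 0.0, 0.0)))  # ≈ (0.0, 1.0, 0.0)
```

The real part of q v q* always comes out zero for a unit q, so dropping it and returning the imaginary triple recovers the rotated vector.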
### Optimal Alignment with Quaternions
The method presented here is from Berthold K. P. Horn, "Closed-form solution of absolute orientation using unit quaternions," Journal of the Optical Society of America A, 4:629-642 [3].
The alignment problem may be stated as follows:
• We have two sets of points (atoms) A and B for which we wish to find an optimal alignment, defined as the alignment for which the root mean square difference between each point in A and its corresponding point in B is minimized.
• We know which point in A corresponds to which point in B. This is necessary for any RMSD-based method.
As for the case of rotation matrices, the translational part of the alignment consists of making the centroids of the two data sets coincide. To find the optimal rotation using quaternions, recall that the dot product of two vectors is maximized when the vectors point in the same direction. The same is true when the vectors are represented as quaternions. Using this property, we can define a quantity to maximize over all unit quaternions q:

sum_{i=1..N} (q a_i q*) . b_i

where a_i and b_i are the purely imaginary quaternions corresponding to the centered points of A and B.
Equivalently, using the last property from the section "Introduction to quaternions", we get:

sum_{i=1..N} (q a_i) . (b_i q)

Now, recall that quaternion multiplication can be represented by matrices, and that the quaternions a_i and b_i have a 0 real component, so that q a_i = A_i q and b_i q = B_i q for suitable 4x4 matrices A_i and B_i built from a_i and b_i. Using these matrices, we can derive a new form for the objective function:

sum_{i=1..N} (A_i q) . (B_i q) = q^T N q, where N = sum_{i=1..N} A_i^T B_i

The quaternion that maximizes this product is the eigenvector of N that corresponds to its most positive eigenvalue. The eigenvalues can be found by solving det(N - lambda I) = 0, which is quartic in lambda. This quartic equation can be solved by a number of standard approaches. Finally, given the maximum eigenvalue lambda_max, the quaternion corresponding to the optimal rotation is the eigenvector v satisfying

(N - lambda_max I) v = 0

A closed-form solution to this equation for v can be found by applying techniques from linear algebra. One possible algorithm, based on constructing a matrix of cofactors, is presented in appendix A5 of the source paper [3].
In summary, the alignment algorithm works as follows:
• Recalculate atom coordinates as displacements from the centroid of each molecule. The optimal translation superimposes the centroids.
• Construct the matrix N based on matrices A and B for each atom.
• Find the maximum eigenvalue by solving the quartic eigenvalue equation.
• Find the eigenvector corresponding to this eigenvalue. This vector is the quaternion corresponding to the optimal rotation.
This method appears computationally intensive, but has the major advantage over other approaches of being a closed-form, unique solution.
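In practice the quartic can be sidestepped with a symmetric eigensolver. The sketch below assumes NumPy; the explicit entries of N in terms of S = sum_i a_i b_i^T follow Horn's paper [3] rather than being spelled out in the text above, and the function names are illustrative:

```python
import numpy as np

def horn_quaternion(A, B):
    """Unit quaternion (w, x, y, z) rotating point set A onto point set B.

    A, B: 3xN arrays of centered coordinates. Builds the symmetric 4x4
    matrix N from S = sum_i a_i b_i^T and returns its top eigenvector.
    """
    S = A @ B.T
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    vals, vecs = np.linalg.eigh(N)  # eigenvalues in ascending order
    return vecs[:, -1]              # eigenvector of the most positive one

def quat_to_matrix(q):
    """Rotation matrix corresponding to the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

# Recover a known rotation (30 degrees about z) from three point pairs
t = np.pi / 6
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  2.0]])  # columns are three non-collinear points
B = R @ A
q = horn_quaternion(A, B)
print(np.allclose(quat_to_matrix(q), R))  # → True
```

The eigenvector is only determined up to sign, but q and -q encode the same rotation, so the recovered matrix is unaffected.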
## Intramolecular Distance and Related Measures
RMSD and lRMSD are not ideally suited for all applications. For example, consider the case of a given conformation A, and a set S of other conformations generated by some means. The goal is to estimate which conformations in S are closest in potential energy to A, making the assumption that they will be the conformations most structurally similar to A. The lRMSD measure will find the conformations in which the overall average atomic displacement is least. The problem is that if the quantity of interest is the potential energy of conformations, not all atoms can be treated equally. Those on the outside of the protein can often move a fair amount without dramatically affecting the energy. In contrast, the core of the molecule tends to be more compact, and therefore a slight change in the relative positions of a pair of atoms could lead to overlap of the atoms, and therefore a completely infeasible structure and high potential energy. A class of distance measures and pseudo-measures based on intramolecular distances has been developed to address this shortcoming of RMSD-based measures.
Assume we wish to compare two conformations P and Q of a molecule with N atoms. Let p_ij be the distance between atom i and atom j in conformation P, and let q_ij be the same distance for conformation Q. Then the intramolecular distance is defined as the root mean square difference over all atom pairs:

dRMSD(P, Q) = sqrt( (1/M) * sum_{i<j} (p_ij - q_ij)^2 ), where M = N(N-1)/2 is the number of pairs
One of the main computational advantages of this class of approaches is that we do not have to compute the alignment between P and Q. On the other hand, this metric requires summing over a quadratic number of terms, whereas for RMSD the number of terms is linear in the number of atoms. Approximations can be made to speed up this computation, as shown in [7]. Also, the intramolecular distance measure given above, which is sometimes referred to as the dRMSD, is subject to the problem that the pairs of atoms most distant from each other are the ones that contribute the greatest amount to the measured difference.
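One way this could look in NumPy (a sketch; it uses the root mean square over all atom pairs, one common normalization of the dRMSD, and `drmsd` is an illustrative name):

```python
import numpy as np

def drmsd(P, Q):
    """Intramolecular (pairwise-distance) RMSD between conformations P and Q.

    P, Q: (N, 3) coordinate arrays. Compares the N*(N-1)/2 internal
    distances p_ij and q_ij; no alignment of the two structures is needed.
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    i, j = np.triu_indices(len(P), k=1)      # all index pairs with i < j
    p = np.linalg.norm(P[i] - P[j], axis=1)  # internal distances p_ij
    q = np.linalg.norm(Q[i] - Q[j], axis=1)  # internal distances q_ij
    return np.sqrt(np.mean((p - q) ** 2))

# A rigid translation changes no internal distance, so the dRMSD is zero
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 2, 0], [0, 0, 3]])
Q = P + np.array([5.0, -2.0, 1.0])
print(drmsd(P, Q))  # → 0.0
```

This invariance under rigid motion is exactly why no superposition step is required, at the cost of the quadratic number of pair terms noted above.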
An interesting open problem is to come up with a physically meaningful molecular distance metric that allows for fast nearest neighbor computations. This can be useful, for example, for clustering conformations. One proposed method is the contact distance. Contact distance requires constructing a contact map matrix for each conformation indicating which pairs of atoms are less than some threshold separation. The distance measure is then a measure of the difference of the contact maps.
Other distance measures attempt to weight each pair in the dRMSD based on how close the atoms are, with closer pairs given more weight, in keeping with the intuition that small changes in the relative positions of nearby atoms are more likely to result in collisions. One such measure is the normalized Holm and Sander Score. This score is technically a pseudo-measure rather than a measure because it does not necessarily obey the triangle inequality.
The definition of distance measures remains an open problem. For reference on ongoing work, see articles that compare several methods, such as [6].
The first two papers are the original descriptions of the Kabsch Algorithm, and use rotations represented as orthonormal matrices to find the correct rotational transformation. Many software packages use this alignment method. The third and fourth papers use quaternions. The alignment method presented in the previous section comes from the third paper:
## References
1. Kabsch, W. (1976). A Solution for the Best Rotation to Relate Two Sets of Vectors. Acta Crystallographica, 32, 922-923.
2. Kabsch, W. (1978). A Discussion of the Solution for the Best Rotation to Relate Two Sets of Vectors. Acta Crystallographica, 34, 827-828.
3. Horn, Berthold K. P. (1987). Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America A, 4, 629-642.
4. Coutsias, E. A., C. Seok and K. A. Dill. (2004). Using quaternions to calculate RMSD. Journal of Computational Chemistry, 25, 1849-1857.
5. Golub, G. H. and Van Loan, C. F. (1996). Matrix Computations. (third). Johns Hopkins University Press.
6. Wallin, S., J. Farwer and U. Bastolla. (2003). Testing similarity measures with continuous and discrete protein models. Proteins, 50, 144-157.
7. Schwarzer, F. and Lotan, I. (2003). Approximation of protein structure for fast similarity measures. ACM. Proceedings of the seventh annual international conference on research in computational molecular biology.
https://www.vcalc.com/wiki/MichaelBartmess/Final+Velocity+%28from+constant+a%29

# Final Velocity (from constant a)
This equation computes the square of the final velocity that a body would achieve after traveling in a straight line some distance at constant acceleration. This is an illustrative step in calculating the actual V_f based on acceleration and displacement. See the derivation below.
The remaining step, taking the square root of both sides of this equation, happens in the sister equation.
## INPUTS
• x_i - the initial displacement
• x_f - the final displacement
• a - the constant acceleration
• V_i - the initial velocity
## DERIVATION
Since acceleration is constant, we know that the final velocity is the sum of the initial velocity and the velocity increase due to the acceleration. In other words:
[1] V_f = V_i + a * t
We also know that the distance traveled, d, is the sum of the distance the object would travel at its starting velocity, V_i, plus the distance it would travel while increasing velocity from V_i to V_f:
[2] D = (V_i * t) + (1/2 * (V_f - V_i) * t)
[3] D = t * (V_i + 1/2 * V_f - 1/2 * V_i)
[4] D = t * 1/2 (V_i + V_f)
[5] => t = (2 * D) / (V_i + V_f)
Substituting [5] into [1]:
[6] V_f = V_i + a * ((2 * D) / (V_i + V_f))
Multiplying both sides by (V_i + V_f):
[7] V_i * V_f + V_f^2 = V_i^2 + V_i * V_f + 2*a*D
Cancelling the term V_i * V_f:
[8] V_f^2 = V_i^2 + 2*a*D, where D = x_f - x_i
[9] V_f^2 = V_i^2 + 2*a*(x_f - x_i)
This equation [9] computes the resultant V_f^2, which is not useful in most cases, so we take the square root to obtain the final velocity:
[10] V_f = sqrt(V_i^2 + 2*a*(x_f - x_i))
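Equation [10] can be checked numerically with a small function (an illustrative sketch; `final_velocity` is not a name from the page):

```python
import math

def final_velocity(v_i, a, x_i, x_f):
    """V_f = sqrt(V_i^2 + 2*a*(x_f - x_i)), equation [10] above."""
    return math.sqrt(v_i**2 + 2.0 * a * (x_f - x_i))

# A body starting at 3 m/s and accelerating at 2 m/s^2 over 4 m:
# V_f^2 = 9 + 2*2*4 = 25, so V_f = 5 m/s
print(final_velocity(3.0, 2.0, 0.0, 4.0))  # → 5.0
```

Note that time never appears in the call: the whole point of the derivation is that t was eliminated between equations [1] and [5].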
See also: Khan Academy's "Average velocity for constant acceleration".
https://ntnuopen.ntnu.no/ntnu-xmlui/handle/11250/237302?show=full | dc.contributor.advisor Hellevik, Leif Rune nb_NO dc.contributor.author Dæhli, Lars Edvard nb_NO dc.date.accessioned 2014-12-19T12:02:42Z dc.date.available 2014-12-19T12:02:42Z dc.date.created 2013-09-19 nb_NO dc.date.issued 2013 nb_NO dc.identifier 649767 nb_NO dc.identifier ntnudaim:9995 nb_NO dc.identifier.uri http://hdl.handle.net/11250/237302 dc.description.abstract The elastic stiffness properties of soft tissues can be estimated by the use of locally induced displacements and shear waves. In this work, we have made a two-dimensional plane strain finite element model to simulate a soft tissue with a stiffer elastic inclusion. The soft tissue was subjected to an acoustic radiation force impulse. The elastic inclusion represents a potential tumor within the healthy tissue. We have used a tissue-mimicking gel-agar phantom to represent the viscoelastic material properties of soft tissue and calibrated a three-element Maxwell model based on stress relaxation data from an experiment carried out on the gel-agar phantom. The calibrated Maxwell model was verified in a finite element simulation of the stress relaxation test. The acoustic radiation force generated by a focused linear array transducer was determined from an ultrasound pressure field simulation. A three-element Gaussian function was fitted to the resulting acoustic radiation force field and implemented as a body force in the finite element model. From the finite element analyses of the soft tissue with an inclusion, we found that the applied body force induced a local axial displacement in the focal region, which gave rise to a shear wave propagating away from the region of excitation. 
Based on the time-dependent axial displacement profile in the focal region and the shear wave propagation through the heterogeneous tissue, we have examined three different ways of estimating the elastic stiffness: (i) using the shear wave speeds; (ii) using shear wave reflection factor values; (iii) using the time to peak displacement in the focal region. We found that the shear wave speed was accurately ($<0.15$ \% deviation) represented in the soft tissue and could be used to estimate the elastic stiffness in this region. However, the shear wave speed in the tumor was dependent upon the size and shape of the tumor, which resulted in unreliable stiffness estimates. The shear wave reflections from the tumor were rather complex and the reflection factor was highly dependent upon the shape of the tumor. Also, we must know the elastic stiffness value of the healthy tissue in advance, since the shear wave reflection only provides information about the relative stiffness difference between the healthy tissue and the tumor. Thus, this method may be used to locate an inclusion, but cannot be used to quantify the stiffness of either the surrounding tissue or the inclusion. The time to peak displacement was inversely related to the stiffness and independent of the load magnitude, which is favorable for medical imaging applications. However, the time to peak displacement was dependent upon the impulse time of the applied load and can only be directly related to the elastic stiffness for a perfectly Gaussian ultrasound beam. Also, limitations of the pulse repetition frequency can make it difficult to detect the peak displacement. The results in this thesis indicate that stiffness estimation methods based on shear wave speed measurements are most reliable. 
nb_NO dc.language eng nb_NO dc.publisher Institutt for konstruksjonsteknikk nb_NO dc.title FEM simulations of an Acoustic Radiation Force Impulse applied to a Soft Tissue with a Tumor Inclusion nb_NO dc.type Master thesis nb_NO dc.source.pagenumber 164 nb_NO dc.contributor.department Norges teknisk-naturvitenskapelige universitet, Fakultet for ingeniørvitenskap og teknologi, Institutt for konstruksjonsteknikk nb_NO
| 2021-01-16 17:22:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7383829355239868, "perplexity": 1541.741503870556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00104.warc.gz"} |
https://crypto.stackexchange.com/questions/96312/is-it-okay-to-avoid-a-plaintext-iv-in-aes | # Is it okay to avoid a plaintext IV in AES?
### The scenario
Using AES 256 with CBC mode. (Authentication is done separately. Ignored here.)
### The goal (explained more later)
To avoid sending an unencrypted IV.
But since this is being done using .NET whose function forces us to use an IV, we can't just prepend 16 random bytes and then toss away the first 16 bytes after decryption.
### The plan
Prepend 16 random bytes ("IV1"), and besides that use 16 bytes with value zero as the IV ("IV0"). Then send the ciphertext without its first 16 bytes.
Decryption will be done by the receiver first determining what the first block of ciphertext will be (for that AES key, for any message) by encrypting something in the aforementioned manner (which will have to be done only once per key) and taking the first 16 bytes of the resulting ciphertext.
They then prepend those bytes to the cropped-ciphertext-received to get the original uncropped ciphertext, and then decrypt it with an IV ("IV0") of 16 bytes of value zero (i.e. they use .NET's decryption function, feeding it the required IV which is those 16 zeros).
They then discard the first 16 bytes of the result (which is received from .NET after .NET discards the first-first 16 bytes which are the IV) because those are the 16 random bytes prepended ("IV1").
### But why?
Communicating the IV in plaintext gives a brute-force attacker an edge - they can decrypt only one block (16 bytes) and compare it to the IV in order to check if the key is correct. (Perhaps there are more attack channels possible which I am unaware of.)
### So my question is
Does this plan seem fine, or is there some pitfall in it?
• The question as is, seems hard to properly answer. Perhaps an improvement could be adding the fact that the key is not random but derived from what is a low entropy space as you mentioned in your comment to the first answer? Nov 25 at 22:02
• Usually the IV is prefixed to the message. However, the IV doesn't give the attacker any more advantage than any other ciphertext block as that acts as the vector for the next block. This is conveyed by the answer of Fractalice. For any cipher we assume that the attacker can know (part of) the plaintext. As such, hiding the IV doesn't make any difference, it is assumed to be public. Nov 29 at 21:58
• @MaartenBodewes Usually the IV is prefixed to the message. - The keyword here is "usually". If this were the case in CBC, and no more was done with the IV - my point in the question would be true. I.e. communicating the IV in plaintext would give the attacker an edge in brute-forcing the key. Since the IV is XORd with the plaintext before encryption - that is not the case. As Fractalice's comment implies. Nov 30 at 18:23
• You're still misunderstanding. The IV is sent and it is XOR'ed with the plaintext before encryption, so it is assumed to be available to an adversary. All modern ciphers should provide protection against even chosen plaintext attacks. This is called IND-CPA: indistinguishability under chosen plaintext attack, and it is provided by most if not all modes of operation except ECB. There is no edge to be gained if the cipher is IND-CPA secure, and brute forcing for AES is completely dependent on the key size - which for AES-256 is definitely ample protection. 2 days ago
• @MaartenBodewes Thanks for the clarification, however, I think I should further clarify what I thought at first (which I now know is wrong). I thought that an IV works like this (in CBC): a) Prepend the IV to the plaintext. b) Encrypt without XORing the first block with anything. If this were in fact the case, there would be no need to send the IV, only to make it random, and the receiver would just decrypt and throw away the first block. Thanks again. 2 days ago
I didn't fully understand the plan, but:
In CBC, each ciphertext block plays the role of IV for the next block. So, the brute-force attacker can attack the next block, since the ciphertext block will be sent in the clear.
Are you really afraid that someone will bruteforce a 256-bit key? This is impossible.
Also, please consider using authenticated encryption to stop active attackers.
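The chaining property described in this answer can be sketched in a few lines. This is a toy model for illustration only: the "block cipher" here is just an XOR with a key-derived pad standing in for AES, so it is not secure, but the CBC wiring around it is the real one. It shows that every ciphertext block after the first can be decrypted using the preceding ciphertext block as its IV, so hiding the real IV protects at most the first plaintext block:

```python
import hashlib
import os

BLOCK = 16  # CBC block size in bytes, as with AES

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Toy invertible 'block cipher': XOR with a key-derived pad.

    A stand-in for AES purely to show the CBC chaining structure
    (NOT secure). XOR with a fixed pad is its own inverse, so the
    same function encrypts and decrypts a single block.
    """
    pad = hashlib.sha256(key).digest()[:BLOCK]
    return bytes(a ^ b for a, b in zip(block, pad))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0, "assume pre-padded input"
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        # CBC: XOR the plaintext block with the previous ciphertext
        # block (or the IV), then run it through the block cipher.
        mixed = bytes(a ^ b for a, b in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_cipher(key, mixed)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = [], iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        plain = bytes(a ^ b for a, b in zip(toy_block_cipher(key, block), prev))
        out.append(plain)
        prev = block
    return b"".join(out)

key = os.urandom(32)
iv = os.urandom(BLOCK)
msg = b"0123456789abcdef" * 3  # three blocks

ct = cbc_encrypt(key, iv, msg)
assert cbc_decrypt(key, iv, ct) == msg

# Each ciphertext block is the "IV" of the block after it: blocks 2..n
# decrypt correctly with only ciphertext block 1 supplied as the IV.
assert cbc_decrypt(key, ct[:BLOCK], ct[BLOCK:]) == msg[BLOCK:]
```

So even if the real IV is kept secret, an attacker holding the ciphertext can run the same per-block key test against block 2 onward, which is why hiding the IV buys no brute-force resistance.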
• the bruteforce attacker can attack the next block - they test a key and receive a result. How do they know that's the correct one? AFAIK AES will just transform 16 bytes into 16 bytes. Always. Never with a failure (except the last one because of padding). Am I wrong? Nov 25 at 19:49
• Are you really afraid that someone will bruteforce a 256-bit key? - Not a random key. A key based on an easy-to-remember password which a non-security-minded user chooses (think "password1" etc.). Nov 25 at 19:51
• please consider using authenticated encryption - As I mentioned in my question: Authentication is done separately. Ignored here.. Nov 25 at 19:51
• If the attacker does not know anything about plaintext, how could you test the first block with plaintext IV? Note that the first ciphertext block decrypted is IV xor message, so the attacker just gets a message candidate on each key guess. And, as you noticed - the last block often has known plaintext - padding, which allows one to do such a brute force at the end. Nov 26 at 8:33
• Thanks. I was misled by tutorials which say that the IV is simply prepended to the plaintext. If that were the case, my worry would be founded. As it is, the IV is XORd with the first block instead (in CBC) which solves that problem. Thanks again. Nov 30 at 18:25 | 2021-12-04 08:32:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2288222312927246, "perplexity": 1657.6657454672998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362952.24/warc/CC-MAIN-20211204063651-20211204093651-00429.warc.gz"} |
https://brilliant.org/problems/flipflopi-know-its-notim-not-idiot/ | # Flip Flop
Algebra Level 4
$\large \sum_{n=1}^\infty \dfrac{f(n)}{n^2}$
Let $$f$$ be an injective function mapping the set of positive integers to itself. Choose the correct answer for the value of the series above.
Set Loading... | 2017-01-20 07:57:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7728466391563416, "perplexity": 1083.8898929687332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280801.0/warc/CC-MAIN-20170116095120-00022-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://buboflash.eu/bubo5/show-dao2?d=1708445273356 | Tags
Question
A relationship in which a principal hires an agent to perform a particular task or service.
Principal-Agent relationship
No repetitions | 2022-08-08 22:23:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8590188026428223, "perplexity": 7649.272718262897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00314.warc.gz"} |
https://www.zbmath.org/?q=an%3A0992.03064 | # zbMATH — the first resource for mathematics
Indestructible weakly compact cardinals and the necessity of supercompactness for certain proof schemata. (English) Zbl 0992.03064
Summary: We show that if the weak compactness of a cardinal is made indestructible by means of any preparatory forcing of a certain general type, including any forcing naively resembling the Laver preparation, then the cardinal was originally supercompact. We then apply this theorem to show that the hypothesis of supercompactness is necessary for certain proof schemata.
##### MSC:
03E55 Large cardinals 03E35 Consistency and independence results
Full Text: | 2021-09-19 04:08:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424402475357056, "perplexity": 1177.9993395514994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056711.62/warc/CC-MAIN-20210919035453-20210919065453-00588.warc.gz"} |
https://www.dokuwiki.org/plugins?plugintag=latex&pluginsort=a | DokuWiki
It's better when it's simple
Plugins
Plugins provide a system of extending DokuWiki's features without the need to hack the original code (and so again on each update). Below is a list of ready-to-use plugins created by DokuWiki users.
A plugin is installed by putting it into its own folder under lib/plugins/. From version Ponder Stibbons on this can be done automatically using the extension manager. (Up to version Binky this was done with the plugin manager.) Be sure to read about Plugin Security. See the detailed plugin installation instructions.
If you would like to help translating the plugins in another language, please see this page Localization of plugins.
Search Plugins
Filter available plugins by type or by using the tag cloud. You could also search within the plugin namespace using the search box.
Filter by type
• Syntax plugins extend DokuWiki's basic syntax.
• Action plugins replace or extend DokuWiki's core functionality
• Helper plugins provide functionality shared by other plugins
• Render plugins add new export modes or replaces the standard XHTML renderer
• Remote plugins add methods to the RemoteAPI accessible via web services
• Auth plugins add authentication modules
Tagged with 'latex' (10)
Plugin Author Last Update Popularity
This plugin allows you to export single or multiple DokuWiki pages into one LaTeX file. It will export all media in a ZIP archive. It also supports exporting syntax from plugins imagereference, mathjax and zotero.
Provides:
Syntax, Action, Render
Tags:
export, latex, pdf
2014-01-22
145/17597
Renders inline LaTeX code
Provides:
Tags:
formula, latex, math
2011-04-29
345/17597
easy way for importing google charts to the wiki
Provides:
Syntax, Action
Tags:
barcode, charts, diagram, formula, graph, latex, maps
2011-03-02
21/17597
Generate LaTeX file from DokuWiki format and so PDF files (if latex is present)
Provides:
Tags:
!experimental, export, latex, pdf
2013-10-24
14/17597
Creates numbered references to images/tables in your text by unique reference names. Supports also LaTeX output. (previous authors: Martin Heinemann, Christian Moll)
Provides:
Syntax
Tags:
caption, images, latex, links, media, references
2014-06-18
258/17597
Plugin for displaying LaTeX equations using MathJax. (Discontinued: MathJax plugin recommended instead.)
Provides:
Syntax
Tags:
!discontinued, formula, latex, math
2011-06-21
46/17597
Enables MathJax [http://mathjax.org] parsing of TeX math expressions in wiki pages
Provides:
Syntax, Action
Tags:
latex, math, mathjax, tex
2017-05-28
1258/17597
parses LaTex blocks
Provides:
Syntax
Tags:
formula, latex, math, mimetex
2015-05-07
22/17597
Allows you to quote your literature references saved in Zotero with a LaTeX-like syntax
Provides:
Syntax
Tags:
latex, quotes, references, zotero
2013-03-02
31/17597
This plugin creates citations of multiple formats for your wiki pages.
Provides:
Action
Tags:
bibtex, latex, quotes, references
2009-05-28
63/17597
Popularity values are based on data gathered through the popularity plugin - please help to increase accuracy by reporting your data with this plugin.
Creating Plugins
If your needs aren't covered by the existing plugins above, please have a look at our pages on how to create and publish a plugin.
Reporting Bugs and Features Wishes
Two short notes:
• Please use the issue tracker of the plugin
• Provide enough information to reproduce your case
Ideas for New Plugins
Requesting Plugin
If you are in need of a special feature in DokuWiki but haven't the skills or resources to create your own plugin you might want to suggest the feature for consideration by the community.
To ask for the creation of a new plugin or to discuss plugin ideas, please refer to the Plugin Wishlist Forum.
Recent Wishes in the forum:
More ideas...
Further some closed features requests, which we won't implement in DokuWiki core, are interesting ideas for plugins: Doku Plugin idea's at our Github issue tracker. | 2017-10-24 04:08:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7203629612922668, "perplexity": 14768.606118655613}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828134.97/warc/CC-MAIN-20171024033919-20171024053919-00203.warc.gz"} |
https://zbmath.org/?q=an:0532.12011 | # zbMATH — the first resource for mathematics
La résolution des conjectures d’Artin dans les cas $${\mathfrak A}_4$$ et $${\mathfrak S}_4$$ par les méthodes de Langlands. [The resolution of the Artin conjectures in the cases $${\mathfrak A}_4$$ and $${\mathfrak S}_4$$ by the methods of Langlands.] (French) Zbl 0532.12011
Sémin. Théor. Nombres, Univ. Bordeaux I 1982-1983, Exp. No. 8, 12 p. (1983).
This paper gives a clear and lucid exposition of some parts of the proof - by Langlands and Tunnell - of the Artin conjecture for degree two representations of the Galois group $$G_F$$ of a global field F, whose image in $$PGL(2,{\mathbb{C}})$$ is isomorphic to $${\mathfrak A}_4$$ or $${\mathfrak S}_4$$. Artin conjectured that the L-function of an irreducible representation $$\sigma$$ of $$G_F$$ is entire. Langlands conjectured further that to $$\sigma$$ is attached an automorphic cuspidal representation $$\pi$$ of $$GL(n,{\mathbb{A}}_F)$$ (where n is the degree of $$\sigma$$ and $${\mathbb{A}}_F$$ the adèle group of F), such that for all but a finite number of places v of F, the component $$\pi_v$$ of $$\pi$$ at v is the unramified principal series representation of $$GL(2,F_v)$$ corresponding to the component $$\sigma_v$$ of $$\sigma$$ at v, which is a sum of unramified characters; in particular we would have $$L(\sigma,s)=L(\pi,s)$$. Since L-functions of cuspidal representations are entire, this would imply and explain the Artin conjecture.
This strong form of the Artin conjecture is known when $$\sigma$$ is monomial, of degree 2 or 3. When $$\sigma$$ is of degree 2 and when its image in $$PGL(2,{\mathbb{C}})$$ is isomorphic to $${\mathfrak A}_4$$ or $${\mathfrak S}_4$$, $$\sigma$$ is not monomial; however, going to a suitable cubic extension E of F yields a restriction $$\sigma_E$$ which is monomial, hence a corresponding cuspidal representation $$\pi_E$$ of $$GL(2,{\mathbb{A}}_E)$$. The problem is then to construct $$\pi$$ from $$\pi_E$$ (this is called the base change problem) and to verify that $$\pi$$ has the right properties. Base change for cyclic E/F (due to Langlands) enabled him to treat the $${\mathfrak A}_4$$-type, whereas for $${\mathfrak S}_4$$-type $$\sigma$$, Tunnell had to use base change for non-cyclic E/F (due to Jacquet-Piatetskij-Shapiro-Shalika; Tunnell also uses the cyclic case). That $$\pi$$ has the right properties comes from Gelbart and Jacquet’s results on functoriality from GL(2) to GL(3) (i.e. the relationship between automorphic representations of $$GL(2,{\mathbb{A}}_F)$$ and $$GL(3,{\mathbb{A}}_F)$$ corresponding to the passage, on the Galois side, from $$\sigma$$ to $$Ad\circ\sigma$$, where Ad is the adjoint representation of $$GL(2,{\mathbb{C}})$$).
Granting all technical results on the side of cuspidal representations, this paper explains elegantly and carefully the "dévissage" necessary to get the strong Artin conjecture.
Reviewer: G.Henniart
##### MSC:
11R39 Langlands-Weil conjectures, nonabelian class field theory 11R42 Zeta functions and $$L$$-functions of number fields 11F70 Representation-theoretic methods; automorphic representations over local and global fields 22E55 Representations of Lie and linear algebraic groups over global fields and adèle rings 11S15 Ramification and extension theory
Full Text: | 2021-05-08 10:15:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.76412034034729, "perplexity": 584.7436829028572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00416.warc.gz"} |
https://www.macupdate.com/app/mac/60751/zettlr | Zettlr
# Zettlr
Markdown editor.
## Zettlr overview
Zettlr is a reliable companion for writing scientific texts and taking notes. It is made for academics in the humanities and arts and is intended to keep your content apart from your design, but close to the notes you take. To reach this goal, Zettlr incorporates several important features:
• File-agnostic editing. Zettlr does not store any information about your files, except in your files. This way you can always switch to and from Zettlr. Every file you see inside the preview pane corresponds to a file on your disk. With no special additions that might render the use of your files difficult for other editors.
• Zettelkasten-Methods implemented directly into the app. With Zettlr, you can link files and searches using "Wiki-Links" in the format [[your search text|@ID:ID]], give IDs by typing @ID:Your-ID-Here and tag your files using Twitter-like hashtags: #hashtag. Holding down the Alt-key and clicking on links will try to open exact-match files and also initiate searches, while Alt-clicks on tags will simply initiate searches. More features are likely to come.
• A directory list. This list contains all open directories and files. You can open new directories by pressing Cmd/Ctrl+O. New files can be opened simply by double clicking them in your file browser or by dragging them onto the app. Every time you start the app, all previously opened paths will be re-loaded.
• A preview pane that lists all the files that are inside the currently selected directory and separates them by their subdirectory. Just click on a file to open it in …
• … the editor, which takes the most space and is the crucial component that actually makes Zettlr an editor. You are able to write Markdown-files in the area, a slim text format that keeps formatting to a bare minimum.
• Exporting options. Using the open source software pandoc and LaTeX, Zettlr enables you to export all files on the fly in a variety of formats; currently HTML, DOCX, ODT and PDF. Just open a file and press Cmd/Ctrl+E.
• Searching. Zettlr enables you to quickly search through your files to find what you are looking for in a fraction of the time you'd need if you store all your information in several word documents that you'd have to open and search.
• A toolbar containing all functions in handy button-form.
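A note combining the Zettelkasten conventions above might look like the following (an illustrative sample; the titles and IDs are invented):

```
# Reading notes @ID:20220114101500

Kahneman's framing results connect to [[prospect theory|@ID:20210302090000]].
An Alt-click on the link above opens the exact-match file or starts a search;
an Alt-click on a tag below simply starts a search.

#psychology #decision-making
```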
Note: While the software is classified as free, it is actually donationware. Please consider making a donation to help support development.
## What’s new in version 2.1.2
Updated on Jan 14 2022
##### GUI and Functionality:
• New Feature: You now have more fine-grained control over how your files are displayed: You can now select if the filename is always used or a title or first heading level 1 is used (if applicable)
• New Feature: You can now also fold nested lists
• New Feature: You can now choose to display the file extensions of Markdown files
• New Feature: You can now choose to always only link using filenames
• The Vim input mode of the editor started working again, so we are re-enabling it with this update
• Fixed an error that would cause the global search to malfunction if you followed a link which then started a global search while the file manager was hidden
• Removed an unused preference
• Rearranged some preferences
• On Windows, tabbed windows will automatically hide their labels if the window becomes too narrow
• Reinstated the info on what variables you could use in the Zettelkasten generator
• Zettlr displays info text below some preferences again
• Citations are now first-class citizens and got their own preferences tab
• Fixed a small error that would close additional files when you renamed a file that was also currently open
• Fixed the context menu not showing during a full text search on macOS
• When something goes wrong during opening of a new workspace or root file, the error messages will now be more meaningful
• Small improvement that will not detect Setext headings level 2 without a following empty line. This prevents some instances of data loss where users forget this empty line and thus accidentally create a valid simple table
• Fixed an issue where the indentation of wrapped lines would look off after changing the theme or modifying the editor font via custom CSS
• Fixed the vim mode cursor being barely visible in dark mode
• Done task list items will now be stroked out faster
##### Under the Hood:
• Convert the MarkdownEditor to ES modules and TypeScript
• Make the dot-notation rule optional
## Information
Free
128.6 MB
#### Developer’s website
https://zettlr.com/
2479
#### App requirements
Dec 27 2021, Version 2.0.3, rated 2.0
V 2.1 still crashes on 10.13. They've isolated the problem with the Electron framework. Sadly, I've supported other electron-based projects that were orphaned by them. So this is a hard-pass for me. Luckily FSNotes works great on MacOS. Don't need android, Linux, or Windows for now.
Nov 14 2021, Version 2.0.3, rated 2.0
Still crashes on 10.13
Jul 26 2019, Version 1.3.0, rated 4.0
Zettlr is incredibly powerful: you can create files and write them in GFM Markdown (that means tables of contents, footnotes, and tables), plus extras like folding text (really useful when you write long papers).
It uses LaTeX and the Pandoc parser to export in a huge range of formats (including .docx or .odt, of course PDF, ePub and other text formats), and academics can use Zotero to manage bibliography.
You can manage extra folders without moving them (excellent: you store your files and folders wherever you want, just drag and drop them to Zettlr), and transform a folder into a project, meaning you can export a project as a single merged file.
I just miss the preview feature, which is not fully implemented, but I asked the dev and I'm sure it will come soon.
This app is a gem, the dev is open and responsive, it's just an incredible free and open source app, as powerful as Ulysses, but you're not trapped in a subscription model or the arrogance of the devs.
I recommend it to anyone who wants to manage a big writing project, like a paper, a novel, a bunch of short stories, or a book (scientific books are welcome, as it can deal with LaTeX and MathJax syntax).
Apr 18 2019, Version 1.2.3, rated 3.5
Is there no Android version?
Similar apps
• SousChef: Access, modify and share recipes.
• Day One: Maintain a daily journal.
• FoldingText: Markdown text editor with productivity features.
• Abricotine: Open-Source Markdown editor.
• MonsterWriter: Powerful content creation and word processor. | 2022-01-21 18:07:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18745066225528717, "perplexity": 5404.419449070827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303512.46/warc/CC-MAIN-20220121162107-20220121192107-00000.warc.gz"}
http://wikieducator.org/MathGloss/A/Abacus | # Abacus
< MathGloss | A
This glossary is far from complete. We are constantly adding math terms.
For instructions on adding new terms, please refer to Math Glossary Main Page
Definition
An abacus is a mechanical counting device used by the ancient Greeks.
## Abacus
Abacus is a Latin word that has its origins in the Greek words abax or abakon (meaning "table" or "tablet"), which, in turn, possibly originated from the Semitic word abq, meaning "sand".
https://puzzling.stackexchange.com/questions/78412/the-security-to-party-40 | The security to [party 40]
The 40th party of the new year is being held at a local mansion. The host is very rich and his success is because of one thing — his famous recipe for Linguini! So rich indeed, that 39 parties have already occurred in a span of 13 days.
The only guests that may attend are people who correctly reply to the guard at the door. Here's where you come in. You and a friend are trying to steal this recipe. You sneak by and listen to the passwords.
For $$1 \le n \le 9:$$
The $$n$$th guest arrives, whereupon the guard, holding a mirror, says $$n,$$ the guest says $$f(n),$$ and the guest is let in. Note: the 9th guest happens to be your friend.
Your hearing allows you to pick up that $$3, 6, 8, 7, 10, 10, 8, 9, 4$$ are $$f(1), \dots, f(9)$$ respectively.
It's getting late, about 7 or 8. So you pull up to the guard and he's holding a pair of dice. If anything, you could say this mansion is rare. But you don't say anything yet, for the guard has not given you your number yet.
Now the guard says "10". How do you respond, given that the only viable option is to utter another natural number?
• And here I was taking the Fresh Prince hint. James Avery (Phil Banks) was in a film called "The Linguini Incident" I figured my friend would be Jazz, and thought the 4 that he said could be related to the letters in his name. So Viv would be 3, and then it all fell apart. Then I started looking at episodes starting with Ep. 40. Wasted hours! – Chris Cudmore Jan 15 '19 at 2:56
You should say
3
Because
$$f(n)$$ is the Scrabble value of the number $$n$$ when written in English
Example
$$f(8)$$ is the Scrabble score of EIGHT which is $$1+1+2+4+1 = 9$$
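A quick Python sketch (standard Scrabble tile values) confirms the whole overheard sequence and the reply for 10:

```python
# Standard Scrabble tile values for English letters.
SCORES = {**{c: 1 for c in "AEILNORSTU"}, **{c: 2 for c in "DG"},
          **{c: 3 for c in "BCMP"}, **{c: 4 for c in "FHVWY"},
          "K": 5, "J": 8, "X": 8, "Q": 10, "Z": 10}

def f(word):
    # Scrabble score of a number written out in English.
    return sum(SCORES[c] for c in word.upper())

numbers = ["ONE", "TWO", "THREE", "FOUR", "FIVE",
           "SIX", "SEVEN", "EIGHT", "NINE", "TEN"]
print([f(w) for w in numbers])  # [3, 6, 8, 7, 10, 10, 8, 9, 4, 3]
```

The first nine values match the passwords you overheard, and f(TEN) = 3 is the reply to the guard.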
• $$\,$$Correct. – Display name Jan 14 '19 at 23:10 | 2020-01-28 17:56:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47894370555877686, "perplexity": 1667.7553720058786}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251779833.86/warc/CC-MAIN-20200128153713-20200128183713-00325.warc.gz"} |
http://mathhelpforum.com/calculus/28654-antideriving.html | # Math Help - Antideriving
1. ## Antideriving
How do you antiderive/integrate this?
$\int\frac{4}{1+x^2}\,dx$
2. Originally Posted by Cursed
How do you antiderive/integrate this?
$\int\frac{4}{1+x^2}\,dx$
$\int \frac 4{1 + x^2}~dx = 4 \int \frac 1{1 + x^2}~dx$
now, $\int \frac 1{1 + x^2}~dx$ is something that should be in your text that you should memorize and never forget. $\int \frac 1{1 + x^2}~dx = \arctan x + C$ | 2015-03-30 04:48:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7689250707626343, "perplexity": 4638.038657921331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299054.80/warc/CC-MAIN-20150323172139-00113-ip-10-168-14-71.ec2.internal.warc.gz"} |
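As a quick sanity check that $4\arctan x + C$ really is the antiderivative, a few lines of plain Python comparing a central-difference derivative against the integrand:

```python
import math

def F(x):
    return 4 * math.atan(x)  # candidate antiderivative of 4/(1+x^2)

def dFdx(x, h=1e-6):
    # Central-difference approximation of F'(x).
    return (F(x + h) - F(x - h)) / (2 * h)

# The numerical derivative should agree with 4/(1+x^2) everywhere.
for x in (0.0, 0.5, 2.0, -3.0):
    assert abs(dFdx(x) - 4 / (1 + x * x)) < 1e-6
```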
https://www.gurobi.com/documentation/7.0/refman/developing_for_compute_ser.html | # Developing for Compute Server
Filter Content By
Version
Languages
### Developing for Compute Server
With only a few exceptions, using Gurobi Compute Server requires no changes to your program. This section covers the exceptions. We'll talk about program robustness issues that may arise specifically in a Compute Server environment, and we'll give a full list of the Gurobi features that aren't supported in Compute Server.
Coding for Robustness
Client-server computing introduces a few robustness situations that you wouldn't face when all of your computation happens on a single machine. Specifically, by passing data between a client and a server, your program is dependent on both machines being available, and on an uninterrupted network connection between the two systems. The queuing and failover capabilities of Gurobi Compute Server can handle the vast majority of issues that may come up, but you can take a few additional steps in your program if you want to achieve the maximum possible robustness.
The one scenario you may need to guard against is the situation where you lose the connection to the server while the portion of your program that builds and solves an optimization model is running. Gurobi Compute Server will automatically route queued jobs to another server, but jobs that are running when the server goes down are interrupted (the client will receive a NETWORK error). If you want your program to be able to survive such failures, you will need to architect it in such a way that it will rebuild and resolve the optimization model in response to a NETWORK error. The exact steps for doing so are application dependent, but they generally involve encapsulating the code between the initial Gurobi environment creation and the last Gurobi call into a function that can be reinvoked in case of an error.
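A minimal sketch of that architecture in Python. The `NetworkError` class and `solve_with_retry` names here are illustrative, not part of the Gurobi API; the point is simply that everything from environment creation to the final call lives inside one function that can be reinvoked:

```python
import time

class NetworkError(Exception):
    """Stand-in for the NETWORK error a Compute Server client may receive."""

def solve_with_retry(build_and_solve, attempts=3, backoff=2.0):
    # Rebuild and resolve the model from scratch on each failure:
    # everything between environment creation and the last call
    # lives inside `build_and_solve`, so a retry starts clean.
    for i in range(attempts):
        try:
            return build_and_solve()
        except NetworkError:
            if i == attempts - 1:
                raise
            time.sleep(backoff * (i + 1))
```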
Features Not Supported in Compute Server
As noted earlier, there are a few Gurobi features that are not supported in Compute Server. We've mentioned some of them already, but we'll give the full list here for completeness. You will need to avoid using these features if you want your application to work in a Compute Server environment.
The unsupported features are:
• Lazy constraints: While we do provide MIPSOL callbacks, we don't allow you to add lazy constraints to cut off the associated MIP solutions.
• User cuts: The MIPNODE callback isn't supported, so you won't have the opportunity to add your own cuts. User cuts aren't necessary for correctness, but applications that heavily rely on them may experience performance issues.
• Multi-threading within a single Gurobi environment: This isn't actually supported in Gurobi programs in general, but the results in a Compute Server environment are sufficiently difficult to track down that we wanted to mention it again here. All models built from an environment share a single socket connection to the Compute Server. This one socket can't handle multiple simultaneous messages. If you wish to call Gurobi from multiple threads in the same program, you should make sure that each thread works within its own Gurobi environment.
• Advanced simplex basis routines: The C routines that work with the simplex basis (GRBFSolve, GRBBSolve, GRBBinvColj, GRBBinvRowi, and GRBgetBasisHead) are not supported.
https://www.experts-exchange.com/questions/28418452/asp-net-32-bit-64-bit-question.html | [Webinar] Streamline your web hosting managementRegister Today
x
# asp.net 32 bit 64 bit question
I am running what is supposed to be a 64 bit .net web application. When I run this command:
aspnet_regiis.exe -i
I get this response:
The error indicates that IIS is in 64 bit mode, while this application is a 32 bit application and thus not compatible.
Any Ideas on how to fix the problem or determine if the application is really 64 or 32 bit?
jimmylew52
Commented:
Keep in mind that what is deployed on your server isn't actually machine code--it's IL code. IL code gets compiled on-the-fly to the host machine's architecture. Now, if you are running a 64-bit machine, but you need to run as 32-bit--usually for libraries that were coded as 32-bit (like Oracle libraries)--then you can tell .NET to prefer to run the library as 32-bit code rather than 64-bit. You do this by specifying a configuration of x86 (or x64, depending) in your project's properties.
Author Commented:
How would I tell if the libraries on the server were compiled for 32 bit or 64 bit? Is there any way to tell?
Commented:
They're not compiled to either--they're compiled to IL code. If you want to see what architecture the assembly will (or can) be compiled to, you can use the CorFlags utility that comes with the .NET Framework. Pass the utility the path to the assembly with no other arguments:
e.g.
C:\>corflags.exe C:\path\to\assembly.dll
C:\>corflags.exe C:\path\to\assembly.exe
You will get output along the lines of:
In the screenshot, I have compiled a DLL as "Any CPU", "x86", and "x64", respectively. You will note that there is a difference in the "PE" and "CorFlags" fields. Both AnyCPU and x86 show a PE value of "PE32"; the x64 shows "PE32+". "PE32+" always indicates 64-bit. For the other two values you have to look at the value of the CorFlags. If its value is 0x1, then you have an assembly that can run on either x86 or x64--the runtime compiler on the host machine will make that determination. If its value is 0x3, then the assembly will be compiled as x86. The "32BITREQ" field also reinforces this.
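For completeness, the PE32/PE32+ distinction can also be read straight from the file header. The following Python sketch is a hypothetical helper, not a substitute for CorFlags -- it reports only the PE format, not the CorFlags/32BITREQ bits:

```python
import struct

def pe_format(path):
    # Read the PE optional-header magic:
    # 0x10B -> "PE32", 0x20B -> "PE32+" (PE32+ always means 64-bit).
    with open(path, "rb") as f:
        data = f.read(4096)
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    # PE signature (4 bytes) + COFF header (20 bytes) precede the magic.
    magic = struct.unpack_from("<H", data, e_lfanew + 24)[0]
    return {0x10B: "PE32", 0x20B: "PE32+"}.get(magic, hex(magic))
```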
Author Commented:
I get this error:
corflags.exe is not recognized as an internal or external command.
Dotnet 4.0 is installed but a search of the system does not find corflags
Commented:
My fault: I misread the documentation. You can acquire that utility by installing the Windows SDK.
Author Commented:
So if this is the result of the command for the .dll files:
Version : v4.0.30319
PE : PE32
CorFlags : 1
ILONLY : 1
32BIT : 0
Signed : 0
The web app is compiled for 32 bit and not 64 bit?
Commented:
That means it can run on either architecture--the runtime compiler will decide how to compile the IL code into machine code.
Author Commented:
That being the case, why am I getting this error?
The error indicates that IIS is in 64 bit mode, while this application is a 32 bit application and thus not compatible.
Commented:
What is the exact error message?
Author Commented:
exact command and error
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319>aspnet_regiis -i -enable
The error indicates that IIS is in 64 bit mode, while this application is a 32 bit application and thus not compatible.
Author Commented:
Is the service compiled as a 32 bit or a 64 bit service? It is currently installed in
Program Files (x86) and is running as a 32 bit service. If I uninstall and install in
Program Files will it run as a 64 bit service?
Author Commented:
No, that will not make a difference.
C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319>aspnet_regiis -i -enable
should be run from
C:\WINDOWS\Microsoft.NET\Framework64\v4.0.30319>aspnet_regiis -i -enable
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2-1.19.1/share/doc/Macaulay2/NAGtypes/html/_is__G__E__Q.html | isGEQ -- compare two points
Synopsis
• Usage:
b = isGEQ(x,y)
• Inputs:
• x, a point, or a list of complex (floating point) numbers
• y, a point, or a list of complex (floating point) numbers
• Optional inputs:
• Tolerance => ..., default value .000001, the tolerance of a numerical computation
• Outputs:
• b, tells if x is (approximately) greater or equal than y
Description
The inputs are lists of complex numbers; the order is (approximately) lexicographic: regard each complex n-vector as a real 2n-vector, and for corresponding coordinates a and b of the two real 2n-vectors, a < b if b-a is larger than Tolerance.
i1 : isGEQ({1,1,1},{1,0,2})
o1 = true
i2 : isGEQ({1,1e-7},{1, 0})
o2 = true
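The comparison is easy to mirror outside Macaulay2; a short Python sketch of the same tolerance-based lexicographic order (the name is_geq is ours, not part of the package):

```python
def is_geq(x, y, tol=1e-6):
    # Approximate lexicographic >=: interleave real/imaginary parts of
    # each coordinate and compare pairwise, treating differences
    # within tol as equal.
    xr = [p for z in x for p in (complex(z).real, complex(z).imag)]
    yr = [p for z in y for p in (complex(z).real, complex(z).imag)]
    for a, b in zip(xr, yr):
        if b - a > tol:
            return False
        if a - b > tol:
            return True
    return True  # all coordinates equal within tol
```

With the documentation's examples, is_geq([1, 1, 1], [1, 0, 2]) and is_geq([1, 1e-7], [1, 0]) both return True.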
https://tex.stackexchange.com/questions/524481/includegraphics-does-not-work-paragraph-ended-before-tempa-was-complete-e | # \includegraphics does not work, “Paragraph ended before \@tempa was complete.” error
I'm trying to compile a document using a class provided by a journal. I get the following error
Paragraph ended before \@tempa was complete.
Emergency stop.
This is the problematic code:
\documentclass{colt2020}
\begin{document}
\begin{figure}
\includegraphics[width=\columnwidth] {Figure1}
\end{figure}
\end{document}
The 'colt2020' class may be found here. Without the supplied class, the following code does compile and displays the figure
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}
\includegraphics[width=\columnwidth] {Figure1}
\end{figure}
\end{document}
I can add '\usepackage{graphicx}' to the first example and it will still not compile. It shouldn't matter anyways, since the class requires this package.
• Maybe this tex.stackexchange.com/questions/511138/… will help you – MadyYuvi Jan 16 at 10:06
• Welcome to TEX.SE! In a standard setup this should work, so whatever is happening is related to some code you are withholding. Please provide a full minimal working example which reproduces the issue, starting with \documentclass{...} and ending with \end{document}. – campa Jan 16 at 10:27
• you haven't given enough information to debug, but check you have loaded graphicx package (not graphics) – David Carlisle Jan 16 at 11:03
• Thank you for the comments, I've edited my question so it is now self contained. – Cain Jan 16 at 11:20
• Thank you for the update! However, colt2020.cls is no standard LaTeX class, so you should please provide a link to it. – campa Jan 16 at 11:23
You can revert the patch that the class is trying to make to \includegraphics
\documentclass{jmlr}
\makeatletter
\let\Ginclude@graphics\@org@Ginclude@graphics
\makeatother
\begin{document}
\begin{figure}
\includegraphics[width=\columnwidth] {example-image}
\end{figure}
\end{document}
Should work as long as you do not need the feature of having an alternative gray-scale version of your images for print versions rather than rely on automatic conversion of colour images to print.
The above patch is from Nicola Talbot, the jmlr class author. She will look into why the patch that the class makes is failing in recent latex releases. (The core file handling code changed in the 2019 latex release to cope with UTF-8 characters in filenames and to cope with filenames with spaces and multiple dots.)
• It works! Thanks! Though I find it weird that other machines do manage to compile graphics using the original class. – Cain Jan 16 at 12:34
• @Cain they will have an older latex, but I contacted Nicola and I have a better answer, look in a minute, I'm editing.. – David Carlisle Jan 16 at 13:10 | 2020-04-03 17:04:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7083913087844849, "perplexity": 2467.973914038814}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00106.warc.gz"} |
https://chesterrep.openrepository.com/handle/10034/305487/browse?view=list&rpp=20&offset=0&etal=-1&sort_by=3&type=dateaccessioned&order=DESC | Now showing items 1-20 of 609
• #### The Potential of Incremental Forming Techniques for Aerospace Applications
Incremental sheet metal forming (ISF) processes are part of a set of non-classical techniques that allow the production of low batches, customized and/or specific geometries for advanced engineering applications, such as aerospace, automotive and biomedical parts. Whether or not combined with other joining processes and additive manufacturing techniques, ISF processes permit rapid prototyping frameworks and can be included in the class of smart manufacturing processes. This chapter discusses the fundamentals of ISF technology, key attributes and future challenges, and presents a few examples related to the use of incremental forming for the development of complex parts as specifically found in aerospace applications such as aerofoils. The use of incremental forming to produce customized designs and to perform quick try-outs of ready-to-use parts contributes to decreasing the time to market, decreasing tooling cost and increasing part design freedom.
• #### New binary self-dual codes of lengths 56, 58, 64, 80 and 92 from a modification of the four circulant construction.
In this work, we give a new technique for constructing self-dual codes over commutative Frobenius rings using $\lambda$-circulant matrices. The new construction was derived as a modification of the well-known four circulant construction of self-dual codes. Applying this technique together with the building-up construction, we construct singly-even binary self-dual codes of lengths 56, 58, 64, 80 and 92 that were not known in the literature before. Singly-even self-dual codes of length 80 with $\beta \in \{2,4,5,6,8\}$ in their weight enumerators are constructed for the first time in the literature.
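For intuition, a $\lambda$-circulant matrix (the building block named in the abstract) is an ordinary circulant whose wrapped entries pick up a factor of $\lambda$; a small illustrative Python sketch over the integers ($\lambda = 1$ recovers the usual circulant):

```python
def lam_circulant(first_row, lam):
    # Each row is the previous one shifted one place right, with the
    # entry that wraps around multiplied by lam.
    n = len(first_row)
    rows = [list(first_row)]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([lam * prev[-1]] + prev[:-1])
    return rows
```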
• #### Design and finite element simulation of metal-core piezoelectric fiber/epoxy matrix composites for virus detection
Undoubtedly, the coronavirus disease 2019 (COVID-19) has received the greatest concern with a global impact, and this situation will continue for a long period of time. Looking back in history, airborne transmission diseases have caused huge casualties several times. COVID-19 as a typical airborne disease caught our attention and reminded us of the importance of preventing such diseases. Therefore, this study focuses on finding a new way to guard against the spread of these diseases such as COVID-19. This paper studies the dynamic electromechanical response of metal-core piezoelectric fiber/epoxy matrix composites, designed as mass load sensors for virus detection, by numerical modelling. The dynamic electromechanical response is simulated by applying an alternating current (AC) electric field to make the composite vibrate. Furthermore, both concentrated and distributed loads are considered to assess the sensitivity of the biosensor during modelling of the combination of both biomarker and viruses. The design parameters of this sensor, such as the resonant frequency, the position and size of the biomarker, will be studied and optimized as the key values to determine the sensitivity of detection. The novelty of this work is to propose functional composites that can detect the viruses from changes of the output voltage instead of the resonant frequency change using piezoelectric sensor and piezoelectric actuator. The contribution of this detection method will significantly shorten the detection time as it avoids fast Fourier transform (FFT) or discrete Fourier transform (DFT). The outcome of this research offers a reliable numerical model to optimize the design of the proposed biosensor for virus detection, which will contribute to the production of high-performance piezoelectric biosensors in the future.
• #### Composite Matrices from Group Rings, Composite G-Codes and Constructions of Self-Dual Codes
In this work, we define composite matrices which are derived from group rings. We extend the idea of G-codes to composite G-codes. We show that these codes are ideals in a group ring, where the ring is a finite commutative Frobenius ring and G is an arbitrary finite group. We prove that the dual of a composite G-code is also a composite G-code. We also define quasi-composite G-codes. Additionally, we study generator matrices, which consist of the identity matrices and the composite matrices. Together with the generator matrices, the well known extension method, the neighbour method and its generalization, we find extremal binary self-dual codes of length 68 with new weight enumerators for the rare parameters $\gamma = 7, 8$ and $9$. In particular, we find 49 new such codes. Moreover, we show that the codes we find are inaccessible from other constructions.
• #### High order algorithms for numerical solution of fractional differential equations
In this paper, two novel high order numerical algorithms are proposed for solving fractional differential equations where the fractional derivative is considered in the Caputo sense. The total domain is discretized into a set of small subdomains and then the unknown functions are approximated using the piecewise Lagrange interpolation polynomial of degree three and degree four. The detailed error analysis is presented, and it is analytically proven that the proposed algorithms are of orders 4 and 5. The stability of the algorithms is rigorously established and the stability region is also achieved. Numerical examples are provided to check the theoretical results and illustrate the efficiency and applicability of the novel algorithms.
• #### Terahertz reading of ferroelectric domain wall dielectric switching
Ferroelectric domain walls (DWs) are important nano scale interfaces between two domains. It is widely accepted that ferroelectric domain walls work idly at terahertz (THz) frequencies, consequently discouraging efforts to engineer the domain walls to create new applications that utilise THz radiation. However, the present work clearly demonstrates the activity of domain walls at THz frequencies in a lead free Aurivillius phase ferroelectric ceramic, Ca0.99Rb0.005Ce0.005Bi2Nb2O9, examined using THz time domain spectroscopy (THz-TDS). The dynamics of domain walls are different at kHz and THz frequencies. At low frequencies, domain walls work as a group to increase dielectric permittivity. At THz frequencies, the defective nature of domain walls serves to lower the overall dielectric permittivity. This is evidenced by higher dielectric permittivity in the THz band after poling, reflecting decreased domain wall density. An elastic vibrational model has also been used to verify that a single frustrated dipole in a domain wall represents a weaker contribution to the permittivity than its counterpart within a domain. The work represents a fundamental breakthrough in understanding dielectric contributions of domain walls at THz frequencies. It also demonstrates that THz probing can be used to read domain wall dielectric switching.
• #### G-Codes, self-dual G-Codes and reversible G-Codes over the Ring Bj,k
In this work, we study a new family of rings, Bj,k, whose base field is the finite field Fpr . We study the structure of this family of rings and show that each member of the family is a commutative Frobenius ring. We define a Gray map for the new family of rings, study G-codes, self-dual G-codes, and reversible G-codes over this family. In particular, we show that the projection of a G-code over Bj,k to a code over Bl,m is also a G-code and the image under the Gray map of a self-dual G-code is also a self-dual G-code when the characteristic of the base field is 2. Moreover, we show that the image of a reversible G-code under the Gray map is also a reversible G2j+k-code. The Gray images of these codes are shown to have a rich automorphism group which arises from the algebraic structure of the rings and the groups. Finally, we show that quasi-G codes, which are the images of G-codes under the Gray map, are also Gs-codes for some s.
• #### Enhanced design of an offgrid PV-battery-methanation hybrid energy system for power/gas supply
Extensive studies have been carried out on various hybrid energy systems (HESs) for providing electricity to off-grid areas. However, a standalone HES that is capable of providing power and gas has been less studied. In this paper, a standalone Photovoltaic (PV)-battery-methanation HES is proposed to provide adequate, reliable and cost-effective electricity and gas to the local consumers. Identifying a solution that maximizes the system reliability demanded by consumers while minimizing the costs borne by investors is challenging. Bi-level programming is adopted in this study to tackle the aforementioned issue. In the outer layer, an optimal design is obtained by means of particle swarm optimization. In the inner layer, an optimal operation strategy is found under the optimal design of the outer layer using sequential quadratic programming. The results indicate that 1) the bi-level programming used in this study can find the optimal solution; 2) the proposed HES is proved to be able to supply power and gas simultaneously; 3) compared with the rightmost and leftmost points on the Pareto set, the total costs are reduced by 17.77% and 2.16%.
• #### Group rings: Units and their applications in self-dual codes
The initial research presented in this thesis is the structure of the unit group of the group ring Cn x D6 over a field of characteristic 3 in terms of cyclic groups, specifically U(F3t(Cn x D6)). There are numerous applications of group rings, such as topology, geometry and algebraic K-theory, but more recently in coding theory. Following the initial work on establishing the unit group of a group ring, we take a closer look at the use of group rings in algebraic coding theory in order to construct self-dual and extremal self-dual codes. Using a well established isomorphism between a group ring and a ring of matrices, we construct certain self-dual and formally self-dual codes over a finite commutative Frobenius ring. There are interesting relationships between the automorphism group of the code produced and the underlying group in the group ring. Building on the theory, we describe all possible group algebras that can be used to construct the well-known binary extended Golay code. The double circulant construction is a well-known technique for constructing self-dual codes; combining this with the established isomorphism previously mentioned, we demonstrate a new technique for constructing self-dual codes. New theory states that under certain conditions, these self-dual codes correspond to unitary units in group rings. Currently, using methods discussed, we construct 10 new extremal self-dual codes of length 68. In the search for new extremal self-dual codes, we establish a new technique which considers a double bordered construction. There are certain conditions where this new technique will produce self-dual codes, which are given in the theoretical results. Applying this new construction, we construct numerous new codes to verify the theoretical results; 1 new extremal self-dual code of length 64, 18 new codes of length 68 and 12 new extremal self-dual codes of length 80.
Using the well-established isomorphism and the common four block construction, we consider a new technique in order to construct self-dual codes of length 68. The theoretical results state the conditions under which this construction yields self-dual codes, along with some interesting links between the group ring elements and the construction. From this technique, we construct 32 new extremal self-dual codes of length 68. Lastly, we consider a construction combining block circulant matrices and quadratic circulant matrices. Here, we provide theory surrounding this construction and conditions for full effectiveness of the method. Finally, we present the 52 new self-dual codes that result from this method: 1 new self-dual code of length 66 and 51 new self-dual codes of length 68. Note that different weight enumerators depend on different values of β. In addition, for codes of length 68 the weight enumerator is also defined in terms of γ, and for codes of length 80 the weight enumerator is also defined in terms of α.
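As background for the double circulant construction mentioned above: a pure double circulant code has generator matrix G = [I | A] with A circulant, and over F2 such a code is self-dual exactly when A·Aᵀ = I (mod 2), since G·Gᵀ = I + A·Aᵀ. A small illustrative check in Python (the example first row is chosen here purely for illustration and is unrelated to the new codes in the thesis):

```python
def circulant(first_row):
    # Build a circulant matrix: each row is the previous one shifted
    # cyclically one place to the right.
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def is_self_dual_double_circulant(first_row):
    # G = [I | A]; over F2 the code generated by G is self-dual iff
    # G.G^T = 0 (mod 2), which reduces to A.A^T = I (mod 2).
    A = circulant(first_row)
    n = len(A)
    for i in range(n):
        for j in range(n):
            dot = sum(A[i][k] * A[j][k] for k in range(n)) % 2
            if dot != (1 if i == j else 0):
                return False
    return True

# First row (1,1,1,0): its cyclic autocorrelations are odd at shift 0
# and even at all other shifts, so [I | A] generates a self-dual [8,4] code.
assert is_self_dual_double_circulant([1, 1, 1, 0])
```

Changing the first row to one whose cyclic autocorrelations violate the condition, e.g. (1, 1, 0, 0), makes the check fail, matching the "certain conditions" flavour of the theoretical results.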
• #### Numerical methods for deterministic and stochastic fractional partial differential equations
In this thesis we will explore the numerical methods for solving deterministic and stochastic space and time fractional partial differential equations. Firstly we consider Fourier spectral methods for solving some linear stochastic space fractional partial differential equations perturbed by space-time white noises in the one dimensional case. The space fractional derivative is defined by using the eigenvalues and eigenfunctions of the Laplacian subject to some boundary conditions. We approximate the space-time white noise by using piecewise constant functions and obtain the approximated stochastic space fractional partial differential equations, which are then solved by using Fourier spectral methods. Secondly we consider Fourier spectral methods for solving stochastic space fractional partial differential equations driven by special additive noises in the one dimensional case. The space fractional derivative is again defined by using the eigenvalues and eigenfunctions of the Laplacian subject to some boundary conditions. The space-time noise is approximated by piecewise constant functions in the time direction and by appropriate approximations in the space direction. The approximated stochastic space fractional partial differential equation is then solved by using Fourier spectral methods. Thirdly, we consider discontinuous Galerkin time stepping methods for solving linear space fractional partial differential equations. The space fractional derivatives are defined by using the Riesz fractional derivative. The space variable is discretized by means of a Galerkin finite element method and the time variable is discretized by the discontinuous Galerkin method. The approximate solution is sought as a piecewise polynomial function in t of degree at most q−1, q ≥ 1, which is not necessarily continuous at the nodes of the defining partition.
The error estimates in the fully discrete case are obtained and numerical examples are given. Finally, we consider error estimates for the modified L1 scheme for solving time fractional partial differential equations. Jin et al. (2016, An analysis of the L1 scheme for the subdiffusion equation with nonsmooth data, IMA J. Numer. Anal., 36, 197-221) established the O(k) convergence rate for the L1 scheme for both smooth and nonsmooth initial data. We introduce a modified L1 scheme and prove that the convergence rate is O(k^(2−α)), 0 < α < 1, for both smooth and nonsmooth initial data. We first write the time fractional partial differential equation as a Volterra integral equation which is then approximated by using the convolution quadrature with some special generating functions. A Laplace transform method is used to prove the error estimates for the homogeneous time fractional partial differential equation for both smooth and nonsmooth data. Numerical examples are given to show that the numerical results are consistent with the theoretical results.
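For orientation, the classical L1 approximation of the Caputo derivative of order 0 < α < 1 on a uniform grid t_n = nk takes the standard form (stated here from the general literature, not quoted from the thesis):

```latex
\partial_t^{\alpha} u(t_n) \;\approx\; \frac{k^{-\alpha}}{\Gamma(2-\alpha)}
\sum_{j=0}^{n-1} b_j \left( u^{\,n-j} - u^{\,n-j-1} \right),
\qquad b_j = (j+1)^{1-\alpha} - j^{1-\alpha}.
```

Broadly speaking, the modified scheme referred to above adjusts the starting weights of this quadrature so that the O(k) barrier for nonsmooth initial data is lifted to O(k^(2−α)).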
• #### The multi-dimensional Stochastic Stefan Financial Model for a portfolio of assets
The financial model proposed in this work involves the liquidation process of a portfolio of n assets through sell or (and) buy orders placed, in a logarithmic scale, at a (vectorial) price with volatility. We present the rigorous mathematical formulation of this model in a financial setting, resulting in an n-dimensional outer parabolic Stefan problem with noise. The moving boundary encloses the areas of zero trading, the so-called solid phase. We focus on a case of financial interest when one or more markets are considered. In particular, our aim is to estimate, for a short time period, the areas of zero trading and their diameter, which approximates the minimum of the n spreads of the portfolio assets for orders from the n limit order books of each asset respectively. In dimension n = 3, and for zero volatility, this problem stands as a mean field model for Ostwald ripening, and has been proposed and analyzed by Niethammer in [25], and in [7] in a more general setting. Therein, when the initial moving boundary consists of well separated spheres, a first order approximation system of ODEs had been rigorously derived for the dynamics of the interfaces and the asymptotic profile of the solution. In our financial case, we propose a spherical moving boundaries approach where the zero trading area consists of a union of spherical domains centered at the portfolio's various prices, while each sphere may correspond to a different market; the relevant radii represent half of the minimum spread. We apply Itô calculus and provide second order formal asymptotics for the stochastic version dynamics, written as a system of stochastic differential equations for the evolution of the radii in time. A second order approximation seems to disconnect the financial model from the large diffusion assumption for the trading density. Moreover, we solve the approximating systems numerically.
• #### The United Kingdom Ministry of Defence and the European Union's electrical and electronic equipment directives
The growth of the generation of Electrical and Electronic Equipment (EEE), and the use of hazardous substances in the production of these items, has required legislation to minimise the harm to the environment that their existing use, ultimate disposal and the continued growth of the sector may pose. The European Union (EU) started to tackle this problem with the passing of two Directives in 2002, which focused on restricting the use of hazardous substances (RoHS - 2002/95/EC) and organising the recycling or disposal of discarded electronic and electrical equipment (WEEE - 2002/96/EC). These Directives have recently been recast and their scope widened; however, one exception to them remains: items specifically designed for defence and military purposes. This paper looks at how and why these European Directives were passed, the impact they have had on defence in the United Kingdom (UK) to date, what impact the further extension of those Directives might have on UK defence policy, and how the UK Ministry of Defence (MOD) has begun to prepare for any extension, including the use of alternative products from the commercial market and substituting less harmful materials. The paper reviews the information available to carry out future decision making and what level of decision making it can support. Where the data is insufficient, it makes recommendations on actions to take for improvement.
• #### Will Future Resource Demand Cause Significant and Unpredictable Dislocations for the UK Ministry of Defence?
This paper focuses on the drivers which may affect future trends in material availability for defence, in particular, the availability of rare earth elements (REE). These drivers include resource concentration, tighter regulatory policy and its enforcement, export policies, their use in economic statecraft, increases in domestic demand, promoting greater efficiency in resource use, efforts to mitigate resource depletion and more efficient resource extraction while reducing its associated environmental impact. It looks at the effect these factors might have on global systems and supply chains, the impact on material insecurity and how this may exacerbate the issue of their use in UK military equipment. It finds that these drivers are likely to have an increasing impact on material availability (if measures are not taken to mitigate them), which will have consequences for the provision of military capability by the UK.
• #### Talos: a prototype Intrusion Detection and Prevention system for profiling ransomware behaviour
Abstract: In this paper, we profile the behaviour and functionality of multiple recent variants of WannaCry and CrySiS/Dharma through static and dynamic malware analysis. We then analyse and detail the commonly occurring behavioural features of ransomware. These features are utilised to develop a prototype Intrusion Detection and Prevention System (IDPS) named Talos, which comprises several detection mechanisms/components. Benchmarking is then performed to test and validate the performance of the proposed Talos IDPS, and the results are discussed in detail. It is established that the Talos system can successfully detect all ransomware variants tested, in an average of 1.7 seconds, and instigate remedial action in a timely manner following first detection. The paper concludes with a summary of our main findings and a discussion of potential future work to allow the effective detection and prevention of ransomware on systems and networks.
• #### Computational simulation of the damage response for machining long fibre reinforced plastic (LFRP) composite parts: A review
Long fibre reinforced plastics (LFRPs) possess excellent mechanical properties and are widely used in the aerospace, transportation and energy sectors. However, their anisotropic and inhomogeneous characteristics as well as their low thermal conductivity and specific heat capacity make them prone to subsurface damage, delamination and thermal damage during the machining process, which seriously reduces the bearing capacity and shortens the service life of the components. To improve the processing quality of composites, finite element (FE) models were developed to investigate the material removal mechanism and to analyse the influence of the processing parameters on the damage. A review of current studies on composite processing modelling could significantly help researchers to understand failure initiation and development during machining and thus inspire scholars to develop new models with high prediction accuracy and computational efficiency as well as a wide range of applications. To this aim, this review paper summarises the development of LFRP machining simulations reported in the literature and the factors that can be considered in model improvement. Specifically, the existing numerical models that simulate the mechanical and thermal behaviours of LFRPs and LFRP-metal stacks in orthogonal cutting, drilling and milling are analysed. The material models used to characterise the constituent phases of the LFRP parts are reviewed. The mechanism of material removal and the damage responses during the machining of LFRP laminates under different tool geometries and processing parameters are discussed. In addition, novel and objective evaluations that concern the current simulation studies are conducted to summarise their advantages. Aspects that could be improved are further detailed, to provide suggestions for future research relating to the simulation of LFRP machining.
• #### Numerical approximation of the Stochastic Cahn-Hilliard Equation near the Sharp Interface Limit
Abstract. We consider the stochastic Cahn-Hilliard equation with an additive noise term whose strength scales with the interfacial width parameter ε through an exponent γ. We verify strong error estimates for a gradient flow structure-inheriting time-implicit discretization, where ε only enters polynomially; the proof is based on higher-moment estimates for iterates and a (discrete) spectral estimate for its deterministic counterpart. For γ sufficiently large, convergence in probability of iterates towards the deterministic Hele-Shaw/Mullins-Sekerka problem in the sharp-interface limit ε → 0 is shown. These convergence results are partly generalized to a fully discrete finite element based discretization. We complement the theoretical results by computational studies to provide practical evidence concerning the effect of noise (depending on its 'strength' γ) on the geometric evolution in the sharp-interface limit. For this purpose we compare the simulations with those from a fully discrete finite element numerical scheme for the (stochastic) Mullins-Sekerka problem. The computational results indicate that the limit for γ ≥ 1 is the deterministic problem, and for γ = 0 we obtain agreement with a (new) stochastic version of the Mullins-Sekerka problem.
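For readers unfamiliar with the scaling, a common form of the stochastically perturbed Cahn-Hilliard equation in the sharp-interface literature reads (stated for orientation only; the paper's precise setting may differ):

```latex
\partial_t u \;=\; \Delta\!\left( -\varepsilon\,\Delta u + \varepsilon^{-1} f(u) \right)
\;+\; \varepsilon^{\gamma}\,\dot{W},
```

where f is the derivative of a double-well potential, e.g. f(u) = u^3 − u, and W is a Wiener process. Larger γ means the noise vanishes faster as ε → 0, which is consistent with the deterministic Mullins-Sekerka limit reported above for γ ≥ 1.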
• #### Ultrafast Electric Field-induced Phase Transition in Bulk Bi0.5Na0.5TiO3 under High Intensity Terahertz Irradiation
Ultrafast polarization switching is being considered for the next generation of ferroelectric based devices. Until recently, the dynamics of the field-induced transitions associated with this switching have been difficult to explore due to technological limitations. The advent of terahertz (THz) technology has now allowed these dynamic processes to be studied on the picosecond (ps) scale. In this paper, intense THz pulses were used as a high-frequency electric field to investigate ultrafast switching in the relaxor ferroelectric Bi0.5Na0.5TiO3. Transient atomic-scale responses, which were evident as changes in reflectivity, were captured by THz probing. The high energy THz pulses induce an increase in reflectivity, associated with an ultrafast field-induced phase transition from a weakly polar phase (Cc) to a strongly polar phase (R3c), within 20 ps at 200 K. This phase transition was confirmed using X-ray powder diffraction and by electrical measurements, which showed a decrease in the frequency dispersion of relative permittivity at low frequencies.
• #### Design, Synthesis and Evaluation of New Bioactive Oxadiazole Derivatives as Anticancer Agents Targeting Bcl-2
A series of 2-(1H-indol-3-yl)-5-substituted-1,3,4-oxadiazoles, 4a–m, were designed, synthesized and tested in vitro as potential pro-apoptotic Bcl-2 inhibitory anticancer agents based on our previously reported hit compounds. Synthesis of the target 1,3,4-oxadiazoles was readily accomplished through a cyclization reaction of indole carboxylic acid hydrazide 2 with substituted carboxylic acid derivatives 3a–m in the presence of phosphorus oxychloride. New compounds 4a–m showed a range of IC50 values concentrated in the low micromolar range selectively in Bcl-2 positive human cancer cell lines. The most potent candidate 4-trifluoromethyl substituted analogue 4j showed selective IC50 values of 0.52–0.88 μM against Bcl-2 expressing cell lines with no inhibitory effects in the Bcl-2 negative cell line. Moreover, 4j showed binding that was two-fold more potent than the positive control gossypol in the Bcl-2 ELISA binding affinity assay. Molecular modeling studies helped to further rationalize anti-apoptotic Bcl-2 binding and identified compound 4j as a candidate with drug-like properties for further investigation as a selective Bcl-2 inhibitory anticancer agent. 
https://carpentries-incubator.github.io/metagenomics/
# Data Processing and Visualization for Metagenomics
A lot of metagenomics analysis is done using command-line tools for three reasons:
1) You will often be working with a large number of files, and working through the command-line rather than through a graphical user interface (GUI) allows you to automate repetitive tasks.
2) You will often need more computing power than is available on your personal computer, and connecting to and interacting with remote computers requires a command-line interface.
3) You will often need to customize your analyses, and command-line tools often enable more customization than the corresponding GUI tools (if a GUI tool even exists).
In a previous lesson, you learned how to use the bash shell to interact with your computer through a command-line interface. In this lesson, you will apply this knowledge to carry out a common metagenomics workflow: identifying Operational Taxonomic Units (OTUs) among samples taken from two metagenomes within a location. We will start with a set of sequenced reads (.fastq files), perform some quality control steps, assemble those reads into contigs, and finish by identifying and visualizing the OTUs among these samples.
As you progress through this lesson, keep in mind that, even if you aren’t going to be doing this same workflow in your research, you will be learning some very important lessons about using command-line bioinformatics tools. What you are going to learn here will enable you to use a variety of bioinformatics tools with confidence and greatly enhance your research efficiency and productivity.
## Prerequisites
This lesson assumes a working understanding of the bash shell. If you haven’t already completed the Shell metagenomics lesson, and you aren’t familiar with the bash shell, please review those materials before starting this lesson.
This lesson also assumes some familiarity with biological concepts, including the structure of DNA, nucleotide abbreviations, and the concepts microbiome and taxonomy.
This lesson uses data hosted on an Amazon Machine Instance (AMI). Workshop participants will be given information on how to log-in to the AMI during the workshop. Learners using these materials for self-directed study will need to set up their own AMI. Information on setting up an AMI and accessing the required data is provided on the Metagenomics Workshop setup page.
## Things You Need To Know
1. Stay calm, don’t panic.
2. Everything is going to be fine.
3. We are learning together.
This is the fourth lesson of the Metagenomics Workshop comprised of four lessons in total.
## Lesson Reference
Episodes 2. Assessing Read Quality and 3. Trimming and Filtering are adapted from the corresponding episodes in the Data Wrangling and Processing for Genomics lesson.
## Schedule
| Time | Episode | Key questions |
| --- | --- | --- |
| — | Setup | Download files required for the lesson |
| 00:00 | 1. Starting a Metagenomics Project | How do you plan a metagenomics experiment? What does a metagenomics project look like? |
| 00:30 | 2. Assessing Read Quality | How can I describe the quality of my data? |
| 01:20 | 3. Trimming and Filtering | How can we get rid of sequence data that doesn't meet our quality standards? |
| 02:15 | 4. Metagenome Assembly | Why should genomic data be assembled? What is the difference between reads and contigs? How can we assemble a metagenome? |
| 02:55 | 5. Metagenome Binning | How can we obtain the original genomes from a metagenome? |
| 03:55 | 6. Taxonomic Assignment | How can I know to which taxa my sequences belong? |
| 04:40 | 7. Exploring Taxonomy with R | How can I use my taxonomic assignment results to make analyses? |
| 05:05 | 8. Diversity Tackled With R | How can we measure diversity? How can I use R to analyze diversity? |
| 05:55 | 9. Taxonomic Analysis with R | How can we know which taxa are in our samples? How can we compare depth-contrasting samples? How can we manipulate our data to deliver a message? |
| 06:55 | 10. Other Resources | Where are other metagenomic resources? How can lessons be previewed? |
| 07:00 | Finish | |
The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.
http://mathematica.stackexchange.com/questions/17757/correct-way-to-generate-large-data-sets-i-e-forward-yield-curve | # Correct way to generate large data sets (i.e.forward yield curve )
I would like to generate a forward yield curve matrix of size 1000 x 100. First I defined my SparseArray of size 1000 x 100:
(forwardYieldCurve=Normal[SparseArray[{{1,1}->0,{1000,100}-> 0}]])//MatrixForm;
then initialized the first row of forwardYieldCurve using:
Table[forwardYieldCurve[[1,j]]=tenor0[[1,j]]+driftM[[1,j]]tstep+
(volFit1[[1,j]]dX[[1,1]]+volFit2[[1,j]]dX[[1,2]]+volFit3[[1,j]]dX[[1,3]])Sqrt[tstep]
+((tenor0[[1,j+1]]-tenor0[[1,j]])/(dateArray[[1,j+1]]-dateArray[[1,j]]))tstep,{j,99}];//AbsoluteTiming
then, for the second row onward, iterate with i up to 100 rows of the forwardYieldCurve matrix:
Table[forwardYieldCurve[[i+1,j]]=forwardYieldCurve[[i,j]]+driftM[[1,j]]tstep+
(volFit1[[1,j]]dX[[i+1,1]]+volFit2[[1,j]]dX[[i+1,2]]+volFit3[[1,j]]dX[[i+1,3]])Sqrt[tstep]+
((forwardYieldCurve[[i+1,j+1]]-forwardYieldCurve[[i+1,j]])/(dateArray[[1,j+1]]-dateArray[[1,j]]))tstep,{j,99},{i,100}];//AbsoluteTiming
This takes around 4 minutes to obtain a 100 x 100 result, which projects to around 40 minutes for a single 1000 x 100 simulation. Furthermore, I will repeat this many times to get a statistically meaningful Monte Carlo distribution. How can I optimize this to reduce the run time?
My input data dimensions:
forwardYieldCurve -> {1000,100}
tenor0={{0.0050399,0.00537318,0.00578648,0.00614997,0.00633987,0.00637105,0.00632311,0.00625459,0.00622594,0.00631663,0.0065289,0.00679745,0.00706621,0.00731132,0.0075159,0.00766905,0.00778107,0.00786696,0.00793966,0.00800508,0.00806759,0.00813158,0.00820143,0.00828151,0.00837543,0.00848368,0.00860596,0.00874199,0.00889147,0.00905412,0.00922964,0.00941775,0.00961814,0.00983054,0.0100546,0.0102902,0.0105368,0.0107941,0.0110615,0.0113385,0.0116248,0.0119197,0.0122228,0.0125336,0.0128516,0.0131763,0.0135073,0.013844,0.0141859,0.0145327,0.0148838,0.0152389,0.0155975,0.0159592,0.0163236,0.0166903,0.0170588,0.0174287,0.0177995,0.0181709,0.0203931,0.0225666,0.0246436,0.0265946,0.0283977,0.0300428,0.0315247,0.0328461,0.0340124,0.035033,0.0359187,0.0366814,0.0373332,0.0378862,0.0383519,0.0387395,0.0390575,0.0393143,0.0395184,0.0396782,0.0398011,0.0398898,0.0399458,0.0399704,0.0399652,0.0399316,0.039871,0.0397848,0.0396746,0.0395418,0.0393879,0.0392142,0.0390222,0.0388134,0.0385892,0.0383511,0.0381006,0.037839,0.0375678,0.0372885}}
driftM = {{4.29874*10^-6,8.59748*10^-6,0.0000128962,0.000017195,0.0000214937,0.0000257924,0.0000300912,0.0000343899,0.0000386887,0.0000429874,0.0000472861,0.0000515849,0.0000558836,0.0000601824,0.0000644811,0.0000687798,0.0000730786,0.0000773773,0.000081676,0.0000859748,0.0000902735,0.0000945723,0.000098871,0.00010317,0.000107468,0.000111767,0.000116066,0.000120365,0.000124663,0.000128962,0.000133261,0.00013756,0.000141858,0.000146157,0.000150456,0.000154755,0.000159053,0.000163352,0.000167651,0.00017195,0.000176248,0.000180547,0.000184846,0.000189144,0.000193443,0.000197742,0.000202041,0.000206339,0.000210638,0.000214937,0.000219236,0.000223534,0.000227833,0.000232132,0.000236431,0.000240729,0.000245028,0.000249327,0.000253626,0.000257924,0.000283717,0.000309509,0.000335302,0.000361094,0.000386886,0.000412679,0.000438471,0.000464264,0.000490056,0.000515849,0.000541641,0.000567433,0.000593226,0.000619018,0.000644811,0.000670603,0.000696396,0.000722188,0.000747981,0.000773773,0.000799565,0.000825358,0.00085115,0.000876943,0.000902735,0.000928528,0.00095432,0.000980113,0.0010059,0.0010317,0.00105749,0.00108328,0.00110907,0.00113487,0.00116066,0.00118645,0.00121224,0.00123804,0.00126383,0.00128962}}
tstep = 0.01
volFit1={{0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226,0.00718226}}
volFit2={{-5.97435*10^-6,-5.77847*10^-6,-5.58514*10^-6,-5.39435*10^-6,-5.20606*10^-6,-5.02029*10^-6,-4.83699*10^-6,-4.65618*10^-6,-4.47782*10^-6,-4.3019*10^-6,-4.12841*10^-6,-3.95734*10^-6,-3.78867*10^-6,-3.62238*10^-6,-3.45846*10^-6,-3.2969*10^-6,-3.13768*10^-6,-2.98078*10^-6,-2.8262*10^-6,-2.67392*10^-6,-2.52391*10^-6,-2.37618*10^-6,-2.23069*10^-6,-2.08745*10^-6,-1.94643*10^-6,-1.80762*10^-6,-1.671*10^-6,-1.53656*10^-6,-1.40429*10^-6,-1.27417*10^-6,-1.14618*10^-6,-1.02032*10^-6,-8.9656*10^-7,-7.74893*10^-7,-6.55303*10^-7,-5.37775*10^-7,-4.22296*10^-7,-3.08849*10^-7,-1.97421*10^-7,-8.79963*10^-8,1.94392*10^-8,1.249*10^-7,2.28402*10^-7,3.29958*10^-7,4.29584*10^-7,5.27294*10^-7,6.23103*10^-7,7.17026*10^-7,8.09077*10^-7,8.9927*10^-7,9.87622*10^-7,1.07415*10^-6,1.15886*10^-6,1.24177*10^-6,1.3229*10^-6,1.40226*10^-6,1.47986*10^-6,1.55573*10^-6,1.62987*10^-6,1.7023*10^-6,2.10178*10^-6,2.44338*10^-6,2.73027*10^-6,2.96562*10^-6,3.1526*10^-6,3.29436*10^-6,3.39409*10^-6,3.45494*10^-6,3.4801*10^-6,3.47271*10^-6,3.43596*10^-6,3.37301*10^-6,3.28703*10^-6,3.18118*10^-6,3.05863*10^-6,2.92256*10^-6,2.77613*10^-6,2.6225*10^-6,2.46485*10^-6,2.30635*10^-6,2.15015*10^-6,1.99943*10^-6,1.85736*10^-6,1.72711*10^-6,1.61183*10^-6,1.51471*10^-6,1.43891*10^-6,1.38759*10^-6,1.36392*10^-6,1.37108*10^-6,1.41223*10^-6,1.49054*10^-6,1.60917*10^-6,1.77129*10^-6,1.98008*10^-6,2.2387*10^-6,2.55031*10^-6,2.91809*10^-6,3.3452*10^-6,3.83481*10^-6}}
volFit3={{1.85601*10^-6,1.86013*10^-6,1.8634*10^-6,1.86583*10^-6,1.86742*10^-6,1.8682*10^-6,1.86815*10^-6,1.86729*10^-6,1.86563*10^-6,1.86316*10^-6,1.85991*10^-6,1.85587*10^-6,1.85105*10^-6,1.84546*10^-6,1.8391*10^-6,1.83199*10^-6,1.82412*10^-6,1.81551*10^-6,1.80616*10^-6,1.79608*10^-6,1.78528*10^-6,1.77375*10^-6,1.76152*10^-6,1.74858*10^-6,1.73494*10^-6,1.72062*10^-6,1.70561*10^-6,1.68992*10^-6,1.67356*10^-6,1.65653*10^-6,1.63885*10^-6,1.62052*10^-6,1.60154*10^-6,1.58193*10^-6,1.56169*10^-6,1.54082*10^-6,1.51934*10^-6,1.49724*10^-6,1.47454*10^-6,1.45125*10^-6,1.42737*10^-6,1.4029*10^-6,1.37786*10^-6,1.35224*10^-6,1.32607*10^-6,1.29933*10^-6,1.27205*10^-6,1.24422*10^-6,1.21586*10^-6,1.18697*10^-6,1.15756*10^-6,1.12763*10^-6,1.09719*10^-6,1.06625*10^-6,1.03481*10^-6,1.00289*10^-6,9.70481*10^-7,9.37598*10^-7,9.04246*10^-7,8.70433*10^-7,6.58225*10^-7,4.31082*10^-7,1.90459*10^-7,-6.21896*10^-8,-3.25408*10^-7,-5.97743*10^-7,-8.77739*10^-7,-1.16394*10^-6,-1.45489*10^-6,-1.74914*10^-6,-2.04524*10^-6,-2.34172*10^-6,-2.63713*10^-6,-2.93002*10^-6,-3.21893*10^-6,-3.50241*10^-6,-3.77901*10^-6,-4.04726*10^-6,-4.30572*10^-6,-4.55292*10^-6,-4.78742*10^-6,-5.00776*10^-6,-5.21248*10^-6,-5.40014*10^-6,-5.56927*10^-6,-5.71842*10^-6,-5.84613*10^-6,-5.95096*10^-6,-6.03144*10^-6,-6.08613*10^-6,-6.11356*10^-6,-6.11228*10^-6,-6.08084*10^-6,-6.01778*10^-6,-5.92165*10^-6,-5.791*10^-6,-5.62436*10^-6,-5.42028*10^-6,-5.17731*10^-6,-4.894*10^-6}}
randomWalkPCA[n_]:= RandomVariate[NormalDistribution[0,1],n];
RandVarPCA[mcRun_]:=Table[randomWalkPCA[3],{mcRun}];
(dX:=RandVarPCA[1000])//MatrixForm;
dateArray={{0.0833333,0.166667,0.25,0.333333,0.416667,0.5,0.583333,0.666667,0.75,0.833333,0.916667,1.,1.08333,1.16667,1.25,1.33333,1.41667,1.5,1.58333,1.66667,1.75,1.83333,1.91667,2.,2.08333,2.16667,2.25,2.33333,2.41667,2.5,2.58333,2.66667,2.75,2.83333,2.91667,3.,3.08333,3.16667,3.25,3.33333,3.41667,3.5,3.58333,3.66667,3.75,3.83333,3.91667,4.,4.08333,4.16667,4.25,4.33333,4.41667,4.5,4.58333,4.66667,4.75,4.83333,4.91667,5.,5.5,6.,6.5,7.,7.5,8.,8.5,9.,9.5,10.,10.5,11.,11.5,12.,12.5,13.,13.5,14.,14.5,15.,15.5,16.,16.5,17.,17.5,18.,18.5,19.,19.5,20.,20.5,21.,21.5,22.,22.5,23.,23.5,24.,24.5,25.}}
+1 for a well written and formatted question. It took a while, but you did it, and that's all that matters! :) – The Toad Jan 14 '13 at 14:37
Thanks, it is making sense... – sebastian c. Jan 14 '13 at 14:40
Try to redefine your RandomWalk function to: randomWalk[x_] := Accumulate[Prepend[RandomVariate[NormalDistribution[0, 1], x], 0]]. So if you need to generate 1000 Random Walks of Length[] 100 try this: ListLinePlot[Table[randomWalk[100], {1000}]]. It takes only 1.8 seconds here... – Rod Jan 14 '13 at 14:56
I am about to go to work so short on time but I think you might have gone off the rails with the opening line. The point of using a sparse array is that it uses less memory for large matrices and runs faster for calculations. Wrapping it in Normal makes it a "normal" matrix which seems to defeat the purpose (...other than making it easier to create a big matrix). If your matrix is truly sparse then try and work up a method that takes advantage of sparse array calculations. – Mike Honeychurch Jan 14 '13 at 20:22
I notice that the lists tenor0, driftm, the three volfits and dateArray are all wrapped in an extra layer of List. IMO your code would be a lot easier to read (and easier to optimise) if you stored 1D lists as 1D lists. – Simon Woods Jan 14 '13 at 23:12
Not a complete solution, but a few observations too detailed for a comment.
a) Firstly, make use of listability. Coincidentally, I mentioned this the other day as well. It is important because listable functions thread themselves through lists -- for want of a better description -- and as a rule perform their operations on lists much faster than comparable uses of Map or Table.
So for example this code fragment:
Table[tenor0[[1, j]] +
driftM[[1, j]]*
tstep + (volFit1[[1, j]]*dX[[1, 1]] + volFit2[[1, j]]*dX[[1, 2]] +
volFit3[[1, j]]*dX[[1, 3]])*Sqrt[tstep], {j, 99}]
can be re-written as
tenor0[[1, 1 ;; 99]] +
driftM[[1, 1 ;; 99]]*
tstep + (volFit1[[1, 1 ;; 99]]*dX[[1, 1]] +
volFit2[[1, 1 ;; 99]]*dX[[1, 2]] +
volFit3[[1, 1 ;; 99]]*dX[[1, 3]]) Sqrt[tstep]
You could also replace 99 in the index with -2.
Also consider this fragment:
Table[(tenor0[[1, j + 1]] - tenor0[[1, j]])/(
dateArray[[1, j + 1]] - dateArray[[1, j]]), {j, 99}]
this is the same as
Differences[tenor0[[1]]]/Differences[dateArray[[1]]]
...and so on.
b) A major slowdown in the code fragment above seems to be your use of random numbers.
e.g.
randomWalkPCA[n_] := RandomVariate[NormalDistribution[0, 1], n];
RandVarPCA[mcRun_] := Table[randomWalkPCA[3], {mcRun}];
(dX := RandVarPCA[1000]) // MatrixForm;
So when you use, e.g., dX[[1, 3]], you regenerate this large set of random numbers every time (dX is defined with SetDelayed, :=, so it re-evaluates on each use), only to take element {1, 3} from that large list.
The more efficient way of running Monte Carlo simulations is to create all your random numbers once and sample from them, rather than generating a large set of numbers many times over and taking only a small sample each time. (FWIW, I was asked to speed up some MC code a couple of years back with a brief of a 10-times improvement and got 250 times, primarily through proper handling of random number generation and listability.)
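The point is language-agnostic; as a minimal illustration in Python (hypothetical function names, standard library only), the sketch below contrasts rebuilding the whole pool of normal draws on every access — which is what the SetDelayed definition of dX effectively does — with generating the pool once and indexing into it:

```python
import random
import timeit

def draw_rebuilding_pool(i, j, n=1000):
    # Mimics dX := RandVarPCA[1000]: regenerate all n*3 draws, use one entry.
    pool = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(n)]
    return pool[i][j]

# Generate the pool once up front, then index into it cheaply.
random.seed(42)
POOL = [[random.gauss(0.0, 1.0) for _ in range(3)] for _ in range(1000)]

def draw_from_pool(i, j):
    return POOL[i][j]

slow = timeit.timeit(lambda: draw_rebuilding_pool(0, 2), number=100)
fast = timeit.timeit(lambda: draw_from_pool(0, 2), number=100)
```

Even at only 100 accesses the once-only pool is typically orders of magnitude faster, and the gap grows with the number of Monte Carlo steps.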
c) I don't think it is necessary or advisable to create a blank matrix that gets filled with values. This is basically procedural thinking; as above, it is best to start thinking in terms of entire lists.
d) I am sure there are many other things that could be altered, but these points are intended to help steer you in the right direction rather than be an exhaustive analysis (I am at work so do not have the time).
Hi @Mike Honeychurch, could this be the reason why I am getting increased volatility with successive simulation runs as I noted: mathematica.stackexchange.com/questions/17781/… – sebastian c. Jan 14 '13 at 23:10
@sebastianc. What I have described should make your code run faster based on an alternative to your current code. By definition it should not change the results you obtain (once identical random numbers are used for comparison; see SeedRandom). If your results are not as you expected, then you need to consider whether your implementation is correct or whether your underlying formulas are correct. Are you a student or doing this commercially? – Mike Honeychurch Jan 14 '13 at 23:13
student, studying quant finance, but I come from an engineering background and hope to learn more about this for a successful career transition. – sebastian c. Jan 14 '13 at 23:17
This overlaps with @Mike Honeychurch's reply. Define the 1×n matrices as simple vectors. For example:
dateArray = {0.0833333, 0.166667, 0.25, 0.333333, 0.416667, 0.5, 0.583333, 0.666667, 0.75, 0.833333, 0.916667, 1., 1.08333, 1.16667, 1.25, 1.33333, 1.41667, 1.5, 1.58333, 1.66667, 1.75, 1.83333, 1.91667, 2., 2.08333, 2.16667, 2.25, 2.33333, 2.41667, 2.5, 2.58333, 2.66667, 2.75, 2.83333, 2.91667, 3., 3.08333, 3.16667, 3.25, 3.33333, 3.41667, 3.5, 3.58333, 3.66667, 3.75, 3.83333, 3.91667, 4., 4.08333, 4.16667, 4.25, 4.33333, 4.41667, 4.5, 4.58333, 4.66667, 4.75, 4.83333, 4.91667, 5., 5.5, 6., 6.5, 7., 7.5, 8., 8.5, 9., 9.5, 10., 10.5, 11., 11.5, 12., 12.5, 13., 13.5, 14., 14.5, 15., 15.5, 16., 16.5, 17., 17.5, 18., 18.5, 19., 19.5, 20., 20.5, 21., 21.5, 22., 22.5, 23., 23.5, 24., 24.5, 25.};
Use Differences and some similar ideas to avoid recomputations.
tdiffs = Differences[tenor0];
ddiffs = Differences[dateArray];
qdiffs = tdiffs/ddiffs;
Also combine the volFitxxx stuff so we can use Dot instead of iterated multiply-and-add.
volFit = Transpose[{volFit1, volFit2, volFit3}];
Most importantly, define dX one time.
Here is a slight recoding of your example. It runs in a split second.
Timing[Module[{dx = dX, sqrtt = Sqrt[tstep]},
Do[forwardYieldCurve[[1, j]] =
tenor0[[j]] + driftM[[j]] tstep + (volFit[[j]].dx[[1]]) sqrtt +
qdiffs[[j]] tstep, {j, 99}];
Do[forwardYieldCurve[[i + 1, j]] = forwardYieldCurve[[i, j]] +
driftM[[j]] tstep + volFit[[j]].dx[[i + 1]]*sqrtt +
(forwardYieldCurve[[i + 1, j + 1]] -
forwardYieldCurve[[i + 1, j]])/ddiffs[[j]] tstep
, {j, 99}, {i, 100}]
];]
(* {0.100000, Null} *)
--- edit ---
To get different triples in every use of dX, can do as follows.
dX2 := randomWalkPCA[3]
Timing[
Module[{sqrtt = Sqrt[tstep]},
Do[forwardYieldCurve[[1, j]] =
tenor0[[j]] + driftM[[j]] tstep + (volFit[[j]].dX2) sqrtt +
qdiffs[[j]] tstep, {j, 99}];
Do[forwardYieldCurve[[i + 1, j]] = forwardYieldCurve[[i, j]] +
driftM[[j]] tstep + volFit[[j]].dX2*sqrtt +
(forwardYieldCurve[[i + 1, j + 1]] -
forwardYieldCurve[[i + 1, j]])/ddiffs[[j]] tstep
, {j, 99}, {i, 100}]
];]
(* Out[126]= {0.150000, Null} *)
--- end edit ---
Hi @Daniel, thanks I will try the above. But don't you normally have to regenerate random numbers each time when simulating a forward yield curve or else how would you get a Gaussian distribution? Or am I mistaken? – sebastian c. Jan 14 '13 at 23:38
You did not seem to be reusing them. More specifically, you were using one (new) row at a time. So you can generate all at once, or generate a row (3, that is) at a time. Now I'm not sure if you want to use the same set of values in each inner loop. If not, then best would be to generate only 3 at a time, not 3000. See edit for details. – Daniel Lichtblau Jan 14 '13 at 23:44
Hi Daniel, not sure why when trying the above I get: Dot::dotsh: "Tensors {0.00718226`,0.00718226...have incompatible shapes? – sebastian c. Jan 15 '13 at 0:03
Did you change all the 1 x n matrices into simple vectors? Also I forgot the defn of volFit, which I'll edit in right now. – Daniel Lichtblau Jan 15 '13 at 0:15
Yes, I used Flatten on it. – sebastian c. Jan 15 '13 at 0:23
# The Journal of Supercomputing
Online ISSN: 1573-0484
## Recent publications
Article
Driven by big data, neural networks are becoming more complex, and the computing capacity of a single machine often cannot meet the demand. Distributed deep learning technology has shown great performance superiority in handling this problem. However, a serious issue in this field is the existence of stragglers, which significantly restrict the performance of the whole system. It is an enormous challenge to fully exploit the computing capacity of a system based on the parameter server architecture, especially in a heterogeneous environment. Motivated by this, we designed a method named EP4DDL to minimize the impact of the straggler problem through load balancing. From a statistical view, the approach introduces a novel metric named performance variance to give a comprehensive inspection of stragglers and employs flexible parallelism techniques for each node. We verify the algorithm on standard benchmarks and demonstrate that it can reduce training time to 57.46%, 24.8%, and 11.5%, respectively, without accuracy loss compared with FlexRR, Con-SGD, and Falcon.
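EP4DDL's actual mechanism is not spelled out in the abstract; as a hedged illustration of the general idea behind straggler-aware load balancing, the sketch below (Python, hypothetical `balanced_shares` helper) assigns each worker a share of a batch proportional to its measured throughput, so slower nodes receive less work:

```python
def balanced_shares(throughputs, total_items):
    """Split total_items across workers in proportion to measured throughput,
    so a straggler (low throughput) is handed a smaller share of the batch."""
    total = sum(throughputs)
    shares = [total_items * t // total for t in throughputs]
    shares[0] += total_items - sum(shares)  # hand any rounding remainder to worker 0
    return shares

# Worker 0 is twice as fast as workers 1 and 2, so it gets half the batch.
shares = balanced_shares([4, 2, 2], 100)
```

A real system would re-measure throughput between iterations and rebalance dynamically; this static version only shows the proportional-assignment step.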
Article
Human resource management is the cornerstone of enterprise success, and the design of a human resource management mode is a very important part of enterprise management and control. Data mining technology provides a valuable and meaningful way to extract knowledge from big data, and it can be an advantageous tool for human resource experts facing difficult and uncertain talent screening. In the past, talent screening relied on factors such as experience, knowledge, performance, and judgment ability. Those screening criteria are no longer sufficient, because in today's knowledge economy and business environment, the factors that qualify someone for a position today may not apply the next day; in talent management, the goal is to ensure that the right people are in the right jobs. For these reasons, how to select talents and predict their possible future development has become a challenge for every organization. In this research, we apply a data mining method based on decision tree technology to analyze the data and find the key factors that affect on-the-job time. The results extend the application of data mining to the field of human resource management. Through decision tree technology, companies can improve their recruitment methods and pay more attention to where job seekers are placed, and policies for employee retention and recruitment can be improved. The key factors that affect in-service time are predicted, along with some key information that may influence it. This helps the company make correct decisions and effectively reduce the cost of company operations.
Article
Gene expression data play a significant role in the development of effective cancer diagnosis and prognosis techniques. However, many redundant, noisy, and irrelevant genes (features) are present in the data, which negatively affect the predictive accuracy of diagnosis and increase the computational burden. To overcome these challenges, a new hybrid filter/wrapper gene selection method, called mRMR-BAOAC-SA, is put forward in this article. The suggested method uses Minimum Redundancy Maximum Relevance (mRMR) as a first-stage filter to pick top-ranked genes. Then, Simulated Annealing (SA) and a crossover operator are introduced into Binary Arithmetic Optimization Algorithm (BAOA) to propose a novel hybrid wrapper feature selection method that aims to discover the smallest set of informative genes for classification purposes. BAOAC-SA is an enhanced version of the BAOA in which SA and crossover are used to help the algorithm in escaping local optima and enhancing its global search capabilities. The proposed method was evaluated on 10 well-known microarray datasets, and its results were compared to other current state-of-the-art gene selection methods. The experimental results show that the proposed approach has a better performance compared to the existing methods in terms of classification accuracy and the minimum number of selected genes.
Article
Identifying near-duplicate data can be applied to any type of content and has been widely used for increasing search engines' efficiency, detecting plagiarism or spam, etc. As a near-duplicate detection (NDD) method, sectional MinHash (S-MinHash) estimates the similarity between text content with high accuracy by considering the section of every document's attributes during similarity estimation. However, due to its added computational complexity, it still has performance issues such as being slow. The proposed sectional Min–Max Hash method aims to reduce the hashing time while preserving and improving the accuracy of detecting near-duplicate documents. We achieved this goal by combining S-MinHash with the Min–Max Hash method. The results show that our new method reduces the hashing time and provides more speed, since it uses half of the random hash functions that S-MinHash needs to build up the signature matrix. Furthermore, we conducted experiments to compare our sectional Min–Max Hash with the baseline methods on the evaluated dataset and confirmed that, in terms of running time and precision, the proposed method yields better results than S-MinHash and other NDD techniques. Also, assuming two sections, as the best-case performance for sectional algorithms on the evaluated dataset, the error rate was reduced significantly in the proposed method, and the F-score reached up to 99%.
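For readers unfamiliar with the underlying primitive, here is a minimal sketch of plain MinHash Jaccard estimation in Python (illustration only — not the sectional S-MinHash or Min–Max Hash variants the paper studies): each of k universal hash functions contributes the minimum hash value over a set, and similarity is estimated as the fraction of positions where two signatures agree:

```python
import random

P = 2147483647  # large prime for the universal hash family h(x) = (a*x + b) mod P

def minhash_signature(items, hash_params):
    # One minimum per hash function over the whole set.
    return [min((a * x + b) % P for x in items) for a, b in hash_params]

def estimate_jaccard(sig_a, sig_b):
    # Pr[min values agree] equals the Jaccard similarity of the two sets.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

random.seed(0)
params = [(random.randrange(1, P), random.randrange(P)) for _ in range(200)]

a = set(range(0, 100))   # elements represented as integers
b = set(range(50, 150))  # true Jaccard: |a ∩ b| / |a ∪ b| = 50/150 ≈ 0.333
est = estimate_jaccard(minhash_signature(a, params), minhash_signature(b, params))
```

With 200 hash functions the estimate lands close to the true 1/3; halving the number of hash functions, as the Min–Max idea does, trades a little variance for roughly half the hashing work.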
Article
As the growing of data volumes due to the successive development of new mobile devices and the creation of new applications, the emergence of multi-access edge computing can successfully improve quality of service based on reduced latency and lower system energy consumption. The introduction of software-defined networking technologies in multi-access edge computing environments supports access to more network devices and enhances the scalability and service management flexibility of mobile edge computing environments. The limited nature of computing resources in mobile edge computing environments makes resource management a critical issue. Therefore, to minimize the energy consumption and latency of task execution in mobile edge computing environment, and to ensure reasonable resource allocation during task execution, a resource management strategy based on multi-objective optimization in edge computing environment is proposed. In this strategy, the overall energy consumption weighting and minimization problem is solved by optimizing the management of communication and computing resources, and an improved NSGA-II algorithm is proposed to rationally allocate communication and computational resources for each task. To deal with load imbalance caused by large traffic fluctuations in multi-access edge computing environments based on software-defined networks, in this paper, a load-balancing-oriented switch migration strategy is proposed in which a switch migration algorithm based on an improved ant colony algorithm is proposed to optimally select the switch migration process so that the static deployment of the controller adapts to the changing needs of dynamic flows in the network. Experimental results demonstrate that the proposed resource management strategy minimizes the latency and energy consumption during task execution and increases resource utilization and average throughput of servers. 
The proposed switch migration strategy can effectively achieve load balancing and reduce the response time.
Article
The elliptic curve cryptosystem is a public-key cryptosystem that has received more focus in recent years due to its higher security at smaller key sizes compared to RSA. Smartcards and other applications have highlighted the importance of security in resource-constrained situations. To meet the increasing need for speed in today's applications, hardware acceleration of cryptographic algorithms is required. In this paper, we present a novel parallel architecture for elliptic curve scalar multiplication based on a modified López-Dahab–Montgomery (LDM) algorithm, to reduce the total time delay of computing scalar multiplication. It comprises three main steps: affine-to-projective conversion, point addition and doubling in the main loop, and reconversion to affine coordinates. The modified parallel algorithm, with a new inversion in the reconversion step, yields fewer clock cycles and a lower total time delay than existing techniques in the literature for the National Institute of Standards and Technology recommended trinomial GF(2^233). Our proposed architecture, implemented on Virtex-4 and Virtex-7 FPGA technologies, achieved a lower clock-cycle count of 956, which yields delays of 20.025 and 8.22 μs, respectively. Compared with the state of the art, two multiplications are removed from the reconstruction process, and our processor yields 18.29% and 27.21% increases in area-time performance on Virtex-4 and Virtex-7 devices, respectively.
Article
The h-extra edge-connectivity is an important parameter for the reliability evaluation and fault-tolerance analysis of the easily scalable interconnection networks of parallel and distributed systems. The h-extra edge-connectivity of the topological structure of an interconnection network G, denoted by λ_h(G), is the minimum cardinality of a set of link malfunctions whose deletion disconnects G such that each remaining component has at least h processors. In this paper, for an integer n ≥ 3, we find that the h-extra edge-connectivity of the n-dimensional pentanary cube (obtained as the n-th Cartesian power of K5), denoted by λ_h(K5^n), exhibits a concentration behavior on the value 4×5^(n-1) (resp. 6×5^(n-1)) for an exponentially large range of h: ⌈2×5^(n-1)/3⌉ ≤ h ≤ 5^(n-1) (resp. ⌈4×5^(n-1)/3⌉ ≤ h ≤ 2×5^(n-1)). That is, for about 40.00 percent of 1 ≤ h ≤ ⌊5^n/2⌋, the exact value of the h-extra edge-connectivity of the n-dimensional pentanary cube is either 4×5^(n-1) or 6×5^(n-1).
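To make the object concrete, the sketch below (Python, purely illustrative and not from the paper) builds the n-dimensional pentanary cube K5^n as the n-fold Cartesian product of K5 and checks its basic parameters: 5^n vertices, each of degree 4n:

```python
from itertools import product

def pentanary_cube(n):
    """K5^n: vertices are n-tuples over {0,...,4}; two vertices are adjacent
    iff they differ in exactly one coordinate (Cartesian product of complete
    graphs K5)."""
    verts = list(product(range(5), repeat=n))
    edges = {(u, v) for u in verts for v in verts
             if u < v and sum(a != b for a, b in zip(u, v)) == 1}
    return verts, edges

verts, edges = pentanary_cube(2)
# |V| = 5^2 = 25 and every vertex has degree 4*2 = 8, so |E| = 25*8/2 = 100
```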
Article
Early diagnosis and therapy are the most essential strategies to prevent deaths from diseases, such as cancer, brain tumors, and heart diseases. In this regard, information mining and artificial intelligence approaches have been valuable tools for providing useful data for early diagnosis. However, high-dimensional data can be challenging to examine, practically difficult to visualize, and costly to measure and store. Transferring a high-dimensional portrayal of the data to a lower-dimensional one without losing important information is the focal issue of dimensionality reduction. Therefore, in this study, dimensionality reduction-based medical data classification is presented. The proposed methodology consists of three modules: pre-processing, dimension reduction using an adaptive artificial flora (AAF) algorithm, and classification. The important features are selected using the AAF algorithm to reduce the dimension of the input data. From the results, a dimension-reduced dataset is obtained. The reduced data are then fed as input to the hybrid classifier. A hybrid support vector neural network is proposed for classification. Finally, the effectiveness of the proposed method is analyzed in terms of different metrics, namely accuracy, sensitivity, and specificity. The proposed method is implemented in MATLAB.
Article
Fog-integrated cloud (FiC) contains a fair amount of heterogeneity, leading to uncertainty in resource provisioning. An admission control manager (ACM) is proposed, using a modified fuzzy inference system (FiS), to place a request based on the request's parameters, e.g., CPU, memory, and storage, and a few categorical parameters, e.g., job priority and time sensitivity. The ACM considers the extended three-layer architecture of FiC. FiC nodes are classified into three computing nodes: fog node, aggregated fog node, and cloud node, using the modified FiS model. For the performance study, extensive simulation experiments have been carried out on a real Google trace. Different batches over the number of relevant rules are created and compared, on the metrics of job execution time, memory overhead, accuracy, and hit ratio, with the modified rules. The proposed work has also been compared with the state of the art. The results have been encouraging and exhibit the benefits of the proposed model, apart from it being lightweight with a reduced number of rules, especially suited to the FiC.
Article
In recent years, combinatorial optimization has been widely studied. The existing optimization solutions are prone to fall into local optimal solutions and have a lower probability of obtaining global optimal solutions. Quantum approximate optimization algorithm (QAOA) is an effective algorithm that can obtain the optimal solution with high probability. In this paper, the problem Hamiltonian is obtained by summing the problem function and the deformed constraints. Through theoretical formula derivation, the problem Hamiltonian is transformed into the Ising model. The performance of the experimental result under different optimizers and asynchronous lengths is verified on pyQPanda. The experimental results show that when using the problem Hamiltonian method set in this paper, the probability of obtaining the optimal solution is 99.59%. Compared with other methods, the proposed method can alleviate the risk of falling into local optimal solutions and obtain the global optimal solution with a higher probability.
Article
Due to the increase and complexity of computer systems, reducing the overhead of fault tolerance techniques has become important in recent years. One such technique is checkpointing, which saves a snapshot with the information that has been computed up to a specific moment, suspending the execution of the application and consuming I/O resources and network bandwidth. Characterizing the files that are generated when checkpointing a parallel application is useful for determining the resources consumed and their impact on the I/O system. It is also important to characterize the application that performs checkpoints, and one of these characteristics is whether the application does I/O. In this paper, we present a model of checkpoint behavior for parallel applications that perform I/O; it depends on the application and on other factors such as the number of processes, the mapping of processes, and the type of I/O used. These characteristics also influence scalability, the resources consumed, and the impact on the I/O system. Our model describes the behavior of the checkpoint size based on the characteristics of the system and the type (or model) of I/O used, such as the number of I/O aggregator processes, the buffering size utilized by the two-phase I/O optimization technique, and the components of collective file I/O operations. The BT benchmark and FLASH I/O are analyzed under different configurations of aggregator processes and buffer size to explain our approach. The model can be useful when selecting which checkpoint configuration is more appropriate according to the application's characteristics and the resources available. Thus, the user will be able to know how much storage space the checkpoint consumes and how much the application consumes, in order to establish policies that help improve the distribution of resources.
Article
Given a large data graph, trimming techniques can reduce the search space by removing vertices without outgoing edges. One application is to speed up the parallel decomposition of graphs into strongly connected components (SCC decomposition), which is a fundamental step for analyzing graphs. We observe that graph trimming is essentially a kind of arc-consistency problem, and AC-3, AC-4, and AC-6 are the most relevant arc-consistency algorithms for application to graph trimming. The existing parallel graph trimming methods require worst-case O(nm) time and worst-case O(n) space for graphs with n vertices and m edges. We call these parallel AC-3-based as they are much like the AC-3 algorithm. In this work, we propose AC-4-based and AC-6-based trimming methods. AC-4-based trimming has an improved worst-case time of O(n+m) but requires worst-case space of O(n+m); compared with AC-4-based trimming, AC-6-based has the same worst-case time of O(n+m) but an improved worst-case space of O(n). We parallelize the AC-4-based and AC-6-based algorithms to be suitable for shared-memory multi-core machines. The algorithms are designed to minimize synchronization overhead. For these algorithms, we also prove correctness and analyze time complexities with the work-depth model. In experiments, we compare these three parallel trimming algorithms over a variety of real and synthetic graphs on a multi-core machine, where each core corresponds to a worker. Specifically, for the maximum number of traversed edges per worker when using 16 workers, AC-3-based trimming traverses up to 58.3 and 36.5 times more edges than AC-6-based and AC-4-based trimming, respectively. That is, AC-6-based trimming traverses far fewer edges than the other methods, which is meaningful especially for implicit graphs. In particular, for practical running time, AC-6-based trimming achieves high speedups on graphs with a large portion of trimmable vertices.
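The trimming step itself is simple to state; below is a hedged sequential sketch in Python of counter-based peeling in the spirit of AC-4 (out-degree support counters plus a worklist, O(n + m) total work) — not the authors' parallel implementation:

```python
from collections import deque

def trim(n, edges):
    """Iteratively remove vertices with no outgoing edges: keep a count of
    remaining out-neighbours per vertex and propagate deletions through
    reverse edges, so each edge is examined once overall."""
    out_deg = [0] * n
    preds = [[] for _ in range(n)]
    for u, v in edges:
        out_deg[u] += 1
        preds[v].append(u)
    removed = [False] * n
    q = deque(v for v in range(n) if out_deg[v] == 0)
    while q:
        v = q.popleft()
        removed[v] = True
        for u in preds[v]:
            out_deg[u] -= 1
            if out_deg[u] == 0 and not removed[u]:
                q.append(u)
    return [v for v in range(n) if not removed[v]]

# A 3-cycle survives; the tail 2 -> 3 -> 4 is peeled away (4 first, then 3).
remaining = trim(5, [(0, 1), (1, 2), (2, 0), (3, 4), (2, 3)])
```

Each removed vertex is a trivial strongly connected component, which is why trimming shrinks the input of SCC decomposition.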
Article
In this study, we have developed a set of virtual reality (VR) human–robot interaction technology acceptance model for learning direct current and alternating current, aiming to use VR technology to immerse students in the generation, existence, and flow of electricity. We hope that using VR to transform abstract physical concepts into tangible objects will help students learn and comprehend abstract electrical concepts. The VR technology acceptance model was developed using the Unity 3D game kit to be accessed using the HTC Vive VR headset. The scene models, characters, and objects were created using Autodesk 3DS Max and Autodesk Maya, and the 2D graphics were processed in Adobe Photoshop. The results were evaluated using four metrics for our technology acceptance model. The four metrics include the content, design, interface and media content, and practical requirements. The average score of the content is 4.73. The average score of the design is 4.12. The average score of the interface and media content is 4.34. The average score of the practical requirements is 3.72. All the items on the effectiveness questionnaire of the technology acceptance model had average scores in the range 4.25–4.75. Therefore, all teachers were strongly satisfied with the trial teaching activity. The average score of each statement ranged within 3.58–4.03 for the satisfaction with the teaching material contents. Hence, the students were somewhat satisfied with this teaching activity. The average score of each statement ranged from 3.43 to 4.96 for the satisfaction with the implementation of the technology acceptance model. This result shows that the respondents were generally satisfied with the learning outcomes associated with these materials. The average score per question in this questionnaire was 3.92, and most of the questions have an average score greater than 3.8 for the feedback pertaining to satisfaction with the teaching material contents. 
In summary, a deeply immersive and interactive game was created using tactile somatosensory devices and VR that aim to utilize and enhance the fun and benefits associated with learning from games.
Article
Epistasis can be defined as the statistical interaction of genes during the expression of a phenotype. It is believed to play a fundamental role in gene expression, as individual genetic variants have shown only a very small increase in disease risk in previous Genome-Wide Association Studies. The most successful approach to epistasis detection is the exhaustive method, although its exponential time complexity requires a highly parallel implementation in order to be usable. This work presents Fiuncho, a program that exploits all levels of parallelism present in x86_64 CPU clusters in order to mitigate the complexity of this approach. It supports epistasis interactions of any order, and when compared with other exhaustive methods, it is on average 358, 7, and 3 times faster than MDR, MPI3SNP, and BitEpi, respectively.
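The "exhaustive method" amounts to scoring every k-tuple of variants; a minimal second-order version in Python (toy `score` statistic, purely illustrative and unrelated to Fiuncho's internals) looks like this:

```python
from itertools import combinations

def pairwise_scan(genotypes, phenotype, score):
    """Exhaustive second-order scan: evaluate every SNP pair with a
    user-supplied association statistic and return (best_score, i, j)."""
    best = None
    for i, j in combinations(range(len(genotypes)), 2):
        s = score(genotypes[i], genotypes[j], phenotype)
        if best is None or s > best[0]:
            best = (s, i, j)
    return best

def xor_match(gi, gj, ph):
    # Toy statistic: fraction of samples where the XOR of the two binary
    # genotypes equals the binary phenotype.
    return sum((a ^ b) == p for a, b, p in zip(gi, gj, ph)) / len(ph)

# Phenotype is the XOR of SNPs 1 and 3: a purely pairwise (epistatic) signal
# that no single SNP explains on its own.
genos = [[0, 0, 1, 1], [0, 1, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0]]
pheno = [0, 0, 1, 1]
best = pairwise_scan(genos, pheno, xor_match)
```

The quadratic (and, for higher orders, exponential) number of tuples in this loop is exactly why real tools need vectorization and cluster-level parallelism.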
Article
Execution of multiple applications on Multi-Processor System-on-Chips (MPSoCs) significantly boosts performance and energy efficiency. Although various researchers have suggested Network-on-Chip (NoC) architectures for MPSoCs, the problem still needs more investigation for the case of multi-application MPSoCs. In this paper, we propose a fully automated synthesis flow in five steps for the design of custom NoC fabrics for multi-application MPSoCs. The steps include: preprocessing, core-to-router allocation, voltage island merging, floorplanning, and router-to-router connection. The proposed flow finds design solutions that satisfy the performance, bandwidth, and power constraints of all input applications. If the user decides, the proposed synthesis adds network-level reconfiguration to improve the efficiency of the obtained design solutions. With the reconfiguration option, the proposed flow comes up with adaptive NoC architectures that satisfy each application's communication requirements while power-gating idle resources, e.g., router ports and links. If the reconfiguration option is not set by the user, the proposed flow considers the top communication requirements among the applications in finding design solutions. We have used the proposed synthesis flow to design custom NoCs for several combined graphs of real-world applications and synthetic graphs. Results show that the reconfiguration option can save up to 98% in the energy-delay product (EDP) of the ultimate designs.
Article
Complex system theory is increasingly applied to develop control protocols for distributed computational and networking resources. The paper deals with the important subproblem of finding complex connected structures having excellent navigability properties using limited computational resources. Recently, the two-dimensional hyperbolic space turned out to be an efficient geometry for generative models of complex networks. The networks generated using the hyperbolic metric space share their basic structural properties (like small diameter or scale-free degree distribution) with several real networks. In the paper, a new model is proposed for generating navigation trees for complex networks embedded in the two-dimensional hyperbolic plane. The generative model is not based on known hyperbolic network models: the trees are not inferred from the existing links of any network; they are generated from scratch instead and based purely on the hyperbolic coordinates of nodes. We show that these hyperbolic trees have scale-free degree distributions and are present to a large extent both in synthetic hyperbolic complex networks and real ones (Internet autonomous system topology, US flight network) embedded in the hyperbolic plane. As the main result, we show that routing on the generated hyperbolic trees is optimal in terms of total memory usage of forwarding tables.
Article
Network reconfiguration is an important means of improving network invulnerability. However, most existing network reconfiguration methods fail to consider node importance, edge importance, and hierarchical characteristics, and it is difficult for them to comprehensively account for both the local and global information of command and control (C2) networks. Therefore, this study designed a hierarchy-entropy-based method for reconfiguring C2 networks. By combining hierarchical and operational link entropy, the probability of inter-node edge reconfiguration based on hierarchy entropy is proposed. Additionally, methods for calculating the node level-up, cross-level, and swap degrees, and a portfolio reconfiguration strategy are proposed. Finally, to validate the proposed method, a case study was simulated, and the repair probability, adjustable parameters, and reconfiguration effects of the different reconfiguration methods and modes were determined. The comparison results demonstrate that the proposed algorithm improves the reconfiguration effect and reduces the reconfiguration cost.
Article
Task graphs provide a simple way to describe scientific workflows (sets of tasks with dependencies) that can be executed on both HPC clusters and in the cloud. An important aspect of executing such graphs is the scheduling algorithm used. Many scheduling heuristics have been proposed in existing works; nevertheless, they are often tested in oversimplified environments. We provide an extensible simulation environment designed for prototyping and benchmarking task schedulers, which contains implementations of various scheduling algorithms and is open-sourced in order to be fully reproducible. We use this environment to perform a comprehensive analysis of workflow scheduling algorithms with a focus on quantifying the effect of scheduling challenges that have so far been mostly neglected, such as delays between scheduler invocations or partially unknown task durations. Our results indicate that network models used by many previous works might produce results that are off by an order of magnitude in comparison to a more realistic model. Additionally, we show that certain implementation details of scheduling algorithms which are often neglected can have a large effect on the scheduler’s performance, and they should thus be described in great detail to enable proper evaluation.
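As a concrete reference point, a minimal list scheduler of the kind such simulation environments benchmark might look as follows (a toy sketch: communication delays and scheduler-invocation lag, the very effects the paper quantifies, are deliberately ignored here):

```python
from collections import deque

def schedule(durations, deps, n_workers):
    """Greedy list scheduler: process tasks in topological order and place
    each one on the worker that yields the earliest finish time.
    durations: task -> duration; deps: task -> list of prerequisite tasks."""
    indeg = {t: 0 for t in durations}
    succ = {t: [] for t in durations}
    for t, ds in deps.items():
        for d in ds:
            succ[d].append(t)
            indeg[t] += 1
    order = deque(t for t in durations if indeg[t] == 0)
    free = [0.0] * n_workers   # time at which each worker becomes idle
    finish = {}                # task -> finish time
    while order:
        t = order.popleft()
        # earliest moment all prerequisites have finished
        ready = max((finish[d] for d in deps.get(t, ())), default=0.0)
        w = min(range(n_workers), key=lambda i: max(free[i], ready))
        start = max(free[w], ready)
        finish[t] = start + durations[t]
        free[w] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                order.append(s)
    return max(finish.values())  # makespan
```

Even this tiny model shows why implementation details matter: the tie-breaking rule inside `min` already changes which worker receives a task and can shift the makespan.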
Article
The Goore Game (GG) is a model for collective decision-making under uncertainty, which can be used as a tool for stochastic optimization of a discrete variable function. The Goore Game has the fascinating property that it can be resolved in an entirely distributed manner with no intercommunication between the players. In this paper, we introduce a new model called Cellular Goore Game (CGG). CGG is a network of Goore Games in which, at any time, every node (or node in a subset of the nodes) in the network plays the role of a referee that participates in a GG with its neighboring players (voters). Like GG, each player independently selects its optimal action between two available actions based on the gains and losses received from its adjacent referees. Players in CGG know nothing about how other players are playing or even how/why they are rewarded/penalized by the referees. CGG may be used for modeling systems that can be described as massive collections of simple objects interacting locally with each other. Through simulations, the behavior of CGG for different networks of players/voters is studied. This paper presents a novel CGG-based approach to efficiently solve the Quality-of-Service (QoS) control problem for clustered WSNs to show the potential of CGG. Also, a CGG-based QoS control algorithm for WSNs with multiple sinks is proposed that dynamically adjusts the number of active sensors during WSN operation. Several experiments have been conducted to evaluate the performance of these algorithms. The obtained results show that the proposed CGG-based algorithms are superior to the existing algorithms in terms of the QoS control performance metrics.
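A single round of the basic Goore Game can be sketched as follows, assuming a unimodal referee payoff G peaking at f* = 0.7 and simple linear reward-inaction (L_RI) learners; both choices are illustrative, not the exact automata of the paper:

```python
import random

def goore_round(probs, G, lr=0.05, rng=random):
    """One round of the Goore Game: each player votes 'yes' with its own
    probability, the referee pays each player independently with
    probability G(f), where f is the fraction of 'yes' votes, and rewarded
    players reinforce their last action (L_RI: no change on penalty)."""
    votes = [rng.random() < p for p in probs]
    f = sum(votes) / len(votes)
    pay = G(f)
    new = []
    for p, v in zip(probs, votes):
        if rng.random() < pay:                       # rewarded: reinforce
            p = p + lr * (1 - p) if v else p * (1 - lr)
        new.append(p)                                # penalised: inaction
    return new, f

# An assumed unimodal performance function with its peak at f* = 0.7.
G = lambda f: 1.0 - (f - 0.7) ** 2
```

Iterating `goore_round` nudges the players' "yes" probabilities so that the vote fraction f climbs towards the peak of G, with no communication between the players themselves.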
Article
Mobile Edge Computing (MEC) provides a new opportunity to reduce the latency of IoT applications significantly. It does so by offloading computation-intensive tasks in applications from IoT devices to mobile edges, which are located in close proximity to the IoT devices. However, prior research focuses on supporting computation offloading for a specific type of application. Meanwhile, making multi-task and multi-server offloading decisions in highly complex and dynamic MEC environments remains intractable. To address this problem, this paper proposes a novel approach called MultiOff. First, we propose a generic program structure that supports on-demand computation offloading. Applications conforming to this structure can extract the flowcharts of program fragments via code analysis. Second, a novel cost-efficient offloading strategy based on a Multi-task Particle Swarm Optimization algorithm using the Genetic Algorithm operators (MPSO-GA) is proposed. MPSO-GA makes offloading decisions by analyzing program fragment flowcharts and context. Finally, each application can be offloaded at the granularity of services with the offloading scheme, minimizing the system cost while satisfying the deadline constraint for each application. We evaluate MultiOff on several real-world applications and the experimental results show that MultiOff can support computation offloading for different types of applications at the fine-grained granularity of services. Moreover, MPSO-GA can save about 2.11–17.51% system cost compared with other classical methods while meeting time constraints.
Article
In the past decade, social media networks have received much attention among ordinary people, agencies, and research scholars. Twitter is one of the fastest-growing social media tools. By means of the Twitter application on smartphones, users are able to immediately report events happening around them on a real-time basis. The information disseminated by millions of active users every day generates a new version of a dynamic database that contains information about various topics. Twitter data can be utilized as a major traffic data source along with conventional sensors. In this aspect, this paper presents a novel firefly algorithm-based feature selection with a deep learning model for traffic flow analysis (FFAFS-DLTFA) using Twitter data. The goal of FFAFS-DLTFA is to determine the class labels for tweets as relevant to traffic events. The proposed FFAFS-DLTFA encompasses several processes, such as preprocessing, feature extraction, feature selection, and classification. Primarily, tweets are preprocessed in several ways, such as tokenization, removal of stop words, and stemming. At the same time, three types of embedding vectors, unigram, bigram, and POS features, are used. In addition, the firefly algorithm (FFA) is applied for the optimal selection of feature subsets. Finally, a deep neural network (DNN) model is applied for the classification of tweets into three classes, namely, positive, neutral, and negative. The performance validation of FFAFS-DLTFA takes place using the benchmark Kaggle repository, and the results are inspected under different aspects. The experimental values demonstrate the better performance of FFAFS-DLTFA over the other techniques, with a maximum accuracy of 98.83%.
Article
Currently, many smart speakers, and even social robots, appear on the market to help make people's lives more convenient. Usually, people use smart speakers to check their daily schedule or control home appliances in their house. Many social robots also include smart speakers. They have the common property of being voice-controlled machines. Regardless of where the smart speaker is installed and used, when people start a conversation with voice equipment, a security or privacy risk is exposed. Hence, in this paper we build a speech recognition (SR) system that handles privacy identification information (PII). We call this the SR-PII system. We used the Artificial-Intelligence-Yourself (AIY) Voice Kit released by Google to build a simple, smart dialog speaker and included our SR-PII system. In our experiments, we test SR accuracy and the reliability of privacy settings in three environments (quiet, noisy, and playing music). We also examine the cloud response and speaker response times during our experiments. The results show that the response is approximately 3.74 s from the cloud and approximately 9.04 s from the speaker. We also showed the response accuracy of the speaker, which successfully prevented disclosure of personal information with the SR-PII system in all three environments. The speaker has a mean response time of approximately 8.86 s with 93% mean accuracy in a quiet room, approximately 9.18 s with 89% mean accuracy in a noisy environment, and approximately 9.62 s with 90% mean accuracy in an environment playing music. We conclude that the SR-PII system can secure private information and that the most important factor affecting the response speed of the speaker is the network connection status. We hope that, through our experiments, people can find some guidelines for building social robots and installing the SR-PII system to protect users’ personal identification information.
Article
The Jacobi iterative algorithm has the characteristic of low computational load, and multiple components of the solution can be solved independently. This paper applies these characteristics to the ternary optical computer, which can be used for parallel optimization because it has a large number of data bits and reconfigurable processor bits. Therefore, a new parallel design scheme is constructed to solve the problem of slow efficiency in solving large linear equations. An elaborate experiment is used to verify it. The experimental method is to simulate the calculation on the ternary optical computer experimental platform. Then, the resource consumption is numerically calculated and summarized to measure the feasibility of the parallel design. The results show that the parallel design has obvious advantages in computing speed. The Jacobi iterative algorithm is optimized in parallel on a ternary optical processor for the first time. There are two parallel highlights of the scheme. First, the n components are calculated in full parallel. Second, the modified signed-digit (MSD) multiplier based on the minimum module and a one-step MSD adder are used to calculate each component to eliminate the impact of a large amount of data on calculation time. The research provides a new method for fast solution of large linear equations.
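The component-wise independence that the paper exploits is visible in the plain Jacobi update: every entry of x_{k+1} is computed from x_k alone, so all n components can be evaluated in parallel. A minimal NumPy sketch:

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k), where D is the
    diagonal of A. Each component of x_{k+1} depends only on the previous
    iterate x_k, so the n component updates are embarrassingly parallel."""
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # all n components updated at once
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x
```

Convergence is guaranteed for strictly diagonally dominant systems; for others the iteration may diverge, which is why the choice of system matters in benchmarks.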
Article
Detection of selfish nodes in a delay tolerant network (DTN) can sharply reduce the loss incurred in a network. The current pedigree of algorithms mainly focuses on relay nodes' records and delivery performance; the community structure and social aspects have been overlooked. Analysis of individual and social tie preferences results in an extensive detection time and increases communication overhead. In this article, a heterogeneous DTN topology with high-power stationary nodes and mobile nodes on an accurate map of Manhattan is designed. With the increasing complexity of social ties and the diversified nature of the topology structure, there is a need for a method that can effectively capture the essence within the stipulated time. In this article, a novel deep autoencoder-based nonnegative matrix factorization (DANMF) is proposed for DTN topology. The topology of social ties projected onto low-dimensional space leads to effective cluster formation. DANMF automatically learns an appropriate nonlinear mapping function by utilizing the features of the data. Also, the inherent structure of the deep autoencoder is nonlinear and has strong generalization. The membership matrices extracted from the DANMF are used to design the weighted cumulative social tie that eventually, along with the residual energy, is used to detect the network's selfish nodes. The testing of the designed model is carried out on the real dataset of MIT Reality. The proficiency of the developed algorithm has been well tested and proved at every step. The methods employed for social tie extraction are NMF and DANMF. The methodology is rigorously experimented on various scenarios and has improved around 80% in the worst-case scenario of 40% of nodes turning selfish. A comprehensive comparison is made with other existing state-of-the-art methods, which are also incentive-based approaches. The developed method has outperformed the current methods, showing its supremacy in capturing the latent, hidden structure of the social tie.
Article
Mobile users frequently change their location and often desire to avail of location-based services (LBS). The LBS server provides services to users for a service charge. The user queries the LBS server for services, and the LBS server returns the query answers for the associated fee. This exchange may breach the user’s privacy. Preserving users’ query privacy and the LBS server’s service privacy is a challenging issue. Many privacy-preserving LBS schemes have been proposed, based on, e.g., trusted third parties, homomorphic encryption, and private information retrieval. These existing schemes mostly suffer from poor efficiency and privacy issues. We propose an efficient privacy-preserving scheme for location-based services (EP2LBS) using a lattice-based oblivious transfer protocol. The proposed EP2LBS scheme’s security depends on the combination of the decisional ring-learning with errors assumption and the perfect secrecy assumption. This enables the EP2LBS scheme to preserve the user’s query privacy and the LBS server’s service privacy. The theoretical and experimental results show that the EP2LBS scheme requires lower communication and computation costs at the server and user as compared to current state-of-the-art schemes.
Article
The Internet of Medical Things (IoMT) is a bionetwork of allied medical devices, sensors, wearable biosensor devices, etc. It is gradually reforming the healthcare industry by leveraging its capabilities to improve personalized healthcare services and by enabling seamless communication of medical data. IoMT facilitates prompt emergency responses and provides improved quality of medical services with minimum cost. With the advancement of modern technology, progressively ubiquitous medical devices raise critical security and data privacy concerns through resource constraints and open connectivity. Vulnerabilities in IoMT devices allow unauthorized access to healthcare systems and sensitive personal data. In addition, the patient may experience severe physical harm from an attack on IoMT devices. To provide security to IoMT devices and privacy to patient data, we have proposed a novel IoMT framework with the hybridization of Bayesian optimization and an extreme learning machine (ELM). The proposed model achieves encouraging performance with enhanced accuracy in the decision-making process compared to similar state-of-the-art methods.
Article
Data deduplication is a process that gets rid of excessive duplicates of data and minimizes the storage capacity to a large extent. This process mainly optimizes redundancies without compromising data fidelity or integrity. However, the major challenge faced by most data deduplication systems is secure cloud storage. Cloud computing relies on the availability and security of all information. In the case of distributed storage, data protection and security are critical. This paper presents a Secure Cloud Framework (SCF) for owners to effectively handle cloud-based information and provide high security for information. Cross-Site Scripting (XSS), SQL injection, adverse processing, and wrapping are all examples of significant attacks in the cloud. This paper proposes an improved Secure File Deduplication Avoidance (SFDA) algorithm for block-level deduplication and security. The deduplication process allows cloud customers to adequately manage the distributed storage space by avoiding redundant information and saving transfer speed. A deep learning classifier is used to distinguish familiar and unfamiliar data. A dynamic perfect hashing scheme is used in the SFDA approach to perform convergent encryption and offer secure storage. The chaotic krill herd optimization (CKHO) algorithm is used for the optimal secret key generation process of the Advanced Encryption Standard (AES) algorithm. In this way, the unfamiliar data are encrypted one more time and stored in the cloud. The efficiency of the results is demonstrated via experiments conducted in terms of computational cost, communication overhead, deduplication rate, and attack level. For file sizes of 8 MB, 16 MB, 32 MB, and 64 MB, the proposed methodology yields a deduplication rate of 53%, 62%, 54%, and 44%, respectively. The dynamic perfect hashing and the optimal key generation using the CKHO algorithm minimize the data update time, and the time taken to update a total of 1024 MB of data is 341.5 ms. The improved SFDA algorithm's optimal key selection approach reduces the impact of an attack by up to 12% for a data size of 50 MB, whereas the existing system is mostly impacted by data size, and its attack level rises by up to 19% for the same data size.
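The interplay of convergent encryption and block-level deduplication can be illustrated with a toy sketch (illustration only: the XOR keystream below is NOT a secure cipher, and the paper's AES key generation via CKHO and dynamic perfect hashing are not reproduced):

```python
import hashlib

def convergent_key(data: bytes) -> bytes:
    """Convergent encryption derives the key from the content itself, so
    identical plaintexts always encrypt to identical ciphertexts and can be
    deduplicated without revealing the key to the storage provider."""
    return hashlib.sha256(data).digest()

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR keystream built from hash(key, counter); the same call
    # decrypts, since XOR is its own inverse. Not a production cipher.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

class DedupStore:
    """Block-level dedup: each ciphertext is stored once, keyed by its hash."""
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        key = convergent_key(data)
        ct = toy_encrypt(data, key)
        tag = hashlib.sha256(ct).hexdigest()
        self.blocks.setdefault(tag, ct)   # duplicate uploads cost nothing
        return tag
```

Because the key is derived from the content, identical plaintexts map to identical ciphertexts, so the store can discard duplicates without ever learning the plaintext.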
Article
Power consumption is likely to remain a significant concern for exascale performance in the foreseeable future. In addition, graphics processing units (GPUs) have become an accepted architectural feature for exascale computing due to their scalable performance and power efficiency. In a recent study, we found that we can achieve a reasonable amount of power and energy savings based on the selection of algorithms. In this research, we suggest that we can save more power and energy by varying the block size in the kernel configuration. We show that we may attain more savings by selecting the optimum block size while executing the workload. We investigated two kernels on an NVIDIA Tesla K40 GPU, a Bitonic Mergesort kernel and a Vector Addition kernel, to study the effect of varying block sizes on GPU power and energy consumption. The study should offer insights for upcoming exascale systems in terms of power and energy efficiency.
Article
In real-time rendering, a 3D scene is modelled with meshes of triangles that the GPU projects to the screen. They are discretized by sampling each triangle at regular space intervals to generate fragments, to which a shader program then adds texture and lighting effects. Realistic scenes require detailed geometric models, complex shaders, high-resolution displays and high screen refresh rates, which all come at a great compute time and energy cost. This cost is often dominated by the fragment shader, which runs for each sampled fragment. Conventional GPUs sample the triangles once per pixel; however, there are many screen regions containing low variation that produce identical fragments and could be sampled at lower than pixel rate with no loss in quality. Additionally, as temporal frame coherence makes consecutive frames very similar, such variations are usually maintained from frame to frame. This work proposes Dynamic Sampling Rate (DSR), a novel hardware mechanism to reduce redundancy and improve the energy efficiency in graphics applications. DSR analyzes the spatial frequencies of the scene once it has been rendered. Then, it leverages the temporal coherence in consecutive frames to decide, for each region of the screen, the lowest sampling rate to employ in the next frame that maintains image quality. We evaluate the performance of a state-of-the-art mobile GPU architecture extended with DSR for a wide variety of applications. Experimental results show that DSR is able to remove most of the redundancy inherent in the color computations at fragment granularity, which brings average speedups of 1.68x and energy savings of 40%.
Article
Swarm Intelligence (SI), the collective behavior of decentralized and self-organized systems, is used to efficiently carry out practical missions in various environments. To guarantee the performance of the swarm, it is highly important that each object operates as an individual system while the devices are kept as simple as possible. This paper proposes an efficient, scalable, and practical swarming system using gas detection devices. Each object of the proposed system has multiple sensors and detects gas in real time. To let the objects move toward gas-rich spots, we propose two approaches to the system design, vector-sum based and Reinforcement Learning (RL) based. We first introduce our deterministic vector-sum-based approach and then address the RL-based approach to extend the applicability and flexibility of the system. Through system performance evaluation, we validated that each object with a simple device configuration performs its mission perfectly in various environments.
Article
In this work, we propose a multi-tier architectural model to separate functionality and security concerns for distributed cyber-physical systems. On the line of distributed computing, such systems require the identification of leaders for distribution of work, aggregation of results, etc. Further, we propose a fault-tolerant leader election algorithm that can independently elect the functionality and security leaders. The proposed election algorithm identifies a list of potential leader capable nodes to reduce the leader election overhead. It keeps identifying the highest potential node as the leader, whenever needed, including the situation when one has failed. We also explain the proposed architecture and its management method through a case study. Further, we perform several experiments to evaluate the system performance. The experimental results show that the proposed architectural model improves the system performance in terms of latency, average response time, and the number of real-time tasks completed within the deadline.
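The potential-based election described above can be sketched in a few lines (the "potential" score and node records are assumptions for illustration; the paper's algorithm additionally maintains the candidate list across failures to avoid repeated elections):

```python
def elect(nodes, alive):
    """Elect the highest-potential live node. Sorting all live candidates
    once yields a standby list, so a failed leader can be replaced by the
    next candidate without a full re-election. 'potential' is an assumed
    scalar combining the node's capability metrics."""
    candidates = sorted(
        (n for n in nodes if alive[n["id"]]),
        key=lambda n: n["potential"],
        reverse=True,
    )
    return candidates[0]["id"] if candidates else None
```

In the multi-tier setting of the paper, the same routine would run independently for the functionality leader and the security leader over their respective candidate sets.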
Article
Aspect-level sentiment classification has been widely studied as a fine-grained sentiment classification task to predict the sentiment polarity of specific aspect words in a given sentence. Previous studies have shown relatively good experimental results using graph convolutional networks, so more and more approaches are beginning to exploit sentence structure information for this task. However, these methods do not link aspect words and context well. To address this problem, we propose a method that utilizes a hierarchical multi-head attention mechanism and a graph convolutional network (MHAGCN). It fully considers syntactic dependencies and combines semantic information to achieve interaction between aspect words and context. To fully validate the effectiveness of the proposed method, we conduct extensive experiments on three benchmark datasets; the experimental results show that the method outperforms current methods.
Article
DNA sequencing is one of the important sub-disciplines of bioinformatics, with various applications in medicine, history, demography, and archaeology. De novo sequencing is the most challenging problem in this field. De novo sequencing is used for recognizing a new genome and for sequencing unknown parts of the genome, such as in cancer cells. For assembling the genome, first, small fragments of the genome (called reads) that are located randomly on the genome are sequenced by the sequencing machine. Then, they are sent to the processing machine to be aligned on the genome. To sequence the whole genome, the reads must cover it entirely. The minimum number of reads needed to cover the genome is given by Lander–Waterman's coverage bound. In this paper, we generalize the latter scheme to de novo sequencing and reduce the total number of required bases below Lander–Waterman's coverage bound. We investigate the performance of the scheme in terms of the longest generated contig length, the execution time of the algorithm, different read lengths, and the probability of error in the genome assembly. The results show the computational complexity and execution time of the algorithm run in parallel on a human genome segment of length 50,000 bases. We also show that the proposed method can generate contigs covering 90 percent of the genome length.
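For orientation, the classic Lander–Waterman coverage bound that the paper generalizes can be computed directly: with N random reads of length L on a genome of length G, coverage is c = NL/G and the expected uncovered fraction is e^(−c), so pushing that fraction below ε requires N ≥ (G/L)·ln(1/ε).

```python
import math

def reads_needed(G, L, eps):
    """Lander-Waterman estimate: smallest N with expected uncovered
    fraction e^{-NL/G} <= eps, i.e. N >= (G/L) * ln(1/eps)."""
    return math.ceil((G / L) * math.log(1.0 / eps))

def uncovered_fraction(G, L, N):
    """Expected fraction of bases not covered by any of the N reads."""
    return math.exp(-N * L / G)
```

For example, covering 99% of a 50,000-base genome with 100-base reads requires roughly (50000/100)·ln(100) ≈ 2303 reads, i.e. about 4.6x coverage, which is the baseline the paper's scheme improves on.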
Article
• Ram Kumar
• S. C. Sharma
Query expansion is an important approach utilized to improve the efficiency of data retrieval tasks. Numerous works have been carried out by researchers to generate fair constructive results; however, they do not provide acceptable results for all kinds of queries, particularly phrase and individual queries. The utilization of identical data sources and weighting strategies for expanding such terms is the major cause of this issue, which leaves the model unable to capture the comprehensive relationship between the query terms. In order to tackle this issue, we developed a novel query expansion technique that analyzes different data sources, namely WordNet, Wikipedia, and the Text REtrieval Conference. This paper presents an Improved Aquila Optimization-based COOT (IAOCOOT) algorithm for query expansion which retrieves the semantic aspects that match the query term. The semantic heterogeneity associated with document retrieval mainly impacts the relevance matching between the query and the document. The main cause of this issue is that the similarity among words is not evaluated correctly. To overcome this problem, we use a modified Needleman–Wunsch algorithm to deal with the problems of uncertainty, imprecision in the information retrieval process, and semantic ambiguity of indexed terms from both local and global perspectives. The k most similar words are determined and returned from a candidate set through the top-k word selection technique, which is widely utilized in different tasks. The proposed IAOCOOT model is evaluated using different standard Information Retrieval performance metrics to compute the validity of the proposed work by comparing it with other state-of-the-art techniques.
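The (unmodified) Needleman–Wunsch global-alignment recurrence that the paper's similarity measure builds on is a small dynamic program; a plain sketch with assumed scoring parameters:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic O(len(a)*len(b)) DP:
    dp[i][j] = best score of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag,               # substitute / match
                           dp[i - 1][j] + gap,  # gap in b
                           dp[i][j - 1] + gap)  # gap in a
    return dp[n][m]
```

Applied to character sequences of candidate expansion terms, the alignment score gives a similarity that tolerates insertions and deletions, which is what makes it useful against imprecise or ambiguous indexed terms.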
Article
• Rajni Aron
Article
Producing a large family of resource-constrained multi-processing systems on chips (MPSoCs) is challenging, and the existing techniques are generally geared toward a single product. When they are leveraged for a variety of products, they are expensive and complex. Further, in industry, a considerable lack of analysis support at the architectural level induces a strong dependency on the experience and preferences of the designer. This paper proposes a formal foundation and analysis of MPSoC product lines based on a featured transition system (FTS) to express the variety of products. First, feature diagrams are selected to model MPSoC product lines, which facilitates capturing their semantics as FTS. To this end, a probabilistic model checker verifies the resulting FTS, which is decorated with task characteristics and processors’ failure probabilities. The experimental results indicate that the formal approach offers quantitative results on the relevant product that optimizes resource usage when exploring the product family.
Article
Wireless communication among vehicular ad hoc network (VANET) entities is secured through cryptography, which is used for authentication as well as to ensure the overall security of messages in this environment. Authentication protocols play a significant role and are therefore required to be free of vulnerabilities that allow entity impersonation, unauthorized entry, and general misuse of the system. A resourceful adversary can inflict serious damage to VANET systems through such vulnerabilities. We consider several VANET authentication protocols in the literature and identify vulnerabilities. In addition to the commonly considered vulnerabilities in VANETs, we observe that the often-overlooked relay attack is possible in almost all VANET authentication protocols. Relay attacks have the potential to cause damage in VANETs through misrepresentation of vehicle identity, telematic data, traffic-related warnings, and information related to overall safety in such networks. We discuss possible countermeasures to address the identified vulnerabilities. We then develop an authentication protocol that uses ambient conditions to secure against relay attacks and the other considered vulnerabilities. We include a security proof for the proposed protocol.
Article
Industries are going through the fourth industrial revolution (Industry 4.0), where technologies like the Industrial Internet of Things, big data analytics, and machine learning (ML) are extensively utilized to improve the productivity and efficiency of manufacturing systems and processes. This work aims to further investigate the applicability and improve the effectiveness of ML prediction models for fault diagnosis in the smart manufacturing process. Hence, we propose several methodologies and ML models for fault diagnosis in smart manufacturing process applications. A case study has been conducted on a real dataset from a semiconductor manufacturing (SECOM) process. However, this dataset contains missing values, noisy features, and a class imbalance problem. This imbalance problem makes it difficult to accurately predict the minority class, due to the majority class size difference. In the literature, efforts have been made to alleviate the class imbalance problem using several synthetic data generation techniques (SDGT) on the UCI machine learning repository SECOM dataset. In this work, to handle the imbalance problem, we employed, compared, and evaluated the feasibility of three SDGT on this dataset. To handle issues related to the missing values and noisy features, we implemented two missing value imputation techniques and feature selection techniques, respectively. We then developed and compared the performance of ten predictive ML models against these proposed methodologies. The results obtained across several evaluation metrics of performance were significant. A comparative analysis shows the feasibility and validates the effectiveness of these SDGT and the proposed methodologies. Some of the proposed methodologies could produce an accuracy in the range of 99.5% to 100%. Furthermore, based on a comparative analysis with similar models from the literature, our proposed models outpaced those proposed in the literature.
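A representative synthetic data generation technique of the kind compared in the paper is SMOTE-style interpolation between minority-class neighbours; a small sketch (the exact SDGT variants used in the paper are not reproduced here):

```python
import random

def smote_like(minority, n_new, k=2, rng=random):
    """SMOTE-style oversampling: each synthetic point lies on the segment
    between a randomly chosen minority sample and one of its k nearest
    minority neighbours (squared Euclidean distance)."""
    synth = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neigh = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((xi - pi) ** 2 for xi, pi in zip(x, p)),
        )[:k]
        nb = rng.choice(neigh)
        t = rng.random()  # interpolation factor in [0, 1)
        synth.append(tuple(xi + t * (ni - xi) for xi, ni in zip(x, nb)))
    return synth
```

Because every synthetic point is a convex combination of two real minority samples, the generated data stays inside the minority region rather than merely duplicating samples, which is what helps classifiers on imbalanced sets like SECOM.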
Article
Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are promising technologies for delivering software-based networks to the user community. The application of Machine Learning (ML) in SDN and NFV enables innovation and ease of network management. The shift towards the softwarization of networks opens many doors to innovation and challenges. As the number of devices connected to the Internet is increasing swiftly, the SDNFV traffic management mechanism will provide a better solution. Many ML techniques applied to SDN focus on classification problems like network attack patterns, routing techniques, and QoE/QoS provisioning. The application of ML to SDNFV and SDN controller placement still has much scope to explore. This work aims to develop an ML approach for network traffic management by predicting the number of controllers likely to be placed in the network. The proposed prediction mechanism is centralized and deployed as a Virtual Network Function (VNF) in the NFV environment. The number of controllers is estimated using the predicted traffic, and the controllers are placed in the optimal locations using the K-Medoid algorithm. The proposed method is suitably analysed for performance metrics. The proposed approach effectively combines SDN, NFV and ML for the better achievement of network automation.
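The K-Medoid placement step can be sketched with a plain PAM-style iteration (a one-dimensional toy with an assumed distance function; the paper's traffic-prediction input is not modelled):

```python
def k_medoids(points, k, dist, max_iter=100):
    """PAM-style k-medoids: cluster centres are restricted to actual data
    points, which suits controller placement where a controller must sit
    at a real network node. Initialising with the first k points keeps
    this sketch deterministic; production code would use smarter seeding."""
    medoids = list(points[:k])
    for _ in range(max_iter):
        # Assignment step: each point joins its nearest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for p in points:
            nearest = min(medoids, key=lambda m: dist(p, m))
            clusters[nearest].append(p)
        # Update step: the new medoid minimises total intra-cluster distance.
        new_medoids = [
            min(c, key=lambda cand: sum(dist(cand, q) for q in c))
            for c in clusters.values() if c
        ]
        if sorted(new_medoids) == sorted(medoids):
            break
        medoids = new_medoids
    return medoids
```

Unlike k-means, the chosen "centres" are always members of `points`, so each elected medoid corresponds directly to a deployable controller location.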
Article
CNNs have achieved remarkable image classification and object detection results over the past few years. Due to the locality of the convolution operation, although CNNs can extract rich features of the object itself, they can hardly obtain global context in images. This means a CNN-based network is not a good candidate for detecting objects by utilizing the information of nearby objects, especially when a partially obscured object is hard to detect. ViTs can get rich context and dramatically improve prediction in complex scenes with multi-head self-attention. However, they suffer from long inference times and huge parameter counts, which means ViT-based detection networks can hardly be deployed in real-time detection systems. In this paper, firstly, we design a novel plug-and-play attention module called mix attention (MA). MA combines channel, spatial and global contextual attention together. It enhances the feature representation of individuals and the correlation between multiple individuals. Secondly, we propose a backbone network based on mix attention called MANet. MANet-Base achieves state-of-the-art performance on ImageNet and CIFAR. Last but not least, we propose a lightweight object detection network called CAT-YOLO, where we make a trade-off between precision and speed. It achieves an AP of 25.7% on COCO 2017 test-dev with only 9.17 million parameters, making it possible to deploy models containing ViT on hardware and ensure real-time detection. CAT-YOLO can better detect obscured objects than other state-of-the-art lightweight models.
Article
Existing algorithms have difficulty in solving the two tasks of localization and classification simultaneously when performing traffic sign detection on realistic images of complex traffic scenes. In order to solve the above problems, a new road traffic sign dataset is created, and based on the YOLOv4 algorithm, for the complexity of realistic traffic scene images and the large variation in the size of traffic signs in the images, the multi-scale feature extraction module, cascade feature fusion module and attention mechanism module are designed to improve the algorithm’s ability to locate and classify traffic signs simultaneously. Experimental results on the newly created dataset show that the improved algorithm achieves a mean average precision of 84.44%, which is higher than several major CNN-based object detection algorithms for the same type of task.
Article
As the complexity of the cyber-physical systems (CPSs) increase, system modeling and simulation tend to be performed on different platforms where collaborative modeling activities are performed on distributed clients, while the simulations of systems are carried out in specific simulation environments, such as high-performance computing (HPC). However, there is a great gap between system models usually designed in system modeling language (SysML) and simulation code, and the existing model transformation-based simulation methods and tools mainly focus on either discrete or continuous models, ignoring the fact that the simulation of hybrid models is quite important in designing complex systems. To this end, a model transformation approach is proposed to simulate hybrid SysML models under a discrete event system specification (DEVS) framework. In this approach, to depict hybrid models, simulation-related meta-models with discrete and continuous features are extracted from SysML views without additional extension. Following the meta object facility (MOF), DEVS meta-models are constructed based on the formal definition of DEVS models, including discrete, hybrid and coupled models. Moreover, a series of concrete mapping rules is defined to transform the discrete and continuous behaviors based on the existing state machine mechanism and constraints of SysML, separately. Such an approach may facilitate a SysML system engineer to use a DEVS-based simulator to validate system models without the necessity of understanding DEVS theory. Finally, the effectiveness of the proposed method is verified by a defense system case.
Article
In this paper, we investigate the relationship between emotions and colors by showing robot animated emotion faces and colors to participants through a series of surveys. We focused on representing a visualized emotion through a robot's facial expression and background colors. To complete the emotion design with animated faces and color backgrounds, we designed an experiment for surveying the users' thoughts. We took the robot animated face of the ASUS Zenbo as an example. We selected 11 colors as our color background and 24 facial expressions from Zenbo. To analyze the results from the questionnaires, we used histograms to show the basic data situation and multiple logistic regression analysis (MLRA) to see the marginal relationships. We separated our questionnaires into positive and negative questionnaires and divided the dataset into three cases to discuss the different relationships between color and emotion. Results showed that people preferred the blue color no matter whether the face was showing positive or negative emotion. The MLRA also showed that the correct-classification percentage was outstanding in case 2, for either positive or negative emotion. Participants perceived Zenbo's robot animated face as matching what they expected. Through our experimental design, we hope that people can consider more colors with emotion to design human–robot interfaces that will be closer to the users' thoughts and make life more colorful with comfortable reactions with robots.
Article
Multistage Interconnection Networks (MINs) are an effective means of communication between multiple processors and memory modules in many parallel processing systems. The literature contains numerous fault-tolerant MIN designs. However, due to recent advances in the field of parallel processing requiring large processing power, an increase in the demand to design and develop more reliable, cost-effective and fault-tolerant MINs is being observed. This work proposes two novel MIN designs, namely, Augmented-Shuffle Exchange Gamma Interconnection Network (A-SEGIN) and Enhanced A-SEGIN (EA-SEGIN). The proposed MINs utilize chaining of switches, and multiplexers and demultiplexers, to provide a large number of alternative paths and thereby better fault tolerance. Different reliability measures, namely, 2-terminal, multi-source multi-destination, broadcast and network/global, of the proposed MINs have been evaluated with the help of all enumerated paths and the well-known Sum-of-Disjoint-Products approach. Further, the overall performance of the proposed MINs, with respect to the number of paths, different reliability measures, hardware cost and cost per unit, has been compared with 19 other well-studied MIN layouts. The results suggest that the proposed MINs are very strong competitors of the preexisting MINs of their class owing to their better reliability and cost effectiveness.
Article
A quantum-dot cellular automaton is a new technology that addresses the challenges CMOS technology faces. Quantum-dot cellular automata-based computations run at ultra-high speeds with very high device density and low power consumption. Reversible logic design, featured in quantum-dot cellular automata, permits fully invertible computation. The arithmetic and logic unit is a major component in all microprocessor-based systems and serves as the processing device's heart. This paper discusses an area-efficient, quantum-dot cellular automata technology-based, coplanar, reversible arithmetic and logic unit using the double Peres and Feynman gates. With a latency of 2.5 clocks and a total area of 0.1 μm², the proposed arithmetic and logic unit performs 19 logic and arithmetic operations. QCA Designer and QD-E are used to simulate the proposed design and its energy consumption, respectively. The proposed design's total energy dissipation, as measured by QCA Designer-E, is 5.45e−002 eV, and the average energy dissipation is 4.95e−003 eV. The proposed method shows considerable improvements in terms of latency, number of operations, and area compared to earlier work.
Article
Based on the service orientation, a business service represents a coherent functionality that offers added value to the environment, regardless of how it is realized internally. The enterprise business service is a crucial section of enterprise architecture. Although many leading-edge enterprise architecture frameworks describe architecture in levels of abstraction, they still cannot provide an accurate syntactic and semantic description. If test cases are generated based on accurate descriptions of enterprise business services, the subsequent revisions and changes can be reduced. This research has one main contribution: it starts from the enterprise level, gains benefits from the enriched descriptions for enterprise business service, continues to generate appropriate syntactic and semantic models, and generates test cases from the formal model. In the suggested method, the goals of the enterprise will initially be extracted based on The Open Group Architecture Framework. Then, it will be subjected to syntactic modeling based on the ArchiMate language. Next, the semantics are added in terms of the Web Service Modeling Ontology framework and are manually formalized in B language by applying the defined transformation rules. Finally, the test coverage set will be examined on the formal model to generate test cases. The suggested method has been implemented in the marketing department of a petrochemical company. The results indicate the validity and efficiency of the method.
Article
Reliable and efficient delivery of diverse services with different requirements is among the main challenges of IoT systems. The challenges become particularly significant for IoT deployment in larger areas and high-performance services. The low-rate wireless personal area networks, as standard IoT systems, are well suited for a wide range of multi-purpose IoT services. However, their coverage distance and data rate constraints can limit the given IoT applications and restrict the creation of new ones. Accordingly, this work proposes a model that aims to correlate and expand the standard IoT systems from personal to wide areas, thus improving performance in terms of providing fast data processing and distant connectivity for IoT data access. The model develops two IoT systems for these purposes. The first system, 5GIoT, is based on 5G cellular, while the second, LTEIoT, is based on 4G long-term evolution (LTE). The precise assessment requires a reference system, for which the model further includes a standard IoT system. The model is implemented and results are obtained to determine the performance of the systems for diverse IoT use cases. The level of improvement provided by the 5GIoT and LTEIoT systems is determined by comparing them to each other as well as to the standard IoT system to evaluate their advantages and limitations in the IoT domain. The results show the relatively close performance of 5GIoT and LTEIoT systems while they both outperform the standard IoT by offering higher speed and distance coverage.
https://ecstasyshots.wordpress.com/2017/02/26/legendre-differential-equation-1-a-friendly-introduction/ | # Legendre Differential equation (#1) : A friendly introduction
In this series of posts about the Legendre differential equation, I would like to de-construct the differential equation down to its very bones. The motivation for this series is to put all that I know about the LDE in one place, and maybe help someone as a result.
The Legendre differential equation is the following:
$(1-x^2)y^{''} -2xy^{'} + l(l+1)y = 0$
where $y^{'} = \frac{dy}{dx}$ and $y^{''} = \frac{d^{2}y}{dx^{2}}$
We will find solutions for this differential equation using the power series expansion i.e
$y = \sum\limits_{n=0}^{\infty} a_n x^n$
$y^{'} = \sum\limits_{n=0}^{\infty} na_n x^{n-1}$
$y^{''} = \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}$
We will plug in these expressions for the derivatives into the differential equation.
$l(l+1)y = l(l+1)\sum\limits_{n=0}^{\infty} a_n x^n$ – (i)
$-2xy^{'} = -2\sum\limits_{n=0}^{\infty} na_n x^{n}$ – (ii)
$(1-x^2)y^{''} = (1-x^2)\sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}$
$= \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2} - \sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n}$ – (iii)
** Note: Begin
$\sum\limits_{n=0}^{\infty} n(n-1)a_n x^{n-2}$
Let’s take $\lambda = n-2$, i.e., $n = \lambda + 2$.

As $n \to 0$, $\lambda \to -2$.

As $n \to \infty$, $\lambda \to \infty$.

$\sum\limits_{\lambda = -2}^{\infty} (\lambda+2)(\lambda+1)a_{\lambda+2} x^{\lambda}$

$= 0 + 0 + \sum\limits_{\lambda = 0}^{\infty} (\lambda+2)(\lambda+1)a_{\lambda+2} x^{\lambda}$

(the $\lambda = -2$ and $\lambda = -1$ terms vanish because of the factors $(\lambda+2)(\lambda+1)$)

Again performing a change of variables from $\lambda$ to $n$:

$= \sum\limits_{n= 0}^{\infty} (n+2)(n+1)a_{n+2} x^{n}$
** Note: End
(iii) can now be written as follows.
$\sum\limits_{n=0}^{\infty} x^n \left((n+1)(n+2)a_{n+2} - n(n-1)a_n \right)$ – (iv)
(i)+(ii)+(iv).
$\sum\limits_{n=0}^{\infty} x^n \left((n+2)(n+1)a_{n+2} + (l(l+1)-n(n+1))a_n \right) = 0$

Since a power series vanishes for all $x$ only if every one of its coefficients vanishes, the coefficient of each $x^n$ must be zero:

$(n+2)(n+1)a_{n+2} + (l(l+1)-n(n+1))a_n = 0$

$(n+2)(n+1)a_{n+2} = -(l^2 - n^2 + l - n)a_n$

$(n+2)(n+1)a_{n+2} = -((l-n)(l+n) + (l-n))a_n$

$(n+2)(n+1)a_{n+2} = -(l-n)(l+n+1)a_n$

We get the following recursion relation on the coefficients of the power series expansion.

$a_{n+2} = -a_n \frac{(l+n+1)(l-n)}{(n+1)(n+2)}$
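As a quick sanity check (this snippet is mine, not part of the original post), we can iterate the recursion numerically. For a non-negative integer $l$, the factor $(l-n)$ kills every later coefficient in one of the two series, which is exactly how the Legendre polynomials arise:

```python
# Iterate a_{n+2} = -((l - n)(l + n + 1)) / ((n + 1)(n + 2)) * a_n
# starting from chosen values of a_0 and a_1.

def legendre_coeffs(l, a0=1.0, a1=0.0, n_max=10):
    """Return the series coefficients a_0, ..., a_{n_max}."""
    a = [0.0] * (n_max + 1)
    a[0], a[1] = a0, a1
    for n in range(n_max - 1):
        a[n + 2] = -((l - n) * (l + n + 1)) / ((n + 1) * (n + 2)) * a[n]
    return a

# For l = 2 with a0 = 1, a1 = 0 the even series terminates:
# y = 1 - 3x^2, which is proportional to the Legendre polynomial P_2(x).
coeffs = legendre_coeffs(2)
```

With $a_0 = 1$, $a_1 = 0$ and $l = 2$ the coefficients past $a_2$ all vanish, i.e., the even series truncates to a polynomial.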
Next post: What do these coefficients mean?
https://math.stackexchange.com/questions/1261458/euclids-proof-for-the-existence-of-infinitely-many-prime | # Euclid's proof for the existence of infinitely many prime
The proof goes like this
Suppose to the contrary there exists a finite list of primes, which shall be denoted $\{p_1, p_2, \ldots, p_n\}$. The product of all primes in this list shall be $P = p_1 p_2 \cdots p_n$. Now suppose then that $P+1 = q$.
There now exists 2 possibilities:
Case 1: q is a prime. If q itself is a prime number then it is self-implied that there exists a prime number outside the list of finite primes. The claim then that there exists only a finite number of primes is false. Thus, there exists infinitely many primes.
Case 2: If q is not a prime, then q is the product of some integer and a prime number $p_i$. If $p_i$ is in the list of finite primes then it can be deduced to divide P, since P is the product of all finite primes in the list. (My understanding ends here and the confusion begins henceforth.) And I quote Wiki: "But $p_i$ divides both $P$ and $q$, and hence the difference between $q$ and $P$, which is 1. Since no prime number divides 1, this would be a contradiction and so $p_i$ cannot be on the list. This (what does "this" refer to?) means that at least one more prime number exists beyond those in the list"
*Need some tidying up on the paragraph
• $p_i$ divides both $q$ (by definition of $p_i$) and $P$ (because we have assumed that $p_i$ is one of the factors making up $P$), so it divides $q-P$, which is equal to $1$. But there isn't a prime number that divides $1$. This contradicts our assumption (that $p_i$ is a factor of $P$), so the assumption must be false. So $p_i$ must be a new prime number we didn't already have in our list. – Billy May 2 '15 at 2:55
• I see why now. Thank you – Mathematicing May 2 '15 at 3:24
• Case 1 is redundant since it can be handled in Case 2, which uses only that $\,P+1\,$ is $>1$ so it has a prime factor (possibly itself). Euclid's original proof was constructive, not by contradiction. – Bill Dubuque May 2 '15 at 4:14
If those are all the primes then you can conclude that at least one of them divides 1 which is a contradiction. So you can assume the finite list is not all the primes. Thus there must be infinitely many. But this does not imply $p_1p_2\cdots p_n+1$ is prime for the first $n$ primes. It would only be prime if $p_1,\dots,p_n$ were all the primes.
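To make the last point concrete (example mine, not from the thread): $p_1 p_2 \cdots p_n + 1$ for the first $n$ primes is sometimes prime and sometimes composite — but in the composite case its prime factors are still new primes outside the list.

```python
# Trial-division helpers to check "product of the first n primes, plus one".

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def first_primes(n):
    primes, k = [], 2
    while len(primes) < n:
        if is_prime(k):
            primes.append(k)
        k += 1
    return primes

def euclid_number(n):
    prod = 1
    for p in first_primes(n):
        prod *= p
    return prod + 1

# 2*3*5*7*11 + 1 = 2311 is prime, but
# 2*3*5*7*11*13 + 1 = 30031 = 59 * 509 is composite;
# 59 and 509 are primes not among the first six primes.
```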
• Is it necessarily true that if some prime divides a and b then it must divide the difference between a and b? – Mathematicing May 2 '15 at 2:59
• Yes, this is true for any number, not just primes. If $k \mid a$ and $k \mid b$ then $k \mid xa + yb$ for any integers $x,y$, and so in particular $k \mid a - b$ and $k \mid a + b$ – alkabary May 2 '15 at 3:06
https://search.r-project.org/CRAN/refmans/DAP/html/solve_DAP_seq.html | solve_DAP_seq {DAP} R Documentation
## Solves DAP optimization problem for a given sequence of lambda values
### Description
Uses block-coordinate descent algorithm with warm initializations, starts with the maximal supplied lambda value.
### Usage
solve_DAP_seq(X1, X2, lambda_seq, eps = 1e-04, maxiter = 10000,
feature_max = nrow(X1) + nrow(X2))
### Arguments
X1: A n1 x p matrix of group 1 data (scaled).

X2: A n2 x p matrix of group 2 data (scaled).

lambda_seq: A supplied sequence of tuning parameters.

eps: Convergence threshold for the block-coordinate descent algorithm, based on the maximum element-wise change in V. The default is 1e-4.

maxiter: Maximum number of iterations; the default is 10000.

feature_max: An upper bound on the number of nonzero features in the solution; the default value is the total sample size. The algorithm trims the supplied lambda_seq to eliminate solutions that exceed feature_max.
### Value
A list of
lambda_seq: A sequence of considered lambda values.

V1_mat: A p x m matrix with columns corresponding to the 1st projection vector V1 found at each lambda from lambda_seq.

V2_mat: A p x m matrix with columns corresponding to the 2nd projection vector V2 found at each lambda from lambda_seq.

nfeature_vec: A sequence of corresponding numbers of selected features for each value in lambda_seq.
### Examples
## This is an example for solve_DAP_seq
## Generate data
n_train = 50
n_test = 50
p = 100
mu1 = rep(0, p)
mu2 = rep(3, p)
Sigma1 = diag(p)
Sigma2 = 0.5* diag(p)
## Build training data
x1 = MASS::mvrnorm(n = n_train, mu = mu1, Sigma = Sigma1)
x2 = MASS::mvrnorm(n = n_train, mu = mu2, Sigma = Sigma2)
xtrain = rbind(x1, x2)
ytrain = c(rep(1, n_train), rep(2, n_train))
## Standardize the data
out_s = standardizeData(xtrain, ytrain, center = FALSE)
#### use solve_DAP_seq
fit = solve_DAP_seq(X1 = out_s$X1, X2 = out_s$X2, lambda_seq = c(0.2, 0.3, 0.5, 0.7, 0.9))
[Package DAP version 1.0 Index] | 2022-05-28 11:14:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6312761902809143, "perplexity": 9192.276077966155}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00201.warc.gz"} |
http://appliedmechanics.asmedigitalcollection.asme.org/article.aspx?articleID=2717385 | 0
Research Papers
# Schallamach Wave-Induced Instabilities in a Belt-Drive System
Author and Article Information
Yingdan Wu
George W. Woodruff School
of Mechanical Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332
e-mail: yingdanwu@gatech.edu
Michael Varenberg
George W. Woodruff School
of Mechanical Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332
e-mail: michael.varenberg@me.gatech.edu
Michael J. Leamy
Fellow ASME
Professor
George W. Woodruff School
of Mechanical Engineering,
Georgia Institute of Technology,
Atlanta, GA 30332
e-mail: michael.leamy@me.gatech.edu
1Corresponding author.
Contributed by the Applied Mechanics Division of ASME for publication in the JOURNAL OF APPLIED MECHANICS. Manuscript received September 15, 2018; final manuscript received November 20, 2018; published online December 17, 2018. Assoc. Editor: George Haller.
J. Appl. Mech 86(3), 031002 (Dec 17, 2018) (9 pages) Paper No: JAM-18-1534; doi: 10.1115/1.4042101 History: Received September 15, 2018; Revised November 20, 2018
## Abstract
We experimentally study the dynamic behavior of a belt-drive system to explore the effect of loading conditions, driving speed, and system inertia on both the frequency and amplitude of the observed frictional and rotational instabilities. A self-excited oscillation is reported whereby local detachment events in the belt–pulley interface serve as harmonic forcing of the pulley, leading to angular velocity oscillations that grow in time. Both the frictional instabilities and the pulley oscillations depend strongly on operating conditions and system inertia, and differ between the driver and driven pulleys. A larger net torque applied to the pulley generally intensifies Schallamach waves of detachment in the driver case but has little influence on other measured response quantities. Higher driving speeds accelerate the occurrence of frictional instabilities as well as pulley oscillations in both cases. Increasing the system's inertia does not affect the behavior of contact instabilities, but does lead to a steadier rotation of the pulley and more pronounced fluctuations in the belt tension. A simple dynamic model of the belt-drive system demonstrates good agreement with the experimental results and provides strong evidence that frictional instabilities are the primary source of the system's self-oscillation.
## Introduction
Belt-drives are simple and economical machines for transmitting mechanical power in many engineering applications, such as in automotive front end accessory drives and continuously variable transmissions, manufacturing machines, household appliances, and magnetic tape data storage systems. Although properly installed and maintained belt-drives can preserve energy efficiency up to 95% [1], their efficiency and performance are affected by complex system dynamics arising primarily from excitation at the belt–pulley interface, fluctuations in the pulley angular velocities and span tensions, and belt misalignment. These, in turn, lead to energy loss, undesired vibration and noise, wear, and speed loss between the driver and driven pulleys. For these reasons, it is critical to understand belt-drive mechanics/dynamics for robust system design and energy efficiency.
The literature regarding belt-drive mechanics is extensive; the earliest investigations on belt-drive mechanics can be traced back to Leonard Euler's [2] study of a belt wrapped over a fixed pulley as well as Grashof's [3] analysis of the frictional mechanics of belt-drives under steady operation, yielding the classical belt creep theory. In the creep theory, the belt–pulley frictional contact region is governed by a Coulomb law, and the belt is treated as a flexible one-dimensional string. The theory predicts a single slip arc in the exit region of the pulley and an adhesion arc in the remaining contact region. More recently, creep theory has been enhanced to include the effects of belt flexural stiffness [4] and radial and tangential belt inertia [4–6]. In contrast to the creep theory, Firbank [7] proposed a belt shear theory in which shear strains in the belt envelope dominate in determining drive behavior instead of longitudinal strains. It is notable that the symmetry in contact mechanics between the driver and driven pulley is broken by the presence of shear, coinciding with observations from experimental investigations [8]. Later investigations considered the influence of radial compliance [9–12] and bending stiffness/inertia [9,10,12–14] on the belt-drive mechanics, and compared shear and creep theories [10,13]. Also, experiments were conducted to study the creep loss between the driver and driven pulleys [7,15–17], tangential and normal forces acting on the pulley [18–20], and strain/tension variation along the surface of a belt [8,21]. Recently, analytical and numerical models (e.g., based on finite element modeling) have extended analyses to unsteady operation due to harmonic excitation, with some also considering bending stiffness and one-way clutches [22–29].
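For contrast with the detachment mechanics studied in this paper, the Coulomb-friction creep picture above is often summarized by Euler's capstan relation, which bounds the tension ratio a belt span can sustain over the wrap angle before gross slip. A minimal sketch (the parameter values are illustrative assumptions, not from this study):

```python
import math

def capstan_tension_ratio(mu, wrap_angle_rad):
    """Upper bound on T_tight / T_slack before gross slip: exp(mu * beta)."""
    return math.exp(mu * wrap_angle_rad)

# e.g., an assumed friction coefficient of 0.3 over a 180-degree wrap
ratio = capstan_tension_ratio(0.3, math.pi)
```

For the compliant PDMS belt considered here, displacement is accommodated by detachment waves rather than sliding, so this classical bound is shown only as a point of reference.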
In the reviewed belt-drive studies, some variant of Coulomb friction is assumed to govern in the belt–pulley frictional contact region, implying sliding between the belt and pulley in the slip arc. It is now understood that, at least for low-speed operation using moderately thick and homogeneous belts with smooth pulleys, the large contact area formed by the belt–pulley interface may prevent sliding in the accepted sense. In 2018, Wu et al. [30] introduced experimental evidence that in such a simple belt-drive system, displacement between the belt and the pulley was accommodated primarily by detachment events (and not sliding), including Schallamach waves [31–33], which are narrow lines of lost contact (e.g., due to surface buckling) that move across the contact region. They also found that rolling contact mechanics differ between driver and driven pulleys—although detachment has been found in both cases, Schallamach waves appear in the driver pulley only. These findings motivate further investigation to uncover the ranges of speed, load, belt material properties, pulley roughness, and other factors that may exhibit detachment vice sliding behavior.
In this paper, we continue the exploration of the simple belt drive system first reported in Ref. [30], studying the formation of detachment waves as functions of the applied torque, the operating speed (all speeds still limited to slow operation), and the system moment of inertia. In so doing, we uncover a self-excited instability in which periodic detachment waves lead to pulley oscillations which grow in time.
## Experimental Details
###### Apparatus and Belt Samples.
The experimental apparatus (Fig. 1) is capable of measuring tension in both belt spans (via force transducers) and the angular displacement of the pulley (via a rotary encoder). A replaceable flywheel attached to the pulley allows adjustment of the system inertia. A digital camera, fixed above the exiting side and angled approximately 8 deg as shown in Fig. 1, records the evolution of the belt–pulley contact zone on the trailing side. A releasing motor controls the speed of the driving stage, ensuring a near-constant speed. It is notable that the user can switch between the driver (solid lines in Fig. 1(a)) and driven configuration (dashed lines in Fig. 1(a)) by reversing the direction of the torque. A detailed description of the apparatus is available in Ref. [30].
The belts tested herein are cast of polydimethylsiloxane (PDMS, Sylgard 184, Dow Corning, Midland, MI) using a 10:1 mixture of Sylgard 184 prepolymer and its cross-linker cured for 14 h at 65 °C against a flat smooth template. PDMS is a transparent elastomer with a Young's modulus of 1.6 MPa (measured at rate of 0.01 s−1 with ES10 tensile test stand, Mark-10 Co., Copiague, NY). The belt extends 400 mm in length, 5 mm in width, and 2 mm in thickness. The travel distance of the driving stage is 300 mm, which is large enough (compared to the pulley radius, 10 mm) to achieve pronounced instabilities in system behavior and reach a near steady-state rolling condition. Note that PDMS was chosen for its transparency to enable observation of the contact area between the belt and pulley.
###### Operating Conditions.
Previous investigations on Schallamach waves in sliding find that the frequency and the amplitude of the detachment waves depend on sliding velocity and loading conditions [31,32,34–36]. To study the same quantities in our belt drive system, we varied the stage driving velocity from 3 mm s−1 to 11 mm s−1 using incremental changes of 2 mm s−1.
Note that there is a distinction between the dead weight hung to apply torque and the “net torque weight” effectively acting on the pulley. This difference arises due to the parasitic resistance from friction in the bearings and the rotary encoder. As a result, the cases of driver and driven pulley need different dead weights to achieve the same net torque applied to the pulley. Bearing this in mind, we adjusted the loading such that the net torque weight applied to the pulley ranged from 3.5 N to 5.5 N with an increment of 0.5 N for both driver and driven cases.
For all tests performed, the high tension (on the entering side for the driver case and on the exiting side for the driven case) remained at 6 N, the maximum tension our belt specimens could bear without failure. Tables 1 and 2 list detailed operating conditions for the variation of driving speed and load, respectively. Each test was repeated at least five times. All statistical tests were performed using one-way ANOVA with all-pairwise multiple comparisons (Holm–Sidak method) at an overall significance level of 0.05, using the SigmaPlot software package (Systat Software, Inc., San Jose, CA). The temperature and relative humidity in the laboratory during the tests were 23 °C and 35%, respectively.
## Experimental Results
###### Contact Mechanics and Instability Formation.
Figure 2 summarizes findings from our previous study [30] of a slowly rotating pulley in frictional contact with a flat belt. In both the driver and driven pulley cases, the transition from high to low tension (or vice versa) occurs in a finite zone at the exit region of the belt–pulley contact, where frictional traction accompanies the change in tension. Because the net tension force acts approximately at the centroid of the cross section while the frictional traction acts at the contact surface of the belt, a moment about the belt center arises; this moment tends to lift the belt from a driver pulley (Fig. 2(a)), while tending to do the opposite in a driven pulley (Fig. 2(b)).
In the event of belt detachment from the driver pulley, the frictional traction disappears and the traction moment relaxes, which brings the belt back in contact due to the restoring action of the tension force's radial component. This sequence leads to the generation of a surface fold (driver contact instability) that can travel along the interface in the backward direction until it closes due to increasing normal load at smaller angles. In the driven case, the belt has a tendency to become thinner due to the increasing tension. This thinning may pull the belt out of the contact, leading to local detachment at the contact area edge (driven contact instability). Once detached from the pulley, the belt does not attach again, and the contact area simply moves forward together with the pulley until the belt peels off again. Thus, the contact mechanics of the driving and the driven pulleys differ significantly.
###### Self-Excited Oscillation.
Contact instabilities formed in both driver and driven cases lead to oscillations in belt tension and pulley rotation. Figure 3 documents the friction force and angular velocity, as a function of pulley revolution, obtained experimentally at a driving speed of 3 mm s−1 using a net torque weight of 4 N in both the driven (Fig. 3(a)) and the driver (Fig. 3(b)) cases. The friction force is obtained as the difference between the tight and slack side tensions measured by the force transducers with a resolution of 0.04 N. The angular velocity is computed via a central difference scheme from the angular displacement registered by the rotary encoder, which has a resolution of 8192 counts per revolution.
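The central-difference estimate described above can be sketched as follows. This is an illustrative Python version; the function name, the uniform sampling interval `dt`, and the one-sided treatment of the endpoints are our assumptions, not details from the paper.

```python
import numpy as np

def angular_velocity(counts, dt, counts_per_rev=8192):
    """Estimate angular velocity (rad/s) from rotary-encoder counts
    sampled at a uniform interval dt, via a central difference."""
    theta = np.asarray(counts, dtype=float) * 2.0 * np.pi / counts_per_rev
    omega = np.empty_like(theta)
    omega[1:-1] = (theta[2:] - theta[:-2]) / (2.0 * dt)  # central difference
    omega[0] = (theta[1] - theta[0]) / dt                # one-sided at ends
    omega[-1] = (theta[-1] - theta[-2]) / dt
    return omega
```

For a pulley rotating at constant speed, every entry of the returned array equals the true angular velocity, which makes the scheme easy to sanity-check.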
Subfigures in the top row clearly document growing oscillations, with similar frequency content, in both measured quantities. A zoomed-in plot (middle row) shows the correlation between the friction and angular velocity such that the period of the friction oscillations coincides with that of the pulley oscillations (each period denoted by two adjacent dash-dotted lines). It is also evident that the friction signal exhibits additional, higher frequency content associated with belt detachment at the exit of the pulley. This is borne out by comparisons of the friction signal with the contact area in the exiting region of the belt–pulley interface (see numbered points on the friction force, middle subfigure, and corresponding numerals on the contact area snapshots), where black areas denote contact, and white areas denote loss of contact between the belt and the pulley. Similar to our previous study [30], Schallamach waves (see the isolated detached pocket on the third contact snapshot in the driver case) are detected only in the driver case.
The friction fluctuations were analyzed further via wavelet transform routines available in MATLAB and were decomposed into two primary components: (1) a high-frequency component (dashed line) associated with the detachment events at the belt–pulley interface (Fde); and (2) a low-frequency component (solid line) associated with the pulley oscillation (Fpo). Interestingly, the Fpo signals in the driver and driven cases exhibit distinct reverse saw-tooth shapes, and this becomes more evident as the system inertia increases (see Sec. 3.5). This result can be explained as follows. Comparing the Fpo signal to the angular velocity oscillations (middle row in Fig. 3), we can see that the minute halts in the pulley rotation are associated with the force drop in the driver case and with the force rise in the driven case. In the driver case, when the pulley drives the belt, a halt in pulley motion leads to a decrease in the difference between the tight and slack side tensions (the belt relaxes). In the driven case, when the belt drives the pulley, a halt in pulley motion leads to an increase in that difference (the belt is loaded).
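The split into a low-frequency pulley-oscillation component and a high-frequency detachment component can be sketched without a wavelet toolbox. The Python stand-in below uses a zero-phase Butterworth low-pass (SciPy) in place of the MATLAB wavelet routines; the function name and the cutoff parameter `f_cut`, chosen between the two frequency bands, are our assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_components(signal, fs, f_cut):
    """Split a friction signal (sampled at fs Hz) into a low-frequency
    component (pulley oscillation, F_po) and the high-frequency
    remainder (detachment events, F_de). A 4th-order zero-phase
    Butterworth low-pass stands in for the wavelet decomposition."""
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")
    f_po = filtfilt(b, a, signal)   # zero-phase low-pass (no lag)
    f_de = signal - f_po            # residual high-frequency part
    return f_po, f_de
```

With a synthetic signal composed of a slow oscillation plus a small fast ripple, the two returned components recover the two ingredients to within filter edge effects.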
We believe the oscillation growth observed in both the friction force and the angular velocity is due to a positive feedback mechanism as follows: contact instabilities in the belt exit zone excite rotational oscillations in the pulley, which, in turn, store a periodic belt tension pattern in the belt entry zone. This tension pattern then serves as an additional excitation source when released at the exiting side of the belt, further destabilizing the pulley angular velocity. Thus, local contact instabilities induce large changes in the system's global dynamics. Given that the dynamic response evolves in time, in the parameter studies to follow, we chose to focus on the final pulley revolution where the self-excited oscillations are most evident.
The fluctuations in both the friction force and the angular velocity were decomposed into two components related to detachment events and pulley oscillations. Their corresponding frequencies and amplitudes are shown in Fig. 4. The fluctuations in angular velocity resemble those in friction force (especially in terms of frequency), which verifies the correlation between these two signals. The detachment-event fluctuations in the angular velocity are missing for the driven case because a low signal-to-noise ratio prevented reliable measurement.
The fluctuations in the friction force associated with detachment events exhibit higher frequency and lower amplitude than those associated with pulley oscillations in both the driver and driven cases (Figs. 4(a) and 4(b)). As the net torque weight increases in the driver case, the amplitude AF,de is largely unaffected (a statistically significant difference is observed only between torque weights 3.5 and 5.5 N, and between 3.5 and 5 N), whereas its frequency fF,de increases. This increase is consistent with the hypothesis that a traction-induced moment tends to lift the belt from the pulley, while the belt remains attached until the traction moment reaches a certain threshold [30]. Increasing the net torque weight raises the traction applied to the belt and narrows the gap between the acting moment and the threshold value. Less time is therefore needed to reach the detachment threshold, which increases the fluctuation frequency. In the driven case, however, the frequency fF,de and the amplitude AF,de do not depend on the net torque weight (no statistically significant effect is observed). In this case, the traction-induced moment acts in the opposite direction, pressing the belt against the pulley. Detachment at the contact edge happens as a result of the thinning and peeling of the belt [30], and these mechanisms are not directly affected by the frictional traction. The amplitude AF,de in the driver case is larger than that in the driven case because the scale of contact instabilities is much larger in the former case (Fig. 3).
The frequency of the friction force fluctuations associated with the pulley oscillations (fF,po) does not depend on the net torque weight (no statistically significant effect is observed). The torque weight adds inertia to the system, so, in principle, the frequency fF,po should decrease with increasing torque; the absence of this effect may reflect an insufficient range of torque variation. A small difference between the frequencies fF,po in the driver and driven cases may result from the larger torque weight used in the driver case to attain the same net torque weight as in the driven case, which increases the system inertia. The amplitude of the friction force fluctuations associated with the pulley oscillations (AF,po) shows an inconsistent, step-like increase when the net torque weight rises from 4 to 4.5 N, while remaining statistically indistinguishable otherwise. This may stem from an issue that went unnoticed during the tests, and it is worth verifying against the analysis of the angular velocity oscillations as follows.
The frequencies of the angular velocity fluctuations fα,de and fα,po (Fig. 4(c)) are almost identical to those of the friction force fluctuations (Fig. 4(a)), because they characterize the same instabilities. Similar to AF,de in the driver case (Fig. 4(b)), the amplitude of the angular velocity fluctuation Aα,de (Fig. 4(d)) is largely unaffected by the torque increase (no statistically significant difference is observed between the torque weights 4, 4.5, 5, and 5.5 N). Analyzing the amplitude Aα,po, we also see almost no effect of the torque weight (a statistically significant difference is observed only between the torque weights 4 and 5.5 N in the driver case, as well as between the torque weights 4 and 5.5 N, 4 and 5 N, and 3.5 and 5 N in the driven case), which resembles the results obtained for AF,po. Thus, based on a comparative analysis of the effect of torque, we can conclude that, to a first approximation, the only affected parameter is the frequency of Schallamach waves of detachment in the driver case.
###### Effect of Driving Speed.
In both the driver and driven cases, increasing driving speed accelerates the occurrence of contact instabilities (detachment events) and pulley oscillations, regardless of whether the friction force or the pulley angular velocity is analyzed (Figs. 5(a) and 5(c), respectively). Given that the formation of the stress pattern along the contact arc relies on the rotation of the pulley, the stress relaxation associated with detachment events at the belt–pulley interface takes less time as the pulley rotates faster. The pulley oscillations are mainly caused by contact instabilities, so when the latter occur more often, the former follow suit. Hence, all frequencies increase with increasing driving speed. It is also evident that the pulley oscillations in the driven case are more sensitive to the driving speed, so the frequency fF,po increases more rapidly than in the driver case. This can result from a larger system inertia in the driver case due to the larger torque weight used to obtain the same net torque as in the driven case.
The amplitude of the frictional force fluctuations associated with the pulley oscillations (AF,po, Fig. 5(b)) exhibits an inconsistent, step-like decrease as the driving speed increases from 5 to 7 mm s−1, while remaining statistically indistinguishable otherwise in the driven case. In the driver case, the effect of the driving speed on the amplitude AF,po is also not statistically reliable (no statistically significant difference is observed between the speeds 5 and 9 mm s−1, 7 and 9 mm s−1, 7 and 11 mm s−1, and 9 and 11 mm s−1). The amplitude of the frictional force fluctuations associated with the detachment events (AF,de, Fig. 5(b)) likewise shows no clear effect of the driving speed. In the driven case, no statistically significant difference between amplitudes is observed at all, while in the driver case, only the amplitude obtained at 3 mm s−1 differs from all other measurements.
The amplitude of the angular velocity fluctuations associated with the pulley oscillations (Aα,po, Fig. 5(d)) shows no statistically significant effect of driving speed in either the driver or driven case. The effect of the driving speed on the amplitude of the angular velocity fluctuations associated with the detachment events (Aα,de, Fig. 5(d)) is also negligible, with only the amplitude obtained at 3 mm s−1 differing from all other measurements. Thus, based on a comparative analysis of the effect of driving speed, we can conclude that, to a first approximation, while the amplitudes of the contact instabilities and pulley oscillations are unaffected, their frequencies grow with increasing driving speed. Note that this frequency-speed relationship compares well with sliding cases, where increasing speed also raises the Schallamach wave frequency [32].
###### Effect of System's Inertia.
As reported earlier, contact instabilities excite pulley oscillations, complicating the study of the contact mechanics. In an attempt to limit these oscillations, we increased the moment of inertia of the pulley by using two removable flywheels, whose moments of inertia are 9 and 99 times that of the pulley. These are referred to as small and large flywheels. Table 3 lists the conditions employed in assessing the effect of inertia.
An illustrative example of the effect of the pulley's moment of inertia on the friction force and angular velocity is presented in Fig. 6. Looking at the friction curves, we conclude that the effect of the pulley's moment of inertia is more easily identified in the fluctuations associated with the pulley oscillations (see the wavelet decomposition subfigure in Fig. 3 for comparison), while the fluctuations associated with the contact instabilities are less sensitive to this parameter. Interestingly, increasing the pulley's moment of inertia results in much more violent oscillations in the friction force, while the pulley oscillations become more restrained. This is explained by noting that more friction force (the difference between the tight and slack side tensions) is needed to move a heavier pulley, which has a larger moment of inertia and hence rotates more steadily.
Analyzing the frequencies and amplitudes of the fluctuations in the friction force and the pulley's angular velocity (Fig. 7), we can draw similar conclusions. The frequencies of the fluctuations in both the friction force and the angular velocity associated with the pulley oscillation (fF,po and fα,po, respectively) decrease with increasing pulley moment of inertia in both the driver and driven cases, as expected. The amplitude of the friction force fluctuations associated with the pulley oscillations (AF,po) grows with increasing pulley moment of inertia in both cases, because higher belt tension is required to move (or interfere with) the heavier pulley. The amplitude of the angular velocity fluctuations associated with the pulley oscillations (Aα,po) decreases, as expected, with increasing pulley moment of inertia, but the changes are less pronounced (no statistically significant difference is observed between the small and large flywheels in the driver case, or between no and small flywheel and between small and large flywheels in the driven case).
The frequencies of the friction force and angular velocity fluctuations associated with contact instabilities/detachment events (fF,de and fα,de, respectively) appear independent of the pulley's moment of inertia (no statistically significant difference is observed between any of the tested points in the driven case, or between force fluctuations with small and large flywheels and velocity fluctuations with no and large flywheel in the driver case). The corresponding amplitudes (AF,de and Aα,de) are nearly independent of the pulley's moment of inertia in all cases (a statistically significant difference is observed only between force fluctuations with no and large flywheel in the driver case). The amplitude AF,de in the driver case is larger than that in the driven case because a larger contact area is involved in the detachment events. Thus, based on a comparative analysis of the effect of the pulley's moment of inertia, we can conclude that, to a first approximation, while the force and velocity fluctuations associated with the contact instabilities are unaffected, the fluctuations associated with the pulley oscillations do depend on the pulley inertia.
## Theoretical Model
To verify whether frictional instabilities can serve as a source of self-oscillation in our system, we developed a simple dynamic model in which friction fluctuations serve as a modulated input, and the pulley's angular velocity is computed as an output and then compared to experimental data. The following assumptions were made:
1. The belt is uniform and perfectly flexible, and it stretches in a quasi-static manner; the two spans of the belt are hence treated as massless linear elastic springs coupled with a massless damper.
2. The belt deformation along the belt width is decoupled from the belt deformation along the belt length.
3. Belt extension s(t) resulting from detachment events is applied uniformly over the exiting portion of the belt.
4. The torque weight is applied to the pulley through an inextensible string.
5. The speed of the belt exiting span and the masses of the loading weights are taken from the experiment.
The diagrams of the model shown in Fig. 8 depict a lumped system with two degrees-of-freedom: the angular displacement α(t) of the pulley and the linear displacement y(t) of the tension mass M. The motion of the torque mass m follows directly from the pulley motion at the attachment point. The linear motion of the exiting span of the belt is denoted as x(t) and is prescribed using the constant speed employed in the experiment. The detachment-driven extension in the exit zone is approximated as s(t), whose frequency and amplitude are estimated from experimental measurements with one set of loading parameters and then used for all other test points (see Sec. 5 for further details).
The elongation of the exiting span of the belt, δ(t), is defined as

$$\delta(t) = x(t) - R\,\alpha(t) - s(t) \tag{1}$$
where R denotes the radius of the pulley. The elongation of the entering span of the belt is the difference between Rα(t) and y(t). Hooke's law yields the span stiffness values, k1(t) and k2(t), for the exiting and entering belt spans, respectively:

$$k_1(t) = \frac{EA}{l_{01} + x(t)}, \qquad k_2(t) = \frac{EA}{l_{02} - y(t)} \tag{2}$$
where E, A, l01, and l02 denote the belt elastic modulus, the cross-sectional area, and the initial lengths of the exiting and entering spans, respectively. Assuming quasi-static belt stiffness changes, we can derive the governing equations for our belt-drive system (driver pulley, Eq. (3a); driven pulley, Eq. (3b)) as

$$\begin{bmatrix} I + mR^2 & 0 \\ 0 & M \end{bmatrix}\begin{Bmatrix} \ddot{\alpha} \\ \ddot{y} \end{Bmatrix} + \begin{bmatrix} Rc_1 + R^2 c_2 & -Rc_2 \\ -Rc_2 & c_2 \end{bmatrix}\begin{Bmatrix} \dot{\alpha} \\ \dot{y} \end{Bmatrix} + \begin{bmatrix} R^2(k_1 + k_2) & -Rk_2 \\ -Rk_2 & k_2 \end{bmatrix}\begin{Bmatrix} \alpha \\ y \end{Bmatrix} = \begin{Bmatrix} mgR + Rk_1\left(x(t) - s(t)\right) \\ -Mg \end{Bmatrix} \tag{3a}$$

$$\begin{bmatrix} I + mR^2 & 0 \\ 0 & M \end{bmatrix}\begin{Bmatrix} \ddot{\alpha} \\ \ddot{y} \end{Bmatrix} + \begin{bmatrix} Rc_1 + R^2 c_2 & -Rc_2 \\ -Rc_2 & c_2 \end{bmatrix}\begin{Bmatrix} \dot{\alpha} \\ \dot{y} \end{Bmatrix} + \begin{bmatrix} R^2(k_1 + k_2) & -Rk_2 \\ -Rk_2 & k_2 \end{bmatrix}\begin{Bmatrix} \alpha \\ y \end{Bmatrix} = \begin{Bmatrix} -mgR + Rk_1\left(x(t) - s(t)\right) \\ -Mg \end{Bmatrix} \tag{3b}$$
where $c1$ denotes the damping coefficient associated with the pulley oscillations (losses in the belt-pulley contact and bearings) and $c2$ denotes the damping coefficient of the free spans of the viscoelastic belt. Both coefficients are assumed to be constant due to an approximately constant length of the contact arc in the first case and a constant total length of the belt in the second case.
## Comparison of Theoretical and Experimental Results
The belt extension s(t) in the detachment region is considered to correlate closely with the friction force fluctuations associated with detachment events (bottom row in Fig. 3). Hence, s(t) can be abstracted as a saw-tooth function for both the driver (Eq. (4a)) and driven (Eq. (4b)) cases. The frequency of the saw-tooth function is taken as the frequency fF,de measured in the case with the small flywheel for both the driver and driven pulleys. The duty cycles (loading phase fractions) of the saw-tooth function are 2/3 and 1/3 for the driver and driven pulleys, respectively, in accord with the friction force fluctuations associated with detachment events. The amplitude of belt extension, As, cannot be determined from our experimental data. We therefore obtained As by trial and error: the amplitude of the angular velocity fluctuations (Aα,po) is calculated by numerical integration of Eq. (3) (via MATLAB's ode45 routine), and As is varied until the predicted Aα,po matches the value measured with the small flywheel in both the driver and driven cases. These values are then applied to the cases of no and large flywheels to test whether the model is predictive. The expressions used to describe s(t) are
$$s(t) = \begin{cases} A_s \dfrac{3 f_{F,de}}{4\pi}\, t, & 0 \le t \le \dfrac{4\pi}{3 f_{F,de}} \\[6pt] A_s \left(3 - \dfrac{3 f_{F,de}}{2\pi}\, t\right), & \dfrac{4\pi}{3 f_{F,de}} \le t \le \dfrac{2\pi}{f_{F,de}} \end{cases} \tag{4a}$$

$$s(t) = \begin{cases} A_s \dfrac{3 f_{F,de}}{2\pi}\, t, & 0 \le t \le \dfrac{2\pi}{3 f_{F,de}} \\[6pt] A_s \left(\dfrac{3}{2} - \dfrac{3 f_{F,de}}{4\pi}\, t\right), & \dfrac{2\pi}{3 f_{F,de}} \le t \le \dfrac{2\pi}{f_{F,de}} \end{cases} \tag{4b}$$
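Equations (4a) and (4b) are a single periodic ramp-up/ramp-down shape with duty cycles 2/3 (driver) and 1/3 (driven). A minimal Python sketch follows; the function name is ours, and we treat f_F,de as an angular frequency, consistent with the 2π/f period in Eqs. (4).

```python
import numpy as np

def sawtooth_extension(t, A_s, f, driver=True):
    """Belt extension s(t) per Eqs. (4a)/(4b): a periodic saw-tooth
    rising to A_s over a fraction of the period (2/3 for the driver,
    1/3 for the driven case) and ramping back to zero. f is the
    detachment frequency f_F,de in rad/s; one period lasts 2*pi/f.
    Accepts a scalar t."""
    T = 2.0 * np.pi / f
    tau = np.mod(t, T)                               # time within the period
    rise = (2.0 / 3.0 if driver else 1.0 / 3.0) * T  # loading-phase duration
    if tau <= rise:
        return A_s * tau / rise                      # linear ramp up to A_s
    return A_s * (T - tau) / (T - rise)              # linear ramp down to 0
```

The piecewise slopes reduce exactly to those of Eqs. (4a) and (4b): for the driver case, A_s·3f/(4π) up and −A_s·3f/(2π) down, and vice versa for the driven case.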
All model parameters (Table 4) are taken from the experiment except for the two damping coefficients, c1 and c2, which were also chosen by trial and error to obtain good agreement between the theoretical and experimental results. The values of c1 and c2, however, are verified to fall within a reasonable range for PDMS [37].
The equations of motion were numerically integrated using the ode45 function in MATLAB, with the stiffness values updated from the real-time belt span lengths according to Eq. (2). Wavelet routines in MATLAB were used to postprocess the data from both the numerical model and the experiment. The model was also used to compute the quasi-static vibration modes (eigenfrequencies and eigenvectors) as a function of time, using the instantaneous stiffness values.
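A minimal sketch of this integration step follows, using SciPy's `solve_ivp` in place of MATLAB's `ode45`. Every numerical parameter value below is an illustrative placeholder (not a Table 4 entry), and s(t) is set to zero for brevity; only the structure of Eq. (3) with the Eq. (2) stiffness update is taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (NOT the Table 4 values):
EA = 1.6e6 * 5e-3 * 2e-3     # belt E*A from the Sec. 2 dimensions [N]
l01, l02 = 0.15, 0.15        # initial span lengths [m] (assumed)
R, I_p = 0.010, 1e-5         # pulley radius [m], inertia [kg m^2] (assumed)
m, M = 0.4, 0.6              # torque and tension masses [kg] (assumed)
c1, c2 = 0.05, 2.0           # damping coefficients (assumed)
v, g = 3e-3, 9.81            # driving speed [m/s], gravity [m/s^2]

def rhs(t, z, s_func, driver=True):
    """Right-hand side of Eq. (3); state z = [alpha, y, alpha_dot, y_dot]."""
    x = v * t                            # prescribed exit-span motion x(t)
    k1 = EA / (l01 + x)                  # Eq. (2), exiting span stiffness
    k2 = EA / (l02 - z[1])               # Eq. (2), entering span stiffness
    Mmat = np.array([[I_p + m * R**2, 0.0], [0.0, M]])
    Cmat = np.array([[R * c1 + R**2 * c2, -R * c2], [-R * c2, c2]])
    Kmat = np.array([[R**2 * (k1 + k2), -R * k2], [-R * k2, k2]])
    sign = 1.0 if driver else -1.0       # +mgR for driver, -mgR for driven
    F = np.array([sign * m * g * R + R * k1 * (x - s_func(t)), -M * g])
    acc = np.linalg.solve(Mmat, F - Cmat @ z[2:] - Kmat @ z[:2])
    return [z[2], z[3], acc[0], acc[1]]

# Integrate the driver case from rest with s(t) = 0:
sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0],
                args=(lambda t: 0.0,), max_step=1e-2)
```

In a full reproduction, `lambda t: 0.0` would be replaced by the saw-tooth excitation of Eqs. (4a)/(4b) and the parameters by the Table 4 values.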
The development of the angular velocity fluctuations in the driver and driven cases is presented using wavelet scalograms in Figs. 9 and 10, respectively. Dotted white traces provide the time-varying natural frequencies obtained from the model eigen analysis. The corresponding vibration modes (at 1.8 min, as denoted by the red dash-dotted line) for the two observed frequencies are shown in Fig. 9(a). At the lower frequency (the dominant vibration mode), the oscillations of the pulley and the tension weight are similar in scale, while at the higher frequency (the secondary vibration mode), the tension weight oscillations dominate the system response. It is also evident in all plots that the contact instabilities (detachment events) excite the lower frequency vibration mode, whose magnitude grows in time.
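At each instant, the quasi-static eigen analysis reduces to a 2×2 generalized eigenvalue problem K v = ω² M v built from the Eq. (3) matrices. A hedged Python sketch follows; the default parameter values are illustrative placeholders, not Table 4 entries.

```python
import numpy as np
from scipy.linalg import eigh

def natural_frequencies(k1, k2, R=0.010, I_p=1e-5, m=0.4, M=0.6):
    """Quasi-static vibration modes of Eq. (3): solve the generalized
    eigenvalue problem K v = w^2 M v using the instantaneous span
    stiffnesses k1 and k2 (damping neglected, as in a standard modal
    analysis). Returns natural frequencies in Hz (ascending) and the
    mode shapes as columns."""
    Mmat = np.diag([I_p + m * R**2, M])                       # mass matrix
    Kmat = np.array([[R**2 * (k1 + k2), -R * k2],             # stiffness
                     [-R * k2, k2]])
    w2, modes = eigh(Kmat, Mmat)                              # ascending w^2
    return np.sqrt(w2) / (2.0 * np.pi), modes
```

Sweeping k1(t) and k2(t) over the simulation time reproduces curves of the kind overlaid as dotted white traces on the scalograms.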
The scalograms document strong agreement between the numerical model and the experimental results, and clearly explain the time-dependent frequencies observed in the experiment. Both the model and the experiment exhibit a frequency band between the low and high natural frequencies of the angular velocity fluctuations, associated with the contact instabilities in the experiment and with the excitation source s(t) in the numerical model. This band is characterized by a fundamental frequency and higher harmonics in the experiment, while the model exhibits only a fundamental frequency due to the simple excitation function encoded in Eqs. (4a) and (4b). Despite these differences and the overall simplicity of the model, the theoretical results closely match the experiment, which provides strong evidence that frictional instabilities driven by unmodulated external power are the primary source of the studied system's self-oscillation.
As a final comparison, Fig. 11 details the computational and experimental results obtained at the end of the fourth revolution of the pulley. Consistent with the results shown in Figs. 7(c) and 7(d), the frequency and the amplitude of the computed angular velocity oscillation in both the driver and the driven cases decrease with increasing system inertia, while the maximum discrepancy between theory and experiment is less than 10%. Thus, given a formal description of local contact instabilities, we can predict the global dynamic behavior of our belt-drive system.
## Conclusion
To summarize, we highlight the key findings as follows. A larger applied torque accelerates the occurrence of contact instabilities in the driver case, while all other studied system response quantities remain unaffected. Increasing the driving speed increases the frequencies of the contact instability occurrence and the pulley's angular velocity oscillations, while their amplitudes are essentially unaffected. The former suggests that as transmitted power increases, more power dissipates at the interface, as expected. Surprisingly, increasing the pulley's inertia does not mitigate the contact instabilities, but instead leads to more pronounced fluctuations in the belt tension. A simple dynamic model yields similar conclusions and provides strong evidence that contact instabilities driven by unmodulated external power are the primary source of the system's self-oscillation. Our main conclusion is therefore that contact instabilities, and hence the resulting global oscillation of the system, most likely cannot be conditioned from outside; the main focus must instead be on the interface itself.
## Funding Data
• The National Science Foundation (Grant No. 1562129; Funder ID: 10.13039/501100008982)
## References
Zhang, S. , and Xia, X. , 2011, “ Modeling and Energy Efficiency Optimization of Belt Conveyors,” Appl. Energy, 88(9), pp. 3061–3071.
Euler, M. L. , 1762, “ Remarques Sur L'effect du Frottement Dans L'equilibre,” Mémoires De L'Académie Royale Des Sci., 18, pp. 265–278.
Grashof, F. , 1890, Theoretische Maschinenlehre, L. Voss, Leipzig, Germany.
Kong, L. , and Parker, R. G. , 2005, “ Steady Mechanics of Belt-Pulley Systems,” ASME J. Appl. Mech., 72(1), pp. 25–34.
Bechtel, S. , Vohra, S. , Jacob, K. , and Carlson, C. , 2000, “ The Stretching and Slipping of Belts and Fibers on Pulleys,” ASME J. Appl. Mech., 67(1), pp. 197–206.
Rubin, M. , 2000, “ An Exact Solution for Steady Motion of an Extensible Belt in Multipulley Belt Drive Systems,” ASME J. Mech. Des., 122(3), pp. 311–316.
Firbank, T. , 1970, “ Mechanics of the Belt Drive,” Int. J. Mech. Sci., 12(12), pp. 1053–1063.
Della Pietra, L. , and Timpone, F. , 2013, “ Tension in a Flat Belt Transmission: Experimental Investigation,” Mech. Mach. Theory, 70, pp. 129–156.
Gerbert, G. , 1991, “ Paper XII (i) On Flat Belt Slip,” Tribol. Ser., 18, pp. 333–340.
Gerbert, G. , 1996, “ Belt Slip—A Unified Approach,” ASME J. Mech. Des., 118(3), pp. 432–438.
Sorge, F. , 2007, “ Shear Compliance and Self-Weight Effects on Traction Belt Mechanics,” Proc. Inst. Mech. Eng., Part C, 221(12), pp. 1717–1728.
Sorge, F. , 2008, “ A Note on the Shear Influence on Belt Drive Mechanics,” ASME J. Mech. Des., 130(2), p. 024502.
Alciatore, D. , and Traver, A. , 1995, “ Multipulley Belt Drive Mechanics: Creep Theory vs Shear Theory,” ASME J. Mech. Des., 117(4), pp. 506–511.
Kong, L. , and Parker, R. G. , 2005, “ Microslip Friction in Flat Belt Drives,” Proc. Inst. Mech. Eng., Part C, 219(10), pp. 1097–1106.
Balta, B. , Sonmez, F. O. , and Cengiz, A. , 2015, “ Speed Losses in V-Ribbed Belt Drives,” Mech. Mach. Theory, 86, pp. 1–14.
Chen, T. , and Sung, C. , 2000, “ Design Considerations for Improving Transmission Efficiency of the Rubber V-Belt CVT,” Int. J. Veh. Des., 24(4), pp. 320–333.
Zhu, C. , Liu, H. , Tian, J. , Xiao, Q. , and Du, X. , 2010, “ Experimental Investigation on the Efficiency of the Pulley-Drive CVT,” Int. J. Automot. Technol., 11(2), pp. 257–261.
Firbank, T. , 1977, “ On the Forces Between the Belt and Driving Pulley of a Flat Belt Drive,” Design Engineering Technical Conference, Chicago, IL, Sept., pp. 1–5.
Kim, H. , and Marshek, K. , 1988, “ Belt Forces and Surface Model for a Cloth-Backed and a Rubber-Backed Flat Belt,” J. Mech., Transm., Autom. Des., 110(1), pp. 93–99.
Kim, H. , Marshek, K. , and Naji, M. , 1987, “ Forces Between an Abrasive Belt and Pulley,” Mech. Mach. Theory, 22(1), pp. 97–103.
Palmer, R. , and Jarvis, J. , 1980, “ Steady State Strains in Power Transmitting Flat Belts Made of Composite Material,” Strain, 16(4), pp. 156–161.
Leamy, M. J. , and Wasfy, T. M. , 2002, “ Transient and Steady-State Dynamic Finite Element Modeling of Belt-Drives,” ASME J. Dyn. Syst. Meas. Control, 124(4), pp. 575–581.
Wasfy, T. M. , and Leamy, M. , 2002, “ Effect of Bending Stiffness on the Dynamic and Steady-State Responses of Belt-Drives,” ASME Paper No. DETC2002/MECH-34223.
Leamy, M. , and Wasfy, T. , 2002, “ Analysis of Belt-Driven Mechanics Using a Creep-Rate-Dependent Friction Law,” ASME J. Appl. Mech., 69(6), pp. 763–771.
Leamy, M. J. , 2005, “ On a Perturbation Method for the Analysis of Unsteady Belt-Drive Operation,” ASME J. Appl. Mech., 72(4), pp. 570–580.
Kerkkänen, K. S. , García-Vallejo, D. , and Mikkola, A. M. , 2006, “ Modeling of Belt-Drives Using a Large Deformation Finite Element Formulation,” Nonlinear Dyn., 43(3), pp. 239–256.
Dufva, K. , Kerkkänen, K. , Maqueda, L. G. , and Shabana, A. A. , 2007, “ Nonlinear Dynamics of Three-Dimensional Belt Drives Using the Finite-Element Method,” Nonlinear Dyn., 48(4), pp. 449–466.
Čepon, G. , and Boltežar, M. , 2009, “ Dynamics of a Belt-Drive System Using a Linear Complementarity Problem for the Belt–Pulley Contact Description,” J. Sound Vib., 319(3–5), pp. 1019–1035.
Kim, D. , Leamy, M. J. , and Ferri, A. A. , 2011, “ Dynamic Modeling and Stability Analysis of Flat Belt Drives Using an Elastic/Perfectly Plastic Friction Law,” ASME J. Dyn. Syst. Meas. Control, 133(4), p. 041009.
Wu, Y. , Leamy, M. J. , and Varenberg, M. , 2018, “ Schallamach Waves in Rolling: Belt Drives,” Tribol. Int., 119, pp. 354–358.
Barquins, M. , 1985, “ Sliding Friction of Rubber and Schallamach Waves—A Review,” Mater. Sci. Eng., 73, pp. 45–63.
Fukahori, Y. , Gabriel, P. , and Busfield, J. J. C. , 2010, “ How Does Rubber Truly Slide Between Schallamach Waves and Stick-Slip Motion?,” Wear, 269(11–12), pp. 854–866.
Schallamach, A. , 1971, “ How Does Rubber Slide?,” Wear, 17(4), pp. 301–312.
Barquins, M. , and Courtel, R. , 1975, “ Rubber Friction and the Rheology of Viscoelastic Contact,” Wear, 32(2), pp. 133–150.
Barquins, M. , and Roberts, A. D. , 1986, “ Rubber-Friction Variation With Rate and Temperature—Some New Observations,” J. Phys. D-Appl. Phys., 19(4), pp. 547–563.
Best, B. , Meijers, P. , and Savkoor, A. R. , 1981, “ The Formation of Schallamach Waves,” Wear, 65(3), pp. 385–396.
Lin, T. R. , Farag, N. H. , and Pan, J. , 2005, “ Evaluation of Frequency Dependent Rubber Mount Stiffness and Damping by Impact Test,” Appl. Acoust., 66(7), pp. 829–844.
## Figures
Fig. 1
The experimental apparatus: (a) schematic and (b) system as built
Fig. 2
Schematic of the contact behavior in the driver (a) and driven (b) cases
Fig. 3
Friction, angular velocity, and characteristic images representing the evolution of the contact area (shown in black) in (a) the driver and (b) the driven cases. A wavelet decomposition exhibits the two primary components of the friction signal: fluctuations associated with detachment events (Fde) and fluctuations associated with pulley oscillations (Fpo).
Fig. 4
The frequency (f) and amplitude (A) of the fluctuations in the friction force (F), (a) and (b), respectively, and in the angular pulley velocity (α), (c) and (d), respectively, associated with detachment events (de) and pulley oscillations (po), and presented as a function of loading conditions in both the driver and driven cases. The error bars show standard deviation.
Fig. 5
The frequency (f) and amplitude (A) of the fluctuations in the friction force (F), (a) and (b), respectively, and in the angular pulley velocity (α), (c) and (d), respectively, associated with detachment events (de) and pulley oscillations (po), and presented as a function of driving speed in both the driver and driven cases. The error bars show standard deviation.
Fig. 6
Friction force obtained with and without flywheels in the driver (a) and driven (b) cases
Fig. 7
The frequency (f) and amplitude (A) of the fluctuations in the friction force (F), (a) and (b), respectively, and in the angular pulley velocity (α), (c) and (d), respectively, associated with detachment events (de) and pulley oscillations (po), and presented as a function of the pulley's relative moment of inertia in both the driver and driven cases. The error bars show standard deviation.
Fig. 8
Diagrams of a simple belt-drive system defined for the driver and driven cases
Fig. 9
Wavelet scalograms of the angular velocity fluctuations obtained from (a) the experiment and (b) the numerical model for the driver pulley equipped with no, small, and large flywheels
Fig. 10
Wavelet scalograms of the angular velocity fluctuations obtained from (a) the experiment and (b) the numerical model for the driven pulley equipped with no, small, and large flywheels
Fig. 11
The frequency (a) and the amplitude (b) of the angular velocity oscillations obtained from the experiment and numerical model as a function of the pulley's relative moment of inertia in the driver and driven cases
## Tables
Table 1 Experimental conditions for variation of driving speed
Table 2 Experimental conditions for variation of load
Table 3 Experimental conditions for variation of the moment of inertia of the pulley
Table 4 Parameters in the dynamic model
https://socratic.org/questions/59b73d20b72cff30c433f147#475109 | # Question #3f147
If $p$ and $q$ are distinct primes, then certainly their product $p q$ is a multiple of both of them.
No number smaller than $p q$ can be a multiple of both $p$ and $q$: such a number would have to contain both $p$ and $q$ in its prime factorization, and would therefore be at least $p q$.
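In other words, $p q$ is the least common multiple of $p$ and $q$. A quick check in Python (the `lcm` helper is ours; Python 3.9+ also ships `math.lcm`):

```python
from math import gcd

def lcm(a, b):
    # least common multiple via the identity lcm(a, b) * gcd(a, b) = a * b
    return a * b // gcd(a, b)

# Distinct primes share no prime factor, so their lcm is their product:
assert lcm(3, 5) == 15
assert lcm(7, 11) == 77

# And no smaller positive number is a multiple of both:
assert not any(n % 3 == 0 and n % 5 == 0 for n in range(1, 15))
```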
https://ghc.gitlab.haskell.org/ghc/doc/libraries/base-4.15.0.0/System-IO-Error.html | base-4.15.0.0: Basic libraries
System.IO.Error
Description
Standard IO Errors.
Synopsis
# I/O errors
The Haskell 2010 type for exceptions in the IO monad. Any I/O operation may raise an IOException instead of returning a result. For a more general type of exception, including also those that arise in pure code, see Exception.
In Haskell 2010, this is an opaque type.
Construct an IOException value with a string describing the error. The fail method of the IO instance of the Monad class raises a userError, thus:
instance Monad IO where
...
fail s = ioError (userError s)
Construct an IOException of the given type, where the second argument describes the error location and the third and fourth arguments contain the file handle and file path of the file involved in the error, if applicable.
Adds a location description, and optionally a file path and file handle, to an IOException. If the file handle or file path is not given, the corresponding value in the IOException remains unaltered.
## Classifying I/O errors
An error indicating that an IO operation failed because one of its arguments already exists.
An error indicating that an IO operation failed because one of its arguments does not exist.
An error indicating that an IO operation failed because one of its arguments is a single-use resource, which is already being used (for example, opening the same file twice for writing might give this error).
An error indicating that an IO operation failed because the device is full.
An error indicating that an IO operation failed because the end of file has been reached.
An error indicating that an IO operation failed because the operation was not possible. Any computation which returns an IO result may fail with isIllegalOperation. In some cases, an implementation will not be able to distinguish between the possible error causes. In this case it should fail with isIllegalOperation.
An error indicating that an IO operation failed because the user does not have sufficient operating system privilege to perform that operation.
A programmer-defined error value constructed using userError.
An error indicating that the operation failed because the resource vanished. See resourceVanishedErrorType.
Since: base-4.14.0.0
# Types of I/O error
An abstract type that contains a value for each variant of IOException.
#### Instances
Instance details (defined in GHC.IO.Exception):

Eq IOErrorType (Since: base-4.1.0.0)
Show IOErrorType (Since: base-4.1.0.0); methods include showList :: [IOErrorType] -> ShowS
I/O error where the operation failed because one of its arguments already exists.
I/O error where the operation failed because one of its arguments does not exist.
I/O error where the operation failed because one of its arguments is a single-use resource, which is already being used.
I/O error where the operation failed because the device is full.
I/O error where the operation failed because the end of file has been reached.
I/O error where the operation is not possible.
I/O error where the operation failed because the user does not have sufficient operating system privilege to perform that operation.
I/O error that is programmer-defined.
I/O error where the operation failed because the resource vanished. This happens when, for example, attempting to write to a closed socket or attempting to write to a named pipe that was deleted.
Since: base-4.14.0.0
## IOErrorType predicates
I/O error where the operation failed because one of its arguments already exists.
I/O error where the operation failed because one of its arguments does not exist.
I/O error where the operation failed because one of its arguments is a single-use resource, which is already being used.
I/O error where the operation failed because the device is full.
I/O error where the operation failed because the end of file has been reached.
I/O error where the operation is not possible.
I/O error where the operation failed because the user does not have sufficient operating system privilege to perform that operation.
I/O error that is programmer-defined.
I/O error where the operation failed because the resource vanished. See resourceVanishedErrorType.
Since: base-4.14.0.0
# Throwing and catching I/O errors
Raise an IOException in the IO monad.
catchIOError :: IO a -> (IOError -> IO a) -> IO a Source #
The catchIOError function establishes a handler that receives any IOException raised in the action protected by catchIOError. An IOException is caught by the most recent handler established by one of the exception handling functions. These handlers are not selective: all IOExceptions are caught. Exception propagation must be explicitly provided in a handler by re-raising any unwanted exceptions. For example, in
f = catchIOError g (\e -> if IO.isEOFError e then return [] else ioError e)
the function f returns [] when an end-of-file exception (cf. isEOFError) occurs in g; otherwise, the exception is propagated to the next outer handler.
When an exception propagates outside the main program, the Haskell system prints the associated IOException value and exits the program.
Non-I/O exceptions are not caught by this variant; to catch all exceptions, use catch from Control.Exception.
Since: base-4.4.0.0
tryIOError :: IO a -> IO (Either IOError a) Source #
The construct tryIOError comp exposes IO errors which occur within a computation, and which are not fully handled.
Non-I/O exceptions are not caught by this variant; to catch all exceptions, use try from Control.Exception.
Since: base-4.4.0.0
modifyIOError :: (IOError -> IOError) -> IO a -> IO a Source #
Catch any IOException that occurs in the computation and throw a modified version.
https://allnswers.com/mathematics/question14511123 | , 24.01.2020Ap621765
# The Patels have a DVD collection. Three-eighths of the DVDs are animated, two-eighths of them are mysteries, and one-eighth are comedies. The rest are about travel. What fraction of the DVDs are not about travel?
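None of the scraped answers below addresses this question directly, but the arithmetic it asks for can be checked with exact fractions (reading "about to travel" as "about travel"):

```python
from fractions import Fraction

animated = Fraction(3, 8)
mysteries = Fraction(2, 8)
comedies = Fraction(1, 8)

not_travel = animated + mysteries + comedies  # 3/8 + 2/8 + 1/8 = 6/8
travel = 1 - not_travel                       # the remaining DVDs

assert not_travel == Fraction(3, 4)
assert travel == Fraction(1, 4)
```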
no
step-by-step explanation:
g = (2, 2)
step-by-step explanation:
given the ratio is 1 : 1 then f is the midpoint of eg
let the coordinates of g = (x, y)
using the midpoint formula
(1/2)(0 + x) = 1 (multiply both sides by 2)
0 + x = 2 ⇒ x = 2
similarly
(1/2)(4 + y) = 3 (multiply both sides by 2)
4 + y = 6 ( subtract 4 from both sides )
y = 2
coordinates of g = (2, 2)
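Reading the coordinates implied by the steps above, e = (0, 4) and the midpoint f = (1, 3) (the worked lines use these values without stating them), the result g = (2, 2) can be sanity-checked:

```python
def midpoint(p, q):
    # midpoint formula: average each coordinate
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

e, g = (0, 4), (2, 2)
assert midpoint(e, g) == (1, 3)  # f, which splits eg in the ratio 1:1
```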
6(2x + 5y)
6(2x) = 12x
6(5y) = 30y
12x + 30y
https://refl1d.readthedocs.io/en/latest/api/abeles.html | # abeles - Pure python reflectivity calculator¶
check
refl: Reflectometry as a function of kz for a set of slabs.
Optical matrix form of the reflectivity calculation.
O.S. Heavens, Optical Properties of Thin Solid Films
This is a pure Python implementation of reflectometry, provided for convenience when a compiler is not available. The refl1d application uses reflmodule to compute reflectivity.
refl1d.abeles.check()[source]
refl1d.abeles.refl(kz, depth, rho, irho=0, sigma=0, rho_index=None)[source]
Reflectometry as a function of kz for a set of slabs.
kz : float[n] | Å^-1
Scattering vector $$2\pi\sin(\theta)/\lambda$$. This is $$\tfrac12 Q_z$$.
depth : float[m] | Å
thickness of each layer. The thickness of the incident medium and substrate are ignored.
rho, irho : float[n, k] | 10^-6 Å^-2
real and imaginary scattering length density for each layer for each kz Note: absorption cross section mu = 2 irho/lambda
sigma : float[m-1] | Å
interfacial roughness. This is the roughness between a layer and the subsequent layer. There is no interface associated with the substrate. The sigma array should have at least m-1 entries, though it may have m with the last entry ignored.
rho_index : int[m]
index into rho vector for each kz
Slabs are ordered with the surface SLD at index 0 and substrate at index -1, or reversed if kz < 0.
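The recursion that refl performs can be sketched in a few lines of pure Python. This is a minimal Parratt-style recursion, not the refl1d source: it assumes no absorption (irho = 0), one kz at a time, and optional Névot-Croce roughness; the function and variable names are ours.

```python
import cmath

def parratt_refl(kz, depth, rho, sigma=None):
    """Reflectivity |r|^2 at one kz for a stack of slabs.

    depth[j] is the thickness (Ang) of layer j; rho[j] its SLD in 1/Ang^2
    (i.e. the tabulated value already multiplied by 1e-6).  Layer 0 is the
    incident medium, layer -1 the substrate; their thicknesses are ignored,
    as in refl().
    """
    n = len(rho)
    # normal wavevector in each layer, measured relative to the incident medium
    k = [cmath.sqrt(kz**2 - 4 * cmath.pi * (rho[j] - rho[0])) for j in range(n)]
    r = 0.0  # no reflection from below the substrate
    for j in range(n - 2, -1, -1):
        rf = (k[j] - k[j + 1]) / (k[j] + k[j + 1])  # Fresnel coefficient
        if sigma is not None:  # Nevot-Croce roughness damping
            rf *= cmath.exp(-2 * k[j] * k[j + 1] * sigma[j] ** 2)
        # phase accumulated across the finite layer j+1 (substrate has none)
        phase = cmath.exp(2 * 1j * k[j + 1] * depth[j + 1]) if j + 1 < n - 1 else 1.0
        r = (rf + r * phase) / (1 + rf * r * phase)
    return abs(r) ** 2

# Bare Si-like substrate (rho = 2.07e-6/Ang^2): total reflection below the
# critical edge, rapid fall-off above it.
assert abs(parratt_refl(0.002, [0, 0], [0.0, 2.07e-6]) - 1.0) < 1e-9
assert parratt_refl(0.1, [0, 0], [0.0, 2.07e-6]) < 1e-5
```

For a bare substrate the loop reduces to the single Fresnel coefficient, which is what the assertions check.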
http://aviation.stackexchange.com/questions/3814/what-is-a-good-plane-for-aerial-photography | # What is a good plane for aerial photography?
I'm a bit of a photographer and I keep thinking it would be fun to try and do some shots aloft. Only trick is, I'm not sure what kind of plane I would want to use to get those pictures done. I'd have to rent the plane and the pilot and I'd like to keep the price down as best as possible... So a purpose built plane isn't such a big deal.
Other considerations
• Good view of the ground, though it doesn't have to be straight down (I'm not doing surveying)
• Stable while banking so I can more easily track shots when the plane is at an angle.
• Ability to be held stably at a bank without actually turning the craft, again to help me track my shots.
• Low vibration, for the same reason as stated over the last two.
• Big windows
• Can fly low and slow, so I can get close to things (within reason)
Also, a high wing is probably preferred. Though, if it's exceptionally good at holding a bank, it may not matter.
If you think of anything else that I should consider to keep my camera from being jangled about, and to allow me to have a good view so I can get some shots... Please feel free to leave them in the comments.
So, what plane would work best for this mission?
-
It sounds like a helicopter would actually be best for this purpose. ;) – flyingfisch May 10 at 2:02
The cost for rental would be ridiculous though...wouldn't it? That and they tend to vibrate I thought. Though if you want to put together the reasons why and post 'em, I'm curious. – Jay Carr May 10 at 2:04
This article on photography from helicopters might be useful, even if you do go for a fixed-wing craft. – David Richerby May 10 at 3:53
– RedGrittyBrick May 10 at 12:39
@RedGrittyBrick could you expand that into an answer with some explanation? – Jay Carr May 10 at 12:42
Well, the first caveat is that the best plane for aerial photography is one that someone else is flying -- Trying to set up shots while also maneuvering an aircraft is difficult at best, and can be dangerous.
• Good view of the ground
This generally implies a high wing aircraft, assuming you're shooting from inside the plane. (If you're mounting external cameras this is much less of a consideration.)
• Stable while banking & Able to fly banked without turning
Properly flown pretty much any aircraft can fly along indefinitely in an aerodynamic slip -- it's not an aerodynamically favorable configuration, but meeting this requirement shouldn't be a problem.
The need to maintain the aircraft in a slip is however one of the reasons you probably want to have someone else flying. While not difficult it is not the way the plane "wants" to be flying.
• Low Vibration
Give up the idea of long telephoto lenses or slow shutter speeds -- at least unless you have some kind of image stabilization in your lens/camera. Most GA aircraft will have some appreciable vibration.
• Big Windows
"Big" may be less important than "Removable" -- Most light GA airplanes will have windows of a reasonable size (at least in the front seats), but aircraft plexiglass windows are not of the best optical quality, so you might not want to shoot through them.
• Able to fly low and slow
The optimal aircraft here is a helicopter (in fact some might argue that the R22 helicopter is a great photo platform - but beware the vibration), but most GA training airplanes can do an adequate job here.
Again, a plug for having someone else fly: Trying to photograph something on the ground while flying low and slow is a recipe for a "moose stall", which frequently doesn't end well for anyone except the moose.
## My recommendations
In the typical GA airplane fleet I would pick a Cessna 152 or Cessna 172. These planes are pretty much ubiquitous, and among their advantages are the large hinged windows which you can flip up and pop a camera out of.
You might also consider "jump planes" which are certificated for flight with a door open or removed, providing you a giant hole in the fuselage through which you can take photos (as well as anchor points to attach yourself and your equipment to the airframe).
If you will need to hold station over your target a R22 or similar helicopter would probably be your best choice (and the R22 can certainly be flown without doors).
@JayCarr If you handhold, vibration won't be the issue - your body is an excellent isolator at engine vibration frequencies. Turbulence will be a problem though, and if you brace the camera against the airplane in any way the vibration will be intolerable in any plane. Check out Kenyon gyro stabilizers for turbulence if your budget allows. They can be bought for about $1,000 or rented cheaper. And use a plane you can open the windows in or the glare and scratches in the acrylic will ruin the shot. C152/172 both have semi openable windows. – dvnrrs May 10 at 10:54
In most GA planes (and presumably helicopters though I don't know much about those) there are power settings that induce more or less vibration - you'll be able to feel any vibration that will affect your shot, and can usually adjust power accordingly. Like @dvnrrs said though, the body is a great shock isolator. Mounting to the plane is where you may need special equipment. – voretaq7 May 10 at 20:00
@voretaq7 Mkay, I'll keep that in mind. Probably would be a good thing to chat with the pilot about before heading up... Just wish I could combine this answer with Pauls, he got the "which pilot to hire" about perfect. But, I prefer this one because it covers the plane aspect of the questions better... – Jay Carr May 11 at 2:23
@Qantas94Heavy Yes and No - to me this is a question that can be "answered" (at least to some degree) with factual information (characteristics to look for in a photo plane). That's as opposed to the sort of "recommend a plane for this mission" stuff I was thinking about in that Meta answer ("I've got a husband, two kids, and a dog, and we go to Grandma's 500 miles away every month -- what kind of plane should I buy?") which has a lot more subjectivity – voretaq7 May 12 at 5:45

I used to work at a sport parachuting center - we occasionally had photographers rent one of our Cessna 182 airplanes for the same purpose.
An advantage of jump planes is the pilots are quite used to odd attitudes (and even odder requests) so a 60 degree bank angle with the door open and you leaning out of it will get a reply of "no problem". Our conditions were the photographer wears a parachute, a seatbelt, and if the camera goes for a jump that's just too bad for your wallet. 30 minute lecture on what to do if you fall out. Short form: drop camera, grab this handle, pull. And the trip's over if you do anything silly.

You would sit on the floor where the co-pilot's seat would normally be, the pilot will open the door as needed, and you have a clear view out the right rear quarter of the plane. Obstacles are the wheel (front half of the door looking down) and wing strut (forward) - the tail is well out of the way. It's very noisy, but unless your camera is actually outside the fuselage the wind blast is not an issue. Don't change lenses though, and your clip-on lens hood will probably disappear rather quickly.

You can sit on the door ledge with your feet outside and have an excellent, stable field of view with both hands free to operate your camera. I've sat there hundreds of times without a seatbelt, it's no problem even in steep turns. Just a bit odd seeing the ground above the wingtip. Difference is, I don't care if I fall out.

-

Excellent answer paul, I hadn't really considered using a jump company, but when you put it in that context it makes a lot of sense. – Jay Carr May 11 at 2:17
I mean, having a pilot who is used to having passengers hanging halfway out the door would be optimal, I would think. We have a company that does jumps here, I wonder if they also have a 182 to rent out... – Jay Carr May 11 at 2:24
A 182 is basically the standard jump plane, unless the place is busy and has all-turbine aircraft. A quick check of dropzones near Springfield doesn't list their aircraft on websites but they will either have one or know who does have one - it's a rather small community.
– paul May 11 at 3:39
Sweet, I'll check that out then, thanks :) – Jay Carr May 11 at 3:55

Chris Dahle-Bredine (author of Shot From Above) uses an ultralight for this kind of photography. It's cheap, only burns a little gas, is cheap to maintain, has lots of visibility, and he takes amazing shots!

-

Ultralights are perhaps the one exception to "the pilot shouldn't be the cameraman" - and the visibility out of a powered parachute certainly can't be beat! – voretaq7 May 10 at 20:01
Yes, ultralight is also the way to go: I know of a Humbert Tétras used for a large scale aerial survey here in Madagascar. – menjaraz May 11 at 5:17

I'm going to try to offer an out-of-the-box solution and suggest a remote-controlled quadcopter / octocopter UAV with a camera mounted on it. Such a system seems to satisfy most of your stated and implied needs:

I'd have to rent the plane and the pilot and I'd like to keep the price down as best as possible.

A UAV seems like an excellent choice in this regard: unmanned vehicles tend to be cheaper than manned ones, and don't require a pilot's license to operate (although other regulations may still apply). You probably do still need a skilled operator, since controlling a quadcopter does take some practice, and it's difficult (not to mention potentially dangerous) to control the vehicle and the camera at the same time. Having a separate pilot frees you from having to do that job yourself, and lets you concentrate fully on taking pictures. The commercial operators I know of that do this kind of work always have a separate pilot and camera operator.

• Good view of the ground, though it doesn't have to be straight down (I'm not doing surveying). [...] Big windows. [...] Also, a high wing is probably preferred. Though, if it's exceptionally good at holding a bank, it may not matter.

Check.
The setups I've seen usually have the camera mounted under the vehicle body, so you can point it in any direction, including straight down, and the only things that could possibly get in the way are the landing struts. • Stable while banking so I can more easily track shots when the plane is at an angle. • Ability to be held stably at a bank without actually turning the craft, again to help me track my shots. Check, sort of. No, the quadcopter itself can't maintain a steady tilt without accelerating, but the rigs I've seen let the camera be held steady at any angle, while the copter can freely move in any direction in three dimensions. • Low vibration, for the same reason as stated over the last two. Check. At least, I've never heard of this being a problem. • Can fly low and slow for if I want to get close to things (within reason.) This is the one aspect in which remote-controlled helicopters absolutely excel: they can hover in place, or move at any desired speed (for panning shots), at any altitude above ground level. Want to hover in front of someone's face to shoot a portrait of them? Perfectly possible (if maybe a little bit unnerving). Mind you, I haven't actually flown or worked with such systems myself; most of my knowledge comes from watching a demonstration at a "maker faire"-style event by a local company operating such systems, and exchanging a few words with the folks there. But it was definitely impressive, and the way I would go if I ever needed to do some aerial photography. That said, there certainly are also some photography applications for which remote-controlled vehicles are not so well suited, such as high altitudes, very long flight times, flight over significant distances of inaccessible terrain or operation in bad weather. However, if your requirements don't include any of those, they may well be much more practical and less costly than renting a manned aircraft. 
- Ah, see, but you missed the more important implied criteria: I like making up reasons to fly in an airplane ;). But yes, other than that, a quadcopter sounds like a pretty good idea. – Jay Carr May 11 at 2:21 I was going to post a similar thread. As much as I hate UAV/drones, for most low level aerial photography of the ground they seem to be a great tool. In fact I am going to be getting a quad or hex-copter this fall to play with aerial photography. Maybe I will get this video of a rocket. youtube.com/watch?v=9ZDkItO-0a4 – JerryKur Aug 19 at 16:54 Well, the absolute cheapest and easiest way for aerial video is with a homemade garbage bag kite. Don't laugh, sometimes you can get great results. And you can easily change the viewpoint angle by simply reeling the kite in, adjusting the camera to a different view angle, and reeling it back out. You can also fly all day as long as you got wind. Here's how. http://quadcopter101.blogspot.com/2014/04/do-you-wish-to-do-aerial-video-but-only.html But another cheap and easy way is with a toy quadcopter. And again you can get surprisingly good results with even the cheapest (about $65) quadcopters.
Again, here's how: http://quadcopter101.blogspot.com/2014/02/misc-4-aerial-video-part-2.html
http://openstudy.com/updates/4db9c9a0b0ab8b0bf5c67c8b | ## I<3Fun 4 years ago Simplify: $7\sqrt{3} + 8\sqrt{3} - 2\sqrt{2}$ A) $13\sqrt{5}$ B) $13\sqrt{6}$ C) $15\sqrt{3} - 2\sqrt{2}$ I was confused if it would be B or C
1. mmbuckaroos
C
2. SkateboarderT_3
C
3. a132
okay. so it looks like: $7\sqrt{3} + 8\sqrt{3} - 2\sqrt{2}$ The first thing to do is combine like terms. Both $7\sqrt{3}$ and $8\sqrt{3}$ have the same radicand in common, so you can combine them to be $15\sqrt{3}$ You have the $-2\sqrt{2}$ left. You can't do anything to that. So, like the two above me have said, your answer will be C: $15\sqrt{3} - 2\sqrt{2}$
4. a132
* i meant both 7(sqrt) 3 and 8(sqrt) 3 | 2015-06-30 10:14:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7787712216377258, "perplexity": 734.6311626528193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375093400.45/warc/CC-MAIN-20150627031813-00130-ip-10-179-60-89.ec2.internal.warc.gz"} |
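Since both answers combine $7\sqrt{3}$ and $8\sqrt{3}$ into $15\sqrt{3}$, a quick numeric sanity check in plain Python (not part of the original thread) confirms that option C equals the original expression:

```python
import math

# 7*sqrt(3) + 8*sqrt(3) - 2*sqrt(2) should equal 15*sqrt(3) - 2*sqrt(2)
lhs = 7 * math.sqrt(3) + 8 * math.sqrt(3) - 2 * math.sqrt(2)
rhs = 15 * math.sqrt(3) - 2 * math.sqrt(2)
print(abs(lhs - rhs) < 1e-12)  # True: the like terms combine exactly
```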
https://answers.ros.org/question/195352/different-padding-for-different-objects/ | # Different padding for different objects?
Hey everyone,
I have the following problem with MoveIt & DepthImageOctomapUpdater:
I have a Kinect looking at my robot-arm. For the self-detection to work properly, I need to set pretty big padding values in the sensors_rgbd.yaml. That's not ideal, but ok for me. But now the robot is standing on a table, which I also add as a collisionObject. Once it's added it gets the same high padding values. This leads to objects disappearing in the octomap, once they are placed on the table.
Is there a way to tell MoveIt to only apply the generous padding to the robot and not to the collisionObjects? Also, would some of you mind explaining to me the difference between padding_scale and padding_offset? I couldn't quite make sense of the explanation in the wiki.
Rabe
sensors:
- sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
image_topic: /camera_top/depth/image_raw
queue_size: 15
near_clipping_plane_distance: 0.65
far_clipping_plane_distance: 2.0
skip_vertical_pixels: 3
skip_horizontal_pixels: 3
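As a side note on the padding question (this is my reading of the MoveIt self-filter, so treat it as an assumption rather than documentation): `padding_scale` inflates the robot's self-filter shapes multiplicatively, while `padding_offset` adds a fixed margin in meters on top. Both are per-sensor settings in the same YAML, e.g.:

```yaml
sensors:
  - sensor_plugin: occupancy_map_monitor/DepthImageOctomapUpdater
    image_topic: /camera_top/depth/image_raw
    # padded size is roughly padding_scale * original size + padding_offset (meters)
    padding_scale: 1.0
    padding_offset: 0.03
```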
https://blog.csdn.net/Jaihk662/article/details/77846118 | # bzoj 1650: [Usaco2006 Dec] River Hopscotch (binary search)
## 1650: [Usaco2006 Dec] River Hopscotch
Time Limit: 5 Sec Memory Limit: 64 MB
Submit: 721 Solved: 457
[ Submit][ Status][ Discuss]
## Description
Every year the cows hold an event featuring a peculiar version of hopscotch that involves carefully jumping from rock to rock in a river. The excitement takes place on a long, straight river with a rock at the start and another rock at the end, L units away from the start (1 <= L <= 1,000,000,000). Along the river between the starting and ending rocks, N (0 <= N <= 50,000) more rocks appear, each at an integral distance Di from the start (0 < Di < L). To play the game, each cow in turn starts at the starting rock and tries to reach the finish at the ending rock, jumping only from rock to rock. Of course, less agile cows never make it to the final rock, ending up instead in the river. Farmer John is proud of his cows and watches this event each year. But as time goes by, he tires of watching the timid cows of the other farmers limp across the short distances between rocks placed too closely together. He plans to remove several rocks in order to increase the shortest distance a cow will have to jump to reach the end. He knows he cannot remove the starting and ending rocks, but he calculates that he has enough resources to remove up to M rocks (0 <= M <= N). FJ wants to know exactly how much he can increase the shortest distance *before* he starts removing the rocks. Help Farmer John determine the greatest possible shortest distance a cow has to jump after removing the optimal set of M rocks.
## Input
* Line 1: Three space-separated integers: L, N, and M * Lines 2..N+1: Each line contains a single integer indicating how far some rock is away from the starting rock. No two rocks share the same position.
## Output
* Line 1: A single integer that is the maximum of the shortest distance a cow has to jump after removing M rocks
## Sample Input
25 5 2
2
14
11
21
17
## Sample Output
4
#include<stdio.h>
#include<algorithm>
using namespace std;
int n, m, a[50010];
/* Feasibility check: with minimum gap x, can at most m rocks be removed? */
int Jud(int x)
{
    int i, sum, temp;
    temp = 1, sum = 0;          /* temp: last kept rock, sum: rocks removed */
    for(i=2;i<=n;i++)
    {
        if(a[i]-a[temp]<x)      /* gap too small: remove rock i */
            sum++;
        else
            temp = i;           /* gap large enough: keep rock i */
    }
    if(sum>m)
        return 0;
    return 1;
}
int main(void)
{
    int len, i, l, r, mid;
    scanf("%d%d%d", &len, &n, &m);
    a[1] = 0, a[n+2] = len;     /* add the start and end rocks */
    for(i=2;i<=n+1;i++)
        scanf("%d", &a[i]);
    n += 2;
    sort(a+1, a+n+1);
    l = 1, r = len;
    while(l<r)                  /* binary search on the answer */
    {
        mid = (l+r+1)/2;
        if(Jud(mid))
            l = mid;            /* gap mid is achievable: try larger */
        else
            r = mid-1;
    }
    printf("%d\n", l);
    return 0;
}
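For readers who prefer Python, the same binary-search-on-the-answer idea can be sketched as follows (a re-implementation of the C solution above, checked only against the sample, not the judge):

```python
def max_min_jump(L, rocks, m):
    """Largest possible shortest gap after removing at most m rocks."""
    pos = sorted([0] + rocks + [L])

    def feasible(d):
        # Greedily keep rocks; count how many must go to keep all gaps >= d.
        removed, last = 0, 0
        for i in range(1, len(pos)):
            if pos[i] - pos[last] < d:
                removed += 1
            else:
                last = i
        return removed <= m

    lo, hi = 1, L
    while lo < hi:                  # binary search on the answer
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid                # achievable: try a larger gap
        else:
            hi = mid - 1
    return lo

print(max_min_jump(25, [2, 14, 11, 21, 17], 2))  # 4, matching the sample
```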
http://wikieducator.org/User:Wsiaosi/My_Sandbox | # User:Wsiaosi/My Sandbox
This is bold and this is italics This is bold and this is italics This is bold and this is italics This is bold and this is italics
• One
• Two
• This is a sub of two
• Third
1. One
• This is a sub of one
2. Two
1. This is a sub of two
3. Three
https://www.studysmarter.us/textbooks/physics/college-physics-urone-1st-edition/work-energy-and-energy-resources/q59pe-a-how-long-can-you-play-tennis-on-the-800-kj-about-200/ |
Q59PE
Found in: Page 263
### College Physics (Urone)
Book edition 1st Edition
Author(s) Paul Peter Urone
Pages 1272 pages
ISBN 9781938168000
# (a) How long can you play tennis on the 800 kJ (about 200 kcal) of energy in a candy bar? (b) Does this seem like a long time? Discuss why exercise is necessary but may not be sufficient to cause a person to lose weight.
(a) The duration for which you can play tennis is 30.3 min.
(b) Yes, it seems like a long time.
## Step 1: Power
Power is a scalar quantity that measures how fast energy is being consumed by the system.
Mathematically,
$P=\frac{E}{t}$……………….. (1.1)
Here, E is the amount of energy consumed, and t is the time.
## Step 2: The time which you can play tennis
(a)
The time can be calculated using equation (1.1).
Rearranging equation (1.1) in order to get an expression for time.
$t=\frac{E}{P}$
Here, E is the energy of the tennis player $\left(E=800\text{kJ}\right)$ , and P is the power consumed while playing tennis $\left(P=440\text{W}\right)$ .
Putting all known values,
$\begin{array}{rcl}t& =& \frac{800\text{kJ}}{440\text{W}}\\ & =& \frac{\left(800\text{kJ}\right)×\left(\frac{1000\text{J}}{1\text{kJ}}\right)}{440\text{W}}\\ & =& 1818.18\text{sec}×\left(\frac{1\text{min}}{60\text{sec}}\right)\\ & =& 30.3\text{min}\\ & & \end{array}$
Therefore, the duration for which you can play tennis is 30.3 min.
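The arithmetic in Step 2 can be checked in a couple of lines of Python, using the same values (E = 800 kJ in the candy bar, P = 440 W assumed for playing tennis):

```python
E = 800e3                # J, energy content of the candy bar (800 kJ)
P = 440.0                # W, power consumed while playing tennis
t = E / P                # seconds
print(round(t / 60, 1))  # 30.3 minutes
```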
## Step 3: Exercise is necessary
(b)
Yes, this is surprisingly long. Exercise is necessary, but to lose weight a person must burn more energy than they consume, and a single candy bar already fuels half an hour of tennis.
https://www.jobacle.nl/?tag=licensing | ## Utilizing your IT environment with Oracle Database Appliance
By |2016-07-21T08:40:50+00:00July 21st, 2016|Categories: ODA|Tags: , , , , , , |
I admit, I’m a bit of a fan of the Oracle Database Appliance. And I also admit there are some characteristics of the X5-2 ODA’s which made it sometimes a bit hard to fit in the needs of the customer. I’ll come to that later in this post. With the […]
## Oracle licenses and the cloud
By |2015-03-15T10:02:00+00:00March 15th, 2015|Categories: Database, licensing|Tags: , , , , , |
Suppose the number of Oracle licenses you acquired in the past, is in line with the use. That is, you’re compliant with all the licensing rules Oracle come up with. The license form you use is the so called ‘Full use’ license, this is the most common license form. Everybody […]
## Oracle has changed the exchange rate of the Euro
By |2015-03-02T17:02:25+00:00March 2nd, 2015|Categories: licensing|Tags: , |
For years the number 0.7893 has meant something to those who work with licenses in Europe on a regular basis. This number is the 'Current local Pricing Exchange rate'.
And Oracle is entitled to change this rate twice a year:
## Licensing development and test environments
By |2014-05-17T18:28:20+00:00May 17th, 2014|Categories: Database, licensing|Tags: , , |
This post has already been published in the past on the AMIS technology blog.
Once in a while a company wants to know if its Oracle development and test environments need to be licensed. And in a lot of cases this question can simply be answered as: yes, these […]
https://quant.stackexchange.com/questions/35016/how-do-i-control-for-a-firms-factor-loadings-based-on-the-fama-french-model-i?noredirect=1 | # How do I control for a firm's “factor loadings” based on the Fama French model in a regression model?
I asked this question before, but in the wrong community (sorry):
I want to explain stock returns in a regression model. Besides regressing against my main explanatory variables, I want to control for at least the most common risk factors. In a paper (reference below), the authors state that they "[...] control for the firm’s factor loadings based on the Fama-French three-factor model [...]" (p. 13).
This sounds as if they include the factor loadings as explanatory variables in their regression. To me, this does not make sense and I guess I interpret this wrong. I would be very thankful if somebody could help me clarify this. How exactly do they control for the firm's factor loadings? Do they include the factor returns, rather than the factor loadings, as explanatory variables?
Thank you very much in advance!
PS: This is the paper I am talking about:
Lins, Karl V., Henri Servaes, and Ane Tamayo. "Social capital, trust, and firm performance: The value of corporate social responsibility during the financial crisis." The Journal of Finance (2017).
• Are your explanatory variables tradable returns (eg. return on Apple stock, return on MSCI index)? Or are they something other than returns (eg. GDP growth)? – Matthew Gunn Jul 6 '17 at 18:49
• Hello @MatthewGunn! They are something other than tradable returns (corporate social responsibility ratings). Does this cause problems since the factor returns are, more or less tradable, returns? – Hans Leifson Jul 6 '17 at 19:48
It sounds like the reasonable/standard thing to do would be:
1. Sort your companies into five portfolios based upon quintiles of social responsibility.
• Also make a long-short portfolio of the top quintile portfolio minus the bottom quintile portfolio. (This long-short return will be an excess return so when you run the below regression, you would not subtract the risk free rate.)
2. Regress returns on Fama-French factors (and possibly momentum) to control for those risk factors.
For example, to compute Jensen's alpha relative to the Fama-French three factor model you would run the following regression for portfolio $i$:
$$R_{it} - R^f_t = \alpha_i + \beta_{i,1} \mathit{RMRF}_t + \beta_{i,2} \mathit{SMB}_t + \beta_{i,3} \mathit{HML}_t + \epsilon_{it}$$
Or for five factor model: $$R_{it} - R^f_t = \alpha_i + \beta_{i,1} \mathit{RMRF}_t + \beta_{i,2} \mathit{SMB}_t + \beta_{i,3} \mathit{HML}_t + \beta_{i,4}\mathit{CMA}_t + \beta_{i,5} \mathit{RMW}_t + \epsilon_{it}$$
The $\alpha_i$, Jensen's alpha, is the average return above and beyond what would be expected based upon covariance with the various risk factors.
Factor returns etc... are on Ken French's website.
You're forming portfolios based upon some signal and checking against some asset pricing model by estimating Jensen's alpha. Some call this forming calendar-time portfolios, and it naturally corrects standard errors for cross-sectional correlation in returns. Calculate heteroscedasticity-consistent standard errors.
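As an illustration of step 2, here is a sketch of the time-series regression with White (HC0) standard errors in plain numpy. This is not from the paper; the synthetic series below stand in for actual portfolio excess returns and the RMRF/SMB/HML factor returns from Ken French's website:

```python
import numpy as np

def jensens_alpha(excess_ret, factors):
    """OLS of portfolio excess returns on factor returns.

    Returns (alpha, betas, se), where se are heteroscedasticity-consistent
    (White/HC0) standard errors for [alpha, beta_1, ...].
    """
    T = len(excess_ret)
    X = np.column_stack([np.ones(T), factors])        # intercept column = alpha
    beta, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    resid = excess_ret - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (resid ** 2)[:, None])
    se = np.sqrt(np.diag(bread @ meat @ bread))       # sandwich estimator
    return beta[0], beta[1:], se

# Synthetic check with known alpha and factor loadings.
rng = np.random.default_rng(0)
F = rng.normal(0.0, 0.04, size=(500, 3))              # stand-ins for RMRF, SMB, HML
r = 0.001 + F @ np.array([1.1, 0.4, -0.2]) + rng.normal(0.0, 0.01, 500)
alpha, betas, se = jensens_alpha(r, F)
```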
### Computing abnormal returns
The basic idea of abnormal returns is that they are returns minus some expectation of what returns should be given an asset pricing model.
$$\mathit{AR}_{it} = R_{it} - \operatorname{E}[R_{it} \mid \mathcal{F}]$$
For example, under the Fama-French three factor model, the abnormal return would be:
$$\mathit{AR}_{it} = R_{it} - R^f_t - \left( \beta_1 \mathit{RMRF}_t + \beta_2 \mathit{SMB}_t + \beta_3 \mathit{HML}_t \right)$$
where the betas are computed using a time-series regression.
If you regress abnormal returns on stuff, you should cluster standard errors by time because of cross-sectional correlation.
• Thank you very much for this answer @Matthew Gunn! It will certainly help me with my thesis. You even gave me hints regarding appropriate standard errors! However, I came across criticism concerning such kind of portfolio studies, because effects of CSR might be "drowned by noise" (or something like that, I can provide some sources if you like). That's why I thought about regressing returns of single stocks against social responsibility and control variables, including the Fama-French factors. Do you have any idea how I could do so? – Hans Leifson Jul 7 '17 at 16:02
To my knowledge, there are two options that can be utilized with the Fama-French five-factor model.
1. Multivariate multiple regression model
2. Use MANOVA, but with a purposive or judgmental sampling technique: before selecting any sample, apply inclusion and exclusion criteria, which purposive sampling makes possible. This is the primary way to control errors and bias in the independent variables, so that errors and bias are controlled before the analysis takes place. If you use a random sampling technique instead, then besides MANOVA you should also use MANCOVA to remove errors and bias in the population means of the dependent variables; the covariate then acts as a controlling factor.
3. You should use dummy variables as covariates in the MANCOVA analysis, for example psychological factors or active versus passive managers. Dummy variables are coded 1 and 0, e.g. 1 for active and 0 for passive.
4. Covariates are used for confounding or extraneous factors: factors that have no correlation with the independent variables but are directly correlated with the dependent variables. Put another way, extraneous factors become correlated with the independent variables when we use MANOVA rather than MANCOVA, because the error term first affects the independent variables and then, indirectly, the dependent variables.
• Hi, we prefer to keep all communication on this site. – Bob Jansen Apr 24 at 10:48
https://magnusdv.github.io/pedsuite/articles/web_only/quickped.html | What is QuickPed?
QuickPed is an interactive web application for drawing and analysing pedigrees. A created pedigree may be saved as an image or as a text file in ped format (see below). You may also obtain various information about the pedigree, including relatedness coefficients and verbal descriptions of relationships.
QuickPed is powered by the ped suite and kinship2 for pedigree plotting. The web app was built with Shiny.
Getting started
Creating pedigrees with QuickPed is very intuitive: Select a suitable start pedigree and modify it as needed. You may also load an existing ped file (see below). Modifications are done by clicking on one or several individuals and then applying appropriate buttons, for instance to add children, siblings or parents. At any time you may change attributes like sex, affection status, twin status and ID labels.
Tips and tricks
• Selecting individuals. Select/deselect pedigree members by clicking on them. Selected individuals are shown in red colour. Pro tip: To deselect everyone, click the “Selection” button under the “Remove” heading.
• Labels: Automatic labelling of the pedigree members are available in two different formats. The button marked 1,2,.. applies numeric labels to all individuals, in the order of their appearance in the pedigree plot. Alternatively, the I-1, I-2,.. button numbers the members generation-wise, using roman numerals to indicate the generation number.
• Unknown sex. If you double click on a pedigree member, its symbol will change into a diamond representing unknown sex. Double click again to revert. Note: Only pedigree leaves (members without children) may have unknown sex.
• Plot settings. If the pedigree gets too large, increase the plot region using the control panel on the far right. Here you may also adjust the margins, the size of pedigree symbols and text labels.
Built-in pedigrees
In the left-most panel of QuickPed the user may choose among a selection of standard pedigrees, including trios, full/half siblings, avuncular and cousin pedigrees of different kinds. Also included are several interesting (albeit less common) pedigree structures like double cousins and quad half first cousins. Finally, the following historic pedigrees are available:
• Habsburg: A subset of the infamously inbred family tree of the Habsburg royalties. The inbreeding coefficient of King Charles II of Spain (1661-1700) was approximately 0.25, i.e., equivalent to that of a child produced by full siblings. Pedigree adapted from Wikipedia. See also The Role of Inbreeding in the Extinction of a European Royal Dynasty.
• Jicaque: A pedigree of Jicaque Indians originally studied by Chapman & Jacquard (1971) and subsequently used in many papers on relatedness and pedigree coefficients.
• Queen Victoria (haemophilia): The royal family tree descending from Queen Victoria, showing the X-linked inheritance of haemophilia. Adapted from Figure S1 of Genotype Analysis Identifies the Cause of the “Royal Disease”.
• Tutankhamun: The family tree of the Egyptian pharaoh Tutankhamun, as inferred from genetic evidence presented by Hawass et al. (2010), Ancestry and Pathology in King Tutankhamun's Family.
Relationship information
The buttons Coeffs and Describe can be used to analyse the relatedness between selected individuals in the current pedigree.
• Coeffs: This prints a variety of pedigree coefficients.
• The inbreeding coefficient of each individual (this works for any number of selected members).
• The kinship coefficient $$\varphi$$.
• The IBD coefficients $$\kappa = (\kappa_0, \kappa_1, \kappa_2)$$, defined as the probabilities of sharing 0, 1, and 2 alleles identical by descent (IBD). These are well-defined only if both individuals are non-inbred.
• The 9 condensed identity coefficients of Jacquard, $$\Delta = (\Delta_1, ..., \Delta_9)$$.
More information about these coefficients can be found in the documentation of the ribd package, which is used in the calculations.
• Describe: This prints a verbal description of the relationship, generated by verbalisr.
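To make the kinship coefficient $$\varphi$$ concrete, here is a sketch of its classic recursive definition in Python (this is not the ribd implementation; it assumes integer IDs where parents always have smaller IDs than their children):

```python
def kinship(i, j, parents):
    """phi(i, j): probability a random allele from i is IBD to one from j.

    `parents` maps child ID -> (father ID, mother ID); founders are absent.
    Assumes parent IDs are always smaller than child IDs.
    """
    if i is None or j is None:
        return 0.0
    if i == j:
        f, m = parents.get(i, (None, None))
        return 0.5 * (1 + kinship(f, m, parents))   # 1/2 * (1 + inbreeding)
    if i < j:
        i, j = j, i                                 # recurse via the child
    f, m = parents.get(i, (None, None))
    return 0.5 * (kinship(f, j, parents) + kinship(m, j, parents))

trio = {3: (1, 2)}
print(kinship(1, 3, trio))   # 0.25: parent-child kinship
```

For full siblings (two children of the same founder couple) the same recursion also gives 1/4, matching the familiar values.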
Ped files
A useful feature of QuickPed is to produce text files describing pedigrees in so-called ped format. Such files are often required by software for pedigree analysis.
For a simple illustration, consider this pedigree:
A text file describing this pedigree may contain the following.
id fid mid sex aff
1 0 0 1 1
2 0 0 2 1
3 1 2 2 2
The columns are:
• id: Individual ID
• fid: Father’s ID (or 0 if not included in the pedigree)
• mid: Mother's ID (or 0 if not included in the pedigree)
• sex: Sex (1 = male; 2 = female; 0 = unknown)
• aff: Affection status (1 = unaffected; 2 = affected; 0 = unknown)
It should be noted that the ped format is not completely standardised, and different software may use slightly different versions. For example, a first column with Family ID is sometimes required. Also, the aff column may not be needed in non-medical applications. These and other details may be specified when using QuickPed.
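The three-line example above can be read back with a few lines of Python (a sketch assuming the simple whitespace-separated 5-column variant described here, with a header row):

```python
def parse_ped(text):
    """Parse the 5-column ped format (id, fid, mid, sex, aff) into dicts."""
    rows = [line.split() for line in text.strip().splitlines()]
    header, body = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in body]

ped = parse_ped("""\
id fid mid sex aff
1 0 0 1 1
2 0 0 2 1
3 1 2 2 2
""")
print(ped[2]["fid"], ped[2]["mid"])  # 1 2  (member 3's parents)
```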
Some pedigree information may be shown on the plot, but is not stored in the ped file. In the current version of QuickPed, this includes twin relationships, and also deceased status.
https://ask.sagemath.org/question/41187/ed25519-elliptic-curve/?sort=votes | # ed25519 elliptic curve
Is is possible to represent the elliptic curve used by the ed25519 signature scheme in Sage? How?
EllipticCurve takes parameters for the long Weierstrass form of an Elliptic curve. But I don't know how to convert the ed25519 curve to that form, if it even is possible.
Did you mean Curve25519? It looks like the standard way to write Curve25519 is already in long Weierstrass form (see the wiki site: wikipedia.org/wiki/Curve25519). In Sage we can build this as:
E = EllipticCurve(GF(2^255-19),[0,486662,0,1,0])
I verified that the trace matched that listed on the safecurves site. Namely,
E.trace_of_frobenius() == -221938542218978828286815502327069187962
The curve used in the signature scheme Ed25519 (as explained on wikipedia.org/wiki/EdDSA#Ed25519 ), is birationally equivalent to Curve25519, which is what we constructed above. A change of variables is given in the wikipedia entry.
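As a quick sanity check of that equivalence (done here in plain Python rather than Sage), the map y ↦ (1+y)/(1−y) from the Wikipedia entry should send the Ed25519 base point's y-coordinate, 4/5, to the Curve25519 base point u = 9:

```python
# Check the Edwards → Montgomery change of variables on the base point.
p = 2**255 - 19
y = 4 * pow(5, -1, p) % p            # Ed25519 base-point y-coordinate
u = (1 + y) * pow(1 - y, -1, p) % p  # Montgomery-side u-coordinate
print(u)  # → 9 (the standard Curve25519 base point)
```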
I hope this helps,
Travis
Did you mean Curve25519?
No. I mean the curve used by ed25519 signature scheme. It is birationally equivalent to E = EllipticCurve(GF(2^255-19),[0,486662,0,1,0]), but I wish to define a Sage EllipticCurve for the ed25519 curve:
-x^2 + y^2 = 1 - (121665/121666) x^2 y^2
I wish I could express, for example, Diffie-Hellman, with simple Sage code like:
E = EllipticCurve(???)
P = ... base point...
a, b = random field elements
A = a*P
B = b*P
a*B == b*A # Returns True
I would like to experiment with ed25519 signatures, including distributed signing. I need Sage to generate the correct test data.
I suppose I could convert coordinates from X25519 (Curve25519) to Ed25519 coordinates as Travis says. However, I don't want to lose the nice Sage syntax.
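Pending a native Sage construction, the wished-for snippet can at least be prototyped in plain Python. The sketch below implements the (a = −1) twisted Edwards addition law directly on the ed25519 curve; the constants and the x-coordinate recovery follow RFC 8032, and the scalars are toy values for illustration only:

```python
# Ed25519's twisted Edwards curve -x^2 + y^2 = 1 + d x^2 y^2 over GF(2^255-19),
# in plain Python. Constants and x-recovery follow RFC 8032.
p = 2**255 - 19
d = (-121665 * pow(121666, -1, p)) % p

# Base point: y = 4/5 (mod p); recover x per RFC 8032, choosing the even root.
gy = (4 * pow(5, -1, p)) % p
u, v = (gy * gy - 1) % p, (d * gy * gy + 1) % p
gx = (u * pow(v, 3, p)) * pow(u * pow(v, 7, p), (p - 5) // 8, p) % p
if (v * gx * gx - u) % p:
    gx = gx * pow(2, (p - 1) // 4, p) % p  # multiply by sqrt(-1)
if gx % 2:
    gx = p - gx
G = (gx, gy)

def edwards_add(P, Q):
    """Complete addition law on the (a = -1) twisted Edwards curve."""
    (x1, y1), (x2, y2) = P, Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + x2 * y1) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, -1, p) % p
    return x3, y3

def scalar_mult(k, P):
    """Double-and-add; the Edwards law is complete, so doubling is just add(P, P)."""
    R = (0, 1)  # identity element
    while k:
        if k & 1:
            R = edwards_add(R, P)
        P = edwards_add(P, P)
        k >>= 1
    return R

# The base point satisfies the Edwards equation...
assert (-gx * gx + gy * gy) % p == (1 + d * gx * gx * gy * gy) % p

# ...and Diffie-Hellman commutes, as in the wished-for Sage snippet.
a, b = 2718, 3141  # toy scalars for illustration only
A, B = scalar_mult(a, G), scalar_mult(b, G)
print(scalar_mult(a, B) == scalar_mult(b, A))  # → True
```

This is enough to generate affine test vectors; a real implementation would use extended coordinates and constant-time arithmetic.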
( 2018-02-20 05:56:14 -0600 )
https://www.physicsforums.com/threads/simple-harmonic-motion-problem.215612/ | # Simple Harmonic Motion problem
1. Feb 15, 2008
"A particle moves along the x-axis. It is initially at the position 0.270 m, moving with velocity 0.140 m/s and acceleration -0.320 m/s^2. Assume it moves with simple harmonic motion for 4.50 s and x=0 is its equilibrium position. Find its position and velocity at the end of this time interval."
x = A·cos(2πft)
Found f using T = 2π√(x/|a|) = 5.769 s, so f = 0.1734 Hz
However, even with solving for f I'm still left with two unknowns, i.e. A and t. Please help...
Last edited: Feb 15, 2008
2. Feb 15, 2008
### Staff: Mentor
You are given data for the initial postion, velocity, and acceleration. Set up equations for each. That will allow you to solve for the amplitude and phase.
3. Feb 15, 2008
### Cynapse
This is how I've done it quickly...
x(t) = Bsin[2pi.f.t] + Acos[2pi.f.t] NB because at t=0, x=0.27 you can ignore the sin part
x(t) = Acos[2pi.f.t]
v(t) = -A.2pi.f.sin[2pi.f.t] (differentiated x(t) once with respect to t)
a(t) = -A.(2pi.f)^2.cos[2pi.f.t] (differentiated x(t) twice with respect to t)
x(t=0) = Acos[2pi.f.0]
x(t=0) = A (since cos(0) = 1)
A = 0.27
a(t=0) = -0.27.(2pi.f)^2.cos[2pi.f.0]
a(t=0) = -0.27.(2pi.f)^2 (since cos(0) = 1)
-0.320 = -0.27.(2pi.f)^2
f = sqrt[0.320/(0.27.(2.pi)^2)]
f = 0.173
x(t) = 0.27.cos[2pi.0.173.t]
x(t=4.5) = 0.27.cos[2pi.0.173.4.5]
x(t=4.5) = 0.050m
Don't quote me though, it's a while since I've done SHM
4. Feb 15, 2008
### Staff: Mentor
Sanity check: Do you think that the particle is at maximum displacement at t = 0?
$$x = A\sin(\omega t + \phi)$$
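Following the mentor's hint, here is a sketch (not from the thread) that uses all three initial conditions and solves the problem numerically; note that it confirms the amplitude is not 0.27 m, because the particle is not at maximum displacement at t = 0:

```python
# Fit x(t) = A cos(ωt + φ) to the given initial position, velocity, and
# acceleration, then evaluate at t = 4.50 s.
import math

x0, v0, a0, t = 0.270, 0.140, -0.320, 4.50

omega = math.sqrt(-a0 / x0)                 # a = -ω²x  ⇒  ω ≈ 1.089 rad/s
A = math.sqrt(x0**2 + (v0 / omega)**2)      # amplitude ≈ 0.299 m, not 0.27 m
phi = math.atan2(-v0 / (A * omega), x0 / A)  # from x0 = A cosφ, v0 = -Aω sinφ

x = A * math.cos(omega * t + phi)
v = -A * omega * math.sin(omega * t + phi)
print(round(x, 3), round(v, 3))  # → -0.076 0.315
```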
http://andrewtweddle.blogspot.com/2013/03/four-binary-calculation-method.html | ## All posts in this series:
• One Potato: A description of the problem
• Two potato: The basic equations
• Three potato: An algebraic formula for the last person left
• Four: A neat calculation method using binary numbers
• Five potato: F# functions to check the various formulae
• Six potato: Using a calculation graph to see the pattern
• Seven potato: Generalizing to other intervals
• More: An efficient algorithm for arbitrary intervals
• Online research: In which I discover that it is called the Josephus problem
## Introduction
In my first three posts in the series I derived mathematical formulae for calculating the last person left in a circle if every second person is asked to leave (continuing until only one person remains).
In this post I am going to show a really neat way of calculating the answer. It is based on the binary number representation of the number of people in the circle.
If you are unfamiliar with the binary number system, then I suggest looking for a basic tutorial on binary numbers first.
## The binary calculation rule
Take the binary representation of the number of people in the circle. Move the left-most (non-zero) bit to the end. Convert back to decimal and you have the number of the last person left.
## Some examples
$$f(\underbrace{10}_{\text{decimal}}) = f(\underbrace{1010}_{\text{binary}}) = \underbrace{0101}_{\text{binary}} = \underbrace{5}_{\text{decimal}}$$ $$f(\underbrace{13}_{\text{decimal}}) = f(\underbrace{1101}_{\text{binary}}) = \underbrace{1011}_{\text{binary}} = \underbrace{11}_{\text{decimal}}$$
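The rule is a one-liner in code. A sketch in Python (the function name is mine):

```python
def last_person(n: int) -> int:
    """Move the leading 1-bit of n's binary representation to the end."""
    b = bin(n)[2:]               # binary digits of n, without the '0b' prefix
    return int(b[1:] + b[0], 2)  # rotate the leading bit to the end

print([last_person(n) for n in [10, 13]])  # → [5, 11]
```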
## Derivation
In my third post of the series I derived an algebraic formula for the last person left in the circle: $$f(n) = 2n + 1 - 2^{{\lfloor log_2 n \rfloor} + 1} \tag{9}$$ I'd like to write that formula slightly differently: $$f(n) = 2(n - 2^{\lfloor log_2 n \rfloor} ) + 1 \tag{10}$$
$2^{\lfloor log_2 n \rfloor}$ is just the highest power of 2 that is less than (or equal to) the number. Now each position in a binary representation represents a power of 2. So this represents the left-most bit in the binary representation of n (and all other positions zero).
Then $n - 2^{\lfloor log_2 n \rfloor}$ is just the original number n with its first (non-zero) binary digit dropped (i.e. replaced with a zero). Let's call this number m.
In binary, you can multiply a number by 2 by shifting all its bits one place to the left. So 2m+1 is just a "left shift" of m with the rightmost bit set to 1.
So we can express the binary method as:
• take the binary representation of the number
• move the left-most (non-zero) bit to the end
• convert back to decimal to get the number of the last person left
https://math.stackexchange.com/questions/1200692/subspace-topology-in-r | # Subspace topology in $R$
I am new to topology and I want to understand this example from wikipedia about subspace topology.
The example is:
Let $S = [0, 1)$ be a subspace of the real line $R$. Then $[0, 1/2)$ is open in $S$ but not in $R$. Likewise $[1/2, 1)$ is closed in $S$ but not in $R$. $S$ is both open and closed as a subset of itself but not as a subset of $R$.
I cannot understand the part where it says:
Likewise $[1/2, 1)$ is closed in $S$ but not in $R$.
Does the phrase mean that it is closed in $S$ but open in $R$? I hope someone can help me understand it better.
Thank you.
• No, the phrase does not imply that $[1/2,1)$ is open in $\mathbb{R}$. Just to make sure that you understand, there are sets which are neither open nor closed in $\mathbb{R}$. Hence, a set which is not closed does not have to be open. And vice versa. Mar 22 '15 at 8:18
• I understand now. Thanks a lot. Mar 22 '15 at 8:30
Use the fact that
Let $Y$ be a subspace of a topological space $X$. Then a set $A$ is closed in $Y$ if and only if it equals the intersection of a closed set of $X$ with $Y$.
Clearly $[\frac{1}{2},1)=[\frac{1}{2},1]\cap S$ is the intersection of a closed subset of $\Bbb R$ with $S$ and so $[\frac{1}{2},1)$ is closed in $S$. But $[\frac{1}{2},1)$ is not closed in $\Bbb R$ as $1$ is a limit point of that set which does not belong to the set (or, the complement $\Bbb R\setminus [\frac{1}{2},1)=(-\infty,\frac{1}{2})\cup[1,\infty)$ is not open in $\Bbb R$).
• Thank you very much for your help Mar 22 '15 at 8:29
A set is said to be open if its complement is closed and vice versa. So if we consider $S\setminus [\dfrac{1}{2},1)$ then
$S\setminus [\dfrac{1}{2},1)=[0,\dfrac{1}{2})$ which is open in $S$. Hence $[\dfrac{1}{2},1)$ is closed in $S$. But this set is not closed in $\mathbb{R}$ because $\mathbb{R}\setminus[\dfrac{1}{2},1)=(-\infty,\dfrac{1}{2})\cup[1,\infty)$.
Consider $S\setminus [1/2,1)$. $S\setminus [1/2,1)=[0,1/2)$ is open in $S$. So $[1/2,1)$ is closed in $S$ (since the complement of $[1/2,1)$ in $S$ is open in $S$). But $[1/2,1)$ is not closed in $\mathbb{R}$ since $1$ is a limit point of $[1/2,1)$ and $1\notin [1/2,1)$.
https://ajft.org/2008/01/09/journal | ## North road, bus lanes and road-raging bus drivers @ Adrian Tritschler · Wednesday, Jan 9, 2008 · 3 minute read · Update at Jan 9, 2008 ·
How odd… as I was riding along North road this morning I was thinking that I really should write back to VicRoads and thank them for the use of the bus lanes. Less than a minute later I had to bite my lip to stop from laughing as a VicRoads maintenance vehicle drove out of a side-street, safety orange light busy spinning around on the roof, assorted warning signs all sitting in the back… and the driver chatting away on THE MOBILE PHONE. Bloody typical. Debated jotting down the details to forward them to VicRoads for comment, but then it all went out the window, and out of memory, in a frantic bid for self-preservation.
The driver of the Grenda’s route 900 bus, rego. 6###-A01 then tried to run me off the road. Blasting on the horn as he went flying past without pulling out, forced me completely out of the lane and onto the 20cm wide, debris-filled concrete kerb. When I caught him at the Dandenong road intersection he told me that it was “bus lane, you not allowed to ride in it.” I informed him I had been told by VicRoads that I could and was told “You not allowed to ride after 8:30.” I informed him again that VicRoads had told me that it was legal for cyclists to use the bus lane at all times and he changed his words to “I just letting you know I was there.” I pointed out that I was fully aware of his presence and that his actions were aggressive, dangerous and unnecessary. He told me I should not ride on the road (at all). At this point I realised it was pointless to continue attempting to discuss it with him, and as per the recommendations of the two other cyclists at the intersection, I recorded the details of his vehicle.
I’ve written to Grenda bus lines, with Cc to Bike Vic. and VicRoads, asking for their response to this drivers actions, and what they intend to do to prevent it recurring. I have asked that they not introduce specious arguments to the effect that I should use off-road cycle paths, as under no circumstances do their presence, however unsuitable they may be for commuter cycling, excuse road rage in other vehicle operators.
…as for the “bicycle path”, well it still isn’t finished, and it still has all the design problems that make it appear more of an after-thought designed from a viewpoint of getting bicycles off the road and not with any view of providing all road users, be it by car, bicycle or bus with a safe and effective route.
So far this year I’ve seen about twenty people cycling along North road, in all cases they’ve chosen to ride on the road, since it appears that the incomplete and unsuitable “bicycle path” doesn’t provide any advantages — and in many ways, its presence reinforces the dangerous belief in many people’s minds that cyclists have to get off the roads.
## Footnotes
1. Damn! Just saw on my scribble pad notes that although I’d written 6###-A0, in the letter I’ve written to Grenda I typed 9###-A0, probably because it was route 900 at 9 o’clock.
# …The Owner
There’s not much more I can add to who I am.
# …The Site
Vanity site? Technology experiment? Learning tool? Blog? Journal? Diary? Photo album? I could tell you, but then I’d have to kill you…
I experiment. I play. I write and I take pictures. Some of the site is organised around topics, other parts are organized by date, then there’s always the cross-references between them.
It’s all been here a fairly long time. Like the papers on my desk, or the books on the bedside table, the pile just grew… and it all grew without much plan or structure. I try not to break URLs, so historical oddities abound.
Long ago it started as a learning experiment with a few static HTML pages, then I added a bit of server-side includes and some very ugly PHP. A hand-built journal/blog on top of that PHP, then a few experiments in moving to various static publishing systems. I’ve never wanted a database-based blogging engine, so over the years I’ve tried PHP, nanoblogger, emacs-muse, silkpage and docbook before settling on Emacs Org mode for writing and jekyll for publishing. But the itch remained… I never really liked jekyll and the ruby underneath always seemed so much black magic. So now the latest incarnation is Org mode and hugo.
# …The ISP
• Hosted by @cos