| url | text | date | metadata |
|---|---|---|---|
https://brilliant.org/problems/two-perpendicular-lines/
|
# Two perpendicular lines
Geometry Level 3
Two perpendicular lines intersect the $$y$$-axis at the same point $$(0,3)$$. Which of the following areas is not possible for the triangle formed by these two lines and the $$x$$-axis?
(1): $$8\text{ unit}^2$$.
(2): $$10\text{ unit}^2$$.
(3): $$6\text{ unit}^2$$.
(4): $$12\text{ unit}^2$$.
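The page gives no solution; here is a brief sketch of one standard approach (added, not from the original problem page). The two lines through $$(0,3)$$ have slopes $$m$$ and $$-1/m$$ for some $$m \neq 0$$, with $$x$$-intercepts $$-3/m$$ and $$3m$$. The triangle therefore has base $$\left|3m + 3/m\right|$$ and height $$3$$, so its area is $$\frac{3}{2}\left|3m + \frac{3}{m}\right| = \frac{9}{2}\left(|m| + \frac{1}{|m|}\right) \geq 9$$ by AM-GM. Any area below $$9\text{ unit}^2$$, such as options (1) and (3), is therefore impossible.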
|
2018-10-23 06:34:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7209309935569763, "perplexity": 955.393464783367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516071.83/warc/CC-MAIN-20181023044407-20181023065907-00253.warc.gz"}
|
https://www.merriam-webster.com/dictionary/entail
|
1
# entail
verb en·tail \in-ˈtāl, en-\
## Definition of entail
###### transitive verb
1 : to restrict (property) by limiting the inheritance to the owner's lineal descendants or to a particular class thereof
2 a : to confer, assign, or transmit (something) for an indefinitely long time : to confer, assign, or transmit as if by entail "entailed on them indelible disgrace" — Robert Browning
b : to fix (a person) permanently in some condition or status "entail him and his heirs unto the crown" — William Shakespeare
3 : to impose, involve, or imply as a necessary accompaniment or result "the project will entail considerable expense"
## entailer
\in-ˈtā-lər, en-\ noun
## entailment
\in-ˈtāl-mənt, en-\ noun
## Examples of entail in a Sentence
1. Pregnancy involves the bodily dependence of the unborn child on its mother; in many cases, it entails a significant physical burden. —Cathleen Kaveny, Commonweal, 4 May 2007
2. … it was a Master Highlighter Event, a two-day guest appearance by one of Kinkade's specially trained assistants, who would highlight any picture bought during the event for free. Highlighting a picture is not that different from highlighting your hair: it entails stippling tiny bright dots of paint on the picture to give it more texture and luminescence. —Susan Orlean, New Yorker, 15 Oct. 2001
3. Life is a difficult and complicated enterprise. It entails joy but also suffering, gain but also loss, hope but also despair. —Neal Gabler, Life: The Movie, 1998
4. Discourse is a social as well as an intellectual activity; it entails interaction between minds, and it revolves around something possessed in common. —David A. Hollinger, In the American Province, (1985) 1992
5. He accepted the responsibility, with all that it entails.
6. a lavish wedding entails extensive planning and often staggering expense
• Board members voted 5-2 to approve the recommendation offered by the district's administration, which entails spending about $15,000 more per year on lawn care.
• Johnson did not immediately respond to an email Monday morning seeking comment about what that position would entail.
• The operation, which entailed implanting computer code in sensitive computer systems that Russia was bound to find, served only as a reminder to Moscow of the United States’ cyber reach.
• The news conference and junket — which entails having reporters hop from one hotel room to the next to interview the movie’s principals for scant minutes as publicists count down the time — were to be held the next day.
• It was founded by the firebrand Ian Paisley in 1971 at the height of the Northern Ireland conflict, which entailed some three decades of violence between the province’s largely Catholic nationalist minority and largely Protestant unionist majority.
• The intention was to project an image of control, without having to grapple with the issues which Brexit actually entailed.
• While those entailed rappelling down a skyscraper and climbing a tower of construction pallets, Flanary said the most difficult task to face was bungee jumping off a bridge in Greece.
• That V10 is a peach of an engine, revving out to 8,700rpm with all the wonderful attendant noise that entails.
These example sentences are selected automatically from various online news sources to reflect current usage of the word 'entail'. Views expressed in the examples do not represent the opinion of Merriam-Webster or its editors.
## Origin and Etymology of entail
Middle English entailen, entaillen, from 1en- + taile, taille limitation — more at 4tail
2
## entail
noun en·tail \ˈen-ˌtāl, in-ˈtāl\
## Definition of entail
1 a : a restriction especially of lands by limiting the inheritance to the owner's lineal descendants or to a particular class thereof
b : an entailed (see 1entail 1) estate
2 : something transmitted as if by entail
## Recent Examples of entail from the Web
• Can’t wait to see what their third wedding entails!
• Just as there is no one definition of what offering sanctuary entails, there is no official number of churches and congregations participating in the sanctuary movement.
• That $500,000 project entails reducing automobile lanes on the one-way eastbound street, which planners say does not carry a heavy traffic load, from three to one.
• Students know what the cutting edge entails and are eager to participate.
• But what person who’s responsible for another living creature ever really understands what care entails?
• During her presentation, Siskind explained that fracturing entails pumping millions of gallons of solution containing water, chemicals, and sand into the ground at very high pressure to shatter or fracture shale to release oil or gas.
• Both entail—not to give the game away—a large primate who has made absolutely no effort to meet with his therapist.
• Levin's lens gives us a glimpse into the lives of people from both sides of the block, revealing a similar consciousness—and conscience—about what such disparity entails.
see 1entail
verb
## Definition of entail for English Language Learners
• : to have (something) as a part, step, or result
1
## entail
transitive verb en·tail \in-ˈtāl\
## Legal Definition of entail
1. : to make (an estate in real property) a fee tail : limit the descent of (real property) by restricting inheritance to specific descendants who cannot convey or transfer the property estates are entailed entire on the eldest male heir — Benjamin Franklin
## Origin and Etymology of entail
Middle English entaillen, from en-, causative prefix + taille restriction on inheritance — see tail
2
noun en·tail
## Legal Definition of entail
1 : an act or instance of entailing real property; also : the practice of entailing property "the repeal of the laws of entail would prevent the accumulation and perpetuation of wealth in select families" — Thomas Jefferson — see also De Donis Conditionalibus
2 : an entailed estate in real property "if entails had not become barrable" — Eileen Spring
3 : the fixed line of descent of an entailed estate
|
2017-07-26 19:43:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17436498403549194, "perplexity": 13466.560280641916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549426372.41/warc/CC-MAIN-20170726182141-20170726202141-00257.warc.gz"}
|
https://www.studyandscore.com/studymaterial-detail/number-series-concept-types-of-number-series-tips-and-tricks-to-find-pattern-of-number-sequences
|
# Number Series: Basic Concept, Types of Number Series, Tips and Tricks to Find the Pattern of Number Sequences
Posted on: 19-12-2018 | Posted by: Admin
## Introduction
A series is an ordered collection of figures, numbers, words, or letters. A sequence of numbers that follows a particular pattern is called a number series. In number series questions, a specific pre-decided rule is hidden, and the candidate needs to find that hidden rule to arrive at the correct answer.
For example, consider 1, 4, 7, 10, 13, … Here the difference between consecutive numbers is three. It is important to note that in a number series, each number except the first is related to the previous number by some specific rule.
## Types of Number Series
There are many types of number series. A few of them are explained below,
Arithmetic Series: In this type the series progresses by adding or subtracting specific numbers. The DIFFERENCE between successive terms is not very large and changes in a simple, regular manner throughout the series.
Example: 5, 10, 17, 26, 37, 50, 65 (the differences 5, 7, 9, 11, 13, 15 increase by 2)
Geometric Series: In this type the series progresses by multiplying or dividing by a specific number. The RATIO between successive terms is constant throughout the series.
Example: 4, 12, 36, 108, 324, 972 (each term is 3 times the previous one)
Arithmetico-Geometric Series: As the name suggests, this series is a combination of arithmetic and geometric series. An important property of such a series is that the DIFFERENCES of successive terms form a geometric series.
Example: 1, 8, 22, 50, 106, 218 (the differences 7, 14, 28, 56, 112 double each time)
Product series: In this type of series, each term is multiplied by a fixed number or a specific number pattern to get the next term. This can be of the following types:
• Multiplication of the previous number by a fixed number
Example: 2, 4, 8, 16, 32, 64, 128
• Multiplication of the previous number by the next decreasing number
Example: 30, 180, 900, 3600, 10800, 21600 (×6, ×5, ×4, ×3, ×2)
• Multiplication of the previous number by the next increasing number
Example: 21, 105, 630, 4410, 35280 (×5, ×6, ×7, ×8)
Difference Series: Difference series can be further classified as:
• Series with a constant difference, where there is always the same difference between two consecutive numbers.
Example: 3, 6, 9, 12, 15, 18, 21
• Series with an increasing difference, where the difference between two consecutive numbers grows from left to right.
Example: 1, 2, 4, 7, 11, 16, 22 (differences 1, 2, 3, 4, 5, 6)
• Series with a decreasing difference, where the difference between two consecutive numbers shrinks from left to right.
Example: 7, 16, 24, 31, 37, 42 (differences 9, 8, 7, 6, 5)
Division series: In this type of series, each term is divided by a fixed number or a specific number pattern to get the next term.
Example: 128, 64, 32, 16, 8 (÷2 each time)
Square or cube series: This type of number series progresses by squaring or cubing numbers.
Example: 4, 16, 36, 64, 100 (squares of 2, 4, 6, 8, 10)
Example: 1, 8, 27, 64, 125, 216, 343 (cubes of 1 through 7)
Mixed series: In this type, more than one series pattern is involved. These questions are generally a little more complicated.
Example: 61, 72, 60, 73, 59, 74, 58, 75 (two interleaved series: 61, 60, 59, 58 decreasing and 72, 73, 74, 75 increasing)
## Tricks to Solve Number Series Questions
Number series questions are common in competitive exams. Candidates can score good marks with just basic math knowledge; these questions are relatively easy, provided the candidate is able to find the hidden rule. Here Studyandscore provides basic tips and tricks for solving number series questions. After you read through this article, click below to revise and practice with our tests.
Let us see the types of number series questions and tricks to solve them
### Trick 1: Finding the missing term in a series
In these questions, the candidate must find the missing term in the given number series.
For example: Find the number which must come in place of question mark in the below given series
17, 19, 25, 37, ?, 87
The differences between consecutive terms are 2, 6, 12, 20, 30: 17+2=19, 19+6=25, 25+12=37, 37+20=57, 57+30=87.
So, 57 comes in place of (?)
### Trick 2: Spotting the odd number in the given series
In these questions, one of the numbers does not follow the specific rule followed by all the other numbers in the series. Candidates need to spot this odd number.
For example: Find the odd term in the following given series
13, 16, 21, 27, 39, 52, 69
The differences between consecutive terms should be the consecutive primes 3, 5, 7, 11, 13, 17: 13+3=16, 16+5=21, 21+7=28, 28+11=39, 39+13=52, 52+17=69. The given series has 27 where 28 should be.
So, 27 is the odd term in the given series
### Trick 3: Finding the correct term in the place of odd term
In these questions, one of the numbers does not follow the specific rule followed by all the other numbers in the series. Candidates need to spot this odd number and replace it with the correct number from the given options.
For example: Find out the odd term from the following number series and replace it with the correct term.
484, 240, 120, 57, 26.5, 11.25, 3.625
Each term follows from the previous one by the rule "subtract 4, then halve": (484−4)/2 = 240, (240−4)/2 = 118, (118−4)/2 = 57, (57−4)/2 = 26.5, (26.5−4)/2 = 11.25, (11.25−4)/2 = 3.625.
So, 120 is the odd term in the given series and it must be replaced by 118
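As an illustration (a sketch of mine, not part of the article), the same check can be done programmatically once the rule is guessed:

```python
def find_odd_term(seq, rule):
    """Return (odd term, corrected value) for the first term breaking `rule`."""
    for prev, cur in zip(seq, seq[1:]):
        expected = rule(prev)
        if abs(expected - cur) > 1e-9:
            return cur, expected
    return None

seq = [484, 240, 120, 57, 26.5, 11.25, 3.625]
print(find_odd_term(seq, lambda x: (x - 4) / 2))  # (120, 118.0)
```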
### Trick 4: Finding the value of unknown term in a given expression
In these questions, first the value of the missing term is found. Then, using this value, the value of the unknown variable in the given expression is found. These questions are a bit more involved.
For example: Find the value of 'n' in the given series and, using this value of 'n', find the value of x in the given expression.
68, 68.5, 69.5, 71, n, 75.5, 78.5. Substitute the value of 'n' in the expression n×121+x=10000
The given number series follows the below pattern,
68+0.5= 68.5
68.5+1= 69.5
69.5+1.5= 71
71+2= 73
73+2.5= 75.5
75.5+3= 78.5
So n= 73
Replacing the value of 'n' in the given expression,
n×121+x=10000
⇒ 73×121+x=10000
⇒ 8833+x=10000
⇒ x=10000-8833=1167
### Trick 5: Finding the missing terms in a series with respect to another given series
In these questions, two series patterns are given. The pattern of the second series is the same as that of the first. Based on the first series, the candidate needs to find the unknown terms in the second series.
For example: In series-1, only one number is wrong. If the wrong number is corrected, the series gets established following a certain logic. Complete series-2 with the same logic and find what will come in place of (c).
Series 1: 5, 9, 25, 91, 414, 2282.5
Series 2: 3, a, b, c, d, e
Given number series 1 follows the logic
⇒ 5×1.5+1.5= 7.5+1.5= 9
⇒ 9×2.5+2.5= 22.5+2.5= 25
⇒ 25×3.5+3.5= 87.5+3.5= 91
⇒ 91×4.5+4.5= 409.5+4.5= 414
⇒ 414×5.5+5.5= 2277+5.5= 2282.5
Similarly, series 2
(a)⇒ 3×1.5+1.5= 4.5+1.5= 6
(b)⇒ 6×2.5+2.5= 15+2.5= 17.5
(c)⇒ 17.5×3.5+3.5= 61.25+3.5= 64.75
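The same logic is straightforward to express in code. This illustrative sketch (my addition) generates series 2 from its first term using the rule x → x·k + k with k = 1.5, 2.5, 3.5, ...:

```python
def extend_series(first, n_terms):
    """Apply x -> x*k + k with k = 1.5, 2.5, 3.5, ... repeatedly."""
    terms, k = [first], 1.5
    for _ in range(n_terms - 1):
        terms.append(terms[-1] * k + k)
        k += 1.0
    return terms

print(extend_series(3, 6))  # [3, 6.0, 17.5, 64.75, 295.875, 1632.8125]
```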
## Techniques to know hidden number pattern
BASED ON THE DIFFERENCE BETWEEN TWO CONSECUTIVE TERMS OF A SERIES
Difference between two consecutive terms is the same. For example: $2 \xrightarrow{+5} 7 \xrightarrow{+5} 12 \xrightarrow{+5} 17$ and $35 \xrightarrow{-8} 27 \xrightarrow{-8} 19 \xrightarrow{-8} 11$.
Difference between two consecutive terms is in arithmetic progression (AP). For example: $15 \xrightarrow{+11} 26 \xrightarrow{+16} 42 \xrightarrow{+21} 63$ and $34 \xrightarrow{-9} 25 \xrightarrow{-7} 18 \xrightarrow{-5} 13$. *Here 11, 16, 21 are in AP in example 1, whereas −9, −7, −5 are in AP in example 2.
Difference between two consecutive terms is a perfect square. For example: $7 \xrightarrow{+3^2} 16 \xrightarrow{+5^2} 41 \xrightarrow{+7^2} 90$ and $70 \xrightarrow{-4^2} 54 \xrightarrow{-3^2} 45 \xrightarrow{-2^2} 41$.
Differences between two consecutive terms are multiples of a number. For example: $11 \xrightarrow{+12} 23 \xrightarrow{+24} 47 \xrightarrow{+48} 95$ and $204 \xrightarrow{-44} 160 \xrightarrow{-33} 127 \xrightarrow{-22} 105$. *Here 12, 24, 48 are multiples of 12 in example 1, whereas −44, −33, −22 are multiples of 11 in example 2.
Difference between two consecutive terms is a perfect cube. For example: $7 \xrightarrow{+2^3} 15 \xrightarrow{+3^3} 42 \xrightarrow{+4^3} 106$ and $121 \xrightarrow{-4^3} 57 \xrightarrow{-3^3} 30 \xrightarrow{-2^3} 22$.
Difference between two consecutive terms is in geometric progression (GP). For example: $1 \xrightarrow{+1} 2 \xrightarrow{+3} 5 \xrightarrow{+9} 14 \xrightarrow{+27} 41$ and $33 \xrightarrow{-16} 17 \xrightarrow{-8} 9 \xrightarrow{-4} 5 \xrightarrow{-2} 3$. *Here 1, 3, 9, 27 are in GP in example 1, whereas −16, −8, −4, −2 are in GP in example 2.
BASED ON THE RATIO BETWEEN TWO CONSECUTIVE TERMS OF A SERIES
Ratio between two consecutive terms is the same. For example: $3 \xrightarrow{\times 5} 15 \xrightarrow{\times 5} 75 \xrightarrow{\times 5} 375$ and $32 \xrightarrow{\div 2} 16 \xrightarrow{\div 2} 8 \xrightarrow{\div 2} 4$.
Ratio between two consecutive terms is a prime number. For example: $5 \xrightarrow{\times 3} 15 \xrightarrow{\times 2} 30 \xrightarrow{\times 3} 90 \xrightarrow{\times 5} 450$ and $770 \xrightarrow{\div 11} 70 \xrightarrow{\div 7} 10 \xrightarrow{\div 5} 2$. *Here the multipliers 3, 2, 3, 5 in example 1 and the divisors 11, 7, 5 in example 2 are prime numbers.
Ratio between two consecutive terms is in arithmetic progression (AP). For example: $2 \xrightarrow{\times 2} 4 \xrightarrow{\times 4} 16 \xrightarrow{\times 6} 96$ and $420 \xrightarrow{\div 5} 84 \xrightarrow{\div 4} 21 \xrightarrow{\div 3} 7$. *Here 2, 4, 6 are in AP in example 1, whereas 5, 4, 3 are in AP in example 2.
Ratio between two consecutive terms is a perfect square. For example: $3 \xrightarrow{\times 2^2} 12 \xrightarrow{\times 4^2} 192 \xrightarrow{\times 6^2} 6912$ and $44100 \xrightarrow{\div 7^2} 900 \xrightarrow{\div 5^2} 36 \xrightarrow{\div 3^2} 4$.
Ratios between two consecutive terms are multiples of a number. For example: $5 \xrightarrow{\times 2} 10 \xrightarrow{\times 4} 40 \xrightarrow{\times 8} 320$ and $972 \xrightarrow{\div 9} 108 \xrightarrow{\div 6} 18 \xrightarrow{\div 3} 6$. *Here 2, 4, 8 are multiples of 2 in example 1, whereas 9, 6, 3 are multiples of 3 in example 2.
Ratio between two consecutive terms is a perfect cube. For example: $2 \xrightarrow{\times 1^3} 2 \xrightarrow{\times 3^3} 54 \xrightarrow{\times 5^3} 6750$ and $13824 \xrightarrow{\div 4^3} 216 \xrightarrow{\div 3^3} 8 \xrightarrow{\div 2^3} 1$.
Ratio between two consecutive terms is in geometric progression (GP). For example: $1 \xrightarrow{\times 1} 1 \xrightarrow{\times 2} 2 \xrightarrow{\times 4} 8 \xrightarrow{\times 8} 64$ and $729 \xrightarrow{\div 27} 27 \xrightarrow{\div 9} 3 \xrightarrow{\div 3} 1$. *Here the multipliers 1, 2, 4, 8 in example 1 and the divisors 27, 9, 3 in example 2 are in GP.
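To tie these techniques together, here is a small illustrative script (an addition, not from the original article) that prints the difference and ratio sequences of a series; inspecting those two rows is usually enough to decide which of the cases above applies:

```python
def analyze(seq):
    """Show the difference and ratio sequences used to classify a number series."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    ratios = [round(b / a, 4) for a, b in zip(seq, seq[1:]) if a != 0]
    print("differences:", diffs)
    print("ratios:     ", ratios)

analyze([15, 26, 42, 63])  # differences 11, 16, 21 -> an AP
analyze([3, 15, 75, 375])  # ratios 5.0, 5.0, 5.0 -> constant ratio
```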
Hope you have liked this post.
|
2019-01-24 08:34:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6112668514251709, "perplexity": 599.0262808336244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584519757.94/warc/CC-MAIN-20190124080411-20190124102411-00616.warc.gz"}
|
https://homework.cpm.org/category/CC/textbook/cca/chapter/4/lesson/4.3.1/problem/4-99
|
### Home > CCA > Chapter 4 > Lesson 4.3.1 > Problem 4-99
4-99.
Consider the equation $−6x=4−2y$.
1. If you graphed this equation, what shape would the graph have? How can you tell?
If there's an x and a y, and there are no exponents like x², what kind of graph does the equation make?
Remember that this equation could be written in y = mx + b form.
2. Without changing the form of the equation, find the coordinates of three points that must be on the graph of this equation. Then graph the equation on graph paper.
Make a table and substitute values for x and y until you have enough points to graph the line.
3. Solve the equation for y. Does your answer agree with your graph? If so, how do they agree? If not, check your work to find the error.
• Solve for y:
−6x − 4 = −2y
6x + 4 = 2y
3x + 2 = y
Yes, they both have the same starting value (2) and growth (3).
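As a quick illustration of parts (2) and (3) (a sketch of mine, not from the homework site), substituting a few x-values into the original equation produces points that all satisfy y = 3x + 2:

```python
# For -6x = 4 - 2y, solving for y gives y = (6x + 4) / 2 = 3x + 2.
for x in [-1, 0, 1]:
    y = (6 * x + 4) / 2
    assert -6 * x == 4 - 2 * y  # each point satisfies the original equation
    print((x, y))               # (-1, -1.0), (0, 2.0), (1, 5.0)
```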
|
2019-08-18 13:28:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5718873739242554, "perplexity": 585.4027732577849}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00436.warc.gz"}
|
https://math.stackexchange.com/questions/1369541/what-is-the-cardinality-of-the-set-of-roots-of-unity
|
# What is the cardinality of the set of roots of unity?
Consider the geometric interpretation of "roots of unity":
My intuition says that you can place arbitrarily many equidistant points on the unit circle and catch every point that lies on it. Therefore every $z \in \mathbb{C}$ that lies on the unit circle should be a solution to $z^n = 1$ for some $n \in \mathbb{N}$.
But, if I understand correctly, for any $n$, $z^n = 1$ has exactly $n$ roots.
Therefore, the set of all roots of unity is a countable union of finite sets, and is therefore countable.
Does that mean that there are points on the unit circle that are not in the sets of roots of unity?
• The statement "Therefore every $z \in \mathbb{C}$ should be a solution to $z^n=1$ for some $n \in \mathbb{N}$" is incorrect unless you allow $n = 0$. Consider $z = 2$ (or any point $z$ not on the unit circle). Everything you have written after that statement is correct. – JimmyK4542 Jul 22 '15 at 4:20
• That same intuition would also lead you to believe every real number is rational, since we can (by your logic) subdivide the interval $[0,1]$ arbitrarily many times and "catch every point on it". – David Wheeler Jul 22 '15 at 4:22
• Corrected. Thank you. – Andrei Savin Jul 22 '15 at 4:24
For a concrete example, consider $z=e^i$.
Suppose for the sake of contradiction that $z^n=1$ for some $n$. Then
$$e^{in}=1\hspace{5mm}\implies\hspace{5mm}n=2m\pi$$ for some integer $m$. But this means $\pi=\dfrac{n}{2m}$ which is impossible because $\pi$ is irrational.
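A quick numerical illustration of this (my addition, not part of the answer): powers of $e^{i}$ get close to $1$ but never reach it, which is exactly what the irrationality argument predicts:

```python
import cmath

z = cmath.exp(1j)  # the point e^{i} on the unit circle
best = min(abs(z ** n - 1) for n in range(1, 100_000))
print(best)  # small, but never exactly 0: e^{i} is not a root of unity
```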
Your argument that the number of roots is countable is good. Your intuition that you "catch every point that lies on it" is not correct, and your other argument proves that. Every root of unity is an algebraic number, as it solves the polynomial $z^n-1=0$. The argument that the algebraic numbers are countable, which is essentially the same, applies here.
Does that mean that there are points on the unit circle that are not in the sets of roots of unity?
Yep!
My intuition says that you can place arbitrarily many equidistant points on the unit circle and catch every point that lies on it.
Your intuition about "catching" a point reflects the fact that the unit circle is the closure of the set of roots of unity. So this example proves that the closure of a countable set may be uncountable, which is very, very useful.
Yes, take any angle $\theta$ that is not commensurable with $\pi$, i.e., $\theta/\pi \notin \mathbb Q$. Then, applying DeMoivre's theorem:
$(\cos\theta+i\sin \theta)^n=\cos(n\theta)+i\sin(n\theta)=\cos(2k \pi)+i\sin(2k\pi)$, which forces $n\theta =2k\pi \implies\theta=2k\pi/n$. So $\theta, \pi$ must have this relation.
|
2019-05-19 08:36:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9075515866279602, "perplexity": 145.66100679139336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232254731.5/warc/CC-MAIN-20190519081519-20190519103519-00084.warc.gz"}
|
https://understandyourself.org/the-resilience-exhsi/ijfne.php?id=61c82b-square-root-symbol-text
|
# square root symbol text
The square root symbol √ (Unicode U+221A, commonly called the radical sign) denotes the principal, i.e. non-negative, square root of a number. The square root of 16 is 4, because multiplying 4 by itself gives 16; every positive number also has a negative square root, but the radical sign refers to the positive one. For positive A, the principal square root can also be written in exponent notation as A^(1/2), and the square function ƒ(x) = x² is the inverse of the square root function ƒ(x) = √x. More generally, the Nth root of a number X is a number R such that Rⁿ = X; the cube root has its own symbol, ∛. The idea of the square root was probably first expressed by the ancient Babylonians, and the radical sign first appeared in printed form in a 1525 work by Christoph Rudolff.
Ways to type or insert the symbol:
• Windows (Alt code): make sure Num Lock is on, place the cursor where you want the symbol, hold down the Alt key, type 251 on the numeric keypad, then release the Alt key.
• Mac: press Option+V. You can also use the Keyboard Viewer or the Character Palette to browse and insert the symbol.
• Microsoft Word: follow INSERT > Symbols > Symbol in the Ribbon, choose Mathematical Operators from the Subset dropdown, and scroll down to the square root character; or, in an equation field, type \sqrt and press the spacebar; or type 221A followed by Alt+X (similarly, 221B followed by Alt+X gives the cube root ∛).
• Microsoft Excel: activate the cell that will contain the symbol and insert it from the Symbol dialog. While Word and Excel can display the symbol, Excel can also calculate a square root with the SQRT formula; replace its argument with the cell containing the value whose square root you want.
• Copy and paste: highlight the symbol, press Ctrl+C, and press Ctrl+V in your document. On an iOS device, tap and hold the symbol until it is highlighted, tap "Copy", then tap and hold in the target app and select "Paste". On Windows, the CharMap utility lets you view and copy every character available in your installed fonts.
• On the web: MathML is an XML language designed to present complex equations, and HTML and JavaScript have standard escapes for the character (see the sketch below). In LaTeX, \surd produces the bare radical symbol, while \sqrt{...} typesets a complete square root expression.
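For reference, a short sketch (an addition to the page's list of escape forms) showing the same character through its common programmatic representations:

```python
# U+221A SQUARE ROOT and its HTML entity forms
print("\u221a")             # √ via the Unicode escape
print(chr(0x221A))          # √ via its code point
print("&radic; = &#8730;")  # HTML named and numeric entities for √
assert ord("√") == 0x221A == 8730
```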
|
2021-09-20 22:21:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5749254822731018, "perplexity": 1189.7110223122015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057119.85/warc/CC-MAIN-20210920221430-20210921011430-00635.warc.gz"}
|
https://math.stackexchange.com/questions/4507673/how-to-compute-a-matrix-relative-to-the-basis-consisting-of-its-eigenvectors
|
How to compute a Matrix relative to the basis consisting of its eigenvectors
I'm currently studying Linear Algebra on my own, and I've run into some difficulties with eigenvectors:
In my book, the matrix A is given as: $$\begin{equation*} A= \begin{bmatrix} 1 & 3 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & -2 \\ \end{bmatrix} \end{equation*}$$ with the eigenvalues $$4,-2,-2$$ and bases for the eigenspaces $${(1,1,0),(1,-1,0),(0,0,1)}$$. So far so good, but now it says that if we have a basis $$B'$$ made up of the three eigenvectors (the same as the bases for the eigenspaces, I assume), then the matrix $$A'$$ for $$T$$ (the linear transformation with standard matrix $$A$$) relative to the basis $$B'$$ is diagonal. However, when I calculate $$A$$ relative to $$B'$$, that is, $$\begin{equation*} \begin{bmatrix} 1&1&0&1 & 3 & 0 \\ 1&-1&0&3 & 1 & 0 \\ 0&0&1&0 & 0 & -2 \\ \end{bmatrix} \end{equation*}$$ and then row-reduce the $$B'$$ block to the identity matrix, I don't get $$\begin{equation*} A'= \begin{bmatrix} 4 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \\ \end{bmatrix} \end{equation*}$$ but rather $$\begin{equation*} A'= \begin{bmatrix} 2 & 2 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & -2 \\ \end{bmatrix} \end{equation*}$$ I tried reversing the two, i.e. $$A B'$$; I also tried multiplying the eigenvectors by their corresponding eigenvalues, but those didn't get me anywhere, let alone make sense.
Thank you to everyone who takes the time. If this is just a really stupid mistake, I'm sorry. Please consider that I'm in high school, so my brain might not be developed enough for this question yet.
• Evidently you calculated $A'$ incorrectly. Nobody can help you find the error since you don't tell us how you did this calculation Aug 7 at 15:26
There's no need to make derogatory remarks about yourself. It's not going to do you any good to view yourself as having an undeveloped brain. You're learning and that's all that matters to anyone here who cares about helping you.
Now, you've introduced some notation in your question that needs to be further clarified. You need to try and explicitly write out what some of these objects are. In particular, $$T$$ is the linear map associated with the matrix $$A$$. If we multiply $$A$$ by the column vector $$(x_1,x_2,x_3)$$, then we obtain: $$T(x_1,x_2,x_3) = (x_1+3x_2,3x_1+x_2,-2x_3)$$
This is just a consequence of matrix multiplication and recognizing that if $$x \in \mathbb{R}^3$$, then $$T(x) = Ax$$. In fact, I placed your matrix into an eigenvalue calculator (I'm too lazy to calculate the eigenvalues manually) and that calculator tells me that your eigenvalues are actually $$4$$ and $$-2$$. That's quite likely what you meant to write. If not, you need to check your calculations again. Now, you have the eigenvectors: $$B' = \{v_1 = (1,1,0),v_2 = (1,-1,0), v_3 = (0,0,1)\}$$ What this means is that $$T(v_1) = 4v_1$$, $$T(v_2) = -2v_2$$ and $$T(v_3)= -2v_3$$. Now, you're correct that these vectors do form a basis for $$\mathbb{R}^3$$. In order to find the matrix relative to this basis, let's write the following: $$T(v_1) = 4v_1 +0v_2 + 0v_3$$ $$T(v_2) = 0v_1 -2v_2+0v_3$$ $$T(v_3) = 0v_1 + 0v_2 -2v_3$$ By definition, the matrix of $$T$$ relative to $$B'$$ is going to be given by the following scheme: $$[T(v_1) \ T(v_2) \ T(v_3)]$$ where $$T(v_1)$$ just refers to the first column and so on. Moreover, if we are truly finding the matrix of $$T$$ relative to $$B'$$, then we need to represent $$T(v_i)$$ in terms of its expansion in the basis $$B'$$. In other words: $$T(v_1) = (4,0,0)$$ $$T(v_2) = (0,-2,0)$$ $$T(v_3) = (0,0,-2)$$ These are the coordinates of each $$T(v_i)$$ in terms of the basis $$B'$$. So, you actually do get a diagonal matrix in this case.
Let me explain the above a bit more, though I'm sure that your textbook will contain the exact definition. So, let $$T: V \to W$$ be a linear map with $$V$$ and $$W$$ being finite dimensional. Let $$(v_1,\ldots,v_n)$$ be a basis of $$V$$ and let $$(w_1,\ldots,w_m)$$ be a basis of $$W$$. Then, the matrix of $$T$$ relative to these bases is calculated as follows:
1. Observe that each $$T(v_j)$$ is a vector in $$W$$. So, for each $$j \in \{1,\ldots,n\}$$, we can find scalars $$(a_{ij})_{i=1}^{m}$$ such that: $$T(v_j) = a_{1j} w_1 + a_{2j} w_2 + \ldots + a_{mj}w_m$$
In other words, in the basis $$(w_1,\ldots,w_m)$$, the coordinates of $$T(v_j)$$ are $$(a_{1j},a_{2j},\ldots,a_{mj})$$.
1. Take the coordinates of each $$T(v_j)$$ with respect to the basis $$(w_1,\ldots,w_m)$$. Each of these becomes the column of a matrix. This matrix is, then, the matrix of $$T$$ relative to these two bases.
Once again, this definition should be in your book and it should've been thoroughly motivated, so you need to go back and have a look at it again.
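One way to see where the asker's calculation went wrong (an added note, not part of the original answer): row-reducing the block matrix $$[B' \mid A]$$ computes $$P^{-1}A$$, where $$P$$ has the eigenvectors as columns, but the matrix relative to the new basis is the full conjugate $$P^{-1}AP$$. A quick numerical check:

```python
import numpy as np

A = np.array([[1, 3, 0],
              [3, 1, 0],
              [0, 0, -2]])
# Columns of P are the eigenvectors (1,1,0), (1,-1,0), (0,0,1).
P = np.array([[1, 1, 0],
              [1, -1, 0],
              [0, 0, 1]])

print(np.linalg.inv(P) @ A @ P)  # diag(4, -2, -2), as the book claims
print(np.linalg.inv(P) @ A)      # [[2,2,0],[-1,1,0],[0,0,-2]], the asker's result
```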
|
2022-09-26 22:04:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 64, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9459909796714783, "perplexity": 74.38567283650019}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00090.warc.gz"}
|
https://www.physicsforums.com/threads/matrix-operations-query.783267/
|
# Matrix Operations Query
1. Nov 20, 2014
### bugatti79
Hi Folks,
I have an inertia tensor D in the old Cartesian system which I need to rotate through +90° about y and −90° about z to transform to the new system. I am using standard right-hand-rule notation for this Cartesian rotation.
$D= \mathbf{\left(\begin{array}{lll}I_{xx}&I_{xy}&I_{xz}\\I_{yx}&I_{yy}&I_{yz}\\I_{zx}&I_{zy}&I_{zz}\\\end{array}\right)}$, $N_y(+90)=\mathbf{\left(\begin{array}{lll}0&0&1\\0&1&0\\-1&0&0\\\end{array}\right)}$, $N_z(-90)=\mathbf{\left(\begin{array}{lll}0&1&0\\-1&0&0\\0&0&1\\\end{array}\right)}$
If we let
$N_R=N_z N_y$ (I am pre-multiplying $N_y$ by $N_z$ because that is the order) and the transpose $N'_R=N_R^T$.
Is the new system tensor $N_RDN'_R$ or $N'_RDN_R$...?
Thanks
2. Nov 21, 2014
### Staff: Mentor
Imagine how a vector in the new system would come in: the matrix on the right side would transform this vector to your old system, then the old matrix is applied, then the matrix on the left side transforms it back to your new coordinate system.
3. Nov 21, 2014
### bugatti79
Hi mbf,
Not sure I follow. Can you clarify a bit?
Thanks
4. Nov 21, 2014
### Staff: Mentor
If v is a vector in your new coordinate system, does $N_R v$ or $N'_R v$ represent the vector in the original coordinate system?
This will be used in the product $N_R D N_R v$ (with the right ' added).
5. Nov 22, 2014
### bugatti79
I still haven't grasped your idea of using a vector to cross-check. However, I know from a clue that the value of $I_{zz}$ in the new system has to be the same as $I_{xx}$ in the old system, because the new z-axis lines up with the old x-axis, and $N_R D N'_R$ does this for me.
However, in the event of no clue, I'm still not clear how to use a vector...
Thanks
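A numerical sanity check (added here, not part of the original thread) confirms the $N_R D N'_R$ ordering, with $N'_R$ the transpose: the $I_{zz}$ entry of the transformed tensor equals the old $I_{xx}$:

```python
import numpy as np

Ny = np.array([[0, 0, 1], [0, 1, 0], [-1, 0, 0]])  # +90 deg about y
Nz = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]])  # -90 deg about z
NR = Nz @ Ny                                        # combined rotation

# A generic symmetric inertia tensor with a distinguishable I_xx entry
D = np.array([[1.0, 0.1, 0.2],
              [0.1, 2.0, 0.3],
              [0.2, 0.3, 3.0]])

D_new = NR @ D @ NR.T
print(D_new[2, 2])  # 1.0 -- the new I_zz equals the old I_xx
```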
|
2017-08-19 16:29:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6115569472312927, "perplexity": 895.8627006773454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105455.37/warc/CC-MAIN-20170819143637-20170819163637-00531.warc.gz"}
|
https://byjus.com/question-answer/a-pipe-can-empty-the-tank-in-8-hours-and-another-can-fill-6-liters/
|
Question
# A pipe can empty a tank in $$8$$ hours, and another pipe can fill $$6$$ litres in $$1$$ minute. If both pipes are opened, the tank is emptied in $$12$$ hours. What is the capacity of the tank?
Solution
Let the capacity of the tank be x litres.
The first pipe can empty the tank in 8 hours = 8 × 60 = 480 min, so its emptying rate is x/480 litres per minute.
With both pipes open, the tank is emptied in 12 hours = 12 × 60 = 720 min, so the net emptying rate is x/720 litres per minute.
The filling rate of the second pipe is therefore (x/480) − (x/720) = 6 litres per minute.
720x − 480x = 6 × 720 × 480
240x = 6 × 720 × 480
x = 8640 litres.
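A one-line verification (added, not part of the original solution) that x = 8640 balances the rates:

```python
x = 8640
assert x / 480 - x / 720 == 6  # emptying rate minus net rate equals the fill rate
print(x / 480, x / 720)        # 18.0 litres/min out, 12.0 litres/min net
```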
|
2022-01-18 19:52:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29660388827323914, "perplexity": 8268.80076794853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300997.67/warc/CC-MAIN-20220118182855-20220118212855-00651.warc.gz"}
|
http://www.askphysics.com/qa/?qa=410/derive-the-expression-for-conductivity-in-terms-of-mobility&show=412
|
Derive the expression for conductivity in terms of mobility
The electron mobility is defined by the equation:
v_d = µE, i.e. µ = v_d / E
where:
E is the magnitude of the electric field applied to a material,
vd is the magnitude of the electron drift velocity (in other words, the electron drift speed) caused by the electric field, and
µ is the electron mobility.
There is a simple relation between mobility and electrical conductivity. Let n be the number density (concentration) of electrons, and let μe be their mobility. In the electric field E, each of these electrons will move with the velocity vector , for a total current density of (where e is the elementary charge). Therefore, the electrical conductivity σ satisfies:
.
This formula is valid when the conductivity is due entirely to electrons
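As a rough numerical illustration of $\sigma = n e \mu_e$ in Python; the carrier density and mobility below are typical textbook values for copper, assumed here purely for illustration:
e = 1.602e-19   # elementary charge, C
n = 8.5e28      # electron number density for copper, m^-3 (assumed)
mu = 4.3e-3     # electron mobility, m^2/(V s) (assumed)
sigma = n * e * mu
print(sigma)    # about 5.9e7 S/m, close to copper's measured conductivity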
answered Jan 5 by (4,450 points)
|
2018-03-22 03:51:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395591020584106, "perplexity": 904.4061279717996}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647768.45/warc/CC-MAIN-20180322034041-20180322054041-00498.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=130&t=30241
|
## Q5 on the midterm
$\Delta U=q+w$
Mary Becerra 2D
Posts: 53
Joined: Fri Sep 29, 2017 7:06 am
### Q5 on the midterm
In step 1 we find that the work done by the system is -158 J. How can we make the conclusion that in step 2, our $\Delta U$ = +158 J? I thought that since there is no heat exchange, $w = \Delta U$.
Jonathan Tangonan 1E
Posts: 50
Joined: Sat Jul 22, 2017 3:01 am
### Re: Q5 on the midterm
$\Delta U$ is +158 J because in step 2 the system returns to its original internal energy. Internal energy is a state function, so the -158 J change produced by the work the system did in step 1 must be undone in step 2; hence $\Delta U$ for the second step is +158 J.
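A minimal first-law bookkeeping sketch in Python, using the numbers from this problem:
dU_step1 = 0 + (-158)          # step 1: q = 0, w = -158 J, so dU = q + w
dU_cycle = 0                   # internal energy is a state function: a cycle sums to zero
dU_step2 = dU_cycle - dU_step1
print(dU_step2)                # +158 J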
|
2021-01-28 06:29:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46536245942115784, "perplexity": 1696.9115351004484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704835901.90/warc/CC-MAIN-20210128040619-20210128070619-00577.warc.gz"}
|
https://brilliant.org/discussions/thread/is-the-value-of-i0-1/
|
# Is the value of i^0 = 1?
Respected members of Brilliant !
I was puzzled by this thought.
Kindly help me as soon as possible
Note by Manish Dash
5 years, 11 months ago
Yes, $i^{0} = 1$, see this !
What I felt was $\dfrac{\sqrt{a}}{\sqrt{b}} = \sqrt{\dfrac{a}{b}}$ is valid in $\mathbb{C}$. But as explained by Brian Charlesworth sir, $\dfrac{\sqrt{a}}{\sqrt{b}} = \pm \sqrt{\dfrac{a}{b}}$
- 5 years, 11 months ago
This is a classic fallacy; this equality only holds true if both $a$ and $b$ are positive reals. All we can say when dealing with complex numbers is that
$\dfrac{\sqrt{a}}{\sqrt{b}} = \pm \sqrt{\dfrac{a}{b}}.$
@Manish Dash So in your example we end up with
$\dfrac{\sqrt{-2}}{\sqrt{3}} = i\dfrac{\sqrt{2}}{\sqrt{3}} = i\sqrt{\dfrac{2}{3}}$ and $\dfrac{\sqrt{2}}{\sqrt{-3}} = \dfrac{\sqrt{2}}{i\sqrt{3}} = -i\sqrt{\dfrac{2}{3}},$ and thus $\dfrac{\sqrt{-2}}{\sqrt{3}} = - \dfrac{\sqrt{2}}{\sqrt{-3}}.$
Also, you are correct in saying that $\dfrac{i}{i} = 1.$ :)
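A quick numerical check of the sign difference, using Python's cmath module (which takes principal square roots):
import cmath
a = cmath.sqrt(-2) / cmath.sqrt(3)   # i*sqrt(2/3), about 0.8165j
b = cmath.sqrt(2) / cmath.sqrt(-3)   # -i*sqrt(2/3), about -0.8165j
print(a, b)
print(cmath.isclose(a, -b))          # True: the two quotients differ only by a sign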
- 5 years, 11 months ago
Thanks a lot sir :) !
- 5 years, 11 months ago
So, is sqrt (-2) / sqrt (3) = sqrt (2) / sqrt (-3) ??
- 5 years, 11 months ago
You mean whether $\dfrac{\sqrt{-2}}{\sqrt{3}} = \dfrac{\sqrt{2}}{\sqrt{-3}}$ ?
- 5 years, 11 months ago
Yes, that is what I mean. As I do not know how to insert special symbols like sqrt or pi in a comment, I had to write my comment in that form.
- 5 years, 11 months ago
Well, I think I am not experienced enough to answer your question; to be honest, I have no idea!
- 5 years, 11 months ago
I hope @Raghav Vaidyanathan can explain this.
- 5 years, 11 months ago
$\Large i^0=(e^{i \frac {\pi} {2}})^0=e^0=1$
- 5 years, 11 months ago
Bro, can you provide a proof of your substitution $i = e^{\frac{i\pi}{2}}$? And wouldn't it be a circular argument in doing so? Also, how can you establish the equation $(e^{i \frac {\pi} {2}})^0=e^0$?
Please do explain, I am a beginner in the study of complex numbers. I think this note is quite suitable to ask such fundamental questions. Thanks in advance ! I hope you don't mind me calling you 'bro' :P.
- 5 years, 11 months ago
Using Euler's formula
$e^{ix} = \cos x + i \sin x,$
we get
$e^{i\frac{\pi}{2}} = \cos \frac{\pi}{2} + i \sin \frac{\pi}{2}$
$e^{i\frac{\pi}{2}} = 0 + i = i$
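A quick numerical check in Python (up to floating-point rounding):
import cmath
z = cmath.exp(1j * cmath.pi / 2)   # e^{i*pi/2}
print(z)                           # ~ (6.1e-17 + 1j), i.e. i up to rounding
print(z ** 0)                      # (1+0j)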
- 5 years, 11 months ago
Proof of Euler's Formula ?
- 5 years, 11 months ago
Sir, is my remark valid in $\mathbb{C}$ ? Please reply !
- 5 years, 11 months ago
First of all, don't call me sir.
- 5 years, 11 months ago
Is there any other method to prove this?
- 5 years, 11 months ago
Thanks !
- 5 years, 11 months ago
and all others who see this note
- 5 years, 11 months ago
I support @Karthik Venkata 's answer, as $\frac{\sqrt{-1}}{\sqrt{-1}} = \sqrt{\frac{-1}{-1}}= \sqrt{1}=1$
@Manish Dash , you can see @Raghav Vaidyanathan 's answer for a precise solution of your note's heading.
- 5 years, 11 months ago
sqrt(a*b) = sqrt(a) * sqrt(b) is valid only if at least one of a or b is non-negative... so you cannot say sqrt(-1) / sqrt(-1) = 1
- 5 years, 11 months ago
I think 1^(1/2) = 1 or -1. Am I right?
- 5 years, 11 months ago
The square root function over the non-negative reals always produces a non-negative result. Thus $1^{\frac{1}{2}} = \sqrt{1} = 1$ only. Over the complex numbers, with $1 = e^{2\pi i n}$ we have that $1^{\frac{1}{2}} = (e^{2\pi i n})^{\frac{1}{2}} = e^{i\pi n},$ which equals $1$ for even integers $n$ and $-1$ for odd integers. In this context we would call $1$ the principal root.
- 5 years, 11 months ago
I think $\sqrt{1} = [(\pm 1)^{2}]^{0.5}$.
- 5 years, 11 months ago
When you're searching for $\dfrac ii$, you're really searching for some number $X$ such that $X\times i=i$. A quick check (hint: write $X=a+bi$ for $a,b$ real) shows that $X=1$ is the only solution.
- 5 years, 11 months ago
|
2021-04-18 18:54:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 41, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9869310855865479, "perplexity": 2410.1283964126887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038507477.62/warc/CC-MAIN-20210418163541-20210418193541-00250.warc.gz"}
|
http://www.sacredduty.net/2012/04/11/mop-block-calculations-part-1/
|
# MoP Block Calculations – Part 1
There's been some question about how the new block mechanics will affect our damage intake, and how the different stats (dodge, parry, mastery, hit, expertise, and even haste) will compare for minimizing that metric. While damage intake isn't always the most relevant metric, it is one that's fairly easy to calculate and gives some basic information about the baseline effectiveness of different stats. In this series of posts, I will work through the full derivation and make some comparisons between the different secondary stats.
It’s also worth noting that our mechanics will be changing in the near future, according to a new forum post by Ghostcrawler. The previous version had us gaining 25% block value and one guaranteed block for 6 seconds. The new version will have us gaining 45% block value for the guaranteed block (total of 75%) and 20% block value (total of 50%) for the remainder of the buff duration. I’m modeling the new mechanics, though it’s fairly trivial to convert this derivation to the old mechanics if things change down the line.
The formula for time-averaged damage taken per second (DTPS) is
$D = D_0F_a(1-A)F_b$
$D$ is the net damage taken per second after all mitigation and avoidance effects
$D_0$ is DTPS before all mitigation and avoidance effects
$F_a$ is the armor mitigation factor, calculated the usual way
$A$ is your decimal avoidance from all sources, after diminishing returns
$F_b$ is the “block factor” which models blocking mitigation.
As I said in an earlier blog post, this differs from the old one-roll system because it converts a combined avoidance/block factor like $(1-A-B_vB_c)$ into two multiplicative factors $F_{\rm avoid}=(1-A)$ and $F_b$. In that earlier post, I gave an explicit form for the block factor: $F_b=(1-B_vB_c)$. If we ignored Shield of the Righteous (SotR), $B_v$ is just your block value (30%) and $B_c$ is your character sheet block chance. However, SotR throws a wrench into everything. We can still express it in a form similar to the one given above, but with more complicated expressions for the average block value $\tilde{B_v}$ and average block chance $\tilde{B_c}$. And we’ll find that there’s no compelling reason to do so, as there’s a different form that gives more insight into what’s going on anyway.
To calculate $F_b$, we start with the following expression:
$F_b = G(1-B_v'') + (1-G)\left [ B_c S (1-B_v')+ B_c(1-S)(1-B_v) + (1-B_c)\right ]$
Let's go through this term by term. The first term is the chance that we are guaranteed a block by Shield of the Righteous ($G$) times the amount of damage we take when that happens, $(1-B_v'')$, where $B_v''$ is our block value for the guaranteed block (75%). After distributing, the second term is $(1-G)B_c S(1-B_v')$, which is the chance that we don't get a guaranteed block $(1-G)$ multiplied by our block chance $B_c$ multiplied by the probability that the SotR buff is active $S$ multiplied by the damage taken during that buff $(1-B_v')$, with $B_v'$ being our block value during the 6-second SotR buff (50%). In other words, this term represents the chance that we aren't guaranteed a block, but block anyway during the 6-second duration of the SotR block value buff. There's a subtlety here involving $S$ that complicates matters, but we'll come back to that later.
The third term is $(1-G)B_c(1-S)(1-B_v)$, which represents the chance that we aren't guaranteed a block, but do so anyway without the SotR buff active. And the final term is $(1-G)(1-B_c)$, which is the chance we don't get a guaranteed block and that our natural block mechanism fails us, forcing us to take a full-sized hit (damage taken = 1). Those are the four possible outcomes of the system, each weighted by their individual probabilities. And in fact, it's easy to show that this expression is correct: let $B_v$, $B_v'$, and $B_v''$ equal zero, and the expression sums to 1, just as one would expect for a sum of probabilities that spans the space of possible outcomes.
With a little bit of algebra, we can put this in one of two simpler forms:
$F_b = 1 - G B_v'' - (1-G)B_c(S B_v'+(1-S) B_v)$ (1)
or, equivalently
$F_b = 1 - G B_v'' - (1-G) B_c S B_v' - (1-G) B_c (1-S) B_v$ (2)
Each expression gives slightly different intuition. The first expression says your block mitigation factor is just one minus the mitigation from guaranteed blocks minus the mitigation from non-guaranteed blocks, with an average block value $\overline{B_v} = (1-S) B_v + S B_v'$ based on SotR uptime $S$. In other words, it breaks it down by the type of block – guaranteed versus not guaranteed. The second expression breaks it down differently, by the amount of block value. In this case, the factor is one minus the mitigation from blocks of size $B_v''$ minus the mitigation from blocks of size $B_v'$ minus the mitigation of regular blocks ($B_v$). The coefficients of each term are simply the probabilities for blocking each amount.
As I said earlier, we could put this in a form $F_b = 1 - \tilde{B_c}\tilde{B_v}$ by defining the overall chance of blocking $\tilde{B_c}=G+(1-G)B_c$. With that definition, we get the following form for $\tilde{B_v}$:
$\tilde{B_v} = \frac{G B_v'' + (1-G) B_c S B_v' + (1-G) B_c (1-S) B_v}{G + (1-G) B_c}$.
But this version doesn’t do much for us – it’s no simpler an expression than (1) or (2), and it doesn’t give us any additional insight into the meaning of the terms. So for any of our calculations, we’ll start with one of the numbered equations.
We already know $B_v''$, $B_v'$, $B_v$, and $B_c$ from the character sheet. All that remains is to calculate $G$ and $S$ and we have a complete analytical expression describing the new block mechanics. Let's proceed to do that.
Calculating $G$ is fairly straightforward. The probability that any given attack will be a guaranteed block is simply
$G = R_{\rm SotR} / R_{\rm att}$
where $R_{\rm SotR}$ is your SotR cast rate, and $R_{\rm att}$ is the incoming blockable attack rate, or one over the time between blockable attacks $T_{\rm att}$. Note that this is a little different from the boss’s attack rate because of avoidance. It’s actually $(1-A)/T^{(0)}_{\rm att}$, where $T^{(0)}_{\rm att}$ is the boss’s “true” attack/swing timer. So we need to assume a boss attack speed to get useful results, which is annoying, but do-able. Since SotR is off-GCD, we can estimate the SotR cast rate pretty straightforwardly as your HP generation rate divided by three, or
$R_{\rm SotR} = R_{\rm HPG}/3$
However, $R_{\rm SotR}$ is complicated slightly if we want to include talents. Holy Avenger is actually fairly easy since it can be modeled as a simple increases to your average HPG rate. Divine purpose is more irritating, because it’s a 15% chance on every SotR to proc the effect. So it modifies the SotR cast rate as follows:
$R_{\rm SotR} = R_{\rm HPG}/3 + \alpha_{\rm DP} R_{\rm SotR}$
where I’ve let $\alpha_{\rm DP}$ be the DP proc rate in case it changes. Solving for SotR:
$R_{\rm SotR} = \dfrac{R_{\rm HPG}}{3(1-\alpha_{\rm DP})}$
Luckily, $R_{\rm HPG}$ is easily calculable (analytically or via simulation) for a given rotation and haste value. So we can get the information we need to calculate $G$.
$S$ looks very tricky, but ends up being deceptively simple. My first attempts were pretty ugly, involving double-integrals of a comb function multiplied by a rect function. And they were correct, but I realized afterward that the resulting expression was something I could have guessed using geometry. So I’ll give you the easy version:
The duration of the SotR buff is $T_{\rm buff}$. The uptime of the buff is $R_{\rm SotR}T_{\rm buff}$, bounded by $[0, 1]$ (i.e. if the product ever goes above 1, it equals 1 because we cap out at 100% uptime). The number of boss attacks that occur during the buff is $R_{\rm SotR}T_{\rm buff}/T_{\rm att} = R_{\rm SotR}R_{\rm att}T_{\rm buff}$, bounded by $[0, R_{\rm att}]$. So far so good.
But $S$ isn't the probability that the SotR block value buff is active (i.e., it's not the uptime). In fact, it's the probability that the buff is active for attacks that weren't already guaranteed to be blocked due to the other SotR buff. So we need to account for the guaranteed blocks. The rate of guaranteed-blocked attacks is just $G R_{\rm att} = R_{\rm SotR}$, bounded by $[0, R_{\rm att}]$, and the rate of non-auto-blocked attacks is $(1-G) R_{\rm att} = R_{\rm att} - R_{\rm SotR}$, bounded by $[0, R_{\rm att}]$.
To calculate $S$ properly, we need to find
$\frac{\text{number of attacks during buff} - \text{number of attacks auto-blocked}}{\text{number of attacks not auto-blocked}}$
or their equivalent rates. Which is:
$S = \frac{R_{\rm SotR}R_{\rm att} T_{\rm buff} - R_{\rm SotR}}{R_{\rm att} - R_{\rm SotR}}$
You might notice that this expression looks questionable, in particular the denominator. If $R_{\rm SotR} \rightarrow R_{\rm att}$, it looks like $S \rightarrow \infty$. In fact, it doesn’t, because the numerator goes to zero in that situation as well. This is why I was careful about specifying the bounds of each term earlier on – in the limit where $R_{\rm SotR} \rightarrow R_{\rm att}$, the term $R_{\rm SotR}R_{\rm att} T_{\rm buff}$ approaches its upper bound of $R_{\rm att}$, and the expression approaches 1. That’s good, because it’s what you’d logically expect – if you’re casting SotR as often as the boss is swinging, the buff should be up for every attack (though note that $G\rightarrow 1$ in this limit as well, making this a bit of a moot point).
This expression for $S$ is the last piece of the puzzle, because we have all of those values already. From this point it’s just a matter of doing some substitutions and taking some derivatives. In the next installment, I’ll make those substitutions and derive the expressions that will tell us how mastery stacks up to dodge, parry, hit, expertise, and haste.
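To make the model concrete, here is a minimal Python sketch of equation (1) together with the expressions for $G$ and $S$ above; the stat values in the example call are illustrative placeholders, not simulation output:
def block_factor(hpg_rate, swing_timer, avoidance, bc,
                 bv=0.30, bv1=0.50, bv2=0.75, t_buff=6.0, dp=0.15):
    r_att = (1 - avoidance) / swing_timer        # blockable attack rate (1/s)
    r_sotr = hpg_rate / (3 * (1 - dp))           # SotR cast rate with Divine Purpose
    g = min(r_sotr / r_att, 1.0)                 # chance an attack is auto-blocked
    buffed = min(r_sotr * t_buff, 1.0) * r_att   # rate of attacks landing during the buff
    s = (buffed - r_sotr) / max(r_att - r_sotr, 1e-12)
    return 1 - g * bv2 - (1 - g) * bc * (s * bv1 + (1 - s) * bv)

print(block_factor(0.35, 1.5, 0.20, 0.40))       # ~0.67 with these placeholder stats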
This entry was posted in Tanking, Theck's Pounding Headaches, Theorycrafting. Bookmark the permalink.
### 8 Responses to MoP Block Calculations – Part 1
1. kalbear says:
Well, thank goodness that they are taking out all those complicated mechanics like arpen and going to these simple mechanics like double roll block with diminishing returns.
2. Weebey says:
It’s a good thing that SoTR duration buffs are additive; accounting for clipping, especially when thinking about things like Divine Purpose and Holy Avenger, would have probably made an analytic expression for S almost impossibly hard to compute.
I was a little confused by the following paragraph:
“S looks very tricky, but ends up being deceptively simple. My first attempts were pretty ugly, involving double-integrals of a comb function multiplied by a rect function. And they were correct, but I realized afterward that the resulting expression was something I could have guessed using geometry. So I’ll give you the easy version:”
I don’t see any geometry in the subsequent derivation of the expression for S in terms of known quantities; it seems to be a fairly elementary argument based on the definition of the terms in question. Was the point that something about the geometry of the other argument showed you that this simple approach would actually work?
Also, one other point, which I'm sure you are aware of, but which could be useful to other readers: one of the definitions above as stated isn't quite right. T_{att} isn't just the boss's attack/swing timer, since, as you showed in the MoP testing thread, the SotR buff is not consumed if the next melee attack is avoided, so the value that should be used here is something like T_{att}/(1-Av).
This actually brings up a possible complication: what happens if you refresh SoTR while the guaranteed block buff is still in effect? This can definitely happen with e.g. Divine Purpose procs. The model above implicitly assumes that nothing is overwritten, so that when this happens you block the next two melee attacks, but I honestly don’t know if that is how it works in game. It looks like logs indicate that the duration of the guaranteed block portion do NOT add, which makes me think that clipping probably happens. If it does, then the expression for G isn’t quite right.
• Theck says:
The geometry comment is actually only relevant to the part where I say that the number of attacks during the buff is $T_{\rm buff}/T_{\rm att}$. You can get that from the integral, or you can draw a comb function with $T_{\rm att}$ spacing and a rect function with $T_{\rm buff}$ width, and see geometrically that it's the right answer. Otherwise you'd have to resort to some combinatorics.
You're correct about $T_{\rm att}$ not strictly being the boss's swing timer. I realized that about halfway through the derivation, and decided that it was easier to encapsulate that factor of $1/(1-A)$ into $T_{\rm att}$ directly. We never end up using $T_{\rm att}$ outside of the calculation for $S$, and the correction occurs in each instance. So it seemed logical to handle that when we plugged in numbers. Also, I didn't think anyone would notice. 😛 Now that you're pointing it out, though, I'm second-guessing whether that was the clearest way to approach it.
The SotR guaranteed block buff does not extend additively. It just over-writes itself. So you're correct that clipping can happen. Here's why we don't have to worry about it – we assume the player is smart enough not to spam SotR, and since it's off-GCD, we can assume they cast it as soon as it's useful again. So if we cast it at 3 Holy Power, we have the time it takes to generate another 5 Holy Power before we're in danger of wasting a cast. As long as HPG is slow (we're going to be generating less than 0.4/sec, so on average we're talking 13 or more seconds), it's reasonable to assume that a block occurs before we recast SotR. Divine Purpose procs fool around with that some too, of course, but that's still a pretty large window.
In practice, there will be some clipping that reduces G compared to the model. It shouldn’t be a very large effect, but it will take some combinatorics (or a simulation, if we’re lazy) to figure out the magnitude of that effect. That’s in the cards for Part 3, where I’ll be writing a very basic combat simulator to see how well the model holds up.
• Weebey says:
I see. When I first read the post I thought it was intuitively obvious that the average number of attacks that occur with the buff up (per unit of time) was R_{sotr} * T_{buff}/ T_{att}. I still think that, to be honest, but when I tried to formalize my intuitions they ended up relying on a mathematical fact which, while again intuitively very plausible, is probably not much easier to prove than other methods (*).
I think I should apologize for pointing out the subtlety with T_{att}; I know how annoying it can be when “that guy” points out some little technicality which you didn’t really want to burden the exposition with! I only noticed it because none of the terms seemed to depend on your avoidance rate, whereas it is obvious that there has to be some effect.
I am not on the beta, so I can’t fiddle around with it, but at least as stated I’m not so sure we can be that confident that Divine Purpose won’t ever, or even often, overwrite the initial block buff. While one obviously should wait, my guess is that a lot of people hit a DP SotR as soon as they can, which means they will only have at most one melee attack to consume the buff before it gets overwritten. Optimal play of course will minimize this, while still ensuring that the block value doesn’t drop, but my guess is that will take some getting used to to really get it right.
(*) For anyone who cares: the formula follows from the fact that, if I have a discrete probability measure on a finite interval of the real line, with a fixed distance between the points in the support of the measure, all of which have equal weight, then, as the “mesh” of the discrete measure goes to 0 (equivalently, as the length of the interval goes to infinity), these discrete measures will converge to the continuous uniform probability measure on the interval.
• Theck says:
I've gone back and re-written the portions with $R_{\rm att}$ and $T_{\rm att}$ to make the distinction between "true" swing timer and effective swing timer clearer. After thinking about it more, the encapsulation made it easier to work with the math in my hand-calculations, but it really hampers the understanding when you're reading it (and aren't burdened by having to write excess $^{(0)}$ superscripts).
|
2017-01-23 00:23:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8040542602539062, "perplexity": 682.0691829526834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00414-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=142&t=30126&p=93581
|
## When do you use LNQ vs LogQ?
$E_{cell} = E_{cell}^{\circ}-\frac{RT}{nF}\ln Q$
Moderators: Chem_Mod, Chem_Admin
Adriana Rangel 1A
Posts: 96
Joined: Fri Sep 29, 2017 7:04 am
### When do you use LNQ vs LogQ?
When using the Nernst equation when do you use LnQ vs Log Q? I know the equation has Ln Q but some work I have seen has LogQ
Aijun Zhang 1D
Posts: 53
Joined: Tue Oct 10, 2017 7:13 am
### Re: When do you use LNQ vs LogQ?
When you use the equation $E = E^{o}-\frac{RT}{nF}\ln Q$, you use $\ln Q$.
When you use the equation $E = E^{o}-\frac{0.05916}{n}\log Q$, which is the simplified form valid at 298 K, you use $\log Q$.
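The 0.05916 prefactor is just $RT\ln(10)/F$ evaluated at 298 K, which is easy to verify with a few lines of Python:
import math
R = 8.314      # J/(mol K)
T = 298.15     # K
F = 96485      # C/mol
print(R * T * math.log(10) / F)   # ~0.05916 V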
|
2020-11-28 09:06:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4378162920475006, "perplexity": 10027.973989161621}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195198.31/warc/CC-MAIN-20201128070431-20201128100431-00665.warc.gz"}
|
https://tex.stackexchange.com/questions/30595/is-there-such-a-thing-as-a-latex-code-formatter?noredirect=1
|
# Is there such a thing as a LaTeX code formatter
I was looking for this a while back for JavaScript, but I was wondering whether a general purpose one exists for all, or at least most, languages.
For example for LaTeX I would put the following in a text box
$f_i^k+10=x$
and it spits out the better formatted version
%%%
%% Insert comment describing function here
%%%
$f_{i}^{k} + 10 = x$
I can't be the only person on the planet that does not wish to go through a massive .tex file and fix these tedious problems.
• @dmckee this is too funny, on the linke bdares provided, someone has further complained "This should probably be community wiki". I imagine if I checked the community wiki it will claim that it should probably belong on a blog...
– puk
Oct 5, 2011 at 18:30
• There is no guarantee that a question will find a good fit on the Stack Exchange network. In any case bdares link is not specifically about pretty printing now about literate programming/in-line documentation, so it may or may not be a duplicate. I'm not active enough on TeX.SE to be certain. Oct 5, 2011 at 18:34
• @puk You misunderstand the comment. It merely means that the answers should belong to the community, instead of to individual users (it’s a Stack Overflow feature). The question bdares links to is fine, and so is yours. Oct 5, 2011 at 18:34
• @puk: Are you interested in "correcting" your LaTeX code (in the .tex file) so that super-/subscripts are actually put in braces { }, thereby possibly avoiding formatting problems?
– Werner
Oct 5, 2011 at 19:17
• @Werner I'm interested in an automatic code fixer upper which also aids in avoiding potential formatting problems.
– puk
Feb 2, 2012 at 6:27
I created a website that formats LaTeX code so that the indentation looks correct.
The general idea of the website is to make sure you can read the code. It also provides table indentation. I am still looking into whether adding an empty comment block before formulas is possible.
https://c.albert-thompson.com/latex-pretty/
Take a look at TeXpretty. I have used it a couple of times for cleaning up messy code and it does a decent job.
As of Version 3.7, latexindent.pl can help with this.
As a warning: do take care when using regular expressions such as those below, always check that they are behaving as you would like, and test them before using them on anything important.
Starting with the following sample code
\begin{env}
$f_i^k+10=x$
\end{env}
$g_ij^12-3=x$
and the YAML file, say puk1.yaml
replacements:
  -
    substitution: |-
      s/\$(.*?)\$/%%
      %% Insert comment describing function here
      %%
      \$\1\$/sxg

and running the command latexindent.pl -r myfile.tex -l=puk1.yaml gives the output:

\begin{env}
%%
%% Insert comment describing function here
%%
$f_i^k+10=x$
\end{env}

%%
%% Insert comment describing function here
%%
$g_ij^12-3=x$

This doesn't have everything you requested. We can incorporate some more replacements in the following file, say puk2.yaml

replacements:
  -
    substitution: |-
      s/\$\h*(.*?)\h*\$/#
      my $comment = "%%\n%% Insert comment describing function here\n%%\n";
      my $body = $1;
      # add braces to superscripts and subscripts
      $body =~ s@(\^|\_)([a-zA-Z0-9]+)@$1\{$2\}@sg;
      # add a single space around + - =
      $body =~ s@\h*([+\-=])\h*@ $1 @sg;
      # put it all together
      $comment."\$".$body." \$";/sxge

and running latexindent.pl -r myfile.tex -l=puk2.yaml gives

\begin{env}
%%
%% Insert comment describing function here
%%
$ f_{i}^{k} + 10 = x $
\end{env}

%%
%% Insert comment describing function here
%%
$ g_{ij}^{12} - 3 = x $
If you're on VS code, try latex-formatter.
|
2023-02-06 02:41:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6256160140037537, "perplexity": 3200.69075953551}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500303.56/warc/CC-MAIN-20230206015710-20230206045710-00126.warc.gz"}
|
http://mathoverflow.net/questions/90985/is-generically-split-azumaya-algebra-locally-split
|
# is generically split Azumaya algebra locally split?
Let $A$ be an Azumaya algebra over a scheme $X$ (or maybe more specifically a scheme of finite type over a field). Suppose that the restriction of $A$ to $U=X\setminus Z$ (where $Z$ is a closed set) is split, i.e. isomorphic to $M_n(\mathcal{O}_U)$.
Let $x \in Z$. Is it necessarily true that there exists a Zariski open neighbourhood $U' \ni x$, such that the restriction of $A$ to $U'$ is split?
-
No, this is not true; the earliest counterexample I know of is described by Auslander and Goldman, "The Brauer group of a commutative ring", Trans. AMS 97 (1960), pp. 367–409, available freely online from the AMS. Given that I think they were the first to define the Brauer group of a ring, there is unlikely to be an earlier example. They prove (Theorem 7.2 on page 388) that, whenever $R$ is a regular ring, the Brauer group of $R$ injects into the Brauer group of the function field (and so your statement is true if $X$ is regular). They then give the following counterexample when $R$ is not regular.
Take $X$ to be the scheme $x^2+y^2=0$ over the real numbers, and the Azumaya algebra to be simply the usual quaternions, considered as an Azumaya algebra on $X$. Take $Z$ to be the origin. On $U$, the function $y$ is invertible, so in the coordinate ring of $U$ we have $(x/y)^2 = -1$ and the algebra splits. But the algebra does not split in any neighbourhood of $(0,0)$, since evaluating it at $(0,0)$ gives a non-split algebra.
If you don't like the fact that $X$ is geometrically reducible, there are plenty of other examples of singularities where the Brauer group of the local ring fails to inject into the Brauer group of the function field. In his articles on the Brauer group, Grothendieck refers to an example of Mumford where the Brauer group of the local ring is not even torsion, in contrast to the Brauer group of the function field which must always be torsion.
Edit:
You asked whether it is enough that $X$ be factorial. Grothendieck showed, in the second of his three articles on the Brauer group, that what you want is true if all the étale local rings of $X$ are factorial. Briefly, this is because you get an exact sequence
$0 \to H^1(X,\mathcal{D}iv) \to H^2(X,\mathbb{G}_m) \to H^2(k(X),\mathbb{G}_m)$
where $\mathcal{D}iv$ is the sheaf of Cartier divisors on $X$; the hypothesis means that this is the same as the sheaf of Weil divisors, whose cohomology vanishes.
I have been trying to think of an example where $X$ is factorial but not étale-locally factorial, giving non-trivial Brauer group. I'm sure such things are well known to the geometers, but I wonder whether something like this works: take an affine nodal cubic curve $y^2 = x^2(x+1)$ and rotate it about the $x$-axis. It feels like you should get a singularity which is a simple double point, but where the non-Cartier divisor class only reveals itself after an étale localisation (adjoining a square root of $x+1$). This would then lead to a class of order 2 in $\mathrm{Br} X$ which is trivial in $\mathrm{Br} k(X)$.
-
Do I understand it correctly that a sufficient condition for the injectivity is that every Weil divisor is a Cartier divisor? – Dima Sustretov Mar 13 '12 at 10:37
No, I'm pretty sure that condition is not sufficient, but I don't have a counterexample off the top of my head. It is closely related to this question: mathoverflow.net/questions/14961/… (Not sure how to put links in comments.) – Martin Bright Mar 13 '12 at 13:24
Of course it is true if $X$ is regular, by the Grothendieck-Serre conjecture for $\textbf{PGL}_n$. Also you should check out the "purity theorem" in Grothendieck's "Groupes de Brauer III": quite often the result is true if $Z$ has codimension at least $2$. – Jason Starr Mar 14 '12 at 18:09
Thank you, @Martin and @Jason. By the way (I might consider posting it as a separate question), are there any results similar to Auslander-Goldman, along the lines "restriction to the generic points induces injective map on Brauer groups" for complex analytic spaces? Of course there is no such thing as "generic point" for complex spaces, so the precise statement is also a part of the question. – Dima Sustretov Mar 14 '12 at 21:13
@Jason - out of interest, are the situations where X is not regular, but some version of the purity theorem is known (or conjectured) to hold? – Martin Bright Mar 15 '12 at 7:41
|
2015-04-27 17:18:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.90532386302948, "perplexity": 180.11143763398658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659254.83/warc/CC-MAIN-20150417045739-00058-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/trigonometry/46580-solved-trig-problem.html
|
# Thread: [SOLVED] trig problem
1. ## [SOLVED] trig problem
cosY= -sqrt3/2
and
cosY= -sqrt3/2 for y in [pi,2pi]
The question asks to solve these equations
2. Originally Posted by john doe
cosY= -sqrt3/2
and
cosY= -sqrt3/2 for y in [pi,2pi]
The question asks to solve
Keep in mind that cosine is negative in the second and third quadrants.
The second question is asking for particular angles within the interval $\pi\leq y\leq2\pi$. Note that you will only have one solution.
keep in mind that $\cos\left(\tfrac{\pi}{6}\right)=\tfrac{\sqrt{3}}{2}$. Change the angle to get $\cos(?)=-\tfrac{\sqrt{3}}{2}$
The first question wants you to generalize the solution. This can be a little tricky. Find several solutions to the equation, and then try to find a pattern given in the coefficients of the angles. Try this one and let us know where you get stuck...if you get stuck...
I hope this helps.
--Chris
3. ## Check this out
Originally Posted by john doe
cosY= -sqrt3/2
and
cosY= -sqrt3/2 for y in [pi,2pi]
The question asks to solve
cosY = -sqrt3/2 for y in [pi,2pi]
Cosine is negative in the second and third quadrants; within [pi,2pi] that means the third quadrant, i.e. the interval [pi,3pi/2].
If cosY = sqrt3/2, then
Y = pi/6 (taking the principal value),
so the value of Y for which cosY = -sqrt3/2 will be
Y = pi + pi/6 = 7pi/6 (third quadrant).
Therefore y = 7pi/6.
Hope this helps.
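A quick numerical check in Python:
import math
y = 7 * math.pi / 6
print(math.cos(y))            # -0.8660..., which is -sqrt(3)/2
print(-math.sqrt(3) / 2)      # -0.8660...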
|
2017-01-19 22:46:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894366025924683, "perplexity": 2273.5883366168214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280746.40/warc/CC-MAIN-20170116095120-00502-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.varsitytutors.com/prealgebra-help/word-problems-with-two-unknowns
|
# Pre-Algebra : Word Problems with Two Unknowns
## Example Questions
### Example Question #1 : Word Problems With Two Unknowns
Combined, Megan and Kelly worked 60 hours. Kelly worked twice as many hours as Megan. How many hours did they each work?
Megan worked for 20 hours and Kelly worked for 40 hours
Megan worked for 15 hours and Kelly worked for 45 hours
Megan worked for 40 hours and Kelly worked for 10 hours
Megan worked for 10 hours and Kelly worked for 50 hours
Megan worked for 30 hours and Kelly worked for 60 hours
Megan worked for 20 hours and Kelly worked for 40 hours
Explanation:
Step 1: Megan and Kelly's total hours worked need to add up to 60, and Kelly worked twice as long as Megan. We can put this into a formula: m + k = 60, with k = 2m.
Step 2: substitute 2m for k and add the variables: m + 2m = 3m = 60.
Step 3: isolate m: m = 60/3 = 20.
Step 4: Now that we know Megan worked 20 hours (m = 20), we can multiply her hours worked by 2 to find out how long Kelly worked: k = 2(20) = 40.
Step 5: check to make sure Megan and Kelly's hours add up to 60: 20 + 40 = 60.
### Example Question #1 : Word Problems With Two Unknowns
Jamal invites 15 people to his birthday party and orders enough cupcakes so that everyone (himself included) will get two cupcakes. How many cupcakes can everyone have if only 7 friends show up to Jamal's party?
Explanation:
Step 1: find the number of cupcakes ordered by adding up all of the people at the party (15 invited guests + Jamal = 16) and then multiplying that number by the 2 cupcakes ordered per person: 16 x 2 = 32 cupcakes.
Step 2: to find the number of cupcakes each person can have, take the number of cupcakes and divide it by the number of people present, including Jamal (7 friends + Jamal = 8): 32 / 8 = 4 cupcakes each.
### Example Question #1 : Word Problems With Two Unknowns
Michael and Tom are brothers. Their combined age is 20, and Tom is 4 years older than Michael. What are Michael and Tom's ages?
Tom is 16 years old and Michael is 4 years old.
Tom is 10 years old and Michael is 4 years old.
Michael is 12 years old and Tom is 8 years old.
Michael is 10 years old and Tom is 10 years old.
Michael is 8 years old and Tom is 12 years old.
Michael is 8 years old and Tom is 12 years old.
Explanation:
To solve this, we can set each of their ages as a variable. Let's say Michael's age is x.
We know Tom is 4 years older than Michael, so Tom's age is x+4.
We also know that their combined age is 20, so if we add both of their ages, we should get 20.
x + (x+4) = 20
2x+4 = 20
2x=16
x=8.
So Michael's age is 8, and Tom is 12.
### Example Question #1 : Word Problems With Two Unknowns
Sarah earns $10 an hour selling calculators, and every time she sells a calculator, she earns an additional $3 commission. Jamie also sells calculators, and earns $30 an hour, but only earns an additional $1 commission for every calculator she sells.
How many calculators per hour on average would Sarah have to sell to be making as much as Jamie would per hour, if Jamie sold the same number of calculators?
Answer cannot be determined from the information given
Explanation:
First, set up the equations. Their base pay is constant per hour, so the variable part is the number of calculators sold multiplied by the commission rate, so the earning equations will be:
e = 10 + 3c
and
e = 30 + c
These are Sarah's earnings and Jamie's earnings respectively, with c representing calculators sold and e representing earnings.
Because we are trying to find the number of calculators sold when both women have equal earnings, we can set the two expressions for e equal to each other, giving the equation:
10 + 3c = 30 + c
Next, subtract 10 and c from both sides to get:
2c = 20
Which simplifies to
c = 10
### Example Question #1 : Word Problems With Two Unknowns
Jamarcus has twenty-one coins in his piggy bank, all of which are either dimes or quarters. If Jamarcus has $4.20 in total, how many of each coin does he have?
12 dimes and 12 quarters
11 dimes and 10 quarters
7 dimes and 14 quarters
10 dimes and 11 quarters
14 dimes and 7 quarters
7 dimes and 14 quarters
Explanation:
We can solve this problem by setting up an algebra equation. We know Jamarcus has twenty-one coins, but we don't know how many of each he has. That usually means we need a variable. Since we don't know how many dimes he has, let's label d as the number of dimes. If we want to find the number of quarters, we would subtract the number of dimes from 21, and the number we get would be the number of quarters. Therefore, if Jamarcus has d dimes, he must have 21 - d quarters. We can double check ourselves. If we add the number of dimes and quarters, d + (21 - d), we get 21.
Now, the only other piece of information we have is that together all 21 coins add up to $4.20. At first that might not seem too helpful, but it actually allows us to solve the problem. We know that every dime is worth 10 cents, so every dime Jamarcus has contributes 10 cents towards his $4.20 total; the total value of his dimes is just 10d. Furthermore, each quarter contributes 25 cents to his total, so the total value of all of his 21 - d quarters is 25(21 - d). The sum of these two totals should equal the grand total of $4.20, or 420 cents. We can write this as the following equation.
10d + 25(21 - d) = 420
Next we use the distributive property to simplify, multiplying the 25 by both the 21 and the d.
10d + 525 - 25d = 420
We then want to combine like terms (the d's), which gives
525 - 15d = 420
We then want all of our variables on one side and all of our constants on the other, which we can accomplish by subtracting 525 from both sides, which gives
-15d = -105
To solve for d, we now simply need to divide both sides by -15, which gives
d = 7
That means Jamarcus has 7 dimes. If we remember that he had 21 coins in all, that leaves 14 quarters. Jamarcus has 7 dimes and 14 quarters. We can double check ourselves. Seven dimes would total $0.70, and 14 quarters would total $3.50, bringing the grand total to the correct value of $4.20.
### Example Question #1 : Word Problems With Two Unknowns
The sum of two numbers is 128. The first number is 18 more than the second number. What are the two numbers?
Explanation:
We can solve this problem easily by using elementary algebra. We do not know either of the two numbers, so for the meantime let's label the second number as x. Since the first number is 18 more than the second, we can express the first number as x + 18. Since the two numbers add up to 128, we can write this fact as an equation.
x + (x + 18) = 128
We can then combine like terms (our variables), giving us
2x + 18 = 128
We then want all of our constant terms on the right side, which we can accomplish by subtracting 18 from both sides.
2x = 110
The last step to solving the equation is to divide both sides by 2.
x = 55
Therefore, our second number is 55. Since our first number is 18 more than that, it must equal 73. Double checking, we can confirm that the sum of 55 and 73 is indeed 128.
### Example Question #1 : Word Problems With Two Unknowns
Turn the word equation into symbols.
The product of three and s and the difference of 12 and 7 is 14.
Explanation:
We need to translate the English words into a mathematical statement.
Product means multiply.
Difference means subtraction.
Is means equals.
Product of 3 and s is 3s.
Difference of 12 and 7 is 12 - 7.
Therefore, the equation becomes
3s(12 - 7) = 14.
### Example Question #1 : Word Problems With Two Unknowns
There are a total of 14 coins when dimes and nickels are combined. The total amount is 80 cents. How many dimes and nickels are there, respectively?
Explanation:
Write two equations to represent the scenario. There are two equations and two unknowns.
A total of 14 coins composed of dimes, d, and nickels, n. Write the first equation: d + n = 14.
Nickels are 5 cents, and dimes are 10 cents. The total is 80 cents. Write the second equation: 10d + 5n = 80.
Multiply the first equation by 10 so that the elimination method can cancel out the dimes variable: 10d + 10n = 140.
Subtract the second equation from this new equation to eliminate d and solve for n: (10d + 10n) - (10d + 5n) = 140 - 80, which gives 5n = 60.
Divide by 5 on both sides: n = 12.
There are 12 nickels.
Substitute this into the first equation to find the number of dimes: d + 12 = 14, so d = 2.
There are 2 dimes and 12 nickels.
### Example Question #1 : Word Problems With Two Unknowns
You go to the store and buy x bags of carrots and y bananas. Each bag of carrots costs $1.50 and each banana is$0.25. You spend $6.50. The total number of items you purchase is 11. How many bags of carrots did you buy? How many bananas did you buy? Possible Answers: 3 bags of carrots, 8 bananas 4 bags of carrots, 7 bananas 6 bags of carrots, 5 bananas 5 bags of carrots, 6 bananas 8 bags of carrots, 3 bananas Correct answer: 3 bags of carrots, 8 bananas Explanation: Given the information, we have 2 equations. We know each bag of carrots is$1.50 and each banana is $0.25. We also know the total amount we spend is$6.50. So, we can write the equation
where x is the number of bags of carrots and y is the number of bananas.
We also know the total number of items we purchased is 11. We can write the equation as
where x is number of bags of carrots and y is the number of bananas.
To solve, we will solve for one variable in one equation and substitute it into the other equation. So,
Now, we can substitute the value of y into the first equation. We get,
We distribute.
We combine like terms.
We solve for by getting x alone.
Therefore, the number of bags of carrots we bought is 3. To find the number of bananas, we simply substitute into the equation.
Therefore, the number of bananas we bought is 8.
So we bought 3 bags of carrots and 8 bananas.
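These two-unknown problems can also be checked mechanically. A minimal Python sketch that solves the carrots-and-bananas system by the same substitution used above:
# 1.50x + 0.25y = 6.50 and x + y = 11, with y = 11 - x substituted in:
# 1.50x + 0.25(11 - x) = 6.50  =>  1.25x = 6.50 - 2.75
x = (6.50 - 0.25 * 11) / (1.50 - 0.25)
y = 11 - x
print(x, y)   # 3.0 bags of carrots, 8.0 bananas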
|
2021-02-24 20:57:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5549067258834839, "perplexity": 1630.6008087488842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347321.0/warc/CC-MAIN-20210224194337-20210224224337-00433.warc.gz"}
|
https://scicomp.stackexchange.com/questions/36443/what-are-the-advantages-and-disadvantages-of-using-norm-error-control-in-the-mat
|
# What are the advantages and disadvantages of using norm error control in the MATLAB ODE suit?
In MATLAB's ODE suit, there seem to be two basic methods of controlling the Local Truncation Error (LTE) of the ODE which the user can choose from, namely:
1. The absolute error control (default), |e(i)| <= max(RelTol*abs(y(i)),AbsTol(i))
2. The norm error control (setting the option NormControl to on), where norm(e) <= max(RelTol*norm(y),AbsTol)
By alternating between the two for the same problem, in the case of using ode23tb for example, the difference seems to be noticeable, as seen in the figure below. Furthermore, in the case of the norm error control the algorithm is significantly faster as well.
Thus my question is threefold:
1. The math may be apparent by its formulation, but is there an intuitive physical interpretation of the norm error control method?
2. What are the advantages and disadvantages of using the norm error control option as compared to the absolute error?
3. In which context could the one method be more accurate than the other?
• The documentation is written in a confusing way, and doesn't fully specify what is done. However, it seems that the default uses the maximum norm while "norm control" uses the $\ell_2$ or Euclidean norm. Which is preferable will depend on what the solution actually represents. Which one is more accurate would depend on how the $\ell_2$ norm is defined (i.e., whether there is some normalization with respect to the dimension of the problem). But you can adjust the accuracy anyway by changing the tolerances. – David Ketcheson Dec 6 '20 at 11:31
As far as I understand it, if you do not use the NormControl, the time step is adapted so that the maximum error on any of the solution components is below the tolerance threshold.
If, on the other hand, you use the NormControl option, then the time step is adapted so that the overall error norm is lower than the tolerance threshold. This is slightly less stringent, as the biggest solution component error is "mixed" with the other errors. Let's take a worst-case scenario: you have $$N-1$$ variables that follow an ODE of the type $$y' = a$$ with $$a$$ a constant. Runge-Kutta methods integrate this exactly, so the error on these components is zero to machine precision. Now if you had one solution component (the $$N$$-th variable) which follows a nonlinear ODE, the integration error estimate $$e_N$$ on this variable will be nonzero. Without the NormControl option, the time step will be adjusted roughly as follows: $$\Delta t \approx \left( \dfrac{atol + rtol |y_N|}{|e_N|} \right)^{\frac{1}{q+1}}$$ with $$q$$ the order of the integration error estimate, which is most likely equal to $$p-1$$, with $$p$$ the order of the method.
With the NormControl option set, it will be adjusted as: $$\Delta t \approx \left( \dfrac{atol + rtol \sqrt{\sum\limits_{i=1}^{N} |y_i|^2}}{\sqrt{\sum\limits_{i=1}^{N} |e_i|^2}} \right)^{\frac{1}{q+1}}$$
which, in our worst-case scenario, would be: $$\Delta t \approx \left( \dfrac{atol + rtol \sqrt{\sum\limits_{i=1}^{N} |y_i|^2}}{|e_N|} \right)^{\frac{1}{q+1}}$$ If we compare this to the very first equation of this answer, we see that the denominator is the same, but the numerator is larger, as $$||y|| \geq |y_N|$$. So the integration error will be perceived as lower with NormControl, hence the time step will be larger and the simulation quicker. But this also means that the solution quality is slightly worse.
So in your figure, I think the solution without NormControl is the better one in terms of solution exactitude. You could check that by lowering the tolerance down to very fine levels (say 1e-12) to see what the "exact" solution is. Also, you can compare how the time step evolves between the two cases to get more insight on this behaviour.
I am not quite sure, but I think error control based on the norm of the error may be gentler with the time step variations, whereas control based on the maximum error component may show bigger "jumps" in the error estimate as the solution evolves.
Also, I usually use the norm $$||y|| = ||y||_{Matlab} / \sqrt{N}$$, i.e. a root mean square error. This way, for discretized PDEs, the error does not "fictitiously" grow as the number of mesh points increases. However, single large error components may not be represented well, as they will be averaged with the lower components.
The norm in Matlab does not seem to include the factor $$1/\sqrt{N}$$. Therefore, if your system is sufficiently large, the NormControl option will actually be more stringent on the time step. Indeed, say we duplicate a scalar ODE $$N$$ times; then the error norm in the denominator of the above time step formula will be: $$\sqrt{\sum\limits_{i=1}^{N} |e_i|^2} = \sqrt{N |e_1|^2} = \sqrt{N} |e_1|$$ which is higher than $$\max_i(|e_i|) = |e_1|$$. Thus I am guessing that your test case had a reasonably low number of variables $$N$$.
In any case, the choice of using or not using this NormControl option should not wildly affect your result; otherwise your integration tolerances are too large.
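To make the two criteria concrete, here is a small Python sketch of the step-size factors discussed above (a toy model of the standard $$(tol/err)^{1/(q+1)}$$ controller, not MATLAB's actual implementation; all values are made up):

```python
import numpy as np

def factor_componentwise(e, y, rtol=1e-3, atol=1e-6, q=2):
    # Component-wise control: the worst component dictates the step size.
    ratios = np.abs(e) / np.maximum(rtol * np.abs(y), atol)
    return (1.0 / ratios.max()) ** (1.0 / (q + 1))

def factor_norm(e, y, rtol=1e-3, atol=1e-6, q=2):
    # Norm control: all errors are mixed into a single Euclidean norm.
    ratio = np.linalg.norm(e) / max(rtol * np.linalg.norm(y), atol)
    return (1.0 / ratio) ** (1.0 / (q + 1))

# Worst case from above: N-1 exactly integrated components, one bad one.
N = 50
e = np.zeros(N); e[-1] = 1e-5   # only the N-th component has an error
y = np.ones(N)                  # solution components of comparable size

print(factor_componentwise(e, y))  # smaller factor: driven by |e_N| alone
print(factor_norm(e, y))           # larger factor: |e_N| diluted by ||y||
```

With these numbers the norm-based factor comes out roughly twice the component-wise one, matching the observation that NormControl takes larger steps and runs faster.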
• It should be $\frac1{q+1}$ in the exponents for the next step size, where $q,q+1$ are the orders of the embedded method and the method step is the one of the order $q+1$ method. – Lutz Lehmann Dec 6 '20 at 22:14
|
2021-03-06 15:06:42
|
https://seunghochoe.netlify.app/publication/journal-article/1996-03-01-prc/
|
# $g_{KN\Lambda}$ and $g_{KN\Sigma}$ from QCD sum rules
### Abstract
$g_{KN\Lambda}$ and $g_{KN\Sigma}$ are calculated using a QCD sum rule motivated method used by Reinders, Rubinstein, and Yazaki to extract hadron couplings to Goldstone bosons. The SU(3) symmetry breaking effects are taken into account by including the contributions from the strange quark mass and assuming different values for the strange and the up-down quark condensates. We find $g_{KN\Lambda}/\sqrt{4\pi} = -1.96$ and $g_{KN\Sigma}/\sqrt{4\pi} = 0.33$.
Publication
Phys. Rev. C
|
2022-05-28 13:30:32
|
http://ucsdquals.wikidot.com/fall-2007-10-misc
|
Fall 2007 10 Misc
An ideal gas has density $n$, molecular mass $m$, initial temperature $T_0$, and collisional cross-section $\sigma$. At time $t = 0$ a small amount of heat $Q$ is released in a neighbourhood of a point inside the gas. Determine how the temperature difference $T - T_0$ at that point decays at large time $t$. No detailed calculations are necessary; rather, use estimates and dimensional analysis to derive the scaling trend.
***
Small heat disturbances spread by thermal conductivity. At large time the size of the heated region is $R(t) \sim \sqrt{D_T t}$, where $D_T$ is the thermal diffusion coefficient. In an ideal gas $D_T$ is of the same order as the regular diffusion coefficient $D = v_T l/3$, where $v_T$ is the thermal velocity and $l = 1/(\sigma n)$ is the mean-free path. The conservation of energy requires
$Q \sim c\, n\, R^3(t)\, (T - T_0)$, where $c \sim k$ is the specific heat per molecule. Therefore, $T - T_0 \sim \dfrac{Q}{c\, n\, (D_T t)^{3/2}} \propto t^{-3/2}$.
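As a quick numerical illustration (a sketch with arbitrary values, not part of the original solution), the resulting scaling can be evaluated directly:

```python
# Check the scaling T - T0 ~ Q / (c * n * (D_T * t)**1.5) with all
# constants set to 1 in some consistent unit system (arbitrary choice).
Q, c, n, D_T = 1.0, 1.0, 1.0, 1.0

def delta_T(t):
    return Q / (c * n * (D_T * t) ** 1.5)

for t in (1.0, 10.0, 100.0):
    print(t, delta_T(t))  # each decade in t shrinks delta_T by 10**1.5 ~ 31.6
```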
|
2020-08-09 09:33:35
|
https://math.stackexchange.com/questions/4353764/write-x-in-terms-of-y-for-xx-y
|
Write x in terms of y for $x^x = y$
Is there a closed form for $$x^x = y$$ (for positive real numbers)?
I'm not sure if this has been answered already; Google search doesn't do well with symbols, so it's hard for me to search for this.
• Hint: Use Lambert W. Jan 11 at 2:53
• Jan 11 at 3:02
• yep the duplicate is what I was looking for Jan 11 at 3:03
• math.stackexchange.com/questions/1035539/… (this is about power tower of x) Jan 11 at 3:13
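For reference, the Lambert W hint works out as follows: $x^x = y$ gives $x \ln x = \ln y$, i.e. $\ln x \, e^{\ln x} = \ln y$, so $\ln x = W(\ln y)$ and $x = e^{W(\ln y)} = \ln y / W(\ln y)$. A quick numerical check with SciPy's `lambertw` (a sketch; the principal branch is real for $y \ge 1$):

```python
import numpy as np
from scipy.special import lambertw

def solve_xx(y):
    """Solve x**x = y for x > 0 via ln(x) = W(ln(y))."""
    return float(np.exp(lambertw(np.log(y)).real))

x = solve_xx(27.0)
print(x, x ** x)  # x ~ 3.0, and x**x reproduces 27.0
```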
|
2022-08-17 04:47:54
|
http://physics.stackexchange.com/questions/26875/superhiggs-mechanism-on-different-backgrounds-compactifications?answertab=oldest
|
# SuperHiggs Mechanism on different Backgrounds & Compactifications
I've been studying Bagger & Giannakis paper on the SuperHiggs Mechanism found here.
The paper shows how SUSY is broken by a $B_{\mu\nu}$ gauge field background restricted to $T^3$ in the $M^7\times T^3$ compactification of the heterotic string.
My question is: have these results been generalised to different backgrounds and/or compactifications (perhaps even Calabi-Yau 3-folds)?
So far my search has turned up no results; any pointers in the right direction are greatly appreciated. Thanks!
-
Please don't link to the PDF (or at least not exclusively): the abstract would be much more helpful. Thanks. In this case, the abstract to the paper is arxiv.org/abs/hep-th/0502107v1 – José Figueroa-O'Farrill Sep 15 '11 at 10:51
Ahhh, thanks! Just changed it – Michael Sep 15 '11 at 10:54
While we are at the topic, please link to arxiv.org/abs/hep-th/0502107 without version number (v1). That way we automatically get the latest version. – Qmechanic Dec 10 '12 at 14:06
## 1 Answer
After more searching I have come to the (perhaps premature) conclusion that these results have not yet been generalised.
Ioannis Giannakis seems to be one of the leading researchers on this subject. His publication list, however, does not indicate any further research into the SuperHiggs mechanism.
Related Articles are:
(I will keep this question open for now, in case I missed something or someone else finds something.)
-
It seems a very interesting question. I was hoping someone would be able to give you a definitive answer. – Joe Fitzsimons Sep 16 '11 at 9:00
As I'm currently searching for a nice area to do research in, I was more or less hoping for this sort of outcome :) – Michael Sep 16 '11 at 10:53
|
2014-08-29 16:07:13
|
https://zulfahmed.wordpress.com/2014/02/28/electromagnetism-is-not-quite-a-u1-gauge-theory/
|
For weak and strong forces, the picture of the potential as a connection on a principal bundle whose curvature is the field strength makes a great deal of sense, but the 'original' gauge theory for electromagnetism is probably not right. There is something more subtle going on with electromagnetism: the 'circles' of the circle bundle lie in the $S^4(1/h)$ universe and intersect each other depending on the extrinsic geometry of the physical universe. In particular we are not quite dealing with a principal $U(1)$-bundle. The intersections of these normal circles presumably have an effect on all aspects of electromagnetism. We don't really have a local $U(1)$-gauge invariance.
|
2017-10-23 22:41:25
|
https://math.stackexchange.com/questions/1543151/prove-that-the-reals-are-isomorphic-to-the-multiplicative-group-if-reals-with-th
|
# Prove that the reals are isomorphic to the multiplicative group of reals with this group operation
If $G = \mathbb R \backslash \{-1\}$ prove $G$ is a group and that it is isomorphic to the multiplicative group of the non zero real numbers.
The group operation of $G$ is $a*b = a + b + ab$
I proved that $G$ is a group: its identity element is zero, and every element has an inverse, namely $\frac {-a}{1+a}$. However, to prove the isomorphism, I know that $\phi (0) = 1$ because identities map to identities, and $\phi (\frac {-a}{1+a}) = \frac 1a$.
Yet, setting $a=1$ gives $\phi(0) = \phi (-1/2)=1$, proving this is not a bijection and therefore not an isomorphism. What am I doing wrong?
• Your map is contradictory: you say $\phi(0)=1$, but $\phi(0)=\phi(\frac{-0}{1+0})=\frac{1}{0}\neq 1$. – Joffysloffy Nov 23 '15 at 18:50
• So what map would be correct? Is that not the inverse? – Guacho Perez Nov 23 '15 at 18:51
• I do not know yet; I only noticed this inconsistency. – Joffysloffy Nov 23 '15 at 18:52
Take the map $\phi:a\mapsto a+1$. Then
\begin{aligned} \phi(a*b)&=\phi(a+b+ab)\\ &=a+b+ab+1\\ &=(a+1)(b+1)\\ &=\phi(a)\phi(b). \end{aligned} So $\phi$ is a homomorphism.
It's injective, because if $a\in\ker\phi$, then $\phi(a)=1$, hence $a+1=1$, which means that $a=0$.
It is surjective: let $b\in\mathbb{R}\setminus\{0\}$, then $b-1\in G$ and $\phi(b-1)=b-1+1=b$.
I basically took your idea of ensuring inverses are sent to inverses and simply tried $\phi(\frac{-a}{1+a})=\frac{1}{a+1}$ to avoid division by zero. This resulted in the map $a\mapsto a+1$.
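As a quick sanity check (a Python sketch, not part of the original answer), the homomorphism property can be verified on random samples:

```python
import random

def star(a, b):
    # The group operation on G = R \ {-1}
    return a + b + a * b

phi = lambda a: a + 1  # candidate isomorphism G -> (R \ {0}, *)

for _ in range(1000):
    a = random.uniform(-0.9, 10.0)
    b = random.uniform(-0.9, 10.0)
    assert abs(phi(star(a, b)) - phi(a) * phi(b)) < 1e-9
print("phi(a*b) == phi(a)*phi(b) on all samples")
```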
• Nice! But isn't the inverse of $a$ in the multiplicative group of non zero reals equal to $1/a$? – Guacho Perez Nov 23 '15 at 19:15
• Thank you :). It is, but you don't have to send the inverse of one element in $G$ to the element denoted by the same value in $\mathbb{R}\setminus\{0\}$. ‘Sending inverses to inverses’ means that if $a$ is sent to $b$, then $a^{-1}$ is sent to $b^{-1}$, and that is what happens. – Joffysloffy Nov 23 '15 at 19:18
• You're welcome :)! Also, just realized that you were actually just using the identity (take the inverse on both sides, you get $\phi(a)=a$). – Joffysloffy Nov 23 '15 at 19:24
|
2019-07-20 21:51:38
|
https://blog.pollithy.com/vision/epipolar-geometry
|
# Estimate depth information from two images
by Daniel Pollithy
For human babies we know that:
Depth perception, which is the ability to judge if objects are nearer or farther away than other objects, is not present at birth. It is not until around the fifth month that the eyes are capable of working together to form a three-dimensional view of the world and begin to see in depth.
There are many techniques involved in depth perception, for example parallax or occlusion. We can divide them roughly into the categories of monocular and binocular vision capabilities.
This blog post explains depth estimation with binocular vision systems, camera models, and epipolar geometry.
## Basic intuition
Two eyes/cameras at different locations look at the same object.
How can we determine the distance of the person to the object (“the depth”)?
The simplest answer: We compare the position of an object in the left image to the same object in the right image.
How do we find the object from the left image in the right one?
This is called the correspondence problem. Generally speaking we try to identify areas of the left image which can be found easily in the right image. Obviously that depends on the image. Large homogeneous areas, like the sea, are a problem for this.
But searching the whole image is time intensive, right?
Correct! Epipolar geometry comes in handy here. Given a feature in the left image, we can restrict the search space to a single line in the right image.
If we have found the pixel positions of the same object in the left and the right image, how do we calculate depth?
Knowing the position and orientation of both cameras/eyes does the trick. In the simplest case: we calculate the distance between these pixel features and scale this “disparity” to the real world.
You can experiment with this by closing one eye, focusing on an object near you, and then switching the closed eye to the other side. Repeat this with an object in the distance. The distant object moves far less than the one close to you!
## Camera model
We need to know the relationship between a position on the retina and a position in the real world. When we are talking about cameras instead of eyes, the "camera model" provides this relationship between the image plane and the real world. Usually we assume a simple pinhole camera model without a lens.
A point P on the object plane has camera coordinates $(x, y, z)$; in automotive vision these are called $(\xi, \eta, \zeta)$. The unit of these coordinates is usually something like millimeters. It does not matter as long as you stay consistent.
The origin of the camera coordinate system is in the focal point of the camera.
In case of the pinhole camera model the focal point is the pinhole therefore the camera coordinate system is originated at the pinhole.
The pinhole camera has an important internal parameter: the focal length, which is the distance from the focal point to the image plane. (The effect of the focal length changes once we use a lens.)
A point (a pixel, in a digital image) on the image plane has the coordinates $(u, v)$. The usually-called y-axis points downwards for historical reasons, from when images were built incrementally row by row on old television screens.
In order to make the calculations a little bit easier it is sometimes preferred to calculate with a camera in positive lay, because the image is not flipped upside down.
After settling the basics we can start to look at the situation. The only thing we have to know about are triangles.
The following image contains a point P. This might be an object we observe. Then there is the focal point F, which is now behind the image plane because we are using a camera in positive lay. The shortest line from the focal point through the center of the image plane to the center of the object plane is called the principal or optical axis. It stands orthogonal on both of these planes. The intersection of the image plane and the optical axis is called the principal point.
The orthogonality of the optical axis on both planes is important because from that follows that the two triangles are similar.
Figuratively speaking: You can take the large blue triangle “PQF” and scale it down to the red triangle without skewing it.
We see:
$\frac{z}{x} = \frac{f}{u}$
And the same for the y-dimension which is not visible in the diagram:
$\frac{z}{y} = \frac{f}{v}$
This is the basis for one typical type of math exercise we had at school:
You are standing in front of a tree. Behind the tree you can see the tip of the Eiffel tower. The tree is 10 m tall and the Eiffel tower has a height of 300 m. Your feet are 8.5 m away from the root of the tree. How far away from the Eiffel tower are you?
Let us write the relationships from above in a single vector equation:
$\frac{z}{x} = \frac{f}{u} \Leftrightarrow \frac{x}{z} = \frac{u}{f} \Leftrightarrow \frac{x \cdot f}{z} = u$ $\frac{z}{y} = \frac{f}{v} \Leftrightarrow \frac{y}{z} = \frac{v}{f} \Leftrightarrow \frac{y \cdot f}{z} = v$ $\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \frac{x \cdot f}{z} \\ \frac{y \cdot f}{z} \end{pmatrix} = \frac{f}{z} \begin{pmatrix} x \\ y \end{pmatrix}$
Let's try this out on the Eiffel tower example. The focal point would be the person. The image plane is the tree. So the focal distance is 8.5 meters. We know the tree is 10 m tall, which is the value for v. And y = 300 m.
v = y*f / z <=> 10m = 300m * 8.5m / z <=> z = 300m * 8.5m / 10m <=> z = 255m
This sounds reasonable ;)
But actually the camera coordinates are 3d not 2d. Let’s extend this.
$\begin{pmatrix} u \\ v \\ f \end{pmatrix} = \begin{pmatrix} \frac{x \cdot f}{z} \\ \frac{y \cdot f}{z} \\ \frac{f \cdot z}{z} \end{pmatrix} = \frac{f}{z} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
This is not really what we want, because the image coordinates should not store the focal length. And we cannot achieve it with a single scalar factor, because we want to treat the x and y values differently from the z value.
We are looking for a transformation (a matrix):
$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ a_7 & a_8 & a_9 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$ $\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{f}{z} & 0 & 0 \\ 0 & \frac{f}{z} & 0 \\ 0 & 0 & \frac{1}{z} \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
### Intrinsic camera calibration
If we multiply both sides of the equation above by the unknown z, we can remove the z from the matrix so that it only contains internal configuration parameters of the camera (namely f). This is the basic intrinsic camera calibration K, sometimes referred to simply as the intrinsics.
$z \cdot \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}$
with: $K = \begin{pmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{pmatrix}$
Inverse: $K^{-1} = \begin{pmatrix} 1/f & 0 & 0 \\ 0 & 1/f & 0 \\ 0 & 0 & 1 \end{pmatrix}$
There exists an extension to the basic camera model which allows moving the x- and y-coordinates of the principal point ($c_x$ and $c_y$). And the focal length can be split into a horizontal ($f_x$) and a vertical ($f_y$) focal length.
Extended intrinsic camera calibration:
$K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$ $K^{-1} = \begin{pmatrix} 1/f_x & 0 & -c_x/f_x \\ 0 & 1/f_y & -c_y/f_y \\ 0 & 0 & 1 \end{pmatrix}$
Intrinsics of a real camera:
$K = \begin{pmatrix} f_x \cdot m_x & \gamma & c_x \\ 0 & f_y \cdot m_y & c_y \\ 0 & 0 & 1 \end{pmatrix}$
$\gamma$ describes the skew coefficient between the x and y axis; it is zero most of the time. $m_x$ and $m_y$ are scaling factors which relate pixels to a distance measure (like millimeters).
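A minimal numerical sketch (in Python, with made-up intrinsics) of the relation $z \cdot (u, v, 1)^T = K (x, y, z)^T$ and its inversion:

```python
import numpy as np

# Assumed example intrinsics: focal lengths and principal point in pixels.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

p = np.array([0.5, -0.2, 4.0])  # point in camera coordinates (x, y, z)

uvw = K @ p                      # homogeneous image point, scaled by z
u, v = uvw[:2] / uvw[2]          # divide by z to get pixel coordinates
print(u, v)                      # 420.0 200.0

ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-projected ray (up to scale)
print(ray * p[2])                # multiplying by the true depth recovers p
```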
### Extrinsic camera calibration
The “extrinsics” describe the rotation and translation from the camera coordinate system to the world coordinate system.
The translation t is the difference of the origins (a column vector with three entries). The rotation matrix R is an SO(3) matrix. Most of the time they are combined into a homogeneous transformation matrix:
The position and orientation of the camera in the world coordinate system are captured in the following matrix:
$C = \begin{pmatrix} R \in SO(3) & t \\ 0 ~ 0 ~ 0 & 1 \end{pmatrix}$
### Projection matrix
Using the intrinsics and extrinsics we describe the relationship of a pixel on the image plane and a 3d point in the world coordinate system:
$C^{-1} K^{-1} \begin{pmatrix} u \cdot z \\ v \cdot z \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}$
(note that the matrices and vectors have to be extended to be homogeneous)
The same relationship inverted:
$K C \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} u \cdot z \\ v \cdot z \\ z \\ 1 \end{pmatrix}$
The common appearance of C next to K is captured in the Projection matrix P:
$P = K C = K [R|t]$
Inverse: $P^{-1} = (K C)^{-1} = C^{-1} K^{-1}$
Note: We can determine C and K a priori for a camera system. The only thing that is missing in order to calculate the 3d coordinate in the world is the depth z.
As long as we don’t have the exact z, the mapping is from pixels in the image plane to rays in the world coordinates.
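A short sketch (assumed intrinsics and extrinsics) of the projection matrix $P = K[R|t]$ in action; note that the scale factor divided away on the way to pixels is exactly the missing depth z:

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

a = np.deg2rad(10.0)             # assumed rotation about the y-axis
R = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
t = np.array([0.1, 0.0, 0.5])    # assumed translation

P = K @ np.hstack([R, t[:, None]])   # 3x4 projection matrix

X = np.array([0.3, -0.1, 3.0, 1.0])  # homogeneous world point
uvw = P @ X
u, v = uvw[:2] / uvw[2]              # pixel coordinates
print(u, v, uvw[2])                  # uvw[2] is the depth z in camera coords
```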
## Epipolar geometry
To solve the depth problem (how to estimate the z-value) we introduce a second camera to our setup:
Both cameras observe the same point P. As mentioned in the camera models above, they capture the ray from P to their focal points F in the pixel at the coordinate q. A coordinate in the left camera coordinate system is denoted with superscript L, for example $F^L$, and a coordinate on the left image plane with subscript l, for example $q_{l}^{L}$.
To make the calculations easier we assume the distance from the image planes to their focal points to be 1.
We can draw a vector from $F^L$ to $F^R$ which we call the baseline. It captures the positional movement between the two camera centers.
The intersection points of the baseline with the image planes are called epipolar points (epipoles). The two focal points and the point P span a plane (yellow triangle) which we call the epipolar plane.
The intersection of this plane with the image planes are called epipolar lines. All 3d points which lie on the triangle are projected onto these lines.
### Central relationship
The baseline, the left arm ($\vec{p_l^L}$) and the right arm ($\vec{p_r^R}$) of the triangle lie on the same plane. From that it follows that a vector which stands orthogonal on the left arm and the baseline (fat red arrow) also stands orthogonal on the right arm!
$\vec{p}_r^L \cdot (\vec{b} \times \vec{p}_l^L)= 0$
Note that this equality only holds in one of the camera coordinate systems (either one is good) but not if the vectors are represented in different ones.
$\vec{p}_r^L$ is the right arm expressed in the left coordinate system. We want to transform this to the right coordinate system.
### Transform right arm from left to right coordinate system
This is how x in the right coordinate system can be described in the left camera coordinate system: $\vec{x}^L = D \vec{x}^R + \vec{b}^L$
$\vec{p_r^L}$ can be expressed as the connection between two points:
$\vec{p_r^L} = \vec{(F_rP)}^L = -\vec{(PF_r)}^L \\$
Rewrite the right arm as the difference between the left arm and the baseline.
$\vec{(F_lF_r)}^L = \vec{(F_lP)}^L + \vec{(PF_r)}^L \Leftrightarrow \\ \vec{(PF_r)}^L = \vec{(F_lF_r)}^L - \vec{(F_lP)}^L$
Plug into the definition of $\vec{p_r^L} = \vec{(F_rP)}^L = -\vec{(PF_r)}^L$:
$-\vec{(PF_r)}^L = -( \vec{(F_lF_r)}^L - \vec{(F_lP)}^L) = \\ -\vec{(F_lF_r)}^L + \vec{(F_lP)}^L = \\ \vec{(F_lP)}^L - \vec{(F_lF_r)}^L$
Long story short: The right side is the difference between the left side and the baseline.
$\vec{p_r^L} = \vec{(F_lP)}^L - \vec{(F_lF_r)}^L$
Now we transform this to the right coordinate system with: $\vec{x}^L = D \vec{x}^R + \vec{b}^L$
$\vec{p_r^L} = \vec{(F_lP)}^L - \vec{(F_lF_r)}^L = \\ \left(D \vec{(F_lP)}^R + \vec{b}^L\right) - \left(D \vec{(F_lF_r)}^R + \vec{b}^L\right) = \\ D \vec{(F_lP)}^R - D \vec{(F_lF_r)}^R = \\ D (\vec{(F_lP)}^R - \vec{(F_lF_r)}^R) \\$
In the last line we have $\vec{(F_lP)}^R - \vec{(F_lF_r)}^R$ now both in the right coordinate system. Which means that this is just $\vec{p}_r^R$!
$D (\vec{(F_lP)}^R - \vec{(F_lF_r)}^R) = \\ D \vec{p}_r^R$
Note that we got rid of the baseline.
Apply to central relationship:
$D \vec{p}_r^R \cdot (\vec{b}^L \times \vec{p}_l^L)= 0$
### Substitute world point by image points
As we have seen in the camera model, a given pixel on the image plane captures a whole ray that goes through it to the focal point.
Mathematically this means that the ray has one dimension more than the pixel. This additional dimension is $z$.
Therefore we have gotten the following relationship between a point in the camera coordinate system and on the image plane (focal length = 1):
$q_l = \frac{f}{z_l^L} \vec{p}_l^L = \frac{1}{z_l^L} \vec{p}_l^L \Leftrightarrow \\ \vec{p}_l^L = q_l \cdot z_l^L$
and for the right side of the triangle:
$q_r = \frac{1}{z_r^R} \vec{p}_r^R \Leftrightarrow \\ \vec{p}_r^R = q_r \cdot z_r^R$
Essentially we are using the simplest intrinsic camera matrix in order to project the point P to the image plane in the equation:
$D \vec{p}_r^R \cdot (\vec{b}^L \times \vec{p}_l^L) = 0 \Leftrightarrow \\ D (q_r \cdot z_r^R) \cdot (\vec{b}^L \times (q_l \cdot z_l^L)) = 0$
Because each z is just a positive number (in both coordinate systems), we can factor both out and divide them away from the equation.
$(q_r)^T D^T \cdot (\vec{b}^L \times q_l) = 0$
### Cross product as matrix
We can write the cross product with a given vector $\vec{b}^L$ as a matrix multiplication:
$\vec{b}^L \times q_l = \begin{pmatrix} 0 & -b_z^L & b_y^L \\ b_z^L & 0 & -b_x^L \\ -b_y^L & b_x^L & 0 \end{pmatrix} q_l = [b^L] _ {\times} q_l$
Insert this into the central relationship:
$(q_r)^T D^T \cdot [b^L] _ {\times} q_l = 0$
### Essential matrix
The product of the rotation matrix D and the new matrix $[b^L] _ {\times}$ is called the essential matrix: $E = D^T [b^L] _ {\times}$
Note: E only contains the geometric parameters of the binocular camera system: the baseline and the rotation matrix D.
Insert this into the central relationship:
$(q_r)^T E q_l = 0$
### Epipolar lines
And now we can clearly see that E is constant and only $q_l$ and $q_r$ are variables. Suppose $q_l$ was known, then the central relationship becomes a function of $q_r$: The epipolar line of the right image plane.
The same works in the other direction.
Note: If we pick a point in one image we only have to search on the epipolar line of the other image to find the corresponding point if it is visible.
Epipoles: As initially stated the epipolar line can be drawn through q and the epipole. This can be shown with the equation above.
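The central relationship is easy to verify numerically. The following sketch (with an assumed rotation D mapping right-camera coordinates to left-camera coordinates and an assumed baseline b) builds E and checks $(q_r)^T E q_l = 0$ for a consistent point:

```python
import numpy as np

def cross_matrix(b):
    # [b]_x such that cross_matrix(b) @ q equals np.cross(b, q)
    return np.array([[  0.0, -b[2],  b[1]],
                     [ b[2],   0.0, -b[0]],
                     [-b[1],  b[0],   0.0]])

a = np.deg2rad(5.0)                 # assumed relative rotation
D = np.array([[ np.cos(a), 0.0, np.sin(a)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(a), 0.0, np.cos(a)]])
b = np.array([0.2, 0.0, 0.0])       # assumed baseline in left coordinates

P_L = np.array([0.4, 0.3, 5.0])     # observed point in left camera coordinates
p_r_R = D.T @ (P_L - b)             # the right arm expressed in right coordinates

q_l = P_L / P_L[2]                  # image points for focal length 1
q_r = p_r_R / p_r_R[2]

E = D.T @ cross_matrix(b)           # essential matrix as derived above
print(q_r @ E @ q_l)                # ~0 up to floating-point error
```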
### Fundamental matrix
Instead of calculating with camera coordinates q, we want to calculate with image coordinates (u,v). Let's modify the essential matrix to achieve this. Every camera gets a matrix of intrinsic parameters, $A_l$ and $A_r$. Remember that so far the focal length was 1 and we had no scaling factors in use.
Fundamental matrix F:
$F = A_r^{-T} E A_l^{-1}$
Central relationship with Fundamental matrix F:
$(u_r, v_r, 1) F \begin{pmatrix} u_l \\ v_l \\ 1 \end{pmatrix} = 0$
There are two ways to obtain the fundamental matrix:
• Calculate it from the geometric relations and intrinsic of the cameras
• Or use the “8 point algorithm” to estimate it
## Simplified setup
After a process called rectification (see below) we get to a simpler setup. Alternatively, we can try to set up our cameras like this from the start (for example in cars).
There are two equal triangles (red) which capture the relationship between the same point in the left and in the right camera image. The lengths of the yellow sides of the triangles are proportional to each other. The right triangle only has a yellow dot because the line has a length of zero in this image.
This is the relationship we humans "feel": the closer the object, the larger the disparity on the image plane.
This relationship can also be found in the baseline of the small triangle in the following image. Note the equal triangles!
General possibility for depth estimation: calculate the disparity, which is the baseline of the smaller triangle: $d = x_l - (x_r - b)$. The $(x_r - b)$ means moving the red dot from the right to the left (see next image).
The ratio between the left and the right part of this smaller baseline is the same as for the bigger baseline: $\frac{b_l}{b} = \frac{x_l}{d}$
That’s how we get: $b_l = b \cdot \frac{x_l}{d}$
Because the triangles have the same ratios… $\frac{f}{x_l} = \frac{z}{b_l}$
… we can calculate the depth $z = \frac{b_l \cdot f}{x_l}$
Correct derivation:
Find the intersection of the left and right arm. With the correct scaling factor $\lambda_l$ the image point $q_l$ becomes P:
$$\begin{split} \begin{bmatrix} p_x^L \\ p_y^L \\ p_z^L \end{bmatrix} &= \lambda_l \cdot \begin{bmatrix} x_l^L \\ y_l^L \\ 1 \end{bmatrix} \\ \end{split}$$
We express the right arm of the triangle also in the left coordinate system:
$$\begin{split} \begin{bmatrix} x_r^L \\ y_r^L \\ z_r^L \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + \overrightarrow{F_r^Rq_r^R} \\ \begin{bmatrix} x_r^L \\ y_r^L \\ z_r^L \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + (q_r^R - F_r^R) \\ \begin{bmatrix} x_r^L \\ y_r^L \\ z_r^L \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + ( \begin{bmatrix} x_r^R \\ y_r^R \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} ) \\ \begin{bmatrix} x_r^L \\ y_r^L \\ z_r^L \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + \begin{bmatrix} x_r^R \\ y_r^R \\ 1 \end{bmatrix} \\ \begin{bmatrix} x_r^L \\ y_r^L \\ z_r^L \end{bmatrix} - \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} &= \begin{bmatrix} x_r^R \\ y_r^R \\ 1 \end{bmatrix} \\ \begin{bmatrix} x_r^L - b \\ y_r^L \\ z_r^L \end{bmatrix} &= \begin{bmatrix} x_r^R \\ y_r^R \\ 1 \end{bmatrix} \\ \end{split}$$
With that we can find the same point P with the right arm of the triangle
$$\begin{split} \begin{bmatrix} p_x^L \\ p_y^L \\ p_z^L \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + \lambda_r \cdot \begin{bmatrix} x_r^L - b\\ y_r^L \\ 1 \end{bmatrix} \\ \end{split}$$
Now we know that both lines should intersect at P. So we solve for this point by setting the two line equations equal:
$$\begin{split} \lambda_l \cdot \begin{bmatrix} x_l^L\\ y_l^L \\ 1 \end{bmatrix} &= \begin{bmatrix} b \\ 0 \\ 0 \end{bmatrix} + \lambda_r \cdot \begin{bmatrix} x_r^L - b\\ y_r^L \\ 1 \end{bmatrix} \\ \end{split}$$
This yields a linear equation system:
1. $\lambda_l x_l^L = b+ \lambda_r \cdot (x_r^L - b)$
2. $\lambda_l y_l^L = \lambda_r \cdot y_r^L$
3. $\lambda_l = \lambda_r$
Replacing the right lambda in (2) with the result from equation (3):
$\lambda_l y_l^L = \lambda_l \cdot y_r^L \Rightarrow \\ y_l^L =y_r^L$
This is not surprising due to the results from epipolar geometry.
Now replacing the right lambda in (1) with the result from equation (3):
$$\begin{split} \lambda_l x_l^L &= b+ \lambda_l \cdot (x_r^L - b) \\ x_l^L &= \frac{b}{\lambda_l}+ x_r^L - b \\ x_l^L - x_r^L + b &= \frac{b}{\lambda_l} \\ \frac{x_l^L - x_r^L + b}{b} &= \frac{1}{\lambda_l} \\ \frac{b}{x_l^L - x_r^L + b} &= \lambda_l \\ \end{split}$$
Next: We use the replacement from before, that $x_r^L - b = x_r^R$:
$$\begin{split} \lambda_l &= \frac{b}{x_l^L - x_r^R} \\ \end{split}$$
Last step: transform to image coordinates using $\alpha'$, the effective focal length, which maps camera coordinates to image plane coordinates (pixels):
• $u_l = \alpha' \cdot x_l^L + u_0$
• $u_r = \alpha' \cdot x_r^R + u_0$
$u_0$ is the position of the principal point.
$$\begin{split} \lambda_l &= \frac{b \alpha'}{u_l - u_r} \\ \end{split}$$
Result:
$$\begin{split} \begin{bmatrix} p_x^L \\ p_y^L \\ p_z^L \end{bmatrix} &= \frac{b \alpha'}{u_l - u_r} \cdot \begin{bmatrix} x_l^L \\ y_l^L \\ 1 \end{bmatrix} \end{split}$$
With disparity $d = u_l - u_r$.
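A sketch of this final triangulation formula with assumed values for the baseline, the effective focal length, and a matched pixel pair:

```python
import numpy as np

b = 0.12            # baseline in meters (assumed)
alpha = 800.0       # effective focal length alpha' in pixels (assumed)

u_l, u_r = 421.5, 389.3    # matched pixel columns in the left/right image
x_l, y_l = 0.126, -0.05    # left image-plane coordinates of the match

d = u_l - u_r                        # disparity
lam = b * alpha / d                  # lambda_l = b * alpha' / (u_l - u_r)
p = lam * np.array([x_l, y_l, 1.0])  # 3D point in left camera coordinates
print(d, p)                          # the depth is p[2] = lam
```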
## Image rectification
In reality we can't manufacture such an ideal camera setup. Therefore we change the images of our setup by a process called image rectification, in order to align them to a virtual camera setup whose cameras have the same focal length, are aligned perfectly, and whose focal points match the original ones.
Procedure:
1. Find common camera intrinsics A’ (for example by taking the average of the original intrinsics $A_l$ and $A_r$ of the left and right camera)
2. Find the common coordinate systems for both cameras, so that the x-axes lie parallel to the baseline
3. Find the rotation matrices $D_l$ and $D_r$ which rotate the camera coordinate systems to their common coordinate systems.
4. Transform the original camera images to rectified images which could have been captured from the virtual cameras
In the pinhole camera model we know the relationship between the intrinsics, the point p and the pixels.
$$\begin{split} A_l \cdot \vec{p} &= p_z \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}\\ \vec{p} &= A_l^{-1} \cdot p_z \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}\\ \end{split}$$
We know that the relationship stays but the pixels change after rectification:
$$\begin{split} A' \cdot \vec{p}' &= p_z' \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}\\ \end{split}$$
p’ is $\vec{p}$ but rotated using $D_l$ into the new camera coordinate system:
$$\begin{split} A' \cdot D_l \vec{p} &= p_z' \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}\\ \end{split}$$
Now plug in the original definition of point p:
$$\begin{split} A' \cdot D_l \cdot A_l^{-1} \cdot p_z \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} &= p_z' \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}\\ \underbrace{A' \cdot D_l \cdot A_l^{-1}}_\textrm{A priori knowledge} \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} &= \frac{p_z'}{p_z} \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}\\ \end{split}$$
Having figured out all of the a priori knowledge we can calculate the left hand side. Imagine the result was $(a, b, c)^T$:
$$\begin{split} \begin{pmatrix} a \\ b \\ c \end{pmatrix} &= \frac{p_z'}{p_z} \begin{pmatrix} u' \\ v' \\ 1 \end{pmatrix}\\ \end{split}$$
Then we can solve for $u'$ and $v'$ by dividing the relevant entry ($a$ or $b$) by $c$, because $c$ is only the missing factor $\frac{p_z'}{p_z}$:
$u' = a/c$ and $v' = b/c$.
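A sketch of the per-pixel rectification mapping with assumed matrices; the product $A' D_l A_l^{-1}$ is the "a priori knowledge" homography from the derivation above:

```python
import numpy as np

A_l = np.array([[810.0,   0.0, 322.0],    # assumed original left intrinsics
                [  0.0, 790.0, 238.0],
                [  0.0,   0.0,   1.0]])
A_new = np.array([[800.0,   0.0, 320.0],  # assumed common intrinsics A'
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

a = np.deg2rad(2.0)                       # assumed small corrective rotation D_l
D_l = np.array([[np.cos(a), -np.sin(a), 0.0],
                [np.sin(a),  np.cos(a), 0.0],
                [0.0,        0.0,       1.0]])

H = A_new @ D_l @ np.linalg.inv(A_l)      # maps (u, v, 1) to (a, b, c)

abc = H @ np.array([400.0, 300.0, 1.0])
u_new, v_new = abc[0] / abc[2], abc[1] / abc[2]  # u' = a/c, v' = b/c
print(u_new, v_new)
```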
|
2021-12-06 07:32:39
|
https://physics.stackexchange.com/questions/646091/geometrical-representation-of-contravariant-and-covariant-vectors
|
Geometrical representation of Contravariant and covariant vectors
After cruising through a lot of material online, and answers over here, my understanding of contravariant and covariant vectors is the following: in a finite-dimensional vector space, suppose we have a vector whose components are contravariant with respect to the basis. Using the metric tensor, we can map this vector to a one-form in the dual space, which is itself a vector space in this example. This one-form, or covector, varies in the same way as the basis of our original vector space.
However, the covectors have a different basis, the dual basis, and are expanded in terms of it. Here's where my confusion arises:
Many diagrams on the internet give a geometric picture of the scenario by claiming that the contravariant components of an 'arrow' are found by drawing lines parallel to the axes and checking where they intersect, while the covariant components are found by dropping perpendiculars onto these axes. So these materials treat covariant and contravariant as different representations of the same arrow, while I'm inclined to believe they are completely different. However, if we let go of rigour in exchange for a 'more' geometric understanding, I think we are allowed to do this.
The diagrams are usually of the form:
However, even if we assume that contravariant and covariant are different representations of the same object, this particular diagram still seems wrong to me. Particularly the locations of $$x_1,x_2$$. In this diagram, they are located on the span of the original basis. Shouldn't covector components be located on the span of the dual basis ? This diagram suggests to me, that co-variant components have the same basis, as the contravariant ones.
Shouldn't the diagram look more like this?
This second diagram seems to fit the concept better in my view. This is because the dual vector must be 'contravariant' with respect to the 'dual basis'. This means their components must be found by drawing lines parallel to the span of the dual basis. These lines intersect the original axes at a right angle, which is to be expected, as each dual basis vector is orthogonal to the other original basis vector.
Moreover, this second picture shows much better how scaling up the original basis scales down the 'contravariant components' and the 'dual basis', which in turn scales up the 'covariant' components. This shows that the components of the dual vector are covariant with respect to the original basis. This is something that is not readily visible in the first diagram. So, am I correct in assuming this second 'geometrical' representation is correct?
I know this doesn't make much sense, because, as the mathematicians say, vectors are completely different from one-forms, and one-forms should be represented using the number of hyperplanes intercepted by our arrow. However, I've seen most course material refer to it in this way, and frankly it is easier to visualize; most of this material used the first picture. Can anyone point out my mistakes, if any, and tell me whether the first picture is correct, or the second one? Or, in this case, 'less flawed'?
• I mean, visualize things however you like, but I will note that all your pictures seem to be thinking in $\mathbb{R}^n$ with Cartesian coordinates...this is the very special case in which the distinction between vectors and dual vectors can essentially be ignored. In any other manifold, and in most other coordinates, they are best kept distinct. Jun 16 at 22:12
• @Richard Myers : Distinct contravariant and covariant components exist in $\mathbb R^2$, as in our case here, in $\mathbb R^3$, and more generally in $\mathbb R^n$, with pictorial representations as posted by the OP. They are identical if the original basis is orthonormal, in which case the dual basis is identical to the original. The latter holds because of property (1) in my answer. Jun 17 at 4:25
• @Frobenius The term "covariant components of a vector" is wrong in the first place. A vector only has one set of components. It is an outdated way of saying "corresponding covector components". Covectors are the appropriate objects here which are best represented as stacks of hypersurfaces. The representation that the OP gives is completely misleading. You can see why if you try to use it in e.g. Euclidean spherical coordinates. Jun 17 at 5:37
• @Vincent Thacker : OK, accepted. Jun 17 at 5:43
• @VincentThacker I agree that this notion is absolutely wrong. I think what most people do is confuse the one-form basis with the reciprocal basis. The components in the reciprocal basis are the same as the covariant components in the dual space. Hence many people describe the covariant and contravariant natures of the same vector, while in reality they are describing normal and reciprocal components. Jun 17 at 10:09
Much of the (endless) confusion about this subject can be attributed to the fact that differential geometry can be formulated in several different ways.
One approach goes as follows. We consider a vector space $$V$$ with an inner product provided by a metric tensor $$g:V\times V \rightarrow \mathbb R$$. Given a basis $$\{\hat e_\mu\}$$ for $$V$$, we can expand any vector as $$\mathbf X = X^\mu \hat e_\mu$$. The inner product of two vectors is then $$g(\mathbf X,\mathbf Y) = X^\mu Y^\nu g(\hat e_\mu,\hat e_\nu) \equiv X^\mu Y^\nu g_{\mu\nu}$$.
Noting that $$\{\hat e_\mu\}$$ is generically not orthonormal, we can define a dual basis $$\{\hat \epsilon^\mu\}$$ for $$V$$ which is defined by the condition that $$g(\hat e_\mu, \hat \epsilon^\nu) = \delta_\mu^\nu$$. Note that the upstairs/downstairs index placement is designed to distinguish between the original basis and the reciprocal basis.
Both $$\{\hat e_\mu\}$$ and $$\{\hat \epsilon^\mu\}$$ are bases for $$V$$. As such, a vector could be expanded in either basis: $$\mathbf X = X^\mu \hat e_\mu = \tilde X_\mu \epsilon^\mu$$ where $$\tilde X_\mu$$ are the components of $$\mathbf X$$ in the dual basis. Typically we drop the tilde and simply distinguish these components from the components $$X^\mu$$ purely via index placement. After taking the inner product with $$\hat e_\nu$$ one finds $$\tilde X_\mu = g_{\mu\nu} X^\nu$$ A rank-$$r$$ tensor is a multilinear map $$T:\underbrace{V\times \ldots\times V}_{r\text{ times}} \rightarrow \mathbb R$$, which eats $$r$$ vectors and spits out a number. Its components have $$r$$ indices; it can be expanded in terms of the basis or the dual basis: $$T(\hat e_{\mu_1},\ldots,\hat e_{\mu_r}) \equiv T_{\mu_1 \ldots \mu_r} \qquad T(\hat \epsilon^{\mu_1},\ldots,\hat\epsilon^{\mu_r}) \equiv T^{\mu_1 \ldots \mu_r}$$ or in a combination of both: $$T(\underbrace{\hat\epsilon^{\mu_1},\ldots,\hat\epsilon^{\mu_p}}_{p\text{ times}},\underbrace{\hat e_{\nu_1},\ldots,\hat e_{\nu_q}}_{q\text{ times}}) = T^{\mu_1 \ldots \mu_p}_{\ \ \ \ \qquad \nu_1\ldots \nu_q}, \qquad p+q=r$$
All of these possibilities simply reflect the expansion of the rank-$$r$$ tensor $$T$$ in different possible choices of basis.
In the previous approach, we made no mention whatsoever of the dual space, and considered tensors to be multilinear maps which eat vectors and spit out numbers. As an alternate approach, rather than introducing a dual basis for $$V$$, we consider the algebraic dual space $$V^*$$ which consists of linear maps $$V\rightarrow \mathbb R$$.
It is easily seen that $$V^*$$ is a vector space with the same dimensionality as $$V$$. Furthermore, given any basis $$\{\hat e_\mu\}$$ for $$V$$, there is a unique basis $$\{\xi^\mu\}$$ of $$V^*$$ such that $$\xi^\mu(\hat e_\nu) = \delta^\mu_\nu$$. We call elements of $$V^*$$ covectors, dual vectors, or one-forms depending on context and author's convention.
$$V^*$$ can be endowed with a canonical metric $$\Gamma :V^* \times V^* \rightarrow \mathbb R$$ whose components $$\Gamma^{\mu\nu}$$ in the basis $$\{\xi^\mu\}$$ are the matrix inverse of the components $$g_{\mu\nu}$$ in the basis $$\{\hat e_\mu\}$$ (normally, we simply write $$\Gamma^{\mu\nu}\equiv g^{\mu\nu}$$ and differentiate these components from $$g_{\mu\nu}$$ by index placement).
To each vector $$\mathbf X$$, there corresponds a covector $$\tilde{\mathbf X}:= g(\cdot, \mathbf X)$$ where the $$\cdot$$ denotes an empty slot; because $$g$$ is non-degenerate, this correspondence is injective, and in finite dimensions is surjective as well, meaning that $$g$$ defines a bijection between $$V$$ and $$V^*$$ (though it should be said that any non-degenerate bilinear map would serve the same purpose).
Finally, a $$(p,q)$$-tensor is a multilinear map $$T:\underbrace{V^*\times\ldots\times V^*}_{p\text{ times}}\times\underbrace{V\times\ldots\times V}_{q\text{ times}} \rightarrow \mathbb R$$
which eats $$p$$ covectors and $$q$$ vectors and spits out a number. It has components
$$T(\underbrace{\xi^{\mu_1},\ldots,\xi^{\mu_p}}_{p\text{ times}},\underbrace{\hat e_{\nu_1},\ldots,\hat e_{\nu_q}}_{q\text{ times}}) = T^{\mu_1 \ldots \mu_p}_{\ \ \ \ \qquad \nu_1\ldots \nu_q}$$
The bijection between $$V$$ and $$V^*$$ provided by $$g$$ allows us to "raise" and "lower" indices at will, defining distinct but intimately related tensors. For example, if $$T:V\times V\rightarrow \mathbb R$$ is a $$(0,2)$$ tensor, then we can define $$T':V^* \times V \rightarrow \mathbb R$$ and $$T'':V^* \times V^* \rightarrow \mathbb R$$ via $$T'(\tilde{\mathbf X},\mathbf Y) := T(\mathbf X,\mathbf Y) \qquad T''(\tilde{\mathbf X},\tilde{\mathbf Y}):=T(\mathbf X,\mathbf Y)$$ which means that $$(T')^\mu_{\ \ \nu} = g^{\mu\alpha}T_{\alpha\nu}$$ and $$(T'')^{\mu\nu}=g^{\mu\alpha}g^{\nu\beta} T_{\alpha\beta}$$.
Neither approach described above is wrong. However, the latter is more modern and, in my opinion, ultimately far cleaner (though to be fair, this may not be obvious at first).
The second 'geometrical' representation is the correct one.
Note the properties: (1) the vectors of the dual basis are parallel to the heights of the parallelogram formed by the vectors of the original basis, with magnitudes inversely proportional to these heights, and (2) increasing the magnitude of a vector of the original basis absolutely decreases the corresponding component (hence the term "contra-variant"), while the corresponding component with respect to the dual basis absolutely increases (hence the term "co-variant").
The first picture does not work in agreement with above properties.
***
$$\boldsymbol{\S}\:$$A. Reciprocal or dual basis in $$\,\mathbb R^2\,$$ - Contravariant and covariant components
Consider a basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ in $$\,\mathbb R^2\,$$ not necessarily orthonormal. Given two vectors $$\,\mathbf x,\mathbf y\,$$ expressed by components with respect to this basis \begin{align} \mathbf x & \boldsymbol{=} \mathrm x^1 \mathbf u_1 \boldsymbol{+} \mathrm x^2\mathbf u_2 \tag{01a}\label{01a}\\ \mathbf y & \boldsymbol{=} \mathrm y^1 \mathbf u_1 \boldsymbol{+} \mathrm y^2\mathbf u_2 \tag{01b}\label{01b} \end{align} for the usual inner product we have \begin{align} \langle\mathbf x,\mathbf y\rangle & \boldsymbol{=}\langle\mathrm x^1 \mathbf u_1 \boldsymbol{+} \mathrm x^2\mathbf u_2,\mathrm y^1 \mathbf u_1 \boldsymbol{+} \mathrm y^2\mathbf u_2\rangle \nonumber\\ & \boldsymbol{=} \mathrm x^1\mathrm y^1\langle\mathbf u_1,\mathbf u_1\rangle\boldsymbol{+}\mathrm x^1\mathrm y^2\langle\mathbf u_1,\mathbf u_2\rangle\boldsymbol{+}\mathrm x^2\mathrm y^1\langle\mathbf u_2,\mathbf u_1\rangle\boldsymbol{+}\mathrm x^2\mathrm y^2\langle\mathbf u_2,\mathbf u_2\rangle \nonumber\\ & \boldsymbol{=} \Vert\mathbf u_1\Vert^2\mathrm x^1\mathrm y^1\boldsymbol{+}\langle\mathbf u_1,\mathbf u_2\rangle\mathrm x^1\mathrm y^2\boldsymbol{+}\langle\mathbf u_2,\mathbf u_1\rangle\mathrm x^2\mathrm y^1\boldsymbol{+}\Vert\mathbf u_2\Vert^2\mathrm x^2\mathrm y^2 \nonumber\\ & \boldsymbol{=} g_{11}\mathrm x^1\mathrm y^1\boldsymbol{+}g_{12}\mathrm x^1\mathrm y^2\boldsymbol{+}g_{21}\mathrm x^2\mathrm y^1\boldsymbol{+}g_{22}\mathrm x^2\mathrm y^2 \tag{02}\label{02} \end{align} that is using the Einstein's summation convention $$$$\langle\mathbf x,\mathbf y\rangle \boldsymbol{=}g_{ij}\mathrm x^i\mathrm y^j \qquad \left(i,j \boldsymbol{=}1,2\right) \tag{03}\label{03}$$$$ where $$$$\mathfrak g \boldsymbol{=}\{g_{ij}\}\boldsymbol{=} \begin{bmatrix} g_{11} & g_{12}\vphantom{\dfrac{a}{b}}\\ g_{21} & g_{22}\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \:\Vert\mathbf u_1\Vert^2 & \langle\mathbf u_1,\mathbf u_2\rangle\vphantom{\dfrac{a}{b}}\\ \langle\mathbf u_2,\mathbf u_1\rangle & \:\Vert\mathbf u_2\Vert^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{04}\label{04}$$$$ the metric matrix (tensor).
We know that a matrix is not a linear transformation by itself. To represent correctly a linear transformation of a linear space $$\,V\,$$ on itself by a matrix, the domain space and the image space must be equipped each one with its basis. For example, in our case, suppose that we have a linear transformation $$\,F\,$$ from $$\,\mathbb R^2\,$$ on itself, the space equipped with basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ $$$$\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf u_1,\mathbf u_2\}\bigg) \stackrel{F}{\boldsymbol{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightarrow}}\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf u_1,\mathbf u_2\}\bigg) \tag{05}\label{05}$$$$ then $$\,F\,$$ would be represented by a well-defined matrix $$$$\mathfrak f \left(F\right)\boldsymbol{=}\{f_{ij}\}\boldsymbol{=} \begin{bmatrix} f_{11} & f_{12}\vphantom{\dfrac{a}{b}}\\ f_{21} & f_{22}\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{06}\label{06}$$$$ but if the image space is equipped with a different basis $$$$\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf u_1,\mathbf u_2\}\bigg) \stackrel{F}{\boldsymbol{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightarrow}}\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf w_1,\mathbf w_2\}\bigg) \tag{07}\label{07}$$$$ the matrix representation of $$\,F\,$$ would be different $$$$\mathfrak f' \left(F\right)\boldsymbol{=}\{f'_{ij}\}\boldsymbol{=} \begin{bmatrix} f'_{11} & f'_{12}\vphantom{\dfrac{a}{b}}\\ f'_{21} & f'_{22}\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{\ne}\mathfrak f \left(F\right) \tag{08}\label{08}$$$$ Note also that if the transformation in equation \eqref{05} is the identity transformation, $$\,F\boldsymbol{=}I\,$$, then it will represented by the identity matrix $$$$\mathfrak f \left(F\right)\boldsymbol{=}\mathcal I\boldsymbol{=} \begin{bmatrix} \:\: 1\:\: & \:\: 0\:\:\vphantom{\dfrac{a}{b}}\\ \:\: 0\:\: & \:\: 1\:\:\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{09}\label{09}$$$$ while this is not valid if $$\,F\boldsymbol{=}I\,$$ in equation \eqref{07}.
We now give the following definition:
Definition : A basis $$\,\{\mathbf u^1,\mathbf u^2\}\,$$ in $$\,\mathbb R^2\,$$ is called reciprocal to or dual of a given original basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ in $$\,\mathbb R^2\,$$ if the identity transformation $$$$\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf u_1,\mathbf u_2\}\bigg) \stackrel{I}{\boldsymbol{-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\rightarrow}}\bigg(\mathbb R^2\boldsymbol{,}\{\mathbf u^1,\mathbf u^2\}\bigg) \tag{10}\label{10}$$$$ is represented by the metric matrix $$\,\mathfrak g \,$$ induced by the original basis $$$$\mathfrak g \boldsymbol{=}\{g_{ij}\}\boldsymbol{=} \begin{bmatrix} g_{11} & g_{12}\vphantom{\dfrac{a}{b}}\\ g_{21} & g_{22}\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \begin{bmatrix} \:\Vert\mathbf u_1\Vert^2 & \langle\mathbf u_1,\mathbf u_2\rangle\vphantom{\dfrac{a}{b}}\\ \langle\mathbf u_2,\mathbf u_1\rangle & \:\Vert\mathbf u_2\Vert^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{11}\label{11}$$$$
A vector $$\,\mathbf x\,$$ expressed by components with respect to the original basis, see equation \eqref{01a}
$$$$\mathbf x \boldsymbol{=} \mathrm x^1 \mathbf u_1 \boldsymbol{+} \mathrm x^2\mathbf u_2 \tag{12}\label{12}$$$$ would be expressed with respect to the dual basis as $$$$\mathbf x \boldsymbol{=} \mathrm x_1 \mathbf u^1 \boldsymbol{+} \mathrm x_2\mathbf u^2 \tag{13}\label{13}$$$$ and since it is this same vector in $$\,\mathbb R^2\,$$ $$$$\mathrm x^1 \mathbf u_1 \boldsymbol{+} \mathrm x^2\mathbf u_2\boldsymbol{=} \mathbf x \boldsymbol{=} \mathrm x_1 \mathbf u^1 \boldsymbol{+} \mathrm x_2\mathbf u^2 \tag{14}\label{14}$$$$
Essentially we have here a transformation of coordinates given by $$$$\begin{bmatrix} \mathrm x_1\vphantom{\dfrac{a}{b}}\\ \mathrm x_2\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \mathfrak g \begin{bmatrix} \mathrm x^1\vphantom{\dfrac{a}{b}}\\ \mathrm x^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \begin{bmatrix} g_{11} & g_{12}\vphantom{\dfrac{a}{b}}\\ g_{21} & g_{22}\vphantom{\dfrac{a}{b}} \end{bmatrix} \begin{bmatrix} \mathrm x^1\vphantom{\dfrac{a}{b}}\\ \mathrm x^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{15}\label{15}$$$$ or $$$$\mathrm x_i\boldsymbol{=} g_{ij}\mathrm x^j \tag{16}\label{16}$$$$ The inner product of equation \eqref{03} is expressed as $$$$\mathrm x^i\mathrm y_i\boldsymbol{=}\langle\mathbf x,\mathbf y\rangle \boldsymbol{=}\mathrm x_j\mathrm y^j \tag{17}\label{17}$$$$ since on one hand $$\,g_{ij}\mathrm y^j\boldsymbol{=}\mathrm y_i\,$$ and on the other hand, due to the symmetry of $$\,\mathfrak g\,$$, we have $$\,g_{ij}\mathrm x^i\boldsymbol{=}g_{ji}\mathrm x^i\boldsymbol{=}\mathrm x_j\,$$.
With respect to the original basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ the components with upper index $$\,\mathrm x^k\,$$ are called contravariant while the components with the lower index $$\,\mathrm x_k\,$$ are called covariant.
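To make the index gymnastics concrete, here is a minimal numerical sketch (Python/NumPy; the basis vectors and the components of $$\,\mathbf x\,$$ are arbitrary illustrative choices, not taken from the text). It builds the metric matrix of equation \eqref{11}, lowers an index as in equation \eqref{16}, and checks the inner-product identity of equation \eqref{17}.

import numpy as np

# Arbitrary (illustrative) original basis of R^2 - any independent pair works
u1 = np.array([2.0, 0.0])
u2 = np.array([1.0, 1.0])

# Metric matrix induced by the basis, equation (11)
g = np.array([[u1 @ u1, u1 @ u2],
              [u2 @ u1, u2 @ u2]])

# Contravariant components of a vector x = 3*u1 - 1*u2, equation (12)
x_contra = np.array([3.0, -1.0])

# Lower the index: x_i = g_ij x^j, equations (15)-(16)
x_cov = g @ x_contra

# Inner-product identity <x,x> = x^i x_i, equation (17)
x = x_contra[0] * u1 + x_contra[1] * u2
print(np.isclose(x @ x, x_contra @ x_cov))   # True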
We'll now determine the relation of the dual basis $$\,\{\mathbf u^1,\mathbf u^2\}\,$$ to the original $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ and, based on this, we'll provide a geometrical construction-representation.
Formally we have $$$$\begin{bmatrix} \mathbf u_1\vphantom{\dfrac{a}{b}}\\ \mathbf u_2\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \mathfrak g^{\boldsymbol{\top}} \begin{bmatrix} \mathbf u^1\vphantom{\dfrac{a}{b}}\\ \mathbf u^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \stackrel{\mathfrak g^{\boldsymbol{\top}}\!\boldsymbol{=}\mathfrak g}{\boldsymbol{=\!=\!=}} \mathfrak g \begin{bmatrix} \mathbf u^1\vphantom{\dfrac{a}{b}}\\ \mathbf u^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{18}\label{18}$$$$ so $$$$\begin{bmatrix} \mathbf u^1\vphantom{\dfrac{a}{b}}\\ \mathbf u^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \boldsymbol{=} \mathfrak g^{\boldsymbol{-}1} \begin{bmatrix} \mathbf u_1\vphantom{\dfrac{a}{b}}\\ \mathbf u_2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{19}\label{19}$$$$ From equation \eqref{11} we have $$$$\mathfrak g^{\boldsymbol{-}1} \boldsymbol{=} \dfrac{1}{\vert\mathfrak g\vert} \begin{bmatrix} \hphantom{\boldsymbol{-}}g_{22} & \boldsymbol{-}g_{12}\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-}g_{21} & \hphantom{\boldsymbol{-}}g_{11}\vphantom{\dfrac{a}{b}} \end{bmatrix}\boldsymbol{=} \dfrac{1}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2} \begin{bmatrix} \hphantom{\boldsymbol{-}} \:\Vert\mathbf u_2\Vert^2 & \boldsymbol{-}\langle\mathbf u_1,\mathbf u_2\rangle\vphantom{\dfrac{a}{b}}\\ \boldsymbol{-} \langle\mathbf u_2,\mathbf u_1\rangle & \hphantom{\boldsymbol{-}}\:\Vert\mathbf u_1\Vert^2\vphantom{\dfrac{a}{b}} \end{bmatrix} \tag{20}\label{20}$$$$ where $$$$\vert\mathfrak g\vert\boldsymbol{=}\det{\mathfrak g} \boldsymbol{=}g_{11}g_{22}\boldsymbol{-}g_{21}g_{12}\boldsymbol{=}\Vert\mathbf u_1\Vert^2\Vert\mathbf u_2\Vert^2\boldsymbol{-} \vert\langle\mathbf u_1,\mathbf u_2\rangle\vert^2\boldsymbol{=}\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2 \tag{21}\label{21}$$$$ From equations \eqref{19},\eqref{20} \begin{align} \mathbf u^1 & \boldsymbol{=} \hphantom{\boldsymbol{-}}\left(\dfrac{\Vert\mathbf u_2\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\right) \mathbf u_1 \boldsymbol{-} \left(\dfrac{\langle\mathbf u_1,\mathbf u_2\rangle}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\right)\mathbf u_2 \tag{22a}\label{22a}\\ \mathbf u^2 & \boldsymbol{=} \boldsymbol{-} \left(\dfrac{\langle\mathbf u_2,\mathbf u_1\rangle}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\right) \mathbf u_1 \boldsymbol{+} \left(\dfrac{\Vert\mathbf u_1\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\right)\mathbf u_2 \tag{22b}\label{22b} \end{align} These expressions take the form \begin{align} \mathbf u^1 & \boldsymbol{=} \dfrac{\Vert\mathbf u_2\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\Biggl(\mathbf u_1 \boldsymbol{-}\bigg\langle\mathbf u_1, \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert}\bigg\rangle \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert}\Biggr) \tag{23a}\label{23a}\\ \mathbf u^2 & \boldsymbol{=} \dfrac{\Vert\mathbf u_1\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\Biggl(\mathbf u_2 \boldsymbol{-}\bigg\langle\mathbf u_2, \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\bigg\rangle \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\Biggr) \tag{23b}\label{23b} \end{align} Note that \begin{align} \bigg\langle\mathbf u_1, \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert}\bigg\rangle \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert} & \boldsymbol{=}\bigl(\mathbf u_{1}\bigr)_{\boldsymbol{||}\mathbf u_2} \boldsymbol{=}\bigl[\texttt{vectorial projection of } \mathbf u_1 \texttt{ on } \mathbf u_2\bigr] \tag{24a}\label{24a}\\ \bigg\langle\mathbf u_2, \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\bigg\rangle \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}& \boldsymbol{=} \bigl(\mathbf u_{2}\bigr)_{\boldsymbol{||}\mathbf u_1} \boldsymbol{=}\bigl[\texttt{vectorial projection of } \mathbf u_2 \texttt{ on } \mathbf u_1\bigr] \tag{24b}\label{24b} \end{align} so
\begin{align} \mathbf u_1 & \boldsymbol{-}\bigg\langle\mathbf u_1, \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert}\bigg\rangle \dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert} \boldsymbol{=}\left(\dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert}\boldsymbol{\times}\mathbf u_1\right) \boldsymbol{\times}\dfrac{\mathbf u_2}{\Vert\mathbf u_2\Vert} \boldsymbol{=}\bigl(\mathbf u_{1}\bigr)_{\boldsymbol{\perp}\mathbf u_2} \nonumber\\ & \boldsymbol{=}\bigl[\texttt{vectorial projection of } \mathbf u_1 \texttt{ on direction normal to } \mathbf u_2 \bigr] \tag{25a}\label{25a}\\ \mathbf u_2 & \boldsymbol{-}\bigg\langle\mathbf u_2, \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\bigg\rangle \dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert} \boldsymbol{=}\left(\dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\boldsymbol{\times}\mathbf u_2\right) \boldsymbol{\times}\dfrac{\mathbf u_1}{\Vert\mathbf u_1\Vert}\boldsymbol{=}\bigl(\mathbf u_{2}\bigr)_{\boldsymbol{\perp}\mathbf u_1} \nonumber\\ & \boldsymbol{=}\bigl[\texttt{vectorial projection of } \mathbf u_2 \texttt{ on direction normal to } \mathbf u_1 \bigr] \tag{25b}\label{25b} \end{align} and equations \eqref{23a},\eqref{23b} yield \begin{align} \mathbf u^1 & \boldsymbol{=} \dfrac{\Vert\mathbf u_2\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\bigl(\mathbf u_{1}\bigr)_{\boldsymbol{\perp}\mathbf u_2} \tag{26a}\label{26a}\\ \mathbf u^2 & \boldsymbol{=} \dfrac{\Vert\mathbf u_1\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\bigl(\mathbf u_{2}\bigr)_{\boldsymbol{\perp}\mathbf u_1} \tag{26b}\label{26b} \end{align} Note that if $$\,\phi_{12}\in[0,\pi]\,$$ is the angle between the vectors of the original basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ then \begin{align} \left\Vert\bigl(\mathbf u_{1}\bigr)_{\boldsymbol{\perp}\mathbf u_2} \right\Vert & \boldsymbol{=} \Vert\mathbf u_1\Vert\sin\phi_{12}\boldsymbol{=}h_1\,,\qquad \dfrac{\Vert\mathbf u_2\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\boldsymbol{=}\dfrac{1}{ \Vert\mathbf u_1\Vert^2\sin^2\phi_{12}}\boldsymbol{=}\dfrac{1}{h^2_1} \tag{27a}\label{27a}\\ \left\Vert\bigl(\mathbf u_{2}\bigr)_{\boldsymbol{\perp}\mathbf u_1} \right\Vert & \boldsymbol{=} \Vert\mathbf u_2\Vert\sin\phi_{12}\boldsymbol{=}h_2\,,\qquad \dfrac{\Vert\mathbf u_1\Vert^2}{\Vert\mathbf u_1\boldsymbol{\times}\mathbf u_2\Vert^2}\boldsymbol{=}\dfrac{1}{ \Vert\mathbf u_2\Vert^2\sin^2\phi_{12}}\boldsymbol{=}\dfrac{1}{h^2_2} \tag{27b}\label{27b} \end{align} where $$\,h_1,h_2\,$$ are the heights of the parallelogram formed by the vectors of the original basis $$\,\{\mathbf u_1,\mathbf u_2\}$$. From \eqref{26a}-\eqref{27a} and \eqref{26b}-\eqref{27b} we have respectively $$$$\Vert\mathbf u^1 \Vert \boldsymbol{=} \dfrac{1}{h_1}\,,\qquad \Vert\mathbf u^2 \Vert \boldsymbol{=} \dfrac{1}{h_2} \tag{28}\label{28}$$$$ Finally
The vectors $$\,\mathbf u^1,\mathbf u^2\,$$ of the dual basis are orthogonal to the vectors $$\,\mathbf u_2,\mathbf u_1\,$$ of the original basis respectively, with magnitudes equal to the inverses of the heights $$\,h_1,h_2\,$$ of the parallelogram formed by the vectors of the original basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$.
From the above analysis and Figure-01 :
The vectors $$\,\mathbf u_1,\mathbf u_2\,$$ of the original basis are orthogonal to the vectors $$\,\mathbf u^2,\mathbf u^1\,$$ of the dual basis respectively, with magnitudes equal to the inverses of the heights $$\,h^1,h^2\,$$ of the parallelogram formed by the vectors of the dual basis $$\,\{\mathbf u^1,\mathbf u^2\}\,$$.
The original basis $$\,\{\mathbf u_1,\mathbf u_2\}\,$$ is the dual of its dual $$\,\{\mathbf u^1,\mathbf u^2\}$$.
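Both statements are easy to verify numerically. Continuing the sketch above (same illustrative basis), the dual basis follows from equation \eqref{19}, and both the biorthogonality relation $$\,\langle\mathbf u^i,\mathbf u_j\rangle\boldsymbol{=}\delta^i_j\,$$ (which follows from the definition \eqref{10}-\eqref{11}) and the height relation \eqref{28} hold:

import numpy as np

u1 = np.array([2.0, 0.0])               # same illustrative basis as before
u2 = np.array([1.0, 1.0])
g = np.array([[u1 @ u1, u1 @ u2],
              [u2 @ u1, u2 @ u2]])

# Dual basis, equation (19): apply g^{-1} to the stacked original vectors
U  = np.vstack([u1, u2])
Ud = np.linalg.inv(g) @ U               # rows are u^1, u^2

# Biorthogonality <u^i, u_j> = delta^i_j
print(np.allclose(Ud @ U.T, np.eye(2)))                  # True

# Heights of the parallelogram and equation (28)
area = abs(u1[0] * u2[1] - u1[1] * u2[0])                # ||u1 x u2||
h1, h2 = area / np.linalg.norm(u2), area / np.linalg.norm(u1)
print(np.isclose(np.linalg.norm(Ud[0]), 1 / h1),
      np.isclose(np.linalg.norm(Ud[1]), 1 / h2))         # True True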
In Figure-02 we see the geometrical construction of the dual vector $$\,\mathbf u^1\,$$ from the original one $$\,\mathbf u_1\,$$. This figure also works in reverse : since the original basis is the dual of its dual, it shows the geometrical construction of the original vector $$\,\mathbf u_1\,$$ from the dual one $$\,\mathbf u^1\,$$.
In Figure-03 we see the decomposition of a vector $$\,\mathbf x\,$$ into components with respect to the original basis (contravariant) and with respect to the dual basis (covariant).
|
2021-09-16 23:02:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 145, "wp-katex-eq": 0, "align": 8, "equation": 20, "x-ck12": 0, "texerror": 0, "math_score": 0.9734323620796204, "perplexity": 1800.8906097099527}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053759.24/warc/CC-MAIN-20210916204111-20210916234111-00241.warc.gz"}
|
https://socratic.org/questions/if-a-0-5-l-container-of-juice-contains-15-g-of-sugar-what-is-the-concentration-o
|
# If a 0.5 L container of juice contains 15 g of sugar, what is the concentration of sugar?
Jul 6, 2016
$30 \, \text{g} \, \text{L}^{-1}$
#### Explanation:
$\text{Concentration} = \frac{\text{mass}}{\text{volume}} = \frac{15 \, \text{g}}{0.5 \, \text{L}} = 30 \, \text{g} \, \text{L}^{-1}$
|
2019-11-15 09:36:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7171681523323059, "perplexity": 6335.537633663349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668618.8/warc/CC-MAIN-20191115093159-20191115121159-00437.warc.gz"}
|
https://www.gradesaver.com/textbooks/engineering/other-engineering/materials-science-and-engineering-an-introduction/chapter-7-dislocations-and-strengthening-mechanisms-questions-and-problems-page-249/7-38b
|
## Materials Science and Engineering: An Introduction
$76$ minutes
Analyzing the figure on page 241 showing the logarithm of grain diameter versus the logarithm of time for grain growth in brass at several temperatures (Figure 7.25), it can be approximated that at $700^{\circ}\text{C}$ it takes roughly 0 minutes to reach the 0.03 mark and roughly 76 minutes to reach the 0.3 mark. Therefore, the time required can be found by subtracting the two: $76 \, \text{min} - 0 \, \text{min} = 76 \, \text{min}$.
|
2019-11-18 20:01:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49036017060279846, "perplexity": 617.379549713789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669813.71/warc/CC-MAIN-20191118182116-20191118210116-00438.warc.gz"}
|
https://geometry-central.net/
|
# Welcome to Geometry Central
Geometry-central is a modern C++ library of data structures and algorithms for geometry processing, with a particular focus on surface meshes.
Features include:
• A polished surface mesh class, with efficient support for mesh modification, and a system of containers for associating data with mesh elements.
• Implementations of canonical geometric quantities on surfaces, ranging from normals and curvatures to tangent vector bases to operators from discrete differential geometry.
• A suite of powerful algorithms, including computing distances on surfaces, generating direction fields, and manipulating intrinsic Delaunay triangulations.
• A coherent set of sparse linear algebra tools, based on Eigen and augmented to automatically utilize better solvers if available on your system.
Sample:
// Load a mesh from file (readSurfaceMesh is geometry-central's mesh I/O
// helper; the file name here is illustrative)
std::unique_ptr<SurfaceMesh> mesh;
std::unique_ptr<VertexPositionGeometry> geometry;
std::tie(mesh, geometry) = readSurfaceMesh("spot.obj");

// Compute vertex areas: a simple estimate averaging the incident face areas
VertexData<double> vertexAreas(*mesh);
geometry->requireFaceAreas();
for(Vertex v : mesh->vertices()) {
  double A = 0.;
  for(Face f : v.adjacentFaces()) {
    A += geometry->faceAreas[f] / v.degree();
  }
  vertexAreas[v] = A;
}
For more, see the tutorials. To get started with the code, see building. Use the sample project to get started with a build system and a gui.
An introductory talk on geometry-central was given at SGP 2020, check it out to get started: www.youtube.com/watch?v=mw5Xz9CFZ7A
Bindings & Plugins:
If you’re interested in creating additional bindings/plugins, feel free to reach out!
Related alternatives: CGAL, libIGL, OpenMesh, Polygon Mesh Processing Library, CinoLib
Credits
Geometry-central is developed by Nicholas Sharp, with many contributions from Keenan Crane, Yousuf Soliman, Mark Gillespie, Rohan Sawhney, Chris Yu, and many others.
If geometry-central contributes to an academic publication, cite it as:
@misc{geometrycentral,
title = {geometry-central},
author = {Nicholas Sharp and Keenan Crane and others},
note = {www.geometry-central.net},
year = {2019}
}
Development of this software was funded in part by NSF Award 1717320, an NSF graduate research fellowship, and gifts from Adobe Research and Autodesk, Inc.
|
2021-07-23 18:57:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17725706100463867, "perplexity": 6933.883996819187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150000.59/warc/CC-MAIN-20210723175111-20210723205111-00118.warc.gz"}
|
https://www.vocal.com/beamforming-2/gcc-for-speech-aoa-detection/
|
Resolution of the angle of arrival of speech signals impinging on a microphone array is critical for beamforming and noise reduction in online applications. Due to the computation and memory constraints of embedded DSPs, an efficient and fast algorithm is required to achieve any meaningful gains from beamforming. In speech processing, frame lengths on the order of $10$ milliseconds are typical, further buttressing the need for algorithms that resolve quickly. The problem we wish to tackle is as follows: given $M$ frame recordings of a single source of speech, one frame recording for each microphone, each frame having $N$ samples, determine the angle of arrival of the signal assuming a far-field model. Figure 1 illustrates a typical microphone array.
Figure 1: Square microphone array.
We limit our analysis to the $2$-D case for brevity of presentation. The general approach is to use cross-correlation to find the delay between paired microphones. GCC-PHAT, however, requires $\mathcal{O}(\hat{N}\log_2{\hat{N}})$ additions and multiplications because the DFTs of both signals need to be computed. Here, $\underset{m \in \mathbb{Z}}{\mathrm{argmin}}~ \hat{N}=2^{m}\ge N$. A faster approach is to find the cross-correlations by leveraging the known maximum delay between a pair of microphones. This approach reduces the number of computations to $\mathcal{O}(NL)$, where $L$ is the maximum number of samples that can be delayed. The maximum delay on a typical DSP platform with a microphone spacing of $40\,\text{mm}$ will be as small as 2 samples using a sampling rate of $16\,\text{kHz}$, thus making our approach at least 4 times faster than the so-called GCC-PHAT and other DFT-based approaches. For the illustrated microphone array in Figure 1, define the time delay between microphone $i$ and microphone $j$ as $\tau_{i,j} =t_j - t_i,\{i,j\} \in \{1,\cdots,4\}$, with $t_i$ and $t_j$ being the arrival times of a common data sample. Define the speed of sound as $c$. Let $d$ be the distance between consecutive microphones $m_i$ and $m_j$, as labeled on Figure 1, such that $|i-j| = 1$. Also let the distance between $m_i$ and $m_j$ be $\sqrt{2} d$ for $\mod{(|i-j|,3) =2}$. Then the time differences of arrival of signals at the microphone array obey the following:
$\begin{bmatrix}\tau_{1,2}, \tau_{1,3}, \tau_{1,4} ,\tau_{2,3} , \tau_{2,4} , \tau_{3,4}\end{bmatrix}^T = \frac{d}{c} \begin{bmatrix}\sin{\theta},\sin{\theta} + \cos{\theta},\cos{\theta} ,\cos{\theta} ,\cos{\theta}-\sin{\theta} ,\sin{\theta}\end{bmatrix}^T$
Here, the $\tau_{i,j}$'s are estimated by maximizing the pairwise correlations: $\underset{k \in [-L,L]}{\mathrm{argmax}}~ crr[k]$, with $crr[k] = \sum\limits_{n=1}^{N} x_i[n]x_j[k+n], i \neq j, \{i,j\} \in \{1, \cdots, M\}$
where $x_i[n]$ is sample $n$ at microphone $i$. The inter-sample delays may also be dealt with by simply using the adjacent bins near the peak. For example, suppose the peak of the correlation is at the $k^{th}$ sample; then we use the inter-sample value $\hat{k} =\frac{2k crr[k]-(k-0.5)crr[k+1]-(k+0.5)crr[k-1]}{2crr[k]-crr[k-1]-crr[k+1]}.$
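A minimal sketch of the bounded-lag correlation described above (Python/NumPy; the signals and names are illustrative, not VOCAL's implementation). Each lag $k \in [-L,L]$ is accumulated directly, giving the stated $\mathcal{O}(NL)$ cost, and the quoted inter-sample formula refines the integer peak:

import numpy as np

def bounded_lag_delay(xi, xj, L):
    # crr[k] = sum_n xi[n] * xj[n + k], evaluated only for |k| <= L
    N = len(xi)
    lags = np.arange(-L, L + 1)
    crr = np.zeros(len(lags))
    for idx, k in enumerate(lags):
        lo, hi = max(0, -k), min(N, N - k)          # clip to valid indices
        crr[idx] = np.dot(xi[lo:hi], xj[lo + k:hi + k])
    m = int(np.argmax(crr))
    k = lags[m]
    if 0 < m < len(lags) - 1:                       # inter-sample refinement
        num = 2*k*crr[m] - (k - 0.5)*crr[m + 1] - (k + 0.5)*crr[m - 1]
        den = 2*crr[m] - crr[m - 1] - crr[m + 1]
        return num / den
    return float(k)

# Illustrative check: a random signal delayed by 2 samples
rng = np.random.default_rng(0)
s = rng.standard_normal(512)
print(bounded_lag_delay(s, np.roll(s, 2), L=4))     # close to 2.0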
As a custom design house, VOCAL Technologies’ angle of arrival algorithms are applicable to a wide range of the microphone arrays that exist in reverberation environments. The selection of the algorithm is based on the requirements of the application and the available hardware configuration.
|
2019-09-16 12:17:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7597473859786987, "perplexity": 430.64039810254275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572556.54/warc/CC-MAIN-20190916120037-20190916142037-00332.warc.gz"}
|
http://digitalhaunt.net/Texas/calculate-standard-error-estimate-regression-equation.html
|
# Calculate standard error of the estimate for a regression equation
## Inference in Linear Regression
Linear regression attempts to model the relationship between two variables by fitting a linear equation to the observed data. This line describes how the mean response y changes with x. The fitted values b0 and b1 estimate the true intercept and slope of the population regression line.
## Formulas for standard errors and confidence limits for means and forecasts
The standard error of the mean of Y for a given value of X is the estimated standard deviation of the error in estimating that mean.
The S value is the average distance that the data points fall from the fitted values.
Thus, for our prediction of 43.6 bushels from an application of 35 pounds of nitrogen, we can expect to predict a yield varying from 41 to 46.2 bushels with approximately 68% confidence. In other words, α (the y-intercept) and β (the slope) solve the following minimization problem: find $\min_{\alpha,\beta} Q(\alpha, \beta)$, for $Q(\alpha,\beta) = \sum_{i=1}^{n}(y_i - \alpha - \beta x_i)^2$. Similarly, an exact negative linear relationship yields $r_{XY} = -1$.
This error term has to be equal to zero on average, for each value of x. It can be computed in Excel using the T.INV.2T function. The standard error of the forecast for Y at a given value of X is the square root of the sum of squares of the standard error of the regression and the standard error of the mean.
For a simple regression model, in which two degrees of freedom are used up in estimating both the intercept and the slope coefficient, the appropriate critical t-value is T.INV.2T(1 - C, n - 2). In the regression output for Minitab statistical software, you can find S in the Summary of Model section, right next to R-squared. Confidence intervals were devised to give a plausible set of values the estimates might have if one repeated the experiment a very large number of times.
The correct result is: 1. $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\prime} \mathbf{X})^{-1} \mathbf{X}^{\prime} \mathbf{y}$ (to get this equation, set the first-order derivative of $\mathbf{SSR}$ with respect to $\boldsymbol{\beta}$ equal to zero, minimizing $\mathbf{SSR}$); 2. $E(\hat{\boldsymbol{\beta}}|\mathbf{X}) = \boldsymbol{\beta}$. Substituting the fitted estimates b0 and b1 gives the equation y = b0 + b1x*. This textbook comes highly recommended: Applied Linear Statistical Models by Michael Kutner, Christopher Nachtsheim, and William Li.
Predictor   Coef      StDev    T        P
Constant    59.284    1.948    30.43    0.000
Sugars      -2.4008   0.2373   -10.12   0.000
S = 9.196   R-Sq = 57.7%   R-Sq(adj) = 57.1%
## Significance Tests for Regression Slope
1. Homoscedasticity (Equal variances) Simple linear regression predicts the value of one variable from the value of one other variable.
You'll see S there. A plot of the residuals y - on the vertical axis with the corresponding explanatory values on the horizontal axis is shown to the left. Privacy policy About Wikipedia Disclaimers Contact Wikipedia Developers Cookie statement Mobile view Home Tables Binomial Distribution Table F Table PPMC Critical Values T-Distribution Table (One Tail) T-Distribution Table (Two Tails) Chi Squared Table (Right Tail) Z-Table (Left of Curve) Z-table (Right of Curve)
Pearson's Correlation Coefficient Privacy policy. The standard error of regression slope for this example is 0.027. However, I've stated previously that R-squared is overrated. More data yields a systematic reduction in the standard error of the mean, but it does not yield a systematic reduction in the standard error of the model.
So, for models fitted to the same sample of the same dependent variable, adjusted R-squared always goes up when the standard error of the regression goes down. As with the mean model, variations that were considered inherently unexplainable before are still not going to be explainable with more of the same kind of data under the same model Search Statistics How To Statistics for the rest of us! Occasionally the fraction 1/n−2 is replaced with 1/n.
For the BMI example, about 95% of the observations should fall within plus/minus 7% of the fitted line, which is a close match for the prediction interval. The MINITAB output provides a great deal of information. Show more Language: English Content location: United States Restricted Mode: Off History Help Loading... To illustrate this, let’s go back to the BMI example.
The table below shows this output for the first 10 observations. Please try again later. S becomes smaller when the data points are closer to the line. Arguments for the golden ratio making things more aesthetically pleasing How can I kill a specific X window Symbiotic benefits for large sentient bio-machine Missing \right ] Will a void* always
However, S must be <= 2.5 to produce a sufficiently narrow 95% prediction interval.
b = the slope of the regression line, calculated as $b = r \, (s_Y / s_X)$; if the Pearson product-moment correlation has been calculated, all the components of this equation are already known.
## Prediction Intervals
Once a regression line has been fit to a set of data, it is common to use the fitted slope and intercept values to predict the response for a given value of the explanatory variable. Assume the data in Table 1 are the data from a population of five X, Y pairs.
The confidence interval for $\beta_0$ takes the form $b_0 \pm t^* s_{b_0}$, and the confidence interval for $\beta_1$ is given by $b_1 \pm t^* s_{b_1}$. The coefficients and error measures for a regression model are entirely determined by the following summary statistics: means, standard deviations and correlations among the variables, and the sample size. The intercept of the fitted line is such that it passes through the center of mass $(\bar{x}, \bar{y})$ of the data points.
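To tie the pieces above together, here is a minimal self-contained sketch (Python; the x, y data are made up for illustration) computing the standard error of the regression (S), the standard error of the slope, and a t-based 95% confidence interval for the slope:

import numpy as np
from scipy import stats

x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # made-up data
y = np.array([8.1, 14.9, 23.2, 29.8, 38.5])
n = len(x)

Sxx = np.sum((x - x.mean())**2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx   # slope
b0 = y.mean() - b1 * x.mean()                        # intercept

resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))   # standard error of the regression, S
sb1 = s / np.sqrt(Sxx)                    # standard error of the slope

t_crit = stats.t.ppf(0.975, df=n - 2)     # two-sided 95% critical value
print(f"b1 = {b1:.4f} +/- {t_crit * sb1:.4f}")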
|
2019-09-16 16:23:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3767445981502533, "perplexity": 1158.9370161495106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572879.28/warc/CC-MAIN-20190916155946-20190916181946-00403.warc.gz"}
|
http://mathoverflow.net/questions/65129/role-of-cone-in-banach-space-closed
|
## role of cone in banach space [closed]
Given the definitions of a metric space and a cone metric space, what is the difference between them, and what is the use of the cone?
-
Maybe this survey article might be useful: "On cone metric spaces - A survey", dx.doi.org/10.1016/j.na.2010.12.014. – David Loeffler May 16 2011 at 12:19
|
2013-05-21 08:32:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001155495643616, "perplexity": 2580.49897107084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699798457/warc/CC-MAIN-20130516102318-00083-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/curvature-tensor.704467/
|
# Curvature tensor
hi
In a local inertial frame with $g_{ij}=\eta_{ij}$ and $\Gamma^i_{jk}=0$.
why is the curvature tensor not zero in such a frame?
The curvature tensor is built from the metric and its first and second derivatives.
WannabeNewton
Because it's only at a point. Just because a function vanishes at a point doesn't mean its derivative has to vanish there; this is basic calculus.
So why does the first derivative become zero?
WannabeNewton
By construction of Riemann normal coordinates, the Christoffel symbols (as represented in these coordinates) vanish identically at the point the coordinates are setup at. This does not imply that the Riemann curvature tensor vanishes identically at said point, for the reasons stated above.
We choose a coordinate system in which the metric becomes the SR (Minkowski) metric at one point.
I know what a derivative is, but the first derivative becomes zero while the second doesn't!
Is that a general rule?
By your reasoning, the first derivative shouldn't be zero either!
I can't understand you!
WannabeNewton
No, Riemann normal coordinates tell us that the metric becomes Minkowski at a given point and separately that the Christoffel symbols vanish identically at said point. The vanishing of the Christoffel symbols at that point is not a direct consequence of the metric being Minkowski at that point just through differentiation of the Minkowski metric; this would make zero sense mathematically.
Last edited:
phyzguy
We choose a coordinate system in which the metric becomes the SR (Minkowski) metric at one point.
I know what a derivative is, but the first derivative becomes zero while the second doesn't!
Is that a general rule?
By your reasoning, the first derivative shouldn't be zero either!
I can't understand you!
Think of the function y = x^2.
The function and its first derivative are both equal to 0 at x=0, but the second derivative (its curvature) is not equal to zero at x=0. In the same way, if you set up Riemann normal coordinates at a point, the metric is Minkowski, and the Christoffel symbols (first derivative) are zero, but the curvature tensor (second derivative) is not zero.
Nugatory
Mentor
I know what a derivative is, but the first derivative becomes zero while the second doesn't!
Is that a general rule?
By your reasoning, the first derivative shouldn't be zero either!
I can't understand you!
A first derivative equal to zero means that it is flat at that point.
A second derivative not equal to zero means that the first derivative will change (no longer be zero) when you move off that point.
stevendaryl
Staff Emeritus
We choose a coordinate system in which the metric becomes the SR (Minkowski) metric at one point.
I know what a derivative is, but the first derivative becomes zero while the second doesn't!
Is that a general rule?
By your reasoning, the first derivative shouldn't be zero either!
I can't understand you!
Let's take a particular example: The surface of a sphere of radius 1 meter can be described by coordinates $\theta$ and $\phi$. (You can think of $\theta$ as latitude and $\phi$ as longitude, although the mathematical convention is to have $\theta$ run from 0 to $\pi$, rather than from -90 to +90, and $\phi$ runs from 0 to 2$\pi$, rather than from -180 to +180)
The components of the metric tensor in this coordinate system are:
$g_{\theta \theta} = 1$
$g_{\phi \phi} = \sin^2(\theta)$
Take a first derivative to get:
$\dfrac{\partial}{\partial \theta} g_{\phi \phi} = 2 \sin(\theta) \cos(\theta)$
Take a second derivative to get:
$\dfrac{\partial^2}{\partial \theta^2} g_{\phi \phi} = 2 (\cos^2(\theta) - \sin^2(\theta))$
At $\theta = \dfrac{\pi}{2}$, we have
$g_{\theta \theta} = 1$
$g_{\phi \phi} = 1$
$\dfrac{\partial}{\partial \gamma} g_{\alpha\beta} = 0$
where $\alpha, \beta, \gamma$ are either $\theta$ or $\phi$
So, the metric components and their first derivatives look just like flat space. But
the second derivative is nonzero, which means that the Riemann curvature tensor can be nonzero.
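As an aside, this computation is easy to check symbolically (a minimal sketch using sympy; only the single component $g_{\phi\phi} = \sin^2(\theta)$ is needed):

import sympy as sp

theta = sp.symbols('theta')
g_pp = sp.sin(theta)**2                 # g_{phi phi} on the unit sphere

d1 = sp.diff(g_pp, theta)               # 2 sin(theta) cos(theta)
d2 = sp.diff(g_pp, theta, 2)            # 2 (cos^2(theta) - sin^2(theta))

# At theta = pi/2 the metric component and its first derivative look flat...
print(g_pp.subs(theta, sp.pi/2), d1.subs(theta, sp.pi/2))   # 1 0
# ...but the second derivative (and hence the curvature) does not vanish:
print(d2.subs(theta, sp.pi/2))                              # -2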
WannabeNewton
Also, you may already know this, but keep in mind that a frame is not a coordinate system and that a coordinate system is not a frame. What you are describing are Riemann normal coordinates, which are unfortunately called "locally inertial frames" in some texts; other texts more appropriately call them "locally inertial coordinates". What one does is use a frame to define a coordinate system, which is how Riemann normal coordinates are constructed (on the physics side anyways).
Last edited:
I can't understand your reasoning, 'stevendaryl' and 'phyzguy'!
There are other examples where the first and second derivatives are both equal to zero at a point.
For example, y=(x-1)^3 + 1 at x=1 is a simple example!
The fact that some functions like y=x^2 have this property doesn't mean every function does!
This is not a proof!
I think your proof is based on properties of the metric tensor!
I don't know those, but that is what I think.
wannabenewton
I don't know exactly what the difference between a "coordinate system" and a "frame" is!
Is it possible to explain it to me?
Thanks.
WannabeNewton
Sure, I'll explain to you the difference between a frame and a coordinate system after resolving the main issue. First things first, have you seen how Riemann normal coordinates (aka "locally inertial coordinates") are actually constructed?
What are Riemann normal coordinates?
phyzguy
I can't understand your reasoning, 'stevendaryl' and 'phyzguy'!
There are other examples where the first and second derivatives are both equal to zero at a point.
For example, y=(x-1)^3 + 1 at x=1 is a simple example!
The fact that some functions like y=x^2 have this property doesn't mean every function does!
This is not a proof!
I think your proof is based on properties of the metric tensor!
I don't know those, but that is what I think.
What exactly is your question? I thought from your first post that the question was, "In a local inertial frame with a Minkowski metric and Christoffel symbols equal to zero at a point, why isn't the curvature tensor zero?" So I provided an example function (y = x^2) whose value and first derivative are zero at a point, but whose second derivative at that same point is non-zero. I thought this might help you understand, but apparently it didn't.
WannabeNewton
What are Riemann normal coordinates?
Looks like there's a bit of background you have to cover. "Riemann normal coordinates" is the mathematical name for the "locally inertial coordinates" you mentioned in your original post. Can you tell me what textbook you're using for GR? And do you know how "locally inertial coordinates" are actually constructed mathematically? Knowing this will clear up your confusions.
stevendaryl
Staff Emeritus
I can't understand your reasoning, 'stevendaryl' and 'phyzguy'!
There are other examples where the first and second derivatives are both equal to zero at a point.
For example, y=(x-1)^3 + 1 at x=1 is a simple example!
If both the first and the second derivatives of the metric tensor are zero, then the curvature tensor is zero. At least at that single point. A curved space can have a curvature tensor that has different values at different points.
I hadn't heard of that.
Yes, I know how to construct it.
My question is how to prove that the second derivative of the metric isn't zero while the first is zero.
How can we prove it?
Yes, there are examples where the first derivative is zero but the second isn't, but we are speaking about the metric function!
stevendaryl
Staff Emeritus
What are Riemann normal coordinates?
Let me illustrate for the simplest case, of 2D space (no time dimension). Suppose you have some local coordinate system $x, y$, and you have two points
$P_1$ with coordinates $(x,y)$
$P_2$ with coordinates $(x + \delta x, y + \delta y)$
The distance between these points is given approximately (for small $\delta x$ and $\delta y$) by:
$\delta s^2 = g_{xx} \delta x^2 + 2 g_{xy} \delta x \delta y + g_{yy} \delta y^2$
where $g_{xx}, g_{xy}, g_{yy}$ are three functions of position.
The coordinates are Riemann normal coordinates for the point $P_1$ provided that:
$g_{xx} = g_{yy} = 1$ at $P_1$
$g_{xy} = 0$ at $P_1$
$\frac{\partial}{\partial x} g_{xx} = \frac{\partial}{\partial y} g_{xx} = 0$ at $P_1$
$\frac{\partial}{\partial x} g_{xy} = \frac{\partial}{\partial y} g_{xy} = 0$ at $P_1$
$\frac{\partial}{\partial x} g_{yy} = \frac{\partial}{\partial y} g_{yy} = 0$ at $P_1$
Riemann normal coordinates are the closest you can get to Cartesian coordinates.
stevendaryl
Staff Emeritus
I hadn't heard of that.
Yes, I know how to construct it.
My question is how to prove that the second derivative of the metric isn't zero while the first is zero.
How can we prove it?
Yes, there are examples where the first derivative is zero but the second isn't, but we are speaking about the metric function!
Take the second derivative, and see if it's nonzero. That's how you prove that the second derivative is nonzero. I'm not sure I understand what you are asking for. Sometimes the first derivative is zero, but not the second. Sometimes both the first and second derivatives are zero.
The point about the curvature tensor is that if it is nonzero in one coordinate system, then it will be nonzero in every coordinate system. If it is zero in one coordinate system, then it will be zero in every coordinate system. So the curvature tensor is independent of your coordinate system in a way that the Christoffel coefficients $\Gamma^i_{jk}$ are not.
WannabeNewton
Steven has already explained it brilliantly. I really have nothing else to add with regards to that issue.
As far as coordinates and frames go: given an ##n##-manifold ##M##, a frame at an event ##p \in M## is just an orthonormal basis ##\{e_{\mu}\}## for ##T_p M##; a coordinate system is a pair ##(U,x^{\mu})## where ##U \subseteq M## is open and ##x^{\mu}: M \rightarrow \mathbb{R}## are a set of ##n## coordinate functions. In the context of GR, a frame corresponds to the instantaneous rest frame of an ideal observer with ##(e_0) = u##, where ##u## is the 4-velocity of said observer, and with ##(e_i), i = 1,2,3## representing the spatial axes of the instantaneous rest frame. These instantaneous rest frames are also called local inertial frames or local Lorentz frames. One can then use these frames to construct coordinates; a local inertial frame at an event ##p## can be used to construct a set of locally inertial coordinates ##x^{\mu}## in a neighborhood of ##p## using the exponential map (see p.112 of Carroll) in which ##g_{\mu\nu}(p) = \eta_{\mu\nu}## and separately ##\Gamma^{\gamma}_{\mu\nu}(p) = 0##.
Last edited:
hi
In a local inertial frame with $g_{ij}=\eta_{ij}$ and $\Gamma^i_{jk}=0$.
why is the curvature tensor not zero in such a frame?
The curvature tensor is built from the metric and its first and second derivatives.
Maybe the OP is asking about the properties of the metric tensor (a symmetric bilinear form) that lead to having putatively vanishing first derivatives in some coordinates and coordinate-independent non-vanishing second derivatives for general manifolds (vanishing too for the special flat case) at any point, not just for special cases like the critical-point examples shown.
I could be wrong, but I believe that being both a symmetric and a nondegenerate (having nonzero determinant for its associated matrix, in matrix language) bilinear form is important, since in general the second partial derivatives depend only on the differential structure and are only defined at critical points in the absence of a Riemannian connection induced by a metric tensor. But we want curvature (the Hessian, defined as ∇df with ∇ being the Levi-Civita connection) to be defined for any point in the manifold.
I couldn't understand you!
Can you explain it to me in more detail, please?
Thanks
stevendaryl
Staff Emeritus
I couldn't understand you!
Can you explain it to me in more detail, please?
Thanks
I think you need to formulate more specific questions, because it's not clear what it is that you don't understand.
Do you understand that the components of the metric tensor $g_{\mu \nu}$ change when you change to a different coordinate system?
Do you understand that the connection coefficients $\Gamma^\mu_{\nu \lambda}$ (which are constructed from derivatives of the metric tensor components) change when you change to a different coordinate system? For one coordinate system, it might be zero at a point, but for another coordinate system, it might be nonzero at that point.
Do you understand that the Riemann tensor, constructed from the first and second derivatives of the metric tensor components, has the property that if it is nonzero in one coordinate system, then it is nonzero in every coordinate system?
|
2021-06-12 21:28:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89192795753479, "perplexity": 559.0784636591715}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586390.4/warc/CC-MAIN-20210612193058-20210612223058-00574.warc.gz"}
|
https://questions.examside.com/past-years/jee/question/for-a-plane-electromagnetic-wave-the-magnetic-field-at-a-po-jee-main-physics-magnetics-4shwtkwa328ogjy4
|
1
JEE Main 2020 (Online) 6th September Evening Slot
+4
-1
For a plane electromagnetic wave, the magnetic field at a point x and time t is
$$\overrightarrow B \left( {x,t} \right)$$ = $$\left[ {1.2 \times {{10}^{ - 7}}\sin \left( {0.5 \times {{10}^3}x + 1.5 \times {{10}^{11}}t} \right)\widehat k} \right]$$ T
The instantaneous electric field $$\overrightarrow E$$ corresponding to $$\overrightarrow B$$ is :
(speed of light $$c = 3 \times {10^8}\ \mathrm{m\,s^{-1}}$$)
A
$$\overrightarrow E \left( {x,t} \right) = \left[ {36\sin \left( {1 \times {{10}^3}x + 1.5 \times {{10}^{11}}t} \right)\widehat i} \right]$$ $${V \over m}$$
B
$$\overrightarrow E \left( {x,t} \right) = \left[ {36\sin \left( {0.5 \times {{10}^3}x + 1.5 \times {{10}^{11}}t} \right)\widehat k} \right]{V \over m}$$
C
$$\overrightarrow E \left( {x,t} \right) = \left[ {36\sin \left( {1 \times {{10}^3}x + 0.5 \times {{10}^{11}}t} \right)\widehat j} \right]{V \over m}$$
D
$$\overrightarrow E \left( {x,t} \right) = \left[ { - 36\sin \left( {0.5 \times {{10}^3}x + 1.5 \times {{10}^{11}}t} \right)\widehat j} \right]{V \over m}$$
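As a quick magnitude check (not an official answer key; just the standard plane-wave relations $$E_0 = cB_0$$ and $$\vec E \times \vec B$$ pointing along the propagation direction):

c_light = 3e8        # m/s
B0 = 1.2e-7          # T, amplitude of B(x, t) above

E0 = c_light * B0
print(E0)            # 36.0 V/m

# The phase (0.5e3*x + 1.5e11*t) describes a wave travelling toward -x,
# so E must be chosen so that E x B points along -x; with B along +z,
# that puts E along the -y direction.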
2
JEE Main 2020 (Online) 6th September Evening Slot
+4
-1
A charged particle going around in a circle can be considered to be a current loop. A particle of mass m carrying charge q is moving in a plane with speed v under the influence of magnetic field $$\overrightarrow B$$. The magnetic moment of this moving particle:
A
$${{m{v^2}\overrightarrow B } \over {2{B^2}}}$$
B
-$${{m{v^2}\overrightarrow B } \over {2{B^2}}}$$
C
-$${{m{v^2}\overrightarrow B } \over {{B^2}}}$$
D
-$${{m{v^2}\overrightarrow B } \over {2\pi {B^2}}}$$
3
JEE Main 2020 (Online) 6th September Evening Slot
+4
-1
A square loop of side 2$$a$$ and carrying current I is kept in xz plane with its centre at origin. A long wire carrying the same current I is placed parallel to z-axis and passing through point (0, b, 0), (b >> a). The magnitude of torque on the loop about z-axis will be :
A
$${{2{\mu _0}{I^2}{a^2}} \over {\pi b}}$$
B
$${{2{\mu _0}{I^2}{a^2}b} \over {\pi \left( {{a^2} + {b^2}} \right)}}$$
C
$${{{\mu _0}{I^2}{a^2}b} \over {2\pi \left( {{a^2} + {b^2}} \right)}}$$
D
$${{{\mu _0}{I^2}{a^2}} \over {2\pi b}}$$
4
JEE Main 2020 (Online) 6th September Morning Slot
+4
-1
An electron is moving along the +x direction with a velocity of $$6 \times {10^6}\ \mathrm{m\,s^{-1}}$$. It enters a region of uniform electric field of 300 V/cm pointing along the +y direction. The magnitude and direction of the magnetic field set up in this region such that the electron keeps moving along the x direction will be :
A
$$3 \times {10^{-4}}$$ T, along –z direction
B
$$5 \times {10^{-3}}$$ T, along –z direction
C
$$5 \times {10^{-3}}$$ T, along +z direction
D
$$3 \times {10^{-4}}$$ T, along +z direction
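For a quick check of the balance condition (again just the standard force balance $$qE = qvB$$, not an official answer key):

v = 6e6              # m/s, electron speed along +x
E = 300 * 100        # 300 V/cm = 3e4 V/m, along +y

B = E / v
print(B)             # 0.005 T, i.e. 5 x 10^-3 T

# The electric force on the (negative) electron points along -y, so the
# magnetic force must point along +y; with v along +x this requires B
# along the +z direction.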
JEE Main Subjects
Physics
Mechanics
Electricity
Optics
Modern Physics
Chemistry
Physical Chemistry
Inorganic Chemistry
Organic Chemistry
Mathematics
Algebra
Trigonometry
Coordinate Geometry
Calculus
EXAM MAP
Joint Entrance Examination
|
2023-03-31 10:15:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6225343346595764, "perplexity": 1483.0979646312012}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00449.warc.gz"}
|
https://stackabuse.com/quicksort-in-java/
|
Quicksort in Java
# Quicksort in Java
### Introduction
Sorting is one of the fundamental techniques used in solving problems, especially in those related to writing and implementing efficient algorithms.
Usually, sorting is paired with searching - meaning we first sort elements in the given collection, then search for something within it. It is generally easier to search for something in a sorted collection than in an unsorted one, as we can make educated guesses and impose assumptions on the data.
There are many algorithms that can efficiently sort elements, but in this guide we'll be taking a look at the theory behind as well as how to implement Quicksort in Java.
Fun fact: Since JDK7, the algorithm used for off-the-shelf sorting in the JVM for Arrays is a dual-pivot Quicksort!
### Quicksort in Java
Quicksort is a sorting algorithm belonging to the divide-and-conquer group of algorithms, and it's an in-place (no need for auxiliary data structures), non-stable (doesn't guarantee the relative order of same-value elements after sorting) sorting algorithm.
The divide-and-conquer algorithms recursively break down a problem into two or more sub-problems of the same type, making them simpler to solve. The breakdown continues until a problem is simple enough to be solved on its own (we call this the base case).
This algorithm has been shown to give the best results when working with large arrays, and on the other hand when working with smaller arrays an algorithm like Selection Sort might prove more efficient.
Quicksort modifies the base idea of Selection Sort: instead of a minimum (or a maximum), at every step an element is placed at the spot it belongs to in the sorted array.
This element is called the pivot. However, if we want to use the divide-and-conquer approach and reduce the problem of sorting the array to two smaller sub-arrays, we need to abide by the following: while we're placing our pivot at its spot in the array, we need to group the rest of the elements into two smaller groups - the ones left of the pivot are lesser or equal to it, and the ones on the right are bigger than the pivot.
This is actually the key step of the algorithm - called partitioning, and implementing it efficiently is a must if we want our Quicksort to be efficient as well.
Before discussing how Quicksort works, we should address how we choose which element is the pivot. The perfect scenario is that we always choose the element that splits the array in exact halves. However, since this is almost impossible to achieve, we can approach this problem in a few different ways.
For example, the pivot can be the first or the last element in the array (or a sub-array) we're currently processing. We can pick a median element as the pivot, or even choose a random element to play the role.
We have a variety of ways of accomplishing this task, and the approach we'll be taking in this article is to always choose the first (that is, left-most element of the array) as the pivot. Now let's jump into an example and explain how it all works.
### Visualization of Quicksort
Suppose we have the following array: [4, 8, 1, 10, 13, 5, 2, 7]
In this example, the pivot in the first iteration will be 4, since the decision is to pick the first element of the array as the pivot. Now comes the partitioning - we need to place 4 at the position where it will be found in the sorted array.
The index of that position will be 2, so after the first partitioning the pivot 4 sits at index 2, with the elements 1 and 2 somewhere to its left and the remaining elements to its right.
Note: It's noticeable that the elements located left and right from the pivot aren't sorted as they should be.
This is to be expected - whenever we partition an array that isn't the base case (that is of size 1), the elements are grouped in a random order.
The important thing is what we discussed earlier: the elements left of the pivot are lesser or equal, and the elements on the right are bigger than the pivot. That isn't to say that they can't be sorted in the first grouping - while unlikely it can still happen.
We continue on, and see that here divide-and-conquer kicks in - we can break down our original problem into two smaller ones:
For the problem on the left we have an array of size 2, and the pivot element will be 2. After positioning the pivot at its place (at the position 1), we get an array [1, 2] after which we have no more cases for the left side of the problem, since the only two subcases of [1, 2] are [1] and [2] which are both base cases. With this we finish with the left side of subcases and consider that part of the array sorted.
Now for the right side - the pivot is 13. Since it's the largest of all of the numbers in the array we're processing, it moves straight to the last position.
Unlike earlier, when the pivot broke down our array into two subcases, there's only one case here - [8, 10, 7, 5]. The pivot now is 8 and we need to bring it to the position 5 in the array.
The pivot now splits the array into two subcases: [7, 5] and [10]. Since [10] is of size 1, that is our base case and we don't consider it at all.
The only subarray left is the array of [7, 5]. Here, 7 is the pivot, and after bringing it to its position (index 4), to the left of it at the position 3 is only 5. We have no more subcases and this is where the algorithm ends.
After running Quicksort, we have the following sorted array: [1, 2, 4, 5, 7, 8, 10, 13]
This approach also accounts for duplicates in the array, since all of the elements left of the pivot are lesser or equal than the pivot itself.
### Implementing Quicksort in Java
With a good intuition of how Quicksort works - we can follow through with an implementation. First of all, we'll go through the main part of the program that will be running Quicksort itself.
Since Quicksort is a divide-and-conquer algorithm, it's naturally implemented recursively, although you could do it iteratively as well (any recursive function can also be implemented iteratively) - though, the implementation is not as clean:
static void quicksort(int[] arr, int low, int high){
    if(low < high){
        int p = partition(arr, low, high);
        quicksort(arr, low, p - 1);
        quicksort(arr, p + 1, high);
    }
}
Note: low and high represent the left and right margins of the array that's currently being processed.
The partition(arr, low, high) method partitions the array, and upon its execution the variable p stores the position of the pivot after the partitioning.
This method is only invoked when we're processing arrays that have more than one element, hence the partitioning only takes place if low < high.
Since Quicksort works in-place, the starting multiset of elements found within the array stays unchanged, but we've accomplished exactly what we aimed to do - group up smaller or equal elements to the left of the pivot, and bigger ones to the right of the pivot.
Afterwards, we call the quicksort method recursively twice: for the part of the array from low to p-1 and for the part from p+1 to high.
Before we discuss the partition() method, for the sake of readability we'll implement a simple swap() function that swaps two elements in the same array:
static void swap(int[] arr, int low, int pivot){
    int tmp = arr[low];
    arr[low] = arr[pivot];
    arr[pivot] = tmp;
}
Now, let's dive into the code for the partition() method and see how it does what we've explained above:
static int partition(int[] arr, int low, int high){
    int p = low, j;
    for(j = low + 1; j <= high; j++)
        if(arr[j] < arr[low])
            swap(arr, ++p, j);
    swap(arr, low, p);
    return p;
}
When the for loop is done executing, j has the value high+1, meaning the elements in arr[p+1, high] are greater than or equal to the pivot. Because of this, we do one more swap of the elements at positions low and p, bringing the pivot to its correct position in the array (that is, position p).
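To make the mechanics concrete, here is a hand trace of partition() on the example array: starting from [4, 8, 1, 10, 13, 5, 2, 7] with low = 0 and high = 7, the pivot is arr[0] = 4; j = 2 finds 1 < 4 and swaps it to position 1, j = 6 finds 2 < 4 and swaps it to position 2, so p ends at 2; the final swap of positions 0 and 2 yields [2, 1, 4, 10, 13, 5, 8, 7] and the method returns 2, matching the pivot position from the walkthrough earlier.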
The last thing we need to do is run our quicksort() method and sort an array. We'll be using the same array as we did in the example before, and calling quicksort(arr, low, high) will sort the arr[low, high] part of the array:
public static void main(String[] args) {
    int[] arr = {4, 8, 1, 10, 13, 5, 2, 7};
    // Sorting the whole array
    quicksort(arr, 0, arr.length - 1);
    // Print the sorted array (needs: import java.util.Arrays;)
    System.out.println(Arrays.toString(arr));
}
This results in:
[1, 2, 4, 5, 7, 8, 10, 13]
### Complexity of Quicksort
Quicksort, like other algorithms that apply the divide-and-conquer tactic, has an average time complexity of O(n log n). However, unlike Merge Sort, whose worst-case time complexity is also O(n log n), Quicksort's worst case is theoretically O(n^2).
The complexity depends on how efficiently we can choose a pivot. Finding a truly good pivot (such as the median) can sometimes be as expensive as sorting the array itself, and since we expect pivot selection to be O(1), we usually can't guarantee that we'll pick the best possible pivot at every step.
Even though the worst case of Quicksort can be O(n^2), most pivot-choosing strategies are implemented so that they don't degrade the complexity much in practice, which is why the average complexity of Quicksort is O(n log n). It's widely implemented and used, and the name itself is an homage to its performance capabilities.
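As an illustration of one such strategy (this sketch is an addition, not code from the article), median-of-three pivot selection samples the first, middle, and last elements and moves their median to index low, where the partition() method above expects its pivot; a caller could invoke it just before partition() inside quicksort():

```java
static int medianOfThreePivot(int[] arr, int low, int high) {
    int mid = low + (high - low) / 2;
    // Order the three sampled values so the median ends up at mid
    if (arr[mid] < arr[low]) swap(arr, low, mid);
    if (arr[high] < arr[low]) swap(arr, low, high);
    if (arr[high] < arr[mid]) swap(arr, mid, high);
    // Place the median at the front, where partition() reads its pivot
    swap(arr, low, mid);
    return low;
}
```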
On the other hand, where Quicksort hands-down beats Merge Sort is space complexity - Merge Sort requires O(n) extra space because it uses a separate array for merging, while Quicksort sorts in-place, needing only the recursion stack (O(log n) on average).
### Conclusion
In this article we've covered how the Quicksort algorithm works, how it's implemented, and discussed its complexity. Even though the choice of the pivot can "make or break" this algorithm, it's usually considered one of the most efficient sorting algorithms and is widely used whenever we need to sort arrays with a huge number of elements.
Last Updated: January 12th, 2022
|
2022-08-12 11:40:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4221770167350769, "perplexity": 816.1902404769007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571692.3/warc/CC-MAIN-20220812105810-20220812135810-00085.warc.gz"}
|
https://unacademy.com/lesson/problems-on-coded-clock-part-1-in-hindi/0YXSX9H2
|
Problems on Coded Clock Part - 1 (in Hindi)
The coded clock question was asked in SBI PO mains 2017, and you can expect this question in IBPS PO mains and IBPS clerk mains as well.
Ankush Lamba
Banking Chronicle YouTube and Telegram channel; cleared several banking exams like RRB PO/Clerk, IDBI PO, BOB PO, Indian Bank PO. Got full ma
Sir, your course is a true help. We expect to get videos for IBPS mains as well if possible, sir. Thank you so much.
Kiran: I am going to create my first four new courses on Unacademy free classes: (1) 100 puzzles and sitting arrangements for IBPS PO pre, (2) 100 puzzles and sitting arrangements for IBPS PO mains, (3) 100 DI for IBPS PO pre, (4) 100 DI for IBPS PO mains. If you want to practise with those videos, then follow me on Unacademy: https://unacademy.app.link/x99hQUxyLZ
Ks: Sir, I didn't understand how the time is subtracted.
The best teacher available on YouTube for bank PO; just because of him I'm able to qualify for IBPS PO mains. Thank you, sir ji.
Maumita Kar: @Rishabh Vishal sir, how many questions did you attempt from the reasoning section in IBPS PO mains? How many marks did you get, and did you attempt the new type of puzzle? Please reply to me...
KM: Sir, please also explain the questions in Hindi.
Thank you sir, I understood.
Ankush Lamba, B.Tech. CSE - YouTube channel: Banking Chronicle

Symbols represent time on a clock as follows:
# - either the hour or minute hand of the clock on 3
$ - either the hour or minute hand of the clock on 12
% - either the hour or minute hand of the clock on 4
@ - either the hour or minute hand of the clock on 8
+ - either the hour or minute hand of the clock on 5

1) A train reaches the station at time "$+". If it gets late by 8 hours 15 minutes, then what is the time at which it reaches the station?
A) +#  B) @+  C) @#  D) @@  E) +$
Solution: $+ = 12:25; 12:25 + 8:15 = 8:40 = @@, so the answer is D.

2) A person has to catch a train that is scheduled to depart at "@%". It takes the person 4 hours 15 minutes to reach the railway station from his home. At what time should he leave home to arrive at the station at least 25 minutes before the departure of the train?
A) %@  B) #@  C) %+  D) +@  E) None of these
Solution: @% = 8:20; 8:20 - 4:15 - 0:25 = 3:40 = #@, so the answer is B.

3) A train is scheduled to leave the station at "+@". A person has reached the station 20 minutes before the train's scheduled time. At what time did the person reach the station?
A) #@  B) +#  C) %+  D) +%  E) None of these
Solution: +@ = 5:40; 5:40 - 0:20 = 5:20 = +%, so the answer is D.

THANK YOU
|
2019-10-21 21:21:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21609307825565338, "perplexity": 6011.679111338549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987787444.85/warc/CC-MAIN-20191021194506-20191021222006-00374.warc.gz"}
|
https://www.link-to.eu/2020/02/shear-and-moment-diagrams.html
|
# Shear And Moment Diagrams
Shear And Moment Diagrams. First draw the free-body diagram of the beam with sufficient room under it for the shear and moment diagrams (if needed, solve for the support reactions).
The calculator is fully customisable to suit most beams, which is a feature unavailable on most other calculators. These diagrams can be used to analyse the failure of a structure given inputs such as loads, structural material, and shape. Since the same x is used for all three sections, each section's equation can be easily plotted, as shown at the left.
### Welcome to our free online bending moment and shear force diagram calculator which can generate the Reactions, Shear Force Diagrams (SFD) and Bending Moment Diagrams.
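As a rough illustration of what such a calculator computes (my own sketch, not code from this site; the class name and all numbers are assumed example values), the reactions, shear force V(x) and bending moment M(x) for a simply supported beam under a single point load can be tabulated like this:

```java
// Simply supported beam of span L with a point load P at distance a
// from the left support; V(x) and M(x) from static equilibrium.
public class BeamDiagrams {
    public static void main(String[] args) {
        double L = 4.0;   // span (m), assumed example value
        double P = 10.0;  // point load (kN), assumed example value
        double a = 1.5;   // load position from the left support (m)
        // Reactions from force and moment equilibrium
        double rLeft = P * (L - a) / L;
        double rRight = P * a / L;
        System.out.printf("Reactions: R_left = %.2f kN, R_right = %.2f kN%n", rLeft, rRight);
        for (double x = 0.0; x <= L; x += 0.5) {
            double v = (x < a) ? rLeft : rLeft - P;                    // shear force
            double m = (x < a) ? rLeft * x : rLeft * x - P * (x - a);  // bending moment
            System.out.printf("x = %.1f m   V = %6.2f kN   M = %6.2f kN.m%n", x, v, m);
        }
    }
}
```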
|
2021-08-05 01:57:31
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9140884876251221, "perplexity": 2060.001563950518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155268.80/warc/CC-MAIN-20210805000836-20210805030836-00400.warc.gz"}
|
https://www.physicsforums.com/threads/perceiving-superpositions.853447/
|
# B Perceiving Superpositions
1. Jan 21, 2016
### cube137
Zurek mentions in http://arxiv.org/pdf/quant-ph/9805065v1.pdf :
"It is amusing to speculate that a truly quantum observer (i.e., an observer processing quantum information in a quantum
computer-like fashion) might be able to perceive superpositions of branches which are inaccessible to us, beings limited in our information processing strategies to the record states “censored” by einselection."
Isn't it that the density matrix makes it impossible to have superpositions? This raises the question: is the density matrix created by humans just to make classical output? In order to turn an improper mixture into a proper one? This would mean that somewhere out there the other improper-mixture branches, or even the pure-state combination of system plus environment, are still in superposition. Hence a quantum observer could still theoretically perceive the superpositions?
2. Jan 21, 2016
### Strilanc
I don't think a "truly quantum" agent's experience would differ much from someone with access to a quantum computer. Their decisions and reasoning and measurements would still be driven by the Born Rule, so really all they gain is the ability to perform quantum information processing on inputs. That does make quantum state tomography a bit easier, but it's a bit "flowery" to think of it as allowing them to "directly perceive superposition". Any "perceiving" is going to require making partial copies, and that's tantamount to measurement.
3. Jan 21, 2016
### cube137
So does the superposition of dead cat + alive cat, or half dead cat + 1.5 alive cat, still exist? Can we say we filter them using the density matrix? Is the density matrix just a tool that coincides with our determining that the cat is either alive or dead, but that doesn't mean the superpositions were destroyed?
4. Jan 21, 2016
### Strilanc
Density matrices are just what you get when you marginalize over superpositions. It's interesting that you end up with $n^2$ parameters instead of $n$, but ultimately it's just a consequence of not being able to condition on the whole state. A "truly quantum" observer would have the same problem, unless they went around collecting and un-mixing quite a lot of waste heat.
5. Jan 21, 2016
### Staff: Mentor
Of course it can't. A cat breathes air and interacts with its environment in other ways. It's entangled with it, so it can't be in a superposition of alive and dead - it's impossible - utterly impossible. I will repeat it again - it's simply not possible - even theoretically.
Here is the math in a simplified form - see post 22:
Just to reiterate - because it's entangled, it is not in a pure state, hence not in a superposition, which only applies to pure states. The analysis above shows it's in a mixed state.
Thanks
Bill
6. Jan 22, 2016
### atyy
A density matrix is a vector in a vector space, so a mixed state can still be a superposition of vectors.
7. Jan 22, 2016
### cube137
But isn't it the argument of decoherence that the system is entangled with the environment and the whole thing is in a pure state? When you measure a subsystem, you see it in a mixed state, but the entire thing (system + environment) is in a superposition and a pure state.
Or let's take the case of two electrons that are entangled: they are in a superposition and in a pure state. You seem to be saying that when two things are entangled, they are not in a superposition.
8. Jan 22, 2016
### A. Neumaier
This is mathematically correct but conceptually very misleading. Nobody ever in quantum mechanics talks seriously about superpositions of density matrices.
Superposition in quantum mechanics always refers to superposition of state vectors representing pure states in a distinguished basis, and the result is another pure state. One never talks about mixed states in terms of superposition.
9. Jan 22, 2016
### Staff: Mentor
The truth is what Professor Neumaier said in another thread:
There is no way to remove the decoherence for a macroscopic object. You can do it (approximately) only for very tiny objects such as electrons or buckyballs - and the cost for doing it grows drastically with the size of the object.
Even an electron has issues - it interacts with the quantum vacuum. Modelling a system as pure is done not because it's actually like that - it's done to have a tractable model. But it's irrelevant - virtually everything we see around us is entangled; very, very rarely do you observe even an approximately pure state, and a cat certainly is not one.
That's exactly what I am saying and what my analysis showed. More complex models than the simple one I used are closer to what's actually happening eg the environment is modelled as harmonic oscillators.
Thanks
Bill
Last edited: Jan 22, 2016
10. Jan 22, 2016
### cube137
So every time truly random quantum fluctuations become entangled with a system, it is no longer called a pure state? But can't you treat the random quantum fluctuations as part of the collapse (demanding the Born rule)?
Superpositions can't be perceived in any one of the outcomes. But if you can multiplex all the outcomes, isn't that like perceiving superpositions? This is what Zurek was talking about.
11. Jan 22, 2016
### Staff: Mentor
I think you need to elaborate what you mean by that. As written it makes no sense.
Thanks
Bill
12. Jan 22, 2016
### cube137
Or, for example, the question of why you can't model an electron interacting with the quantum vacuum as a pure state?
13. Jan 22, 2016
### Staff: Mentor
Electrons interacting with the vacuum are not pure.
Thanks
Bill
14. Jan 22, 2016
### cube137
Why is it not pure? Can't you treat the electron as the "system" and the vacuum as the "environment"? I'm elaborating, not evading anything. In decoherence, the system and environment together are pure. Measuring the subsystem would make it a mixed state. So why can't you consider the quantum vacuum the electron is interacting with as the "environment"?
15. Jan 22, 2016
### Staff: Mentor
Thanks
Bill
16. Jan 22, 2016
### cube137
backtracking.. it's this conversation:
I said: So every time there are truly random quantum fluctuations being entangled with any system..
You said: I think you need to elaborate what you mean by that. As written it makes no sense.
Well, an electron interacts with the vacuum in terms of polarizations and such (virtual particles, or the lattice equivalent that doesn't use the picture of virtual particles). I mentioned this because you said somewhere (I read all the threads about decoherence the whole day) that vacuum fluctuations are truly random. Remember you were debating with Ruth Kastner - you said vacuum fluctuations are really random. This is why I'm asking now whether the reason electrons interacting with the quantum vacuum can't be considered a pure state is the truly random vacuum fluctuations you emphasized to Ruth in the old thread.
17. Jan 22, 2016
### cube137
I just searched for "pure state vs mixed state" in the archive. So a pure state involves phase interference, and the reason the electron interacting with the quantum vacuum can't be in a pure state is that the phases of the electron and the vacuum fluctuations don't interfere?
But then superposition is related to pure states.
They say the system and environment are in a superposition, so I assume they are in a pure state. This is confusing. Again, I'm not evading your question - see my previous message. Thanks.
18. Jan 22, 2016
### Staff: Mentor
That doesn't really make much sense either, but I think I get your drift. Interaction with the vacuum means the electron is entangled with it - that's how an electron will spontaneously emit a photon and drop to a lower energy state. If it were in a pure state, that could not happen.
The simplified model, in terms of the link I gave, is c1*|a1>|b1> + c2*|a2>|b2>, where |a1> is the electron in a high energy state, |b1> no photon, |a2> the lower energy state, |b2> a photon. Note - this is a simplification; the overall system of electron and photon is not really in a pure state. That means if you observe the electron or the photon alone, it's not in a pure state - it's in a mixed state.
As I said, everything is pretty much entangled, although to a good approximation some things can be taken as pure even though they really aren't.
Thanks
Bill
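(Worked example added here for illustration, using the notation of post 18: for the entangled pure state $|\psi\rangle = c_1|a_1\rangle|b_1\rangle + c_2|a_2\rangle|b_2\rangle$ with orthonormal photon states $\langle b_i|b_j\rangle = \delta_{ij}$, tracing out the photon gives the electron's reduced density matrix

$$\rho_{el} = \mathrm{Tr}_B\,|\psi\rangle\langle\psi| = |c_1|^2\,|a_1\rangle\langle a_1| + |c_2|^2\,|a_2\rangle\langle a_2|$$

which is diagonal, with no interference terms - a mixed state, exactly as the analysis above states.)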
19. Jan 22, 2016
### Staff: Mentor
It's not really a good idea to discuss superpositions unless you know the difference between a pure and a mixed state:
Superposition means that any two pure states can be summed to form another pure state. Technically that applies to mixed states as well, but as explained in post 8, that's not what's usually meant by superposition.
Thanks
Bill
Last edited: Jan 22, 2016
20. Jan 22, 2016
### cube137
In the thread https://www.physicsforums.com/threa...-states-in-laymens-terms.734987/#post-4642601 atyy mentioned this:
"In decoherence, the system consisting of environment + experiment is in a pure state and does not collapse. Here the experiment is a subsystem. Because we can only examine the experiment and not the whole system, the experiment through getting entangled with the environment will evolve from a pure state into an improper mixed state. Since the improper mixed state looks like a proper mixed state that results from collapse as long as we don't look at the whole system, decoherence is said to be apparent collapse."
You said the system (experiment) + environment can't be in a pure state. But atyy mentioned it could. I actually learnt it from him when I read it yesterday. So was atyy wrong? (I hope atyy can defend it.)
|
2018-05-21 07:54:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5884788036346436, "perplexity": 1073.7364387170735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863967.46/warc/CC-MAIN-20180521063331-20180521083331-00566.warc.gz"}
|
http://openstudy.com/updates/55ce2826e4b0c5fe9805efff
|
## anonymous one year ago What is the solution of $\log_{2x+3}125=3$?
1. Nnesha
$\huge\rm log_{2x+3}125=3$ like this ?
2. anonymous
yes
3. anonymous
$\log_{2x+3} 125 = 3 \Rightarrow (2x + 3)^3 = 125$, since $\log_A B = x$ means $A^x = B$. Taking cube roots on both sides: $2x + 3 = 5$, so $2x = 2$ and $x = 1$.
4. Nnesha
alright, so you have to convert the log to exponential form - here is an example (drawn in the original post)
5. Nnesha
2x+3 is base so convert log to exponential form
|
2016-10-21 18:41:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6395090818405151, "perplexity": 3643.4247844028537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718296.19/warc/CC-MAIN-20161020183838-00080-ip-10-171-6-4.ec2.internal.warc.gz"}
|
http://www.ams.org/mathscinet-getitem?mr=1410163
|
MathSciNet bibliographic data MR1410163 11H60 (11H06 52C07) Banaszczyk, W. Inequalities for convex bodies and polar reciprocal lattices in $\mathbf{R}^n$. II. Application of $K$-convexity. Discrete Comput. Geom. 16 (1996), no. 3, 305–311. Article
For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
|
2016-07-24 05:14:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9975180625915527, "perplexity": 8243.000276183306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823947.97/warc/CC-MAIN-20160723071023-00086-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://www.doitpoms.ac.uk/tlplib/metal-forming-1/printall.php
|
Dissemination of IT for the Promotion of Materials Science (DoITPoMS)
# Stress Analysis and Mohr's Circle (all content)
Note: DoITPoMS Teaching and Learning Packages are intended to be used interactively at a computer! This print-friendly version of the TLP is provided for convenience, but does not display all the content of the TLP. For example, any video clips and answers to questions are missing. The formatting (page breaks, etc) of the printed version is unpredictable and highly dependent on your browser.
## Aims
On completion of this TLP you should:
• Recognise the stress and strain tensors and the components into which they can be separated.
• Know how to diagonalise a stress tensor for plane stress, and recognise what a principal stress tensor is and why principal stress tensors are useful.
• Understand what a yield criterion is and how it can be used.
• Have an appreciation of different yield criteria and the materials for which they are appropriate.
## Before you start
• You should understand the concept of slip and the different ways in which materials (and in particular metals) undergo slip. The Slip in Single Crystals teaching and learning package covers the fundamentals.
• In polycrystalline materials, the distribution of grain orientations and the constraint to deformation offered by neighbouring grains gives rise to a simplified overall stress-strain curve in comparison to the curve from a single crystal sample. Crystal structure is also important in polycrystalline samples - the von Mises criterion states that a minimum of five independent slip systems must exist for general yielding.
## Introduction
Metal forming involves a permanent change in the shape of a material as a result of the application of an applied stress. The work done in deforming the sample is not recoverable. This plastic deformation involves a change in shape without a change in volume and without melting.
It is desirable to know the stress level at which plastic deformation begins i.e., the onset of yielding. In uniaxial loading, this is the point where the straight, elastic portion of the line first begins to curve. This point is the yield stress. The animation below shows a typical stress-strain curve for a polycrystalline sample, obtained from uniaxial tensile test.
## Single crystal vs polycrystalline
The theory of slip in single crystals is well established. When an item is made from metal, however, a single crystal is not generally used. A piece of metal used to make a bicycle or a handrail is made of many small crystals or grains. This affects the behaviour of the metal in many ways:
• The grains are not aligned: for example, the [001] axis of one grain might be pointing in a different direction to the [001] axis of its neighbour. This means that different grains slip by different strains when a stress is applied to the whole material, and offer different amounts of resistance to the force. These all contribute to the way that the whole block deforms under stress.
• The grain boundaries formed where the grains meet have distinct properties from the rest of the material. When the two crystals either side of a grain boundary have different orientations, defects such as dislocations cannot pass simply through the boundary. Effects like these also influence the response of a metal to stresses.
For these reasons, it is almost impossible to predict in detail from atomic scale theory how a block of metal will deform plastically when a suitable force is applied to it. We must instead find out what happens from experimental observations and then develop a macroscopic engineering model to describe and predict the behaviour of the polycrystalline sample.
## Representing stress as a tensor
To understand this page, you first need to understand tensors! Good sources are the books by J.F. Nye [1], G.E. Dieter [2], and D.R. Lovett [3] referred to in the section Going Further in this TLP. Many undergraduate university courses in physical science or engineering have a series of lectures on tensors, such as the course at Cambridge University Department of Materials Science and Metallurgy, the handout for which can be found here.
The stress tensor is a field tensor – it depends on factors external to the material. In order for a stress not to move the material, the stress tensor must be symmetric: σij = σji – it has mirror symmetry about the diagonal.
The general form is thus:
$$\left( {\matrix{ {{\sigma _{11}}} & {{\sigma _{12}}} & {{\sigma _{31}}} \cr {{\sigma _{12}}} & {{\sigma _{22}}} & {{\sigma _{23}}} \cr {{\sigma _{31}}} & {{\sigma _{23}}} & {{\sigma _{33}}} \cr } } \right)$$ or, in an alternative notation, $$\left( {\matrix{ {{\sigma _{xx}}} & {{\tau _{xy}}} & {{\tau _{zx}}} \cr {{\tau _{xy}}} & {{\sigma _{yy}}} & {{\tau _{yz}}} \cr {{\tau _{zx}}} & {{\tau _{yz}}} & {{\sigma _{zz}}} \cr } } \right)$$
The general stress tensor has six independent components and could require us to do a lot of calculations. To make things easier it can be rotated into the principal stress tensor by a suitable change of axes.
### Principal stresses
The magnitudes of the components of the stress tensor depend on how we have defined the orthogonal x1, x2 and x3 axes.
For every stress state, we can rotate the axes, so that the only non-zero components of the stress tensor are the ones along the diagonal:
$$\left( {\matrix{ {{\sigma _1}} & 0 & 0 \cr 0 & {{\sigma _2}} & 0 \cr 0 & 0 & {{\sigma _3}} \cr } } \right)$$
that is, there are no shear stress components, only normal stress components.
This principal stress tensor is just one of the many tensors we could use to express the stress state that exists. The elements σ1, σ2, σ3 are the principal stresses, and the positions of the axes are now the principal axes. It may happen that σ1 > σ2 > σ3, but all that matters is that the x1, x2 and x3 axes define the directions of the principal stresses.
The largest principal stress is bigger than any of the components found from any other orientation of the axes. Therefore, if we need to find the largest stress component that the body is under, we simply need to diagonalise the stress tensor.
Remember – we have not changed the stress state, and we have not moved or changed the material – we have simply rotated the axes we are using and are looking at the stress state seen with respect to these new axes.
### Hydrostatic and deviatoric components
The stress tensor can be separated into two components. One component is a hydrostatic or dilatational stress that acts to change the volume of the material only; the other is the deviatoric stress that acts to change the shape only.
$$\left( {\matrix{ {{\sigma _{11}}} & {{\sigma _{12}}} & {{\sigma _{31}}} \cr {{\sigma _{12}}} & {{\sigma _{22}}} & {{\sigma _{23}}} \cr {{\sigma _{31}}} & {{\sigma _{23}}} & {{\sigma _{33}}} \cr } } \right) = \left( {\matrix{ {{\sigma _H}} & 0 & 0 \cr 0 & {{\sigma _H}} & 0 \cr 0 & 0 & {{\sigma _H}} \cr } } \right) + \left( {\matrix{ {{\sigma _{11}} - {\sigma _H}} & {{\sigma _{12}}} & {{\sigma _{31}}} \cr {{\sigma _{12}}} & {{\sigma _{22}} - {\sigma _H}} & {{\sigma _{23}}} \cr {{\sigma _{31}}} & {{\sigma _{23}}} & {{\sigma _{33}} - {\sigma _H}} \cr } } \right)$$
where the hydrostatic stress is given by $${\sigma _H}$$ = $${1 \over 3}$$$$\left( {{\sigma _1} + {\sigma _2} + {\sigma _3}} \right)$$.
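As a quick numerical illustration of this decomposition (an added example; the numbers are arbitrary): for principal stresses $\sigma_1 = 9$, $\sigma_2 = 6$, $\sigma_3 = 3$, the hydrostatic stress is $\sigma_H = 6$, so

$$\left( {\matrix{ 9 & 0 & 0 \cr 0 & 6 & 0 \cr 0 & 0 & 3 \cr } } \right) = \left( {\matrix{ 6 & 0 & 0 \cr 0 & 6 & 0 \cr 0 & 0 & 6 \cr } } \right) + \left( {\matrix{ 3 & 0 & 0 \cr 0 & 0 & 0 \cr 0 & 0 & { - 3} \cr } } \right)$$

The deviatoric part has zero trace, consistent with a change of shape at constant volume.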
In crystalline metals plastic deformation occurs by slip, a volume-conserving process that changes the shape of a material through the action of shear stresses. On this basis, it might therefore be expected that the yield stress of a crystalline metal does not depend on the magnitude of the hydrostatic stress; this is in fact exactly what is observed experimentally.
In amorphous metals, a very slight dependence of the yield stress on the hydrostatic stress is found experimentally.
## Finding the principal stress tensor
### Rotating the axes:
The principal stresses are the eigenvalues of the stress tensor. These can be found from the determinant equation:
$$\left| {\begin{array}{*{20}{c}} {{\sigma _{11}} - \xi }&{{\sigma _{12}}}&{{\sigma _{13}}}\\ {{\sigma _{21}}}&{{\sigma _{22}} - \xi }&{{\sigma _{23}}}\\ {{\sigma _{31}}}&{{\sigma _{32}}}&{{\sigma _{33}} - \xi } \end{array}} \right| = 0$$
This determinant is expanded out to produce a cubic equation from which the three possible values of $$\xi$$ can be found; these values are the principal stresses. This is discussed in the book by J.F. Nye [1].
If the stress tensor already has a principal stress along one axis, such as σ33, diagonalising is much simpler:
$$\left| {\begin{array}{*{20}{c}} {{\sigma _{11}} - \xi }&{{\sigma _{12}}}&0\\ {{\sigma _{21}}}&{{\sigma _{22}} - \xi }&0\\ 0&0&{{\sigma _{33}} - \xi } \end{array}} \right| = 0$$
When we expand this out, we find that:
$({\sigma _{33}} - \xi )\left[ {({\sigma _{11}} - \xi )({\sigma _{22}} - \xi ) - \sigma _{12}^2} \right] = 0$
One of the principal stresses must be σ33, and the other two are easy to find by solving the quadratic equation inside the square brackets for $$\xi$$. Alternatively, when there are only two principal stresses to find, such as in this example, we can use Mohr’s circle.
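As a small computational sketch of this step (an addition, not part of the TLP; the method name is my own), the two in-plane principal stresses follow directly from the quadratic above, or equivalently from the centre and radius of the Mohr's circle introduced next:

```java
// In-plane principal stresses when sigma33 is already principal,
// i.e. the roots of (s11 - x)(s22 - x) - s12^2 = 0.
static double[] principalStresses2D(double s11, double s22, double s12) {
    double centre = 0.5 * (s11 + s22);                  // centre of Mohr's circle
    double radius = Math.hypot(0.5 * (s11 - s22), s12); // radius of Mohr's circle
    return new double[]{centre + radius, centre - radius};
}
```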
### Mohr’s circle method:
Mohr’s circle in this situation represents a stress state, on two axes – normal (σ) and shear (τ). A Mohr's circle drawn according to the convention in Gere and Timoshenko [4] in shown below.
The normal stresses σx and σy are first plotted on the horizontal σ axis at A and B. Positions C and D are then generated using the magnitude of the shear stress τxy, with the convention for the choice of these positions shown. Different orientations of the axes, and the different stress tensors produced by them, are represented by the different diameters it is possible to take of the circle.
The principal stress state is the state which has no shear components. This corresponds to the diameter of the Mohr’s circle that has no component along the shear axis – it is the diameter that runs along the normal stress axis. The principal stresses are thus the two points where the circle crosses the normal stress axis, E and F:
$$\left( {\begin{array}{*{20}{c}} E&0&0\\ 0&F&0\\ 0&0&{{\sigma _3}} \end{array}} \right)$$
The angle 2θ shown on the Mohr's circle in an anti-clockwise sense is twice the angle θ required to rotate the set of axes in an anti-clockwise sense from the old set of axes to the principal axes with respect to which the principal stresses are defined.
The Mohr’s circle below is for an element under a stress state of σ11 = 80 MPa, σ22 = – 60 MPa, σ12 = 50 MPa and σ3 = 100 MPa. Using the slider, change its inclination angle and compare it to the tensor representing the stress state.
Given below is an interactive tool to plot a Mohr's circle according to user's specified stress states.
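For this particular stress state, the principal stresses can be written down directly (a worked step added for illustration): the centre of the circle lies at $\frac{1}{2}(80 - 60) = 10$ MPa and its radius is $\sqrt{70^2 + 50^2} \approx 86.0$ MPa, so

$$\sigma_1 \approx 96.0{\rm{\ MPa}},\qquad \sigma_2 \approx -76.0{\rm{\ MPa}},\qquad \sigma_3 = 100{\rm{\ MPa}}$$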
## The strain tensor
When stress is applied to the material, strain is produced. Strain is also a symmetric second-rank tensor. Stress and strain are related by:
σij = Cijklεkl
The strain tensor, εkl, is second-rank just like the stress tensor. The tensor that relates them, Cijkl, is called the stiffness tensor and is fourth-rank.
Alternatively:
εij= Sijklσkl
Sijkl is called the compliance tensor and is also fourth-rank.
The strain tensor is a field tensor – it depends on external factors. The compliance tensor is a matter tensor – it is a property of the material and does not change with external factors.
### Expressing the strain in a slip process in terms of displacement
This diagram shows a plane on which slip occurs. A general point P is moved to position P' by the slip. The vectors from the origin to P and P' are r and r' respectively. Also shown is the unit vector n normal to the plane; the length of the perpendicular from O to the plane is simply r·n. The unit vector n has components n1, n2, n3, and r has components x1, x2, x3.
Suppose the distance moved in the direction of slip is $$\gamma \left( {\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} \cdot \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{n} } \right)$$. The displacement vector that represents slip is then given by:
$$\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} ' - \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} = \gamma \left( {\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} \cdot \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{n} } \right)\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{\beta }$$
where β is a unit vector in the direction of slip and has components of b1, b2 and b3.
If the shear angle is small, the components of the deformation tensor eij can be obtained by differentiating the displacements, so that
$${e_{ij}} = {{\partial {u_i}} \over {\partial {x_j}}} = {\partial \over {\partial {x_j}}}\gamma \left( {{\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} \cdot \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{n} } } \right){\beta _i}$$
Hence, for example,
$${e_{11}} = {{\partial {u_1}} \over {\partial {x_1}}} = {\partial \over {\partial {x_1}}}\gamma \left( {{\underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{r} \cdot \underset{\raise0.3em\hbox{\smash{\scriptscriptstyle-}}}{n} } } \right){\beta _1} = \gamma {\beta _1}{\partial \over {\partial {x_1}}}\left( {{x_1}{n_1} + {x_2}{n_2} + {x_3}{n_3}} \right) = \gamma {n_1}{\beta _1}$$
More generally,
$${e_{ij}} = {{\partial {u_i}} \over {\partial {x_j}}} = \gamma {n_j}{\beta _i}$$
We can then write the tensor like this:
$${e_{ij}} = \gamma \left( {\matrix{ {{n_1}{\beta _1}} & {{n_2}{\beta _1}} & {{n_3}{\beta _1}} \cr {{n_1}{\beta _2}} & {{n_2}{\beta _2}} & {{n_3}{\beta _2}} \cr {{n_1}{\beta _3}} & {{n_2}{\beta _3}} & {{n_3}{\beta _3}} \cr } } \right)$$

Its symmetric part is the strain tensor:

$${\varepsilon _{ij}} = \gamma \left( {\matrix{ {{n_1}{\beta _1}} & {{1 \over 2}\left( {{n_2}{\beta _1} + {n_1}{\beta _2}} \right)} & {{1 \over 2}\left( {{n_3}{\beta _1} + {n_1}{\beta _3}} \right)} \cr {{1 \over 2}\left( {{n_1}{\beta _2} + {n_2}{\beta _1}} \right)} & {{n_2}{\beta _2}} & {{1 \over 2}\left( {{n_3}{\beta _2} + {n_2}{\beta _3}} \right)} \cr {{1 \over 2}\left( {{n_1}{\beta _3} + {n_3}{\beta _1}} \right)} & {{1 \over 2}\left( {{n_2}{\beta _3} + {n_3}{\beta _2}} \right)} & {{n_3}{\beta _3}} \cr } } \right)$$
### Separation of the strain tensor
Notice that the tensor derived from the diagram is eij while the strain tensor related to the stress tensor by the stiffness and compliance tensors is εij. This is not a mistake!
The tensor eij derived from the diagram describes the specimen moving relative to the origin. This includes a change in dimension of the specimen, the strain. It also may include a rotation of the specimen. In terms of the properties of the material, the rotation is not of interest, so we must separate it out to be left with the strain alone.
eij = εij + ωij
where εij is the strain tensor and ωij is the rotation tensor.
$${\omega _{ij}} = \gamma \left( {\matrix{ 0 & {{1 \over 2}\left( {{n_2}{\beta _1} - {n_1}{\beta _2}} \right)} & {{1 \over 2}\left( {{n_3}{\beta _1} - {n_1}{\beta _3}} \right)} \cr {{1 \over 2}\left( {{n_1}{\beta _2} - {n_2}{\beta _1}} \right)} & 0 & {{1 \over 2}\left( {{n_3}{\beta _2} - {n_2}{\beta _3}} \right)} \cr {{1 \over 2}\left( {{n_1}{\beta _3} - {n_3}{\beta _1}} \right)} & {{1 \over 2}\left( {{n_2}{\beta _3} - {n_3}{\beta _2}} \right)} & 0 \cr } } \right)$$
A strain tensor must be symmetrical. A rotation tensor must be antisymmetric. The rotation tensor must also have no normal components. The strain tensor εij is the one used in calculations.
### Volumetric strain
The sum of the diagonal elements of the strain tensor is the volumetric strain or dilatation:
$${{\Delta V} \over V} = {\varepsilon _{11}} + {\varepsilon _{22}} + {\varepsilon _{33}} = \Delta$$
The volumetric strain for metals during plastic deformation is zero. Hence, during plastic deformation there are five independent components, rather than six, of the general strain tensor, describing an incremental change of shape.
## Yield criteria for metals
A yield criterion is a hypothesis defining the limit of elasticity in a material and the onset of plastic deformation under any possible combination of stresses.
There are several possible yield criteria. We will introduce two types here relevant to the description of yield in metals.
To help understanding of combinations of stresses, it is useful to introduce the idea of principal stress space. The orthogonal principal stress axes are not necessarily related to orthogonal crystal axes.
Using this construction, any stress can be plotted as a point in 3D stress space.
For example, the uniaxial stress $$\left( {\begin{array}{*{20}{c}} \sigma &0&0\\ 0&0&0\\ 0&0&0 \end{array}} \right)$$ where $\sigma_1 = \sigma$, $\sigma_2 = \sigma_3 = 0$, plots as a point on the σ1 axis.
A purely hydrostatic stress σ1 = σ2 = σ3 = σH will lie along the vector [111] in principal stress space. For any point on this line, there can be no yielding, since in metals it is found experimentally that hydrostatic stress does not induce plastic deformation (see hydrostatic and deviatoric components).
The 'hydrostatic line'
We know from uniaxial tension experiments, that if σ1 = Y, σ2 = σ3 = 0 where Y is a uniaxial stress, then yielding will occur.
Therefore, there must be a surface, which surrounds the hydrostatic line and passes through (Y, 0, 0), that defines the boundary between elastic and plastic behaviour. This surface will define a yield criterion. Such a surface also has to pass through the points (0, Y, 0), (0, 0, Y), (–Y, 0, 0), (0, –Y, 0) and (0, 0, –Y).
The plane defined by the three points (Y, 0, 0), (0, Y, 0) and (0, 0, Y) is parallel to the plane defined by the three points (–Y, 0, 0) (0, –Y, 0) and (0, 0, –Y).
The simplest shape for a yield criterion satisfying these requirements is a cylinder of appropriate radius with an axis along the hydrostatic line. This can be described by an equation of the form:
${\left( {{\sigma _1} - {\sigma _2}} \right)^2} + {\left( {{\sigma _2} - {\sigma _3}} \right)^2} + {\left( {{\sigma _3} - {\sigma _1}} \right)^2} = {\rm{constant}}$
From above, if σ1 = Y, σ2 = σ3 = 0, then the constant is given by 2Y2. This is the von Mises Yield Criterion.
We can also define a yield stress in terms of a pure shear, k. A pure shear stress can be represented in a Mohr’s Circle, as follows:
Referred to principal stress space, we have σ1 = k, σ2 = –k, σ3 = 0.
The von Mises criterion can therefore be expressed as:
$2{Y^2} = 6{k^2}{\rm{ }} \Rightarrow {\rm{ }}Y = k\sqrt 3$
A mathematically simpler criterion which satisfies the requirements for the yield surface having to pass through (Y, 0, 0), (0, Y, 0) and (0, 0, Y) is the Tresca Criterion.
If we suppose σ1 > σ2 > σ3, then the largest difference between principal stresses is given by (σ1 – σ3).
If yielding occurs when σ1 = Y, σ2 = σ3 = 0, then (σ1 – σ3) = Y.
For yield in pure shear at some shear stress k, when referred to the principal stress state we could have
${\sigma _1} = k,{\rm{ }}{\sigma _2} = 0,{\rm{ }}{\sigma _3} = - k{\rm{ }} \Rightarrow {\rm{ }}Y = 2k$
The Tresca criterion is (σ1 – σ3) = Y = 2k.
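For readers who want to experiment, here is a minimal sketch (an addition, not part of the TLP; the function names are my own) that evaluates both criteria for a given principal stress state and uniaxial yield stress Y:

```java
// Returns true if the von Mises criterion predicts yield for principal
// stresses s1, s2, s3: (s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2 >= 2*Y^2.
static boolean yieldsVonMises(double s1, double s2, double s3, double Y) {
    double lhs = Math.pow(s1 - s2, 2) + Math.pow(s2 - s3, 2) + Math.pow(s3 - s1, 2);
    return lhs >= 2 * Y * Y;
}

// Returns true if the Tresca criterion predicts yield: the maximum
// difference between principal stresses reaches Y.
static boolean yieldsTresca(double s1, double s2, double s3, double Y) {
    double max = Math.max(s1, Math.max(s2, s3));
    double min = Math.min(s1, Math.min(s2, s3));
    return (max - min) >= Y;
}
```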
Viewed down the hydrostatic line, the two criteria appear as:
For plane stress, let the principal stresses be σ1 and σ2, with σ3 = 0.
The yield surfaces for the Tresca yield criterion and the von Mises yield criterion in plane stress are shown below:
The Tresca yield surface is an irregular hexagon and the von Mises yield surface is an ellipse. The ratio of the length of the major and minor axes of this ellipse is $$\sqrt 3 {\rm{ :1}}$$. A derivation of this result is given at the end of this TLP.
Experiments suggest that the von Mises yield criterion provides better agreement with observed behaviour than the Tresca yield criterion. However, the Tresca yield criterion is still used because of its mathematical simplicity.
## Yield criteria for non-metals
When ceramics deform plastically (usually only at temperatures very close to their melting point, if at all), they often obey the von Mises or Tresca criterion.
However, other materials such as polymers and geological materials (rocks and soils) display yield criteria that are not independent of hydrostatic pressure.
Empirically, it is found that as a hydrostatic pressure is increased, the yield stress increases, and so we do not expect a yield criterion based solely on the deviatoric component of stress to be valid.
The first attempt to produce a yield criterion incorporating the effect of pressure was derived by Coulomb.
### The Coulomb criterion
Failure occurs when the shear stress, τ, on any plane reaches a critical value, τc, which varies linearly with the stress normal to that plane.
${\tau _c} = {\tau ^*} - {\sigma _n}\tan \phi$
where $\sigma_n$ is the positive normal stress on the plane of failure (so that on this convention, a compressive stress or pressure is a negative quantity), $\tau^*$ is a material parameter and $\phi$ is the angle of shearing resistance. (Note: $\tan\phi$ is not a ‘coefficient of friction’ although often referred to as such).
Failure locus for soil
For soils, tanφ ≈ 0.5 - 0.6 typically. So as a soil is compressed, τc increases.
If the principal components of stress are σ1, σ2, σ3 for a particular stress state at some point within a soil mass, we can draw three Mohr’s circles with diameters specified by the pairs σ1 and σ3, σ2 and σ3, and σ1 and σ2. For failure, we require only one of these to touch the failure locus, e.g.
Here, failure is determined by $$\left| {{\sigma _1} - {\sigma _3}} \right|$$, not by σ2. This is therefore a variant or modification of the Tresca criterion.
A better model for polymers is to assume that the shear stress k at which failure occurs is a function of hydrostatic stress or pressure, e.g.
$$k = {k_0} + \mu P$$
where $$P = -\frac{1}{3}\left( {{\sigma _1} + {\sigma _2} + {\sigma _3}} \right) = - {\sigma _H}$$ is the hydrostatic pressure, and k0 is the value of the shear yield stress at zero hydrostatic pressure.
If we do this, we obtain pressure-modified criteria, which work well for polymers. For example, the pressure-modified von Mises criterion has a circular sectional cone with its axis along σ1 = σ2 = σ3:
$${\left( {{\sigma _1} - {\sigma _2}} \right)^2} + {\left( {{\sigma _2} - {\sigma _3}} \right)^2} + {\left( {{\sigma _3} - {\sigma _1}} \right)^2} = 2{Y^2} = 6{k^2} = 6{\left( {{k_0} + \mu P} \right)^2}$$
The pressure-modified Tresca criterion has a hexagonal pyramid with axis along σ1 = σ2 = σ3:
$$\left( {{\sigma _1} - {\sigma _3}} \right) = Y = 2k = 2\left( {{k_0} + \mu P} \right)$$
These modified criteria work well for polymers.
## Summary
• Stress and strain and the relationship between them can be expressed in tensor formalism.
• The stress tensor is symmetric and can be separated into hydrostatic and deviatoric components.
• The stress state can be expressed by a tensor that has only diagonal components – the principal stress tensor. This is achieved by rotating the axes of the stress tensor, so that the axes are parallel to the forces on the body.
• The measured strain tensor can be separated into a symmetric real strain tensor and an antisymmetric rotation tensor. The real strain tensor can then be separated into dilatational (volume expansion) and deviatoric (shape change) components.
• We can define combinations of the three principal stress components that will cause yield – yield criteria. Different criteria are best used for different materials. The best one for metals is the von Mises yield criterion:
• $${({\sigma _1} - {\sigma _2})^2} + {({\sigma _2} - {\sigma _3})^2} + {({\sigma _3} - {\sigma _1})^2} = 6{k^2} = 2{Y^2}$$
A mathematically simpler approximation to the von Mises yield criterion is the Tresca yield criterion:
$$\frac{{\left( {{\sigma _1} - {\sigma _3}} \right)}}{2} = k = \frac{Y}{2}$$
• If a yield criterion is plotted in 3D stress space, we have a yield surface.
## Questions
### Quick questions
You should be able to answer these questions without too much difficulty after studying this TLP. If not, then you should go through it again!
1. Which of these stress states is not the same as the others? (The four candidate stress states, and the interactive Mohr's circle used to compare them, appear in the online version of this TLP.)
2. What kind of movement do these tensors represent - rotation, strain, or both? (The four tensors appear in the online version of this TLP.)
1. The yield stress of an aluminium alloy in uniaxial tension is 320 MPa. The same alloy also yields under a combined stress state (given in the online version of this TLP).
Is the behaviour of this alloy better described by the von Mises or the Tresca yield criterion?
## Going further
### Books
[1] J.F. Nye, Physical Properties of Crystals, Oxford, 1985.
[2] G.E. Dieter, Mechanical Metallurgy, 3rd Edition, McGraw-Hill, 1990.
[3] D.R. Lovett, Tensor Properties of Crystals, 2nd Edition, Adam Hilger, 1999.
[4] B.J. Goodno and J.M. Gere, Mechanics of Materials, 9th Edition, Cengage, 2018.
[5] A. Kelly and K.M. Knowles, Crystallography and Crystal Defects, 3rd Edition, Wiley, 2020.
## Derivation of yield ellipse aspect ratio
For plane stress, let the principal stresses be $${\sigma _1}$$ and $${\sigma _2}$$, with $${\sigma _3} = 0$$.
The yield surfaces for the Tresca yield criterion and the von Mises yield criterion are shown below.
The Tresca yield surface is an irregular hexagon and the von Mises yield surface is an ellipse. The ratio of the length of the major and minor axes of this ellipse is $$\sqrt 3 {\rm{ :1}}$$.
In the quadrant where $${\sigma _1} > 0$$ and $${\sigma _2} > 0$$, the Tresca yield surface is a square.
To see this, first suppose $${\sigma _1} > {\sigma _2}$$ for example. Since $${\sigma _3} < {\sigma _2}$$, yield occurs on the Tresca criterion when
${\sigma _1} - {\sigma _3} = Y$
i.e., for $${\sigma _1} = Y$$ because $${\sigma _3} = 0$$. When $${\sigma _1} = {\sigma _2}$$, yield occurs at $${\sigma _1} = {\sigma _2} = Y$$. Similarly, for $${\sigma _1} < {\sigma _2}$$ in this quadrant, yield occurs when $${\sigma _2} = Y$$.
The shape of the Tresca yield surface in the quadrant where $${\sigma _1} > 0$$ and $${\sigma _2} < 0$$ is a straight line because the third principal stress $${\sigma _3}$$ will be the intermediate principal stress. Hence in this quadrant the yield criterion becomes
$\left| {{\sigma _1} - {\sigma _2}} \right| = Y$
whence the straight line linking (-1,0) to (0,1) in the diagram. The shape of the Tresca yield surface in the remaining two quadrants follows similarly.
For plane stress the von Mises yield criterion becomes
$${\left( {{\sigma _1} - {\sigma _2}} \right)^2} + {\sigma _2}^2 + {\sigma _1}^2 = 2{Y^2}$$
which for Y = 1 becomes
$$2{\sigma _1}^2 + 2{\sigma _2}^2 - 2{\sigma _1}{\sigma _2} = 2$$
i.e.,
$${\sigma _1}^2 + {\sigma _2}^2 - {\sigma _1}{\sigma _2} = 1$$
Thus the yield surface for plane stress passes through (1,0), (1,1), (0,1), (-1,0), (-1,-1), and (0,-1). It also passes through the points
$$\left( { - \frac{1}{{\sqrt 3 }},\frac{1}{{\sqrt 3 }}} \right){\rm{ and }}\left( {\frac{1}{{\sqrt 3 }}, - \frac{1}{{\sqrt 3 }}} \right)$$
as can be seen by direct substitution in the yield condition for $${\sigma _1} = -{\sigma _2}$$.
The directions
$$\left[ {{\rm{1,1}}} \right]{\rm{ and }}\left[ {{\rm{1,}} - 1} \right]$$
are orthogonal; the distances from the origin to the points $\left( {1,1} \right)$ and $\left( {\frac{1}{{\sqrt 3 }}, - \frac{1}{{\sqrt 3 }}} \right)$ on the ellipse, $\sqrt 2$ and $\sqrt {2/3}$ respectively, therefore give the lengths of the semi-major and semi-minor axes of this ellipse.
Hence, the ratio of the length of the major and minor axes of this ellipse is $$\sqrt 3 {\rm{ :1}}$$.
Academic consultant: Kevin Knowles (University of Cambridge)
Content development: Andrew Bennett, Joanne Sharp
Web development: Jin Chong Tan and Lianne Sallows
DoITPoMS is funded by the UK Centre for Materials Education and the Department of Materials Science and Metallurgy, University of Cambridge
|
2022-06-29 19:02:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7138446569442749, "perplexity": 706.0679730163097}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00698.warc.gz"}
|
https://www.econometricsociety.org/publications/econometrica/1996/07/01/cheap-talk-and-sequential-equilibria-signaling-games
|
Cheap Talk and Sequential Equilibria in Signaling Games
https://doi.org/0012-9682(199607)64:4<917:CTASEI>2.0.CO;2-P
p. 917-942
Alejandro M. Manelli
Well-behaved infinite signaling games may have no sequential equilibria. We prove that adding cheap talk to these games "solves" the nonexistence problem: the limit of sequential equilibrium outcomes of finite approximating games is a sequential equilibrium outcome of the cheap-talk extension of the limit game. In addition, when the signaling space has no isolated points, any cheap-talk sequential equilibrium outcome can be approximated by a sequential $\epsilon$-equilibrium of the game without cheap talk.
|
2019-05-25 17:40:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6706691980361938, "perplexity": 2405.51834115965}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258147.95/warc/CC-MAIN-20190525164959-20190525190959-00087.warc.gz"}
|
https://www.semanticscholar.org/paper/Motivic-stable-homotopy-theory-is-strictly-at-the-Bachmann/ef26c54a84025239505addecb5a4cb065609408e
|
• Corpus ID: 241035482
# Motivic stable homotopy theory is strictly commutative at the characteristic
```
@inproceedings{Bachmann2021MotivicSH,
  title={Motivic stable homotopy theory is strictly commutative at the characteristic},
  author={Tom Bachmann},
  year={2021}
}
```
We show that mapping spaces in the p-local motivic stable category over an Fp-scheme are strictly commutative monoids (whence HZ-modules) in a canonical way.
## References
SHOWING 1-10 OF 53 REFERENCES
The localization theorem for framed motivic spaces
We prove the analog of the Morel–Voevodsky localization theorem for framed motivic spaces. We deduce that framed motivic spectra are equivalent to motivic spectra over arbitrary schemes, and we give
Nilpotence in normed MGL-modules
• Mathematics
• 2019
We establish a motivic version of the May Nilpotence Conjecture: if E is a normed motivic spectrum that satisfies $E \wedge HZ \simeq 0$, then also $E \wedge MGL \simeq 0$. In words, motivic homology
A1-invarinants in Galois cohomology and a claim of Morel
We establish a variant of the splitting principle of Garibaldi-Merkurjev-Serre for invariants taking values in a strictly homotopy invariant sheaf. As an application, we prove the folklore result of
K-theory of valuation rings
• Mathematics
Compositio Mathematica
• 2021
We prove several results showing that the algebraic $K$-theory of valuation rings behaves as though such rings were regular Noetherian, in particular an analogue of the Geisser–Levine theorem. We
From algebraic cobordism to motivic cohomology
Let S be an essentially smooth scheme over a field of characteristic exponent c. We prove that there is a canonical equivalence of motivic spectra over S: $\mathrm{MGL}/(a_1, a_2, \ldots)[1/c] \simeq H\mathbb{Z}[1/c]$, where $H\mathbb{Z}$ is the
Relations between slices and quotients of the algebraic cobordism spectrum
We prove a relative statement about the slices of the algebraic cobordism spectrum. If the map from MGL to a certain quotient of MGL introduced by Hopkins and Morel is the map to the zero-slice then
Higher Topos Theory
This purpose of this book is twofold: to provide a general introduction to higher category theory (using the formalism of "quasicategories" or "weak Kan complexes"), and to apply this theory to the
Motivic infinite loop spaces
• Mathematics
Cambridge Journal of Mathematics
• 2021
We prove a recognition principle for motivic infinite P1-loop spaces over an infinite perfect field. This is achieved by developing a theory of framed motivic spaces, which is a motivic analogue of
Hyperdescent and étale K-theory
• Mathematics
• 2019
We study the étale sheafification of algebraic K-theory, called étale K-theory. Our main results show that étale K-theory is very close to a noncommutative invariant called Selmer K-theory, which is
|
2022-01-29 05:26:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894636869430542, "perplexity": 2187.012458289415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00208.warc.gz"}
|
https://physics.stackexchange.com/questions/242160/solving-an-acrobat-problem-in-3-methods
|
# Solving an acrobat problem in 3 methods
A circus acrobat of mass $M$ leaps straight up with initial velocity $v_{0}$ from a trampoline. As he rises up, he takes a trained monkey of mass $m$ off a perch at a height $h$ above the trampoline. What is the maximum height attained by the pair?
Center of Mass
Initially the center of mass (CM) is at $y_{0} = \dfrac{mh}{M+m}$. The only force on the CM is gravity, and its initial velocity is $\dfrac{Mv_{0}}{M+m}$. So it is going to reach its highest point after traveling a distance of $\dfrac{M^{2}v_{0}^{2}}{2(M+m)^{2}g}$ from its initial position $y_{0}$. This leaves the pair at a final height of $\dfrac{mh}{M+m} + \dfrac{M^{2}v_{0}^{2}}{2(M+m)^{2}g}$ above the ground.
Conservation of Energy
Initial energy of the pair: $E_{0} = \frac{1}{2}Mv_{0}^{2} + mgh$. Final energy of the pair: $E_{1} = (M+m)gH$, where $H$ is the maximum height above the ground that the pair reaches.
By conservation of energy, $E_{0} = E_{1} \implies (M+m)gH = \frac{1}{2}Mv_{0}^{2} + mgh \implies H = \dfrac{Mv_{0}^{2} + 2mgh}{2(M+m)g}$.
Conservation of Momentum
At a height of $h$, the velocity of the acrobat is $v = \sqrt{v_{0}^{2} - 2gh}$. Just after picking up the monkey, the velocity of the pair, $v'$, is given by $Mv = (M+m)v'$ (why should the momentum be conserved for the acrobat-monkey system?), and they will rise a further $\dfrac{v'^{2}}{2g}$ from there.
The three approaches don't seem to tally. Where am I wrong?
The first two methods assume that the collision between the monkey and the acrobat is elastic, which is clearly not the case.
The third method uses simple kinematic equations. More importantly, the calculation gets past the collision part of the process using conservation of momentum, which does hold for this collision.
• Though an elastic collision between a monkey and an acrobat would be hilarious.
– Neil
Mar 8 '16 at 7:48
• Subtle point about why momentum is conserved! Mar 8 '16 at 9:22
• Does the conservation of momentum here serve as a best model of approximation given the short interaction time between the objects? Or am I missing a subtlety on the validity of momentum conservation itself? Mar 8 '16 at 14:47
Momentum conservation can be applied (approximately) along the $y$-axis!
The reason is not so obvious. Strictly speaking, we can never conserve momentum along the $y$-axis, because gravity is always there to provide a force along the $y$-direction on our system (jumper + monkey); hence momentum conservation is invalid.
But we know something about gravity: it does not play a significant role over very short intervals of time, and it does not change the momentum of the system appreciably in that small period. Do note that the change in the momentum is very small, but it is not exactly zero!
So with this reasoning, one can apply momentum conservation provided that the time interval is very short.
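A quick numerical check makes the discrepancy concrete. The following plain-Python sketch evaluates all three formulas side by side; the sample values for $M$, $m$, $v_0$ and $h$ are my own assumptions, not part of the problem.

```python
import math

# Sample values (assumptions, not given in the problem statement)
M, m, v0, h, g = 60.0, 5.0, 8.0, 2.0, 9.81

# Method 1: center of mass, treated as if in free fall from the start
H_cm = m * h / (M + m) + (M * v0)**2 / (2 * (M + m)**2 * g)

# Method 2: mechanical-energy conservation straight through the pickup
H_energy = (M * v0**2 + 2 * m * g * h) / (2 * (M + m) * g)

# Method 3: kinematics up to height h, then momentum conservation
# across the perfectly inelastic pickup
v = math.sqrt(v0**2 - 2 * g * h)   # acrobat's speed just below the monkey
v_after = M * v / (M + m)          # pair's speed just after the pickup
H_momentum = h + v_after**2 / (2 * g)

print(H_cm, H_energy, H_momentum)  # ~2.93, ~3.17, ~3.08: three different answers
```

With these numbers only the momentum-based 3.08 m is correct: the energy method ignores the kinetic energy dissipated in the inelastic pickup, and the center-of-mass method additionally overlooks that the perch exerts an external force on the monkey before the pickup, so the CM is not in free fall from the start.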
|
2021-10-22 11:56:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6867668032646179, "perplexity": 229.90549930764254}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585507.26/warc/CC-MAIN-20211022114748-20211022144748-00074.warc.gz"}
|
https://www.physicsforums.com/threads/help-to-understand-the-derivation-of-the-solution-of-this-equation.913268/
|
# A Help to understand the derivation of the solution of this equation
Tags:
1. May 1, 2017
### needved
I'm reading the article Wave Optics in Gravitational Lensing (T. T. Nakamura, 1999). The article starts from
$$(\nabla^2 + \omega^2)\tilde\phi = 4\omega^2 U \tilde\phi,$$
where $\tilde\phi = F(\vec r)\,\tilde\phi_{0}(r)$. Using spherical coordinates, the physical condition $\theta \ll 1$, and the two-dimensional vector $\Theta = \theta(\cos\varphi, \sin\varphi)$, they can rewrite the last equation in terms of the enhancement factor $F$:
$$\frac{\partial^2 F}{\partial r^2}+2i\omega\frac{\partial F}{\partial r}+\frac{1}{r^2}\nabla_{\theta}^{2}F=4\omega^2 UF,$$
where $\nabla_{\theta}^{2}$ contains the angular (polar and azimuthal) partial derivatives.
A second physical condition, which they call the **eikonal approximation**, leads to neglecting the term $\frac{\partial^2 F}{\partial r^2}\approx 0$, and on the right-hand side of the equation they set $V=2\omega U$.
Rearranging the remaining equation gives
$$\left[-\frac{1}{2\omega r^2}\nabla_{\theta}^{2}+V\right]F=i\frac{\partial}{\partial r}F.$$
This last equation is like the Schrödinger equation, with the variable $r$ instead of $t$ and $\omega$ instead of $\mu$:
$$\left[-\frac{\hbar^{2}}{2\mu}\nabla^{2}+V\right]\Psi=i\hbar\frac{\partial}{\partial t}\Psi.$$
They say the corresponding Lagrangian is
$$L(r,\Theta,\dot\Theta)=\omega\left[\frac{1}{2}r^{2}\dot\Theta^{2}-2U\right],$$
where $\dot\Theta=\frac{d\Theta}{dr}$.
At this point I have a clue (more or less) about what they are doing. The problem is from this point on...
The article says that, from the path-integral formulation of quantum mechanics, the solution of the equation is
$$F(\vec r_{0})=\int D\Theta(r)\, e^{\,i\int_{0}^{r_{0}}L(r,\Theta,\dot\Theta)\,dr}.$$
Eventually, working with this last expression leads them to a solution given by
$$F(\omega,y)=-i\omega\, e^{i\omega y^{2}/2}\int_{0}^{\infty} x\,J_{0}(\omega xy)\, e^{i\omega\left[\frac{1}{2}x^{2}-\psi(x)\right]}\,dx,$$
where $J_{0}$ is the Bessel function of zeroth order.
Please, can you help me understand how to work with the path integral so that I can obtain this last equation? Any helpful hint will be much appreciated and welcomed.
Last edited by a moderator: May 6, 2017
2. May 6, 2017
### Paul Colby
Can't help you with the path integral, but what happened to $U$? It no longer appears in Equation (7). Also, there are classical techniques to solve equations like (1) which don't use (or aren't called) path integrals; WKB comes to mind. Look at optics texts like Born and Wolf.
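As a numerical cross-check of the final expression, here is a minimal Python sketch. Two ingredients are my own assumptions, not from the thread: the point-mass deflection potential $\psi(x) = \ln x$, and a small Gaussian damping factor that regularizes the truncated oscillatory integral.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def F(omega, y, psi=np.log, x_max=100.0, eps=1e-2):
    """Amplification factor
       F(w, y) = -i w exp(i w y^2/2) * Int_0^inf x J0(w x y)
                 exp(i w (x^2/2 - psi(x))) dx,
    evaluated here with psi(x) = ln(x) (point-mass lens, an assumption).
    The integrand oscillates without decaying, so exp(-eps x^2) is used
    to damp the tail of the truncated integral."""
    def f(x, part):
        val = (x * j0(omega * x * y) * np.exp(-eps * x**2)
               * np.exp(1j * omega * (0.5 * x**2 - psi(x))))
        return val.real if part == "re" else val.imag
    re, _ = quad(f, 0.0, x_max, args=("re",), limit=2000)
    im, _ = quad(f, 0.0, x_max, args=("im",), limit=2000)
    return -1j * omega * np.exp(0.5j * omega * y**2) * (re + 1j * im)

print(abs(F(1.0, 0.3))**2)  # |F|^2 is the wave-optics magnification
```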
|
2018-05-24 10:17:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8903549313545227, "perplexity": 2306.1412266573157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866201.72/warc/CC-MAIN-20180524092814-20180524112814-00016.warc.gz"}
|
https://www.nature.com/articles/s41467-020-18183-4
|
# Site-dependent reactivity of MoS2 nanoparticles in hydrodesulfurization of thiophene
## Abstract
The catalytically active site for the removal of S from organosulfur compounds in catalytic hydrodesulfurization has been attributed to a generic site at an S-vacancy on the edge of MoS2 particles. However, steric constraints in adsorption and variations in S-coordination mean that not all S-vacancy sites should be considered equally active. Here, we use a combination of atom-resolved scanning probe microscopy and density functional theory to reveal how the generation of S-vacancies within MoS2 nanoparticles and the subsequent adsorption of thiophene (C4H4S) depend strongly on the location on the edge of MoS2. Thiophene adsorbs directly at open corner vacancy sites; however, we find that its adsorption at S-vacancy sites away from the MoS2 particle corners leads to an activated and concerted displacement of neighboring edge S. This mechanism allows the reactant to self-generate a double CUS site that reduces steric effects in more constrained sites along the edge.
## Introduction
Catalytic hydrodesulfurization (HDS) is an industrial process applied to remove sulfur heteroatoms from gas oil feeds to reduce sulfur content in fuels and petrochemicals to a <10 ppm level, compatible with legislative specifications. The primary catalysts used for removal of sulfur from crude oil are based on molybdenum disulfide (MoS2) nanoparticles, often promoted with Co or Ni, and supported on an alumina substrate1,2. Extensive studies involving reaction kinetics, atom-resolved microscopy, X-ray and IR spectroscopy, and density functional theory (DFT) have made it evident that the active sites are located on the edge of the single-layer MoS2 catalyst3,4,5,6,7,8,9,10. The nature of these sites on the catalytically active edges was studied by selectively adsorbing probe molecules such as CO or NO11,12,13,14. These experiments allowed the titration of the number of these sites and its correlation with catalytic activity. Thiophene (C4H4S) has furthermore often been used as an S-containing reactant molecule to probe HDS activity of a catalyst15,16,17,18,19,20. This simple five-membered aromatic heterocyclic molecule is relatively inert as its aromatic ring makes its C–S bond quite resistant to rupture. Despite the many fundamental studies devoted to a theory- and experiment-based understanding of the HDS of thiophene, a comprehensive mechanistic understanding of this process is still lacking. HDS may proceed by two independent routes: the dominant route for thiophene is the direct desulfurization (DDS) pathway, which involves the direct hydrogenolysis of the C–S bond. The generic type of active site considered for the DDS pathway is a coordinatively unsaturated site (CUS) located at a sulfur vacancy (VS) on an edge site formed by reaction with H22,19,20,21,22,23,24,25,26,27. The concentration of the CUS sites has been shown to be dependent on the chemical potential of hydrogen and sulfur in the gas phase and the type of MoS2-edge5,28,29,30,31,32,33,34,35. In parallel to DDS, the hydrogenation pathway (HYD) can proceed, in which thiophene adsorption is followed by hydrogenation before C–S bond scission36,37,38,39,40. The HYD route involves a different type of active site4, proposed to be brim sites located at the top of the MoS2 particle edges6,8. The HYD route becomes more pronounced compared to DDS for large S-containing molecules such as dibenzothiophene and prevails for alkyl-substituted derivatives, such as 2,5-dimethyldibenzothiophene41,42. However, whether the DDS route can still proceed efficiently for such large molecules, which are sterically hindered in their adsorption on a single VS site, remains an open question and a target of intense industrial HDS catalyst development due to the comparatively lower hydrogen consumption in DDS.
Herein, we use a combination of atom-resolved scanning tunneling microscopy (STM) and DFT on corresponding MoS2 nanoparticle structures to determine the precise adsorption configuration of thiophene on all types of CUS sites present on the MoS2-edges and corner sites present under HDS conditions. We first investigate the formation of VS sites by monitoring the distribution of sulfur vacancies and their associated formation energies. We then expose MoS2 to thiophene, which allows quantification of the accessibility of VS sites grouped by their position on the MoS2-edges and the corresponding adsorption configuration of thiophene. Thiophene adsorption is possible on all observed VS sites. However, whereas thiophene adsorbs directly in open VS sites at the corners between edges, the adsorption in VS sites located at the interior of the edge is observed to occur via a displacement of a neighboring S atom leading to simultaneous formation of an S2 dimer and thiophene adsorbed on a double-VS CUS site. The main finding is therefore that the adsorption of an S-containing molecule may generate its own, more accessible adsorption site, which facilitates the subsequent desulfurization of the molecule. This mechanism contrasts with prevalent conclusions on the HDS active site as a static vacancy configuration, and can thus explain the apparent DDS reactivity observed for larger molecules than thiophene, such as dibenzothiophene; it further opens up the discussion for a more accurate description of catalytic inhibition in co-processing of e.g. aromatics and O- and N-bearing reactants in hydrotreating25,43,44,45,46.
## Results
### Site-dependent sulfur vacancy formation
The reactivity of thiophene on MoS2 was first experimentally evaluated by atom-resolved STM imaging of single-layer MoS2 nanoparticles. We have specifically characterized S vacancies on hydrogen-activated MoS2-edge structures and then assessed the adsorption of thiophene on these sites by use of a model system composed of well-defined MoS2 nanoparticles synthesized on an Au(111) single crystal surface (see methods section). Figure 1 illustrates an atom-resolved STM image of a MoS2 nanoparticle (Fig. 1a) together with a top view of a structural model (Fig. 1b) based on previous extensive atom-resolved characterization and theoretical modeling27,47,48,49. The MoS2 particle in Fig. 1a represents the HDS active state (denoted by r-MoS2) induced in the experiment by dosing H2 gas at elevated pressure and temperature as presented in ref. 50 (see “Methods” section). The resulting r-MoS2 particle morphology is represented by a truncated triangular shape, which is bounded by two different types of edges referred to as Mo- and S-edges, respectively. The hydrogen activation leads to a reduced overall 50% S-coverage on the Mo-edge, represented by S monomers located at a bridge position between edge Mo atoms50 (Fig. 1b). The same 50% S-coverage on the Mo-edge structure was concluded to be present in situ at 1 bar H2 pressure in a recent reactor STM study51. Importantly, in Fig. 1a, a number of individual S vacancies can be clearly identified along the Mo-edge, imaged as sites with an extinguished intensity in the STM image (see red arrows)50. Atomic defects are sometimes present within the basal plane too (e.g. in Fig. 1a), but these are not induced by the hydrogen treatment, and their location suggests an impurity atom on the metal lattice of MoS2 rather than a basal plane VS. Based on a statistical analysis, the total number of vacancies at the Mo-edges is modest overall, corresponding to an average probability of 16% for a vacancy being formed at any edge site. This observation is, however, in full accordance with previous theoretical modeling of the S-vacancy formation, which results in a rather large energy cost required for VS formation by removal of S atoms from the 50% S covered Mo-edge34,52.
Importantly, our analysis shows that the specific VS probability varies significantly among the different sites. To see this, we labeled S atomic positions with a letter in the ball model representations (Fig. 1b, c) to identify S vacancies on the corner (C), adjacent corner (A) and in the middle (M) positions. These sites define the possible edge positions on the Mo-edge lengths most abundantly present in the synthesized MoS2 nanoparticle ensemble (Supplementary Fig. 1), corresponding to 4-6S monomers on the edge as shown in the side-view ball models in Fig. 1c. A breakdown of our statistical material from the experimental images into each edge length (Supplementary Fig. 2) shows only a very slight variation of the vacancy numbers with the edge length, so for the experimental analysis we consider C, A and M sites on differently sized edges to be similar in terms of their vacancy formation probability. The example in the STM image in Fig. 1a already points to the preferential formation of VS mainly at A and M locations (see top-view ball model). Figure 2a illustrates the frequency of observing a VS on these specific sites obtained by a direct counting of missing S atoms in all our STM images. Here, the most probable S vacancies are located at the A positions, with a VS probability at that site being 0.25 (i.e. 1 out of 4 of all A sites counted in the STM images were observed to have a VS). It is slightly less probable to find S vacancies at M positions, with 0.18 of all M sites containing a VS. The number of S vacancies observed directly at the corner site between the Mo-edge and S-edge is considerably lower at 0.06, indicating a higher initial stability of the terminal S monomer at the corner site.
DFT calculations were performed to provide additional information on the probability of VS formation. The model chosen to represent experimentally observed MoS2 nanoparticles was a freely standing truncated MoS2 nanoparticle exposing a short S-edge with 100% S and a five S-monomers long Mo-edge (Mo-5S) comprising sites of the C, A, and M type (Fig. 2b). The MoS2 nanoparticle models are important in our study since they possess higher site heterogeneity and allow a greater degree of S mobility on the edge, as opposed to semi-infinite stripe models used in previous work which only contain M sites5,18,19,24. We did not specifically include the Au substrate here, as recent calculations involving semi-infinite periodic calculations indicated that the substrate has a negligible effect on the bonding of S monomers to the Mo-edge50. As listed in Fig. 2b, the energy required to form S-vacancies (ES) (see “DFT calculations” in Methods) on the C, A, and M locations of the MoS2 nanoparticle is endothermic in all cases. However, the vacancy formation on the corner C site is significantly more unfavorable (ES = 1.97 eV) than the A or M position (ES = 1.19 eV and 1.05 eV, respectively). For reference, our corresponding calculation of ES on a semi-infinite stripe model gave a value of 0.99 eV, which is slightly lower than that of the M position. To quantitatively account for the effect of temperature and partial pressures of H2 and H2S in the experiment, we calculated the Gibbs free energy of VS formation from which the vacancy fraction at each of the three positions considered can be calculated (see “Theory methods”). We note that some H2S is present in the background gas even if pure H2 is used, due to residual gas and the fact that H2S is released during S vacancy formation, corresponding to a H2/H2S ratio of approximately 10³–10⁴ or higher (estimated from gas analysis using mass spectroscopy). Figure 2c shows the calculated vacancy fraction on the three locations at the experimental temperature of 673 K and a total pressure of 10⁻⁴ mbar as a function of the relative partial pressure of H2 and H2S from sulfur rich (H2/H2S > 1) to sulfur poor conditions (H2/H2S > 10⁷). The model predicts a significant population of S vacancies at the M and A positions in the range of H2/H2S ratio between 10³ and 10⁴, in agreement with our experimental findings (Fig. 2a). Going to more sulfur poor conditions (H2/H2S > 10⁵) from this point leads to almost unity vacancy fraction, reflecting fully stripped M and A sites equivalent to a naked Mo-edge, which we never observed experimentally. On the other hand, the model predicts a very small fraction of corner (C) vacancies, below 10⁻⁷, as opposed to ~6% found in experiments. This difference may be due to the intrinsic errors in DFT (of 0.1–0.2 eV), which can introduce uncertainties in the calculated vacancy population. Furthermore, while the Au substrate is expected to not significantly influence ES on the M and A sites, which is captured well in semi-infinite models50, its influence on the corner site could be a source of error in our calculation of vacancy coverage for C positions in Fig. 2c. The DFT modeling also predicts that the VS formation energies on C sites are dependent on the edge length. Figure 2d shows that the sulfur vacancy formation energy (ES) at the corner site (see also Supplementary Fig. 4) decreases from ~2 eV to 1.8 eV upon going from Mo-edge length of four to six.
Hence, this dependency will contribute to an increase of the overall VS fraction compared with the 5 S-atom Mo-edge, as shown in Fig. 2c (lightly shaded pink circles). The increasing trend for the VS probability for the corner sites is also noted in the breakdown of the experimental data for different edge lengths in Supplementary Fig. 2. For comparison, the calculated ES values at the M site in Fig. 2d (only present in 5S and 6S Mo-edges) show that the vacancy coverage should be size independent, which is consistent with our STM observations (Supplementary Fig. 2). Furthermore, for the A sites, DFT predicts a variation where the S atom is hardest to remove on the 5S Mo-edge (1.19 eV for 5S compared to 1.12 eV and 0.99 eV for 4S and 6S, respectively, in Fig. 2d). We note that the DFT functional tends to trimerize Mo atoms, particularly on the 5S Mo-edge, which makes it generally easier to remove the S atom on the longer bridge (e.g. the M site of 5S) than on others. We posit that this effect is the reason for the variation of ES values with Mo-edge length, which does not reflect the variation seen in the experiment.
### Thiophene adsorption modes on MoS2-edge and corner sites
To address differences in the bonding affinity of thiophene at C, A, and M-type VS sites, we performed plane wave DFT calculations for thiophene adsorption on pre-formed S-vacancies along the edge of the MoS2 nanoparticle model (see Fig. 2b) and identified the most energetically stable adsorbed states (see Fig. 4 for most stable structures and Supplementary Table 1 for alternative structures). We then compare these states with the corresponding edge structure observed on these sites in atom-resolved STM images, obtained after thiophene exposure at room temperature (Fig. 5). The most stable adsorbed thiophene state corresponds to adsorption directly at the corner S-vacancy site C (Fig. 4a). In this state, thiophene binds strongly (binding energy, BE = −2.50 eV) to the exposed Mo atom available at the C site. This is in line with our experiments (Fig. 3b), showing that the C vacancy sites react with thiophene readily at room temperature. In the STM images shown in Fig. 5a, we see a bright (see also linescan I in Fig. 5d), larger, and slightly displaced protrusion at the corner site (indicated with a C in the superimposed model structure), consistent with the presence of a thiophene molecule adsorbed in configuration shown in Fig. 4a.
The most stable adsorption mode found for thiophene on the A vacancy site is shown in Fig. 4b. In this state, thiophene adsorption is still exothermic by ~ −0.85 eV, but much less favorable than for corner S-vacancies, indicating that steric hindrance is significant on the A site. Importantly, the DFT calculation shows that the adsorption introduces a significant structural modification to “make space” for thiophene, thus displacing the neighboring S atom in the r-MoS2. Specifically, as the thiophene is adsorbed onto a S-vacancy on the A site, it either pushes away a neighboring S atom on the corner from a bridge to a top position (see the green arrow in Fig. 4b) or displaces a neighboring S atom on the interior (Fig. 4c, d) to its neighboring site, leading to a relatively less exothermic adsorption (BE = −0.60/−0.65 eV for Mo-5S and Mo-6S edges, respectively). Indeed, this latter S displacement leads to the formation of a stable S2 dimer (formally $S_2^{2-}$) (see red arrow and dashed circle in Fig. 4c) and places the thiophene in coordination with a double VS vacancy located over both the A and M sites. Note in Fig. 4c that the S in thiophene coordinates to one of the undercoordinated Mo atoms, whereas the carbon ring system is placed over the Mo which has become accessible due to the displacement of the S. Our calculations identified additional favorable thiophene adsorption states with the concurrent formation of S dimers and corner S atop a corner Mo atom (with BE ~ −0.5 to −0.6 eV, see structures (i)–(iii) of Supplementary Table 1). Our calculations also showed that thiophene adsorption on a S-vacancy on the M site can generate a favorable adsorption site by displacing all S atoms on the edge to a top position without S2 formation (see structure (iv) of Supplementary Table 1); we expect, however, that such a concerted displacement may involve a large kinetic barrier due to an additive contribution of each S displacement. As shown in Fig. 4e, for adsorption at the M vacancy site, the S2 dimer can also be formed on the other side of the edge closer to the C site, in a configuration with an exothermic BE of −0.39 eV (note here we used a six S-atom-long Mo-edge (Mo-6S)).
The STM images in Fig. 5b, c provide the experimental evidence that adsorption on A and M sites occurs via S displacement. To observe this, we point to the contrast asymmetry around the distinct dark edge region denoted with an A and S2 or M and S2, respectively, in the lower part of Fig. 5b, c (see also linescans II, III and IV in Fig. 5d). Here, the S2 dimer is located at the high contrast protrusion on only one side, and the dark spot reflects the rather open region between thiophene and the S2 dimer seen in all cases in the structures in Fig. 4c–e. The edges reflecting linescans II and IV correspond to the adsorption of thiophene where the S2 dimer is located either away from the nearest corner site or towards the corner position. We also observe the modeled displacement of the corner S monomer from a bridge to an on-top position in the configuration of Fig. 4b, as indicated by the position of the protrusion at the green arrow in the superimposed image below Fig. 5b.
Our calculations show that S2 dimerization, per se, is typically endothermic with energies ranging from 0.7–1.9 eV depending on the location of the dimers and the concomitant vacancy (see Supplementary Table 2). Since the adsorption with dimerization is, in all cases, exothermic, we checked whether thiophene adsorption and S displacement occur via a concerted transition state or sequentially. As shown in Supplementary Table 3, the barrier for S-dimerization starting from an edge vacancy and creating two adjacent vacancies without thiophene is very high (~2.1 eV). The barrier for the concerted step is 0.8 eV lower than the sequential pathway (at ~1.3 eV), which is fully consistent with the STM observation in Fig. 3b that thiophene adsorption on A and M sites is favorable, but with a surmountable reaction barrier.
## Methods
### Experimental details
The experimental approach is based on the synthesis method of well-defined nanocrystals of MoS2 on Au(111)27,30,53. The synthesis of MoS2 nanoparticles on Au(111) was carried out on a clean Au(111) single crystal in a standard ultra-high vacuum (UHV) chamber equipped with a homebuilt Aarhus-type variable temperature STM. The sample temperature was measured with a K-type thermocouple in contact with the backside of the Au crystal. MoS2 nanoparticles were synthesized by physical vapor deposition of Mo onto the gold surface using an e-beam evaporator (Oxford Applied Research EGCO-4). H2S gas (AGA, purity 99.8%) was dosed for sulfidation corresponding to a background pressure of 1.0 × 10⁻⁶ mbar for 15 min, while the sample temperature was 400 K. In order to obtain full crystallinity of the MoS2 nanoparticles, the samples were post-annealed in H2S atmosphere at 673 K for 10 min. The MoS2 nanoparticles were subsequently activated under reducing conditions by back-filling the UHV chamber with H2 (99.999%, Praxair)50. The reduced (active) MoS2 nanoparticles (r-MoS2) used as the starting point of the experiments reported here were obtained by annealing the fully sulfided samples to 673 K in 10⁻⁴ mbar of H2. For thiophene (Sigma Aldrich, 99% purity) dosage, the liquid was kept in a glass container and admitted to the UHV chamber through a leak valve connected to a stainless steel tube directed at the sample surface. Prior to dosing, the thiophene liquid was purified by several freeze-pump-thaw cycles to remove dissolved gas. The gas purity was checked and H2S and H2 partial pressures were measured by a quadrupole mass spectrometer (Hiden Analytical). The samples were exposed to thiophene at three different substrate temperatures: 300, 400, and 650 K; a background pressure of 1.0 × 10⁻⁷ mbar was used for 5 min during the thiophene dosage. After each thiophene exposure, the sample was cooled down to room temperature and transferred to the STM for a local inspection. STM images were recorded at room temperature using etched W tips in the constant current mode. The tunneling parameters are noted in the image text as Vt (tunneling bias) and It (tunneling current).
### DFT calculations
All calculations (unless specified otherwise) were carried out for a single-layer hexagonal nanoparticle model of MoS2 with a Mo-edge containing two to six molybdenum atoms, while the S-edge contains four atoms, reflecting the typical truncated triangular shape of a single-layer particle observed in STM experiments. Two layers of sulfur atoms sandwich the molybdenum layer such that they are in the trigonal prismatic positions characteristic of the 2H phase of MoS2. Further, most of the calculations involve a Mo-edge of six molybdenum atoms (and 5S monomers, denoted Mo-5S), also consistent with our observations that metal edges are longer than the sulfur edges. All figures in this document with MoS2 structures and adsorption configurations show Mo atoms in blue, S atoms in yellow, C atoms in brown, and H atoms in gray. The Au(111) support was not included in these calculations; while the support can influence the stability of the edges, the single-layer stripe models used in periodic calculations have successfully captured the correct edge morphologies under sulfidation and reducing conditions, as inferred from STM studies. We therefore suggest that the given model captures the local electronic structures adequately to provide a comparative analysis between different locations along the periphery of the MoS2 nanoparticles used in experiments. We further assume that the sulfur edge is 100% S-decorated while the metal edge is 50% S-decorated, consistent with ab initio phase diagrams and STM observations50. Importantly, we introduce S-vacancies on the Mo-edge on the corner (C), adjacent (A) and middle (M) positions, as needed, to elucidate the energetics of S-vacancy formation and of thiophene adsorption on these vacancies.
The calculations were carried out with VASP54,55, a plane wave periodic DFT code. The generalized gradient approximation and projector augmented wave (PAW) potentials56 were used with the PBE exchange correlation functional57 and the D3 Grimme dispersion correction58. All calculations were carried out in a box that had at least 10 Å of vacuum between two images in all directions. Spin polarization is included in all calculations: the effect of spin was found to be negligible (~0.01 eV on the vacancy formation energy). Plane wave and density wave cutoffs of 400 and 645 eV were used, respectively. A Gaussian smearing of 0.05 eV was used, and the energies were extrapolated to 0 K. Only gamma-point sampling was used in view of the large dimensions of the supercell. The convergence criterion for geometric relaxation was set to 0.02 eV/Å.
The energy of vacancy formation, ES, is given by
$$E_{\rm{S}} = E_{{\rm{NP}},{\rm{with}}\,{\rm{CUS}}} \, + \, E_{{\rm{H}}_2{\rm{S}}}-E_{{\rm{NP}}}-E_{{\rm{H}}_2},$$
(1)
where ENP,with CUS is the energy of the nanoparticle with coordinative unsaturation (CUS or vacancy), $E_{\mathrm{H_2S}}$ is the energy of H2S in the gas phase, ENP is the energy of the nanoparticle at the equilibrium termination (50% metal edge, 100% sulfur edge), and $E_{\mathrm{H_2}}$ is the energy of hydrogen gas. The free energy of vacancy formation, required to estimate the vacancy fraction, was computed by calculating temperature-dependent enthalpy and entropy59. The entropy of surface S atoms was assumed to be of vibrational origin, while rotational and translational components from statistical mechanics for an ideal polyatomic species were additionally included for gaseous molecules. Vibrational entropy was computed using the harmonic approximation, which is sufficient for all systems considered here. The binding energy of thiophene, BE, is given by
$${\rm{BE}} = E_{{\rm{NP}},\,{\rm{with}}\,{\rm{CUS}} + {\rm{Thiophene}}}-E_{{\rm{NP}},\,{\rm{with}}\,{\rm{CUS}}}-E_{{\rm{Thiophene}}},$$
(2)
where ENP, with CUS + Thiophene is the energy of the nanoparticle with thiophene adsorbed on the vacancy (with and without sulfur rearrangements), ENP, with CUS is the energy of the nanoparticle with the vacancy (without rearrangements), and EThiophene is the energy of gas-phase thiophene.
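To make the connection between Eq. (1) and the vacancy fractions of Fig. 2c concrete, here is a minimal Python sketch of a two-state (S-occupied vs. vacant) site model. The vibrational and gas-phase entropy corrections are lumped into a single dS parameter, a simplification I am assuming (the paper computes these terms explicitly), so the printed fractions are indicative only.

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def vacancy_fraction(E_S, T, r_H2_H2S, dS=0.0):
    """Two-state model for the exchange  S(edge) + H2(g) -> vacancy + H2S(g).
    dG = E_S - T*dS - kB*T*ln(p_H2/p_H2S); treating edge sites as independent
    then gives theta_vac = 1 / (1 + exp(dG / (kB*T)))."""
    dG = E_S - T * dS - kB * T * np.log(r_H2_H2S)
    return 1.0 / (1.0 + np.exp(dG / (kB * T)))

# DFT vacancy formation energies from the paper (C, A, M sites), T = 673 K,
# and an H2/H2S ratio of 1e4 as estimated in the text.
for site, E_S in [("C", 1.97), ("A", 1.19), ("M", 1.05)]:
    print(site, vacancy_fraction(E_S, 673.0, 1e4))
```

With dS = 0 the predicted fractions come out far below the experimental 16–25% for the A and M sites, which illustrates why the entropic terms included in the paper's free-energy treatment matter.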
For transition state calculations in Supplementary Table 1, a periodic single-layer stripe model with four rows of six Mo atoms each and commensurate S atoms was created based on our previous work10,25,59. We start with a 50% S covered Mo-edge and create a single vacancy in the middle of the edge to represent an internal vacancy (corresponding to an M vacancy site in the main text). The supercell was periodic in one direction (along the length of the edge), where a Mo-Mo distance of 3.19 Å was set based on our optimized lattice constant; the cell had more than 15 Å of vacuum in the other directions. The k-point sampling was 1 × 2 × 1, with the 2 along the edge length, based on the Monkhorst–Pack method60. All other settings for these DFT calculations were the same as above. The transition states were calculated using the climbing image nudged elastic band method61 such that the forces on the images were less than 0.1 eV/Å.
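The climbing-image nudged elastic band (CI-NEB) scheme of ref. 61 can be illustrated on a toy potential. The following pure-Python sketch is schematic only, using my own two-dimensional double-well stand-in rather than the paper's MoS2 stripe model (which was computed with VASP):

```python
import numpy as np

# Toy 2-D potential: double well along x, harmonic in y.
# Minima at (-1, 0) and (1, 0); saddle at (0, 0) with V = 1.
def V(p):
    x, y = p
    return (x**2 - 1)**2 + 2 * y**2

def gradV(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 4 * y])

def cineb(n_img=11, k=2.0, step=0.02, iters=3000):
    path = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)
    path[1:-1, 1] += 0.3                 # bow the initial guess off the MEP
    for _ in range(iters):
        E = np.array([V(p) for p in path])
        ci = 1 + np.argmax(E[1:-1])      # highest interior image climbs
        new = path.copy()
        for i in range(1, n_img - 1):
            tau = path[i + 1] - path[i - 1]
            tau /= np.linalg.norm(tau)   # local tangent of the band
            f_true = -gradV(path[i])
            if i == ci:   # invert the parallel force: converges to the saddle
                f = f_true - 2 * np.dot(f_true, tau) * tau
            else:         # true force perp. to the path + spring force along it
                f_perp = f_true - np.dot(f_true, tau) * tau
                f_spring = k * (np.linalg.norm(path[i + 1] - path[i])
                                - np.linalg.norm(path[i] - path[i - 1])) * tau
                f = f_perp + f_spring
            new[i] = path[i] + step * f
        path = new
    return path, np.array([V(p) for p in path])

path, E = cineb()
print("barrier ~", E.max() - E.min())    # exact saddle height is 1.0
```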
## Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
## References
1. Toulhoat, H. & Raybaud, P. (eds). Catalysis by Transition Metal Sulphides (Editions Technip, Paris, 2013).
2. Topsøe, H., Clausen, B. S. & Massoth, F. E. Hydrotreating Catalysis Vol. 11 (Springer Verlag, Berlin, Heidelberg, 1996).
3. Salmeron, M., Somorjai, G. A., Wold, A., Chianelli, R. & Liang, K. S. The adsorption and binding of thiophene, butene and H2S on the basal plane of MoS2 single crystals. Chem. Phys. Lett. 90, 105–107 (1982).
4. Daage, M. & Chianelli, R. R. Structure-function relations in molybdenum sulfide catalysts: the “rim-edge” model. J. Catal. 149, 414–427 (1994).
5. Raybaud, P., Hafner, J., Kresse, G., Kasztelan, S. & Toulhoat, H. Ab initio study of the H2-H2S/MoS2 gas-solid interface: the nature of the catalytically active sites. J. Catal. 189, 129–146 (2000).
6. Lauritsen, J. V. et al. Hydrodesulfurization reaction pathways on MoS2 nanoclusters revealed by scanning tunneling microscopy. J. Catal. 224, 94–106 (2004).
7. Hansen, L. P. et al. Atomic-scale edge structures on industrial-style MoS2 nanocatalysts. Angew. Chem. Int. Ed. 50, 10153–10156 (2011).
8. Lauritsen, J. V. & Besenbacher, F. Atom-resolved scanning tunneling microscopy investigations of molecular adsorption on MoS2 and CoMoS hydrodesulfurization catalysts. J. Catal. 328, 49–58 (2015).
9. Baubet, B. et al. Quantitative two-dimensional (2D) morphology-selectivity relationship of CoMoS nanolayers: a combined high-resolution high-angle annular dark field scanning transmission electron microscopy (HR HAADF-STEM) and density functional theory (DFT) study. ACS Catal. 6, 1081–1092 (2016).
10. Rangarajan, S. & Mavrikakis, M. On the preferred active sites of promoted MoS2 for hydrodesulfurization with minimal organonitrogen inhibition. ACS Catal. 7, 501–509 (2017).
11. Topsøe, N. Y. et al. Spectroscopy, microscopy and theoretical study of NO adsorption on MoS2 and Co-Mo-S hydrotreating catalysts. J. Catal. 279, 337–351 (2011).
12. Topsøe, N.-Y. & Topsøe, H. Characterization of the structures and active sites in sulfided Co-Mo/Al2O3 and Ni-Mo/Al2O3 catalysts by NO chemisorption. J. Catal. 84, 386–401 (1983).
13. Travert, A. et al. CO adsorption on CoMo and NiMo sulfide catalysts: a combined IR and DFT study. J. Phys. Chem. B 110, 1261–1270 (2006).
14. Chen, J., Maugé, F., El Fallah, J. & Oliviero, L. IR spectroscopy evidence of MoS2 morphology change by citric acid addition on MoS2/Al2O3 catalysts—a step forward to differentiate the reactivity of M-edge and S-edge. J. Catal. 320, 170–179 (2014).
15. McCarty, K. F. & Schrader, G. L. Deuterodesulfurization of thiophene: an investigation of the reaction mechanism. J. Catal. 103, 261–269 (1987).
16. Raybaud, P., Hafner, J., Kresse, G. & Toulhoat, H. Adsorption of thiophene on the catalytically active surface of MoS2: an ab initio local-density-functional study. Phys. Rev. Lett. 80, 1481–1484 (1998).
17. van der Meer, Y., Hensen, E. J. M., van Veen, J. A. R. & van der Kraan, A. M. Characterization and thiophene hydrodesulfurization activity of amorphous-silica-alumina-supported NiW catalysts. J. Catal. 228, 433–446 (2004).
18. Cristol, S., Paul, J.-F., Schovsbo, C., Veilly, E. & Payen, E. DFT study of thiophene adsorption on molybdenum sulfide. J. Catal. 239, 145–153 (2006).
19. Moses, P. G., Hinnemann, B., Topsøe, H. & Nørskov, J. K. The hydrogenation and direct desulfurization reaction pathway in thiophene hydrodesulfurization over MoS2 catalysts at realistic conditions: a density functional study. J. Catal. 248, 188–203 (2007).
20. Jin, Q. et al. A theoretical study on reaction mechanisms and kinetics of thiophene hydrodesulfurization over MoS2 catalysts. Catal. Today 312, 158–167 (2018).
21. Bataille, F. et al. Alkyldibenzothiophenes hydrodesulfurization-promoter effect, reactivity, and reaction mechanism. J. Catal. 191, 409–422 (2000).
22. Grange, P. Catalytic hydrodesulfurization. Catal. Rev. 21, 135–181 (1980).
23. Li, S., Liu, Y., Feng, X., Chen, X. & Yang, C. Insights into the reaction pathway of thiophene hydrodesulfurization over corner site of MoS2 catalyst: a density functional theory study. Mol. Catal. 463, 45–53 (2019).
24. Paul, J. F. & Payen, E. Vacancy formation on MoS2 hydrodesulfurization catalyst: DFT study of the mechanism. J. Phys. Chem. B 107, 4057–4064 (2003).
25. Rangarajan, S. & Mavrikakis, M. DFT insights into the competitive adsorption of sulfur- and nitrogen-containing compounds and hydrocarbons on Co-promoted molybdenum sulfide catalysts. ACS Catal. 6, 2904–2917 (2016).
26. Byskov, L. S., Nørskov, J. K., Clausen, B. S. & Topsøe, H. DFT calculations of unpromoted and promoted MoS2-based hydrodesulfurization catalysts. J. Catal. 187, 109–122 (1999).
27. Helveg, S. et al. Atomic-scale structure of single-layer MoS2 nanoclusters. Phys. Rev. Lett. 84, 951–954 (2000).
28. Schweiger, H., Raybaud, P., Kresse, G. & Toulhoat, H. Shape and edge sites modifications of MoS2 catalytic nanoparticles induced by working conditions: a theoretical study. J. Catal. 207, 76–87 (2002).
29. Bollinger, M. V., Jacobsen, K. W. & Nørskov, J. K. Atomic and electronic structure of MoS2 nanoparticles. Phys. Rev. B 67, 085410 (2003).
30. Lauritsen, J. V. et al. Atomic-scale insight into structure and morphology changes of MoS2 nanoclusters in hydrotreating catalysts. J. Catal. 221, 510–522 (2004).
31. Dinter, N. et al. Temperature-programmed reduction of unpromoted MoS2-based hydrodesulfurization catalysts: first-principles kinetic Monte Carlo simulations and comparison with experiments. J. Catal. 275, 117–128 (2010).
32. Sharma, L., Upadhyay, R., Rangarajan, S. & Baltrusaitis, J. Inhibitor, Co-catalyst, or Co-reactant? Probing the different roles of H2S during CO2 hydrogenation on the MoS2 catalyst. ACS Catal. 9, 10044–10059 (2019).
33. Rosen, A. S., Notestein, J. M. & Snurr, R. Q. Comprehensive phase diagrams of MoS2 edge sites using dispersion-corrected DFT free energy calculations. J. Phys. Chem. C 122, 15318–15329 (2018).
34. Bruix, A., Lauritsen, J. V. & Hammer, B. Effects of particle size and edge structure on the electronic structure, spectroscopic features, and chemical properties of Au(111)-supported MoS2 nanoparticles. Faraday Discuss. 188, 323–343 (2016).
35. Saric, M., Rossmeisl, J. & Moses, P. G. Modeling the active sites of Co-promoted MoS2 particles by DFT. Phys. Chem. Chem. Phys. 19, 2017–2024 (2017).
36. Topsøe, H. et al. The role of reaction pathways and support interactions in the development of high activity hydrotreating catalysts. Catal. Today 107–108, 12–22 (2005).
37. Hermann, N., Brorson, M. & Topsøe, H. Activities of unsupported second transition series sulfides for hydrodesulfurization of sterically hindered 4,6-dimethyldibenzothiophene and of unsubstituted dibenzothiophene. Catal. Lett. 65, 169–174 (2000).
38. Morales-Valencia, E. M., Castillo-Araiza, C. O., Giraldo, S. A. & Baldovino-Medrano, V. G. Kinetic assessment of the simultaneous hydrodesulfurization of dibenzothiophene and the hydrogenation of diverse polyaromatic structures. ACS Catal. 8, 3926–3942 (2018).
39. Li, X., Wang, A. J., Egorova, M. & Prins, R. Kinetics of the HDS of 4,6-dimethyldibenzothiophene and its hydrogenated intermediates over sulfided Mo and NiMo on γ-Al2O3. J. Catal. 250, 283–293 (2007).
40. Schachtl, E., Yoo, J. S., Gutiérrez, O. Y., Studt, F. & Lercher, J. A. Impact of Ni promotion on the hydrogenation pathways of phenanthrene on MoS2/γ-Al2O3. J. Catal. 352, 171–181 (2017).
41. Knudsen, K. G., Cooper, B. H. & Topsøe, H. Catalyst and process technologies for ultra low sulfur diesel. Appl. Catal. A 189, 205–215 (1999).
42. Choudhary, T. V., Parrott, S. & Johnson, B. Unraveling heavy oil desulfurization chemistry: targeting clean fuels. Environ. Sci. Technol. 42, 1944–1947 (2008).
43. Rota, F. & Prins, R. Role of hydrogenolysis and nucleophilic substitution in hydrodenitrogenation over sulfided NiMo/γ-Al2O3. J. Catal. 202, 195–199 (2001).
44. Prins, R. Catalytic hydrodenitrogenation. Adv. Catal. 46, 399–464 (2002).
45. Gutiérrez, O. Y. et al. Effects of the support on the performance and promotion of (Ni)MoS2 catalysts for simultaneous hydrodenitrogenation and hydrodesulfurization. ACS Catal. 4, 1487–1499 (2014).
46. Albersberger, S. et al. Simultaneous hydrodenitrogenation and hydrodesulfurization on unsupported Ni-Mo-W sulfides. Catal. Today 297, 344–355 (2017).
47. Lauritsen, J. V. et al. Size-dependent structure of MoS2 nanocrystals. Nat. Nanotechnol. 2, 53–58 (2007).
48. Bruix, A. et al. In situ detection of active edge sites in single-layer MoS2 catalysts. ACS Nano 9, 9322–9330 (2015).
49. Lauritsen, J. V. et al. Size-dependent structure of MoS2 nanocrystals. Nat. Nanotechnol. 2, 53–58 (2007).
50. Grønborg, S. S. et al. Visualizing hydrogen-induced reshaping and edge activation in MoS2 and Co-promoted MoS2 catalyst clusters. Nat. Commun. 9, 2211 (2018).
51. Mom, R. V., Louwen, J. N., Frenken, J. W. M. & Groot, I. M. N. In situ observations of an active MoS2 model hydrodesulfurization catalyst. Nat. Commun. 10, 2546 (2019).
52. Schweiger, H., Raybaud, P., Kresse, G. & Toulhoat, H. Shape and edge sites modifications of MoS2 catalytic nanoparticles induced by working conditions: a theoretical study. J. Catal. 207, 76–87 (2002).
53. Lauritsen, J. V. & Besenbacher, F. Model catalyst surfaces investigated by scanning tunneling microscopy. Adv. Catal. 50, 97–143 (2006).
54. Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996).
55. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).
56. Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999).
57. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
58. Grimme, S., Antony, J., Ehrlich, S. & Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 132, 154104 (2010).
59. Rangarajan, S. & Mavrikakis, M. Adsorption of nitrogen- and sulfur-containing compounds on NiMoS for hydrotreating reactions: a DFT and vdW-corrected study. AIChE J. 61, 4036–4050 (2015).
60. Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13, 5188–5192 (1976).
61. Henkelman, G., Uberuaga, B. P. & Jónsson, H. A climbing image nudged elastic band method for finding saddle points and minimum energy paths. J. Chem. Phys. 113, 9901–9904 (2000).
## Acknowledgements
The computational work was supported by the U.S. Department of Energy (DOE)-Basic Energy Sciences (BES), Office of Chemical Sciences, Catalysis Science Program, under Grant No. DE‐FG02‐05ER15731. Part of the calculations were conducted using supercomputing resources from the National Energy Research Scientific Computing Center (NERSC) and the Center for Nanoscale Materials (CNM) at Argonne National Laboratory (ANL). CNM and NERSC are supported by the U.S. Department of Energy, Office of Science, under contracts DE‐AC02‐06CH11357 and DE‐AC02‐05CH11231, respectively. S.R. acknowledges the use of Extreme Science and Engineering Discovery Environment (XSEDE) resources, which is supported by National Science Foundation grant number ACI-1548562. J.V.L., J.R.F., and N.S. acknowledge financial support from the Danish Research Council—Technology and Production (HYDECAT), Villumfonden and Haldor Topsøe A/S. J.V.L. acknowledges the Aarhus University Centre for Integrated Materials Research (iMAT).
## Author information
Authors
### Contributions
N.S. and J.R.F. performed the experiments. N.S. analyzed the experimental data. S.R. performed the theory. J.V.L. and M.M. planned and organized the studies. N.S. wrote the first draft. J.V.L. wrote the final version. All authors contributed to the final version of the manuscript.
### Corresponding authors
Correspondence to Manos Mavrikakis or Jeppe V. Lauritsen.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Salazar, N., Rangarajan, S., Rodríguez-Fernández, J. et al. Site-dependent reactivity of MoS2 nanoparticles in hydrodesulfurization of thiophene. Nat Commun 11, 4369 (2020). https://doi.org/10.1038/s41467-020-18183-4
|
2022-01-25 23:08:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.671167254447937, "perplexity": 6172.589427441728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304876.16/warc/CC-MAIN-20220125220353-20220126010353-00009.warc.gz"}
|
http://mathematica.stackexchange.com/questions?page=9&sort=newest
|
All Questions
1 answer
90 views
How do you refine the elements of a 3D mesh?
Response to Answer below. Original Question: I am trying to refine a 3D mesh in a certain region. This seems relevant but does not work for me. I start by making the region which consists of a box ...
1 answer
34 views
Looping over a function with several parameters
Folks, I am a beginner in Mathematica and have a simple question regarding loops. Here's the problem: I have a dataset with four columns and I need to loop over each row. So the matrix in which my ...
0 answers
54 views
What's inside Dataset? [duplicate]
It seems that Dataset has rich structures in them, for example: ds = Dataset[{<|"a" -> 1, "b" -> "x", "c" -> {1}|>}]; TreeForm[dataset] Could ...
2 answers
727 views
Why is Mathematica destroying this graph?
Here I have a picture of a function I graphed: reg[x_,y_]:=(x^2+y^2)Cos[4ArcTan[y/x]]; Plot3D[reg[x,y],{x,-2,2},{y,-2,2},AxesLabel->Automatic] And here is ...
0 answers
61 views
Can a single Sum with multiple iterators be different from nested Sums?
Multiple sums are documented with two or more iterators, for example: Sum[1/(j^2 (i + 1)^2), {i, 1, Infinity}, {j, 1, i}] however the same answer can be ...
1 answer
30 views
Export a 3Dplot keeping the zoom, rotation properties
I want to export a 3D plot, but when I export it the image remains static. I want the image to keep its rotate and zoom properties when pasted into PowerPoint, so that during the presentation I can explain to the audience the ...
2 answers
154 views
How to access Dataset's metadata
A Dataset object has internal metadata. For example, let ds = Dataset[{<|"a" -> 1, "b" -> "x", "c" -> {1}|>}]; ...
1 answer
47 views
Find numerical solution to this system of DE
I am trying to solve this system \left( \begin{array}{ccccc} 2 k & -k & 0 & 0 & 0 \\ -k & 2 k & -k & 0 & 0 \\ 0 & -k & 2 k & -k & 0 \\ 0 & 0 ...
2 answers
39 views
Solving several inequalities in Mathematica
I am a very beginner of Mathematica and the following will probably be a very stupid question. I have a bunch of inequalities in 6 unknowns and I want to find the set of values satisfying them. Can I ...
0 answers
5 views
Prove that the converse of a strong digraph is also strong [migrated]
I would like to know how I prove this. The converse of a digraph D is obtained from D by reversing the direction of every arc of D. Show that a digraph D is strong if and only if its converse is ...
1 answer
64 views
Plotting 3-vectors over a 3D lattice [closed]
I am trying to plot a magnetic structure, which basically means using Mathematica to read and plot a .dat file. In the file, each row has six numbers; the first 3 represent the atomic position, and ...
0 answers
42 views
Is there any point to using a variable as a function parameter?
Normally one only sees patterns as function parameters: fun1[x_]:=x^2 Is there any point to using a variable in a function definition: ...
1 answer
46 views
Extract part of list by reading it in a cyclic manner
I have trouble extracting part of a list in a specific order, say, increasingly in a cyclic manner. For example, I have the following list list={0,10,6,-5,-10} ...
1 answer
57 views
How to complete boundary in masked image?
I am trying to generate a mask of the elliptical boundaries in this object. Any ideas on how I could complete the edge of a partially filled elliptical object? Thus far, I've obtained a mask that is ...
0 answers
40 views
How do I insert Unicode into a MySQL database?
I'm having trouble transferring Unicode strings from one MySQL database to another using DatabaseLink functions. Here is my failed attempt to read a string from ...
2 answers
63 views
Interpolation of a vector [closed]
How can we use sample data for interpolation of a function within a domain? I need all interpolated values at once in a vector format. For instance, below I create a range of values for ...
0 answers
23 views
ParallelDo issue with SQLExecute (SQLite) [closed]
This code runs as expected if I use Do[...], but if I use instead ParallelDo[...], as in the code below, the SQL INSERT statements seem to be completely ignored. In both cases, the filename is ...
1 answer
63 views
NonlinearModelFit memory problem
I have a problem while trying to make a nonlinear fit of the following data: qn = NSolve[Tan[q] == 3*q/(3 + 1870715*q^2) && 0 < q <= 65, q] ...
0 answers
18 views
How to plot eigenvalues as a function of parameters in a dynamic module?
I am trying to make a fancy interactive plot that shows how the eigenvalues of a matrix mat change under variation of parameters ...
2 answers
71 views
FindInstance is very slow on an apparently simple problem
I wish to use FindInstance to find a set of solutions of this problem: ...
1 answer
61 views
Plot option Filling conflicts with option PlotLabels
The regulars here discuss approaches to numerically unstable systems of higher-order PDEs, and I cannot even paint by numbers … (sigh) A Plot[] of simple curves with ...
1answer
230 views
How to make datasets display correctly?
NB: The problem illustrated below is truly ubiquitous. I hope the examples given below are sufficiently different to demonstrate this fact, and to discourage answers that hinge on the details of the ...
2answers
161 views
Save as PDF without cell numbers
I want to save a notebook file as PDF, but without cell numbers. For instance, take this document and select “Save as PDF”; the result still shows them. How can I remove the cell numbers ...
0answers
43 views
Grid of plots - alignment [duplicate]
I'm trying to plot several plots and graphics into Grid, but I have no success with proper alignment. Here is my code: ...
0answers
42 views
How can I plot 2 concentric fixed circles with a polygon inscribed in one and circumscribed about the 2nd
Can anyone help me with the simplest code to plot 2 concentric circles and a polygon with n sides (which I can manipulate) inscribed in one circle and circumscribed about the other? However, I want the ...
0answers
59 views
Find and Replace in Notebook using a Pattern [closed]
I would love to be able to use the power of Mathematica's patterns to find and replace within my code. I have searched for this on google, and so far I've come up with this: ...
0answers
33 views
How to Select the elements of a list so that GCD equal to 1
I have a list ...
2answers
106 views
Function does not execute commands and fails to evaluate [closed]
The code I am trying to execute: hh[x_, y_] := x + y; hh[{3, 4}] The expected output: 7 However, on output I get ...
1answer
1k views
How does a Pringle lose its curvature?
Nom! As part of a bigger project, I was writing some code to calculate the scalar curvature of surfaces of the form $z = f(x,y)$. This uses a general calculation of the scalar curvature to produce ...
1answer
42 views
How to set MaxStepSize for spatial variable in NDSolve?
When applying NDSolve to a 1-D transient heat equation, the following code appears to set MaxStepSize for the temporal variable t. ...
0answers
44 views
How to make sure that Mathematica doesn't hog memory for intermediate steps while using functions like FindRoot? [closed]
I am using Mathematica 10.4. I am trying to solve a set of around 10000 non-linear algebraic equations that I have computed upon discretization of a set of coupled non-linear differential equations. ...
1answer
65 views
Why does Position by default return a list of lists of positions instead of a simple list of positions?
Position[{1,2,3,4,2,3,4,5},2] (* Output: {{2},{5}} *) Why does Position work this way? I understand this form might be ...
0answers
38 views
Flatten works in output but not in Global Workspace?
I am trying Michael Trott's Sierpinski Triangle. ...
0answers
38 views
How to remove a bar that blocks some upper space that says “WOLFRAM STUDENT EDITION” so forth? [duplicate]
There is a bar at the top of the Mathematica window that occupies some space and says "Wolfram Mathematica : STUDENT EDITION" and so forth. I would like to utilize that space instead of letting it ...
2answers
87 views
How to reduce each column of a dataset?
Given a simple dataset like this one: ...I want to apply some reduction function (like Total or MinMax) to each column ...
0answers
32 views
The graphic doesn't appear
The data is in this link: https://www.dropbox.com/s/vlvcwdszsiglce9/texto.txt?dl=0 ...
0answers
44 views
High Precision Plots of Eisenstein Series [closed]
When plotting the Eisenstein Series (great information here Eisenstein Series in Mathematica?) you observe highly non-trivial branch cut behavior close to the real axis. This makes the numerics break ...
1answer
53 views
Why is head evaluation inconsistent in pattern replacement?
I noticed the following unexpected result: In[90]:= "foo" /. x_String -> Head[x] Out[90]= Symbol Why is it that Head[x] ...
1answer
49 views
Errors only when code is iterated
Hello, I'm trying to have two for loops sweep two parameters, and for each 2-tuple of those parameters I want to solve a differential equation. To do this I have the following code: ...
3answers
92 views
Find extremal values while plotting
I have a parametric function {x[t],y[t]}. I then do ...
1answer
82 views
Tedious integral
I'm new to Mathematica, so I'm not familiar with its potential for solving tedious integrals symbolically. I'm trying to solve the following one: ...
1answer
39 views
An error comes up while exchanging stylesheets
A notebook contains ...
2answers
326 views
Draw a circle through 4 points on parabola [closed]
I have two dynamic perpendicular parabolas: $y=ax^2+c, x=by^2+d$, which intersect in four points (the coefficients are chosen that way). It is a fact that we can construct a circle through these ...
0answers
39 views
3D Plots and vector fields [duplicate]
I'm a beginner. How do I make a 3D plot from scratch (like a graphic of the electric field potential)? I also have to make vector fields (vectors of the electric field). I searched the internet but ...
0answers
17 views
Find a minimum spanning tree without defining starting and ending vertices?
Find a minimum spanning tree, but without defining starting and ending vertices.
0answers
33 views
From “NMinimize::nosat: Obtained solution does not satisfy the following constraints within Tolerance”, can we prove something?
I got a very very complicated function G[r1,r2,F1,F2] to minimize. It has four variables, r1,r2,F1,F2. When I introduce the ...
0answers
19 views
General question about solving a large set of linear equations efficiently
In my research, I got a large set of linear equations, about 10 000 equations with more than 10 000 variables. It is not efficient to use "Solve", so does anyone know any way to solve the equations in ...
2answers
132 views
Running Mathematica Notebook files in command mode
The question is, how can I run a .nb file in the kernel mode of Mathematica? I am not an expert in Mathematica, but one of our users who uses this program says that ...
0answers
33 views
GeoRegionValuePlot does not show Legends
I have a problem with GeoRegionValuePlot. These are the settings: ...
1answer
73 views
Given Latitude, Longitude, and Elevation, how to illustrate a three-dimensional path?
I've extracted detailed path information from a .FIT file and would like to plot this in three dimensions. I assumed GeoPositionXYZ[] would be useful, and have translated the data into a list that ...
http://www.geog.com.cn/CN/Y1965/V31/I3/194
• Article •
### Quaternary Glaciation and Postglacial Climatic Fluctuations in the Upper Urumchi River
1. Department of Geography, Nanking University
• Published: 1965-07-15 Online: 1965-07-15
### QUATERNARY GLACIATION AND THE POSTGLACIAL CLIMATIC FLUCTUATIONS IN THE REGION OF UPPER URUMCHI VALLEY, SINKIANG
YANG HUAI-JEN, CHIU SHU-CHANG
1. Department of Geography, Nanking University
• Online:1965-07-15 Published:1965-07-15
Abstract: An attempt is made in the following pages to show the evidence of Pleistocene and recent climatic fluctuations in the region of the upper Urumchi valley, Sinkiang, based on field work carried out during the summer of 1960. Most of the 55 present glaciers owe their existence to an exceptionally favourable exposition (Plate 1). The present glaciers occupying their ancestors' cirques are heavily loaded with ablation moraines. We observed moraine-covered dead-ice masses immediately downstream from the tongue of a present glacier (Plate 2). A few hundred meters from the tongue of the "No. 1 glacier" we observed successive crescent-like end moraines which lack plant cover and weathering rings (Plate 5). The above features prove that the glaciers are in a state of retreat and thinning. Between the 'fresh' moraines and the oxidized, plant-covered and moderately dissected Pleistocene moraines there is a marked difference. A detailed study of the weathering rings and depositional forms of these new and old moraines will throw much light on the understanding of the climatic variations in the eastern Tien Shan: (1) Some of the present glaciers, having shrunk to a considerable degree even under present conditions, seem unlikely to have endured the warm period of the Climatic Optimum, and consequently they cannot be considered the direct survivors of the last glacial period. (2) The fresh moraines overlying unconformably and thrusting upon the older glacial deposits were the products of regeneration and expansion of glaciers during the Little Ice Age. The present glaciers are the direct survivors of the Little Ice Age, which was a dominant cold phase in the eastern Tien Shan after the Climatic Optimum. (3) From comparative studies of the older glacial moraines in the mountainous region, three main phases of stadia or readvances during the general retreat of the last glacial period can be established. The Pleistocene glaciations were characterised by their greatly developed outwashes. A series of cones were built up with their apices situated at the breaches of mountain gaps of the northern Tien Shan, where large quantities of Pleistocene meltwater issued. The Tien Shan outwash plains with their loess covers provide the basis of Pleistocene chronology and the ground of geomorphological hypotheses concerning the mechanics of river deposition and erosion and the origin of the formation of depositional terraces beyond the glacial borders. Fluvial, fluvioglacial and periglacial terraces have been beautifully developed in the intermountain basin of Ho-shia and the uplifted outwash plains of the northern Tien Shan (Fig. 6, 7). Due to tectonic movements as well as climatic conditions, the outwashes are of very large scale and great thickness. The oldest outwash plain, being more than 400 meters thick, was uplifted to 2200 m above sea level (Fig. 7). Even the lowest terrace deposits had been faulted. The Pleistocene tectonic movement, being in active operation in recent time, has played an important role in the climatic and structural geomorphology of the Tien Shan. As a result of Pleistocene differential uplifting, floors of cirques and other features of the same glacial period have been placed at different elevations in different places. Repeated glaciations of the mountainous regions resulted in the building of successive outwash fans and valley trains, separately identifiable in surface form and in exposed sections.
The available evidence indicates that the oldest glaciation built the largest outwash plain while the younger glaciations built smaller ones. The outwash masses and fluvial deposits were beautifully terraced owing to the changes in the mechanics of fluvial erosion and deposition during the advance and retreat of mountain glaciers. The genetic relationship of the Pleistocene depositional terraces to the advance and retreat of mountain glaciers has been critically examined by the authors.
https://docs.sciml.ai/dev/modules/ReservoirComputing/esn_tutorials/deep_esn/
# Deep Echo State Networks
Deep Echo State Network architectures started to gain some traction recently. In this guide we illustrate how it is possible to use ReservoirComputing.jl to build a deep ESN.
The network implemented in this library is taken from [1]. It works by stacking reservoirs on top of each other, feeding the output of one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation refer to the paper linked above. The full script for this example can be found here. This example was run on Julia v1.7.2.
## Lorenz Example
For this example we are going to reuse the Lorenz data used in the Lorenz System Forecasting example.
using OrdinaryDiffEq
#define lorenz system
function lorenz!(du,u,p,t)
du[1] = 10.0*(u[2]-u[1])
du[2] = u[1]*(28.0-u[3]) - u[2]
du[3] = u[1]*u[2] - (8/3)*u[3]
end
#solve and take data
prob = ODEProblem(lorenz!, [1.0,0.0,0.0], (0.0,200.0))
data = solve(prob, ABM54(), dt=0.02)
#determine shift length, training length and prediction length
shift = 300
train_len = 5000
predict_len = 1250
#split the data accordingly
input_data = data[:, shift:shift+train_len-1]
target_data = data[:, shift+1:shift+train_len]
test_data = data[:,shift+train_len+1:shift+train_len+predict_len]
Again, it is important to notice that the data needs to be formatted as a matrix with the features as rows and time steps as columns, as is done in this example. This is needed even if the time series consists of single values.
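If the time series came as a plain vector of scalars, it would first have to be reshaped into a 1×T matrix. A minimal sketch (the variable names here are illustrative, not part of the example above):
# an illustrative scalar series, reshaped into 1 row (feature) × T columns (time steps)
scalar_series = sin.(0.02 .* (1:1000))
input_matrix = reshape(scalar_series, 1, :)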
The construction of the ESN is also really similar. The only difference is that the reservoir can be fed as an array of reservoirs.
reservoirs = [RandSparseReservoir(99, radius=1.1, sparsity=0.1),
    RandSparseReservoir(100, radius=1.2, sparsity=0.1),
    RandSparseReservoir(200, radius=1.4, sparsity=0.1)]
esn = ESN(input_data;
variation = Default(),
reservoir = reservoirs,
input_layer = DenseLayer(),
reservoir_driver = RNN(),
states_type = StandardStates())
As can be seen, different sizes can be chosen for the different reservoirs. The input layer and bias can also be given as vectors, but of course they have to be of the same size as the reservoirs vector. If they are not passed as a vector, the value passed will be used for all the layers in the deep ESN.
In addition to using the provided functions for the construction of the layers the user can also choose to build their own matrix, or array of matrices, and feed that into the ESN in the same way.
The training and prediction follows the usual framework:
training_method = StandardRidge(0.0)
output_layer = train(esn, target_data, training_method)
output = esn(Generative(predict_len), output_layer)
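A quick way to sanity-check the generative forecast is to compare output against test_data; a minimal sketch using the variable names defined above (Statistics is part of the Julia standard library):
using Statistics
# root-mean-square error between the generative prediction and the held-out data
rmse = sqrt(mean((output .- test_data) .^ 2))
println("RMSE over ", predict_len, " steps: ", rmse)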
Note that there is a known bug at the moment with using WeightedLayer as the input layer with the deep ESN. We are in the process of investigating and solving it. The leak coefficient for the reservoirs has to always be the same with the current implementation. This is also something we are actively looking into expanding.
## Documentation
• 1Gallicchio, Claudio, and Alessio Micheli. "Deep echo state network (deepesn): A brief survey." arXiv preprint arXiv:1712.04323 (2017).
https://physics.stackexchange.com/tags/newtonian-mechanics/new
Tag Info
2
1) You can indeed treat them as a single system, but it is important to see why. The pulley, absence of friction, and the taut string imply that there are no unbalanced internal forces between them (if there were, it would be like 2 smaller interacting subsystems). 2) The problem is exactly one-dimensional now, the one dimension being dictated by the taut ...
-1
When two objects are in contact with each other and are at rest, the friction force is zero, because the direction of the friction force is always parallel to the surface of contact; when the two bodies are in contact and also at rest, only two forces act on the body, i.e., the weight of the body, which is downward, and the normal force, which is upward, ...
0
The only reason is that we are dealing with vector equations and treating a single system in the second equation violates vector laws. True, it works in this simple case, but to describe in a general case whether it works would not be easy.
2
The $\vec\omega\times \vec r_b$ term appears in the rotating (non-inertial) frame. It is there because the motion of a particle in a rotating frame, expressed in the lab, contains a "pure" motion in the rotating (non-inertial) frame, plus a term to account for the rotation of the non-inertial frame. In equations, $\vec r'(t)=U(t) \vec r$ where $\vec r'$ is ...
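Spelled out, differentiating the frame relation gives (a sketch, with $U(t)$ the rotation matrix, so that $\dot U U^{\top}$ is the skew-symmetric matrix acting as $\vec\omega\times$): $$\dot{\vec r}'(t)=\dot U\vec r+U\dot{\vec r}=\vec\omega\times\vec r'(t)+U(t)\dot{\vec r}(t),$$ i.e. the "pure" motion in the rotating frame plus the rotation term.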
0
You're not being entirely unreasonable for wanting to calculate whether the motor will be strong enough to do what you're trying to do. But, having quite a bit of experience in this area you're talking about, there are a couple of stumbling points I think you need to be aware of. Motor ratings by the manufacturer are merely there to allow you to compare ...
1
Here we completely ignore the comfort of the rider. The rider is rising not to spare his body from the impact, but to spare the rims. Actually, it's not quite right to ignore the comfort of the rider. Think of this from the point of view of Newton's third law: if the rider's body doesn't get high impact, then it doesn't give as high an impact on the bike. ...
0
Of course, Newton's laws are valid on all forces provided that an inertial frame is considered (can apply in non inertial frames too, but let's not discuss that now as it is irrelevant). So okay, if you want to apply 3rd law on friction, there will be an equal and opposite force on the surface too. The best way to see this is a two-block problem where you ...
0
Sorry to say, but I see no connection between those two definitions. All $N$ collinear points in a rigid body by definition give only one line. However, three orthogonal vectors with an origin are semantically equal to 4 non-collinear points forming the vertices of a tetrahedron:
1
1) The Lagrangian is simply a function of generalised coordinates and velocities(see also @sanaris above), which when put into an action and extremised gives you the time evolution of the coordinates(and thus, the velocities via differentiation). You do not know the trajectory, and thus the velocity, and thus the kinetic energy, before you have actually ...
0
"Lagrangian is defined as the kinetic energy minus the potential energy" — You are wrong. The Lagrangian has nothing to do with kinetic and potential energy. For example, the Einstein-Hilbert action is $S=\frac{1}{2k}\int R\sqrt{-g}\,d^4x$. There is no quick way to see in there something like "potential" and "kinetic" energy. There is only one true definition of ...
3
The difference in energy between the two static equilibrium positions may only be some potential energy difference. You may assume the friction force is $F=\mu N$ during sliding, where $\mu$ is the kinetic friction coefficient (taken equal to the static friction coefficient) but since this force is non conservative, the work done this force will not account ...
1
We know that:
$$W=\vec{F}\cdot\vec{r}=F\,r\,\cos(\widehat{\vec{F}\,;\,\vec{r}})$$
The work of the force is the scalar product of the force and the path. You have from your definition:
$$P=\frac{dW}{dt}=\frac{\vec{F}\cdot d\vec{r}}{dt}=\vec{F}\cdot\frac{d\vec{r}}{dt}=\vec{F}\cdot\vec{v}$$
If the work is independent of the path, then the work is given by: $$W=\...
12
First, I am assuming that there is no kinetic friction acting on the insect as it moves up the bowl. If kinetic friction were involved, you would have energy dissipation, but I will not consider that here. Your mistake is in assuming that the static friction force is equal to its maximum value during the entire process. $\mu N$ only determines the maximum ...
0
In Lagrangian mechanics, we take the kinetic and potential terms as axiomatic, i.e. we don't use Newton's second law to justify $K=\frac12 m\dot{x}^2$; we just claim $K=\frac12 m\dot{x}^2$. Newtonian, Lagrangian and Hamiltonian mechanics (and a few other options) are equivalent, but they assume different things.
0
You are right. The Newtonian and Lagrangian formulations are equivalent. Either can be used as a starting point for mechanics. Which you think more fundamental is largely a matter of personal choice. My own preference is to formulate Newton's laws from conservation of momentum (the third law contains the physical content, the second just a definition of ...
1
The entries at time $t$ are $\psi(x,y,z,t)$. The arrangement as a column vector, or as a three-dimensional array, is essentially arbitrary. You may find it easier to think of it as a three-dimensional array. It is more usual to use vector notation $\mathbf x = (x,y,z) = (x^1,x^2,x^3)$; then the correspondence is $\psi(\mathbf x) = \langle \mathbf x |\...$
0
You need to know that a motion can be accelerated in two independent ways, or any combination of these: either when there is a change in speed, or when there is a change in direction of motion. The above problem has a constant speed, but the direction of the car at every point of the loop changes, which means the motion is accelerated. Newton's laws tell ...
0
It depends on what exactly you mean by kinetic friction, since there are several forces that may be referred to by this term. If in this context it means sliding friction, i.e. what the static friction force becomes when the object begins to slide, then by its very definition it is directed against the direction of motion. This does not exclude the possibility that ...
0
Let's consider one molecule that hits the piston. The kinetic energy of that molecule is supplied by breaking a chemical bond during combustion. Just before the molecule hits the piston, the molecule's momentum is $p = mv$ and the momentum of the piston is $p = 0$. After the collision, the law of conservation of momentum says that the sum of the momenta must be the same. Hence, ...
2
If you have 3 different points on a rigid body, you can create an orthonormal coordinate system with these equations:
$$\vec Z(t)=\dfrac{\overrightarrow{R}_{13}\times \overrightarrow{R}_{12}}{\left| \overrightarrow{R}_{13}\times \overrightarrow{R}_{12}\right| }\qquad \vec Y(t)=\dfrac{\overrightarrow{R}_{12}}{\left| \overrightarrow{R}_{12}\right| } \qquad \overrightarrow{...
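Presumably the right-handed triad is completed by taking the remaining axis as the cross product of the other two; as a sketch (an inference from the construction, not spelled out above): $$\vec X(t)=\vec Y(t)\times \vec Z(t)$$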
0
Your derivation ends up with a correct result, but I couldn't quite follow it. A few remarks that may help you: The forces acting on a small element of a continuous 1D medium are $-P(x+dx)S$ on one side and $P(x)S$ on the other side, so the net force is $-\frac{\partial P}{\partial x}\,dx\,S$. Of course you may drop the $S$ term since everything will be ...
1
On each hand, you really have two forces: one from the other hand (10N, pushing out) and one from the arm (10N, pushing in). These are the equal and opposite forces, hence your hands do not move.
2
I think you are confusing two ideas (commonly confused). Equilibrium forces (which cancel) and reactive forces, which may or may not cancel depending on the situation. Think of a book on the table. You push on the book. The book pushes back on you with an equal and opposite reactive force. If you push hard enough, the book will move. According to Newton's ...
2
When you push on your right hand with your left you feel a force back on your right hand. This is the 'equal and opposite' force that comes from Newton's third law. However, at the same time you are pushing with your right hand you are also pushing with your left and feel a force back on your left hand, again due to Newton's third law. While it is true that ...
3
Fix a reference frame $R$ with axes $x,y,z$ and origin $O$ and suppose that the solid body $B$ is moving with a planar motion, let's say, parallel to the plane $x,y$. If $O'$ is a fixed point of $B$, but generally moving in $R$, the velocity of a point $P \in B$ in $R$ satisfies $$\vec{v}_P(t) = \vec{v}_{O'}(t) + \vec{\omega}(t)\times \vec{O'P}(t)\:.$$ ...
0
There is a force at work here: gravity. The argument is only that no force needs to start the sphere moving in one direction rather than another. But this only works because a perfect sphere (like a perfect hemisphere) has no flat surfaces. If you think of two dodecahedrons (or rather one dodecahedron and a hemi-dodecahedron) rather than two spheres, you'...
0
This does indeed work out. However, we have to be careful with how we define "from an object's perspective." You're defining different frames, and conversions between frames can change the values of velocities. The key question is what happens to the frames at the collision. If the frame keeps going along on the path m1 or m2 would have taken, then you ...
0
For statically indeterminate systems you need to consider both the material behavior and the kinematics of deformation in addition to the equations of equilibrium in order to determine the forces. So, when you ask about a "rigid body simulation tool", such a tool would have to use some deformable material model, e.g. linear elasticity, and then take a limit ...
2
I'll let the animation speak for itself. The blue arrow shows the force on the object. For this scenario to happen it is important that the collision is elastic (all energy is conserved). I used a force that is proportional to penetration depth. This way the balls feel a force that is the same during the deceleration and recoil. In inelastic collisions it ...
1
Try considering momentum conservation. The wall won’t be moving before or after the collision with the ball, and presuming the collision is elastic (no energy lost to heat, sound etc.) then the sum of the momenta will be conserved before and after the collision and you should find that the ball must have the same momentum before and after the collision. So ...
0
We'll first write out the equation in its "full" form. $$m\vec{a} =m\vec{g} -cv^{2}\hat{v}$$ We will be using the convention that down is negative. We will also assume we are dealing with two dimensions here. Therefore, we can expand this out as $$ma_{x}\hat{x} +ma_{y}\hat{y} =-mg\hat{y} -c\|\vec{v}\|^{2}\hat{v}$$ ...
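Since $\hat v=\vec v/\|\vec v\|$, the component form follows directly; as a worked sketch under the stated sign convention: $$m\dot v_x=-c\,\|\vec v\|\,v_x,\qquad m\dot v_y=-mg-c\,\|\vec v\|\,v_y.$$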
1
Because the reaction force is larger. The force needed to make an object reach a certain speed is the same as the force needed to slow it down from that speed to zero over the same time. This is Newton's 2nd law. So, when two equal and movable objects collide, the (action) force that is required to make one object speed up to a certain speed is exactly the ...
0
The centrifugal force $m\omega^2 r$ will act radially outwards, and this force will be balanced by a component of the normal force. This is because of the strong grip made by our Jimmy. Now, equating these two forces, you will get the value of the normal force. Insert this $N$ in the friction force formula and balance it by equating it with the component of $mg$, and bam.... Is ...
3
Would the centripetal force still provided by the weight of the mass? Why or why not? When you spin it vertically, you would most probably spin it with a constant angular velocity (try it out), which means that that the centripetal acceleration required at any point will be the same. However, in this case, the radial component of force due to gravity keeps ...
0
For an idealised system in which the whole air mass in the room is at the back of the person standing at the door, the value is apparently too high. Link 1 already shows the necessary calculations, so I didn't put them here; for broader calculations you may refer to link 2.
-1
The problem comes from the sign of the drag term. The first form of the equation you have is not exactly correct. When you say $$m \dot v = g - c v^2$$ there is an assumption that $v$ is positive, so the negative sign in front of $c v^2$ ensures that the drag force opposes the motion. Often, sign conventions can get confusing when replacing vectors with ...
-2
I was also thinking of this when I was reading the book "Reality Is Not What It Seems", and I realized that if the speed of light were infinite, the universe would expand to infinity instantly after the big bang, so the universe would just "evaporate" right away instead of diffusing slowly. The relativistic mass increase formula also supports this. Because the ...
1
Now this could be naive, but I'm inclined to think that any of the shifting around as a result of changing the angle of declination for a pushup shouldn't matter in terms of the net external force the hands have to produce. I would wager any change of measurement you find on a scale has more to do with fluids shifting in the body, and thus the center of mass ...
1
While other answers are not wrong, I'd like to add that the main reason why someone's feet can't stay on the ground in free fall is that any time there is a slight pressure or bump between feet and floor due to body movement, the contact force will push the feet upwards. Since no other forces are involved, the feet will keep floating away from the floor. ...
0
If all three pulleys are fixed to a wall or something, you get some friction depending on how easily each pulley turns, and if a pulley is heavy you need some force to turn it. So if the pulleys are light and work well, they should not add much, but they always add a little.
0
I don't think it particularly important that the classical potential is not exactly equal to the expectation of the quantum potential in all cases. Classical potential is defined in the first instance by integrating force. To derive the force from the potential is circular. Really we have two slightly different potentials, because the classical notion of a ...
0
It is simple: $$F = \frac{2\pi\,T\,\eta}{l}$$ where $F$ is force in newtons, $T$ torque in newton-meters, $l$ lead in meters, and $\eta$ ballscrew efficiency. In most cases, efficiency can safely be set to 90% with enough margin. So, for example, a ball screw with 5 mm lead, driven by a 2 Nm motor, will give: 2·2π·0.9/0.005 = 2262 N (about 231 kgf).
0
$P = \rho g h$ is true in static situations. Your wave example is dynamic and pressures at a given height may not be uniform in such cases. If the water just below the surface did actually have a very high pressure, that pressure would rapidly accelerate the water until the pressure equalized. It wouldn't form a barrier to entry.
0
I have raised a similar question here and finally I ended up answering it myself. My answer is on the derivation of the $2^{nd}$ law, which is the lengthiest of the three. The bonus is that I have done the complete proof using Cartesian coordinates, so even a high school student with calculus knowledge can understand it.
1
Ensuring that $n$ and $k$ are mutually prime ensures the ball gets out of the well at its first opportunity, and not after. Originally I thought that if $n$ and $k$ are solutions, then $2n$ and $2k$ should be solutions as well. But we need to recognize that by that point, the ball is already out of the well! It got out after $n$ and $k$ horizontal/vertical ...
2
Calculating the car wheel loads. Let $F_G=m\,g$ be the car weight, $F_V$ the front wheel load, $F_H$ the rear wheel load, $a_A$ the car acceleration, and $a_D$ the car deceleration. Take the sum of the torques about point V and you get: $$\sum \tau_V=F_H\,l-F_G\,l_V-m\,a_A\,h_s+m\,a_D\,h_s=0\tag 1$$ or about point H: $$\sum \tau_H=F_V\,l-F_G\,l_H+m\,a_A\,h_s-m\,a_D\,h_s=0\tag 2$$ from ...
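Solving (1) and (2) for the wheel loads is a straightforward rearrangement: $$F_H=\frac{F_G\,l_V+m\,(a_A-a_D)\,h_s}{l},\qquad F_V=\frac{F_G\,l_H-m\,(a_A-a_D)\,h_s}{l}.$$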
1
The axiomatic system you are looking for needs the mathematical axioms and theorems used for establishing differential equations and calculus, to be complete, Newton's law's are not enough. Euclidean geometry is simple, as it needs only algebraic relations as a mathematical framework. Classical mechanics needs a more sophisticated mathematical framework, ...
2
The point of the first paper is that $F = ma$ doesn't actually indicate how matter moves, at least not without specifying what $F$ is, which is correct. You can't really get much of use out of Newtonian mechanics, outside of general theorems, if you don't specify the forces. I don't think this is quite the takedown it appears to be, because an axiom system can get by ...
0
For each object for energy conservation you will have $\Delta\text{KE}=-\Delta\text{PE}$. Since both of these terms are directly proportional to the mass, the mass variables on each side of the equation will cancel out. Therefore, you do not need to know the mass of each object to solve this problem. Also, in this case speed means the translational speed of ...
5
First, note that you can't "misdefine" something. Angular acceleration is defined to be the time derivative of angular velocity; that is it. Instead of questioning the definition then, you should be questioning your understanding of the relationship between this definition and other physically relevant definitions. The error is in assuming that the moment ...
https://www.springerprofessional.de/advances-in-network-based-information-systems/16076288
## About this Book
This book presents the latest research findings and innovative theoretical and practical research methods and development techniques related to the emerging areas of information networking and their applications.
Today’s networks and information systems are evolving rapidly, and there are several new trends and applications, such as wireless sensor networks, ad hoc networks, peer-to-peer systems, vehicular networks, opportunistic networks, grid and cloud computing, pervasive and ubiquitous computing, multimedia systems, security, multi-agent systems, high-speed networks, and web-based systems. These networks have to deal with the increasing number of users, provide support for different services, guarantee the QoS, and optimize the network resources, and as such there are numerous research issues and challenges that need to be considered and addressed.
## Table of Contents
### A Fuzzy-Based System for Actor Node Selection in WSANs for Improving Network Connectivity and Increasing Number of Covered Sensors
A Wireless Sensor and Actor Network (WSAN) is formed by the collaboration of micro-sensor and actor nodes. The sensor nodes are responsible for sensing an event and sending information towards an actor node. The actor node is responsible for taking a prompt decision and reacting accordingly. In order to provide effective sensing and acting, a distributed local coordination mechanism is necessary among sensors and actors. In this work, we consider the actor node selection problem and propose a fuzzy-based system that, based on data provided by sensors and actors, selects an appropriate actor node. We use 4 input parameters: Size of Giant Component (SGC), Distance to Event (DE), Remaining Energy (RE) and, as a new parameter, Number of Covered Sensors (NCS). The output parameter is Actor Selection Decision (ASD). The simulation results show that by increasing SGC to 0.5 and 0.9, the ASD is increased 12% and 68%, respectively.
Donald Elmazi, Miralda Cuka, Makoto Ikeda, Leonard Barolli
### A Delay-Aware Fuzzy-Based System for Selection of IoT Devices in Opportunistic Networks
In opportunistic networks the communication opportunities (contacts) are intermittent and there is no need to establish an end-to-end link between the communicating nodes. The enormous growth of devices having access to the Internet, along with the vast evolution of the Internet and the connectivity of objects and devices, has evolved into the Internet of Things (IoT). There are different issues for these networks. One of them is the selection of IoT devices to carry out a task in opportunistic networks. In this work, we implement a fuzzy-based system for IoT device selection in opportunistic networks. For our system, we use four input parameters: IoT Message Timeout Ratio (MTR), IoT Contact Duration (IDCD), IoT Device Storage (IDST) and IoT Device Remaining Energy (IDRE). The output parameter is IoT Device Selection Decision (IDSD). The simulation results show that the proposed system makes a proper selection decision of IoT devices in opportunistic networks. The IoT device selection is increased up to 18% and 28% by increasing IDST and IDRE, respectively.
Miralda Cuka, Donald Elmazi, Keita Matsuo, Makoto Ikeda, Leonard Barolli
### A Fuzzy-Based Approach for Improving Peer Awareness and Group Synchronization in MobilePeerDroid System
In this work, we present a distributed event-based awareness approach for P2P groupware systems. The awareness of collaboration will be achieved by using primitive operations and services that are integrated into the P2P middleware. We propose an abstract model for achieving these requirements and we discuss how this model can support awareness of collaboration in mobile teams. We present a fuzzy-based system for improving peer coordination quality according to four parameters. This model will be implemented in the MobilePeerDroid system to give a more realistic view of the collaborative activity and better decisions for the groupwork, while encouraging peers to increase their reliability in order to support awareness of collaboration in the MobilePeerDroid mobile system. We evaluated the performance of the proposed system by computer simulations. From the simulation results, we conclude that when AA, SCT and GS values are increased, the peer coordination quality is increased. With increasing NFT, the peer coordination quality is decreased.
Yi Liu, Kosuke Ozera, Keita Matsuo, Makoto Ikeda, Leonard Barolli
### A Hybrid Simulation System Based on Particle Swarm Optimization and Distributed Genetic Algorithm for WMNs: Performance Evaluation Considering Normal and Uniform Distribution of Mesh Clients
The Wireless Mesh Networks (WMNs) are becoming an important networking infrastructure because they have many advantages such as low cost and increased high speed wireless Internet connectivity. In our previous work, we implemented a Particle Swarm Optimization (PSO) based simulation system, called WMN-PSO, and a simulation system based on Genetic Algorithm (GA), called WMN-GA, for solving node placement problem in WMNs. In this paper, we implement a hybrid simulation system based on PSO and distributed GA (DGA), called WMN-PSODGA. We analyze the performance of WMNs using WMN-PSODGA simulation system considering Normal and Uniform client distributions. Simulation results show that the WMN-PSODGA has good performance for Normal distribution compared with the case of Uniform distribution.
Admir Barolli, Shinji Sakamoto, Leonard Barolli, Makoto Takizawa
### Evaluation of Table Type Reader for 13.56 MHz RFID System Considering Distance Between Reader and Tag
An RFID system is one of the key technologies for bringing efficiency to the automatic rental of goods or automated checkout in shopping. In this paper, we evaluate the basic performance of a table-type RFID reader, which is a key device for offering these services. Furthermore, in order to increase the communication performance, we consider the use of a parasitic element on the table-type RFID reader and show its usefulness.
Kiyotaka Fujisaki
### A Position Detecting System Using Supersonic Sensors for Omnidirectional Wheelchair Tennis
A wheelchair with good performance for the aged and disabled is attracting attention from society. The wheelchair can provide the user with many benefits, such as maintaining mobility, continuing or broadening community social activities, conserving energy and enhancing quality of life. The wheelchair body must be compact enough, and should be able to make different movements, in order to support many applications. In our previous work, we presented the design and implementation of an omnidirectional wheelchair. In this paper, we propose a position detecting system using supersonic sensors. The proposed system can correctly find the wheelchair position for collision avoidance.
Keita Matsuo, Leonard Barolli
### Distributed Approach for Detecting Collusive Interest Flooding Attack on Named Data Networking
Recently, network consumers use the Internet to get content: videos, music, photos, and other content created by many producers. This content accelerates the increase in traffic volume. To reduce the increasing traffic volume and keep broadband networks stable, realizing the concept of CCN (Content Centric Networking) is strongly required. NDN (Named Data Networking), the most popular such network architecture, has been proposed to realize the concept of CCN. However, it has also been reported that NDN is vulnerable to CIFA (Collusive Interest Flooding Attack). In this paper, we propose a novel distributed algorithm for detecting CIFA in order to keep NDN available. The results of computer simulations confirm that our proposal can detect and mitigate the effects of CIFA effectively.
Tetsuya Shigeyasu, Ayaka Sonoda
### An Energy-Efficient Dynamic Live Migration of Multiple Virtual Machines
In this paper, we propose an algorithm to migrate virtual machines to reduce the total electric energy consumption of servers. Here, virtual machines are dynamically resumed and suspended so that the number of processes on each virtual machine can be kept fewer. In addition, multiple virtual machines migrate from a host server to a more energy-efficient guest server. In our previous studies, time to migrate virtual machines is assumed to be zero. The more often a virtual machine migrates, the longer time it takes to perform processes on the virtual machine. We propose a model to estimate the electric energy consumption of servers by considering the migration time of each virtual machine. By using the model, virtual machines to migrate and to perform processes are selected so that the total electric energy consumption can be reduced. In the evaluation, we show the total electric energy consumption of servers can be reduced compared with other algorithms.
Dilawaer Duolikun, Shigenari Nakamura, Tomoya Enokido, Makoto Takizawa
### Evaluation of an Energy-Efficient Tree-Based Model of Fog Computing
A huge number of devices like sensors are interconnected in the IoT (Internet of Things). In the cloud computing model, processes and data are centralized in a cloud. Here, networks are congested and servers are overloaded due to heavy traffic from sensors. In order to reduce the delay time and increase the performance, data and processes to handle the data are distributed to not only servers but also fog nodes in fog computing models. On the other hand, the total electric energy consumed by fog nodes increases to process a sensor data. In this paper, we newly propose a tree-based fog computing model to distribute processes and data to servers and fog nodes so that the total electric energy consumption of nodes can be reduced in the IoT. In the evaluation, we show the total electric energy consumption of nodes in the tree-based model is smaller than the cloud computing model.
Ryuji Oma, Shigenari Nakamura, Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa
### Object-Based Information Flow Control Model in P2PPS Systems
In the P2PPS (P2P (peer-to-peer) type of topic-based PS (publish/subscribe)) model, each peer process (peer) publishes and subscribes event messages which are characterized by topics, with no centralized coordinator. An illegal information flow occurs if an event message $e_j$ published by a peer $p_j$ carries information on some topics into a peer $p_i$ which the target peer $p_i$ is not allowed to subscribe to. In our previous studies, the SBS, TBS, and FS-H protocols were proposed to prevent illegal information flow among peers by banning event messages. In these protocols, the number of topics kept in every peer monotonically increases. Hence, most of the event messages are banned. In this paper, we newly consider the P2PPSO (P2PPS with object concept) model, where the number of topics kept in every peer increases and decreases each time objects obtained by a peer are updated. In order to prevent illegal information flow from occurring in the P2PPSO system, we newly propose the TOBS (topics of objects-based synchronization) and TSOBS (topics and states of objects-based synchronization) protocols. In the TOBS protocol, it is simpler to detect illegal information flow than in the TSOBS protocol. On the other hand, fewer event messages are banned in the TSOBS protocol than in the TOBS protocol.
Shigenari Nakamura, Tomoya Enokido, Makoto Takizawa
### Performance Evaluation of Energy Consumption for Different DTN Routing Protocols
In this paper, we evaluate the energy consumption of different routing protocols in a Delay Tolerant Network (DTN). Seven groups of three stationary sensor nodes each sense the temperature, humidity and wind speed and send these data to a stationary destination node that collects them for statistical and data analysis purposes. Opportunistic contacts exchange the sensed data with different relay nodes, pedestrians and cyclists equipped with smart devices moving on Tirana city roads, until the destination node is reached. For simulations we use the Opportunistic Network Environment (ONE) simulator. Nodes in this DTN are energy constrained and play an important role in the success of delivering messages. When the energy of a node is low, there is less chance to deliver messages across the network. In this work, we evaluate and compare the performance of different routing protocols in order to find the energy-efficient routing protocol to be used for message transmission in our DTN application. We evaluate the nodes' average remaining energy, the number of dead nodes, delivery probability and overhead ratio for different routing protocols.
Evjola Spaho, Klodian Dhoska, Kevin Bylykbashi, Leonard Barolli, Vladi Kolici, Makoto Takizawa
### A Robot Gesture Framework for Watching and Alerting the Elderly
Watching and medical care for the elderly is one of the promising application fields of IoT. Humanoid robots are considered to be useful agents not only for relaxing the elderly but also for watching and alerting them in daily life. Detecting and preventing the risk of indoor heat stroke is an important issue, especially for the elderly who live alone. A method for reliably conveying the possible danger of indoor heat stroke to the elderly is a crucial factor in implementing a practical system. In this paper, we describe a system for informing the elderly of unusual conditions by using a communication robot that normally gives users a sense of healing. We designed and implemented a set of normal and special motions for a desktop humanoid robot and evaluated whether the robot motions effectively make users aware of abnormal situations.
Akihito Yatsuda, Toshiyuki Haramaki, Hiroaki Nishino
### A Robot Assistant in an Edge-Computing-Based Safe Driving Support System
In this paper, we describe a robot-based interface for presenting important information to assist safety driving. We have been developing a safe driving support system consisting of various devices for sensing the in-vehicle environment and driver’s vital signals, a set of edge computing nodes for analyzing the sensed data, and actuators for presenting the analyzed results to the driver. Because visual and auditory messages are commonly used in an instrumental panel, an audio system, and a navigation system in the car, adding similar notification methods may hinder the driver’s safety driving operations. We, therefore, propose to use robot motions with voice messages as a new way of delivering important information to the driver. We designed and implemented two sets of the driver assisting methods using a real robot placed in a vehicle and a visual robot aid moving on a monitor screen. We conducted a comparative experiment among the methods to verify their effectiveness and practicality.
Toshiyuki Haramaki, Akihito Yatsuda, Hiroaki Nishino
### The Improved Transmission Energy Consumption Laxity Based (ITECLB) Algorithm for Virtual Machine Environments
Various types of distributed applications are realized on server cluster systems equipped with virtual machines, like cloud computing systems. On the other hand, a server cluster system consumes a large amount of electric energy, since it is composed of a large number of servers and each server consumes large electric energy to perform application processes on multiple virtual machines. In this paper, the improved transmission energy consumption laxity based (ITECLB) algorithm is proposed to allocate communication processes to virtual machines in a server cluster so that the total electric energy consumption of the server cluster and the average transmission time of each communication process can be reduced. We evaluate the ITECLB algorithm in terms of the total electric energy consumption of a server cluster and the average transmission time of each process, compared with the transmission energy consumption laxity based (TECLB) algorithm.
Tomoya Enokido, Dilawaer Duolikun, Makoto Takizawa
### Continuous k-Nearest Neighbour Strategies Using the mqrtree
In this paper, two strategies for processing a continuous k-nearest neighbor query for location-based services are proposed. Both use a spatial access method, the mqrtree, for locating a safe region. The mqrtree supports searching within the structure, so searches from the root are not required - a property which is exploited in the strategies. However, the proposed strategies will work with most spatial access methods. The strategies are evaluated and compared against a repeated nearest neighbor search. It is shown that both approaches achieve significant performance gains in reducing the number of times a new safe region must be identified, in both random and exponentially distributed points sets.
Wendy Osborn
### Developing a Low-Cost Thermal Camera for Industrial Predictive Maintenance Applications
This paper presents the development and evaluation of a low-cost thermal camera based on off-the-shelf components that can be used to predict failures of industrial machines. On the sensing side the system is based on a LWIR thermal camera (FLiR Lepton), whereas for the data acquisition it uses an ARM Cortex-M4F micro-controller (Texas Instruments MSP432) running FreeRTOS. For the data communications the system uses a Wi-Fi transceiver with an embedded IPV4 stack (Texas Instruments CC3100), which provides seamless integration with the Cloud back-end (Amazon AWS) used to retrieve, store and process the thermal images. The paper also presents the calibration method used to obtain the relation between the camera raw output and the actual object temperature, as well the measurements that have been conducted to determine the overall energy consumption of the system.
Alda Xhafa, Pere Tuset-Peiró, Xavier Vilajosana
### Globally Optimization Energy Grid Management System
This paper focuses on the smart grid with cloud and fog computing to reduce the wastage of electricity. A traditional grid is converted into a smart grid to reduce the increase of temperature. A smart grid is the combination of a traditional grid and information and communication technology (ICT). The Micro Grid (MG) is directly connected with fog and has small-scale power. An MG involves multiple sources of energy as a way of incorporating renewable power. The Macro Grid has a large amount of energy, and it provides electricity to the MG and to the end users. Clusters are numbers of buildings having multiple homes. Some load balancing algorithms are used to distribute the load efficiently on the virtual machines and also help maximize the utilization of the resources. However, a user is not allowed to communicate directly with the MG; a smart meter is used with each cluster for communication purposes. If the MG is unable to send as much energy as needed, then the fog will ask the cloud to provide energy through the macro grid. An optimized bubble sort algorithm is used, which is a sorting algorithm. Sorting, in this case, means that the virtual machines are sorted on the basis of load. The virtual machine which has the least load will serve the demand. In this way the virtual machines work, and this mechanism gives the least response time with high resource utilization. Cloud Analyst is used for simulations.
### Efficient Resource Distribution in Cloud and Fog Computing
Smart Grid (SG) is a modern electrical grid combining the traditional grid with Information and Communication Technology. SG includes various energy measures, including smart meters and energy-efficient resources. With the increase in the number of Internet of Things (IoT) devices, the data storage and processing complexity of SG increases. To overcome these challenges, cloud computing is used with SG to enhance the energy management services and provide low latency. To ensure privacy and security in cloud computing, the fog computing concept is introduced, which increases the performance of cloud computing. The main features of fog are location awareness, low latency and mobility. Fog computing decreases the load on the cloud and provides the same facilities as the cloud. In the proposed system, we have used three different load balancing algorithms: Round Robin (RR), Throttled and the Odds algorithm. To compare and examine the performance of the algorithms, the Cloud Analyst simulator is used.
### Resource Allocation over Cloud-Fog Framework Using BA
Edge computing or fog computing (FG) is introduced to minimize the load on the cloud and to provide low latency. However, FG is confined to a comparatively small area and stores data temporarily. A cloud-fog based model is proposed for efficient allocation of resources from different buildings on fog. FG provides low latency and hence makes the system more efficient and reliable for consumers to access available resources. This paper proposes a cloud- and fog-based environment for the management of energy. Six fogs are considered for six different regions around the globe. Moreover, each fog is interconnected with two clusters, and each cluster contains fifteen buildings. All the fogs are connected to a centralized cloud for the permanent storage of data. To manage the energy requirements of consumers, Microgrids (MGs) are available near the buildings and are accessible by the fogs. The load on fog should be balanced, and hence a bio-inspired Bat Algorithm (BA) is proposed to manage the load using Virtual Machines (VMs). The service broker policy considered in this paper is closest data center. The results of the proposed technique are compared with Active VM Load Balancer (AVLB) and Particle Swarm Optimization (PSO). Results are simulated in the Cloud Analyst simulator, and the proposed technique gives better results than the other two load balancing algorithms.
### Cloud-Fog Based Smart Grid Paradigm for Effective Resource Distribution
Smart grid (SG) provides observable energy distribution, where the utility and consumers can control and monitor their production, consumption, and pricing in almost real time. With the increase in the number of smart devices, the complexity of the SG increases. To overcome these problems, this paper proposes a cloud-fog based SG paradigm. The proposed model comprises three layers: the cloud layer, the fog layer, and the end user layer. The end user layer consists of clusters of buildings; a renewable energy source is installed in each building so that buildings become self-sustainable with respect to generation and consumption. The fog layer manages users' requests and network resources and acts as a middle layer between end users and the cloud. The fog creates virtual machines to process multiple user requests simultaneously, which increases the overall performance of the communication system. An MG is connected with the fogs to fulfill the energy requirements of users. The top layer is the cloud layer: all fogs are connected with a central cloud, which provides services to end users either directly or through the fog. For efficient allocation of fog resources, an artificial bee colony (ABC) load balancing algorithm is proposed. Finally, simulations compare the performance of ABC with three other load balancing algorithms: particle swarm optimization (PSO), round robin (RR), and throttled. In the proposed scenario, the results show that ABC performs better than RR, PSO, and throttled.
### Effective Resource Allocation in Fog for Efficient Energy Distribution
Fog computing is used to distribute the workload of the cloud and to decrease Network Latency (NL) and Service Response Time (SRT). The cloud has the capacity to respond to a large number of consumer requests; however, the physical distance between a consumer and the cloud is greater than that between the consumer and a fog. A fog is limited to a specific location and is meant to deal with requests locally, helping to process Consumers' Requests (CRs) and provide efficient responses. A fog holds consumer data temporarily, processes it, provides a response, and then sends the data to the cloud for permanent storage. Apart from this, it also forwards the consumer's data when Micro Grids (MGs) are not able to fulfill the consumer's energy demand; the cloud then communicates with the Macro Grid. Fog and cloud computing concepts are integrated to create an environment for effective energy management of a building in a residential cluster. The fog deals with requests at the consumer's end because it is nearer to the consumer than the cloud. The theme of this paper is the efficient allocation of Virtual Machines (VMs) in a fog, for which an Insertion Sort Based Load Balancing Algorithm (ISBLBA) is used. Simulations comparing ISBLBA to the Round Robin (RR) technique have been conducted, and results regarding fog performance, cluster performance, and cost are elucidated in Sect. 5.
### Optimized Load Balancing Using Cloud Computing
The concept of fog computing was initiated to mitigate the load on the cloud. Fog computing assists and extends cloud computing services, while permanent data storage takes place in the cloud. An environment based on fog and cloud is provided to manage the energy demand of consumers. It deals with the data of buildings that are linked with clusters. To assist the cloud, six fogs are deployed in three regions located on three continents of the world. Each fog is connected with clusters of buildings, and each cluster contains eighty Smart Grid (SG) buildings. For the management of consumers' energy demand, Micro Grids (MGs) are available nearby the buildings and reachable by the fogs. The central objective is to manage energy requirements, so the fog helps consumers attain their energy requirements by using the MGs and cloud servers nearest to them. However, balancing the load on the cloud requires the implementation of an algorithm, together with Virtual Machines (VMs); the pigeonhole algorithm is used for this purpose. The results of the proposed technique are compared with Round Robin (RR), and the proposed technique shows better results in terms of response time.
### A Cloud-Fog Based Smart Grid Model Using Max-Min Scheduling Algorithm for Efficient Resource Allocation
Cloud-fog infrastructure has revolutionized the modern world, providing low latency, high efficiency, better security, and faster decision making while lowering operational cost [1]. Integration of the Smart Grid (SG) with the cloud-fog platform provides high-quality supply; secure generation, transmission, and distribution of power; and uninterrupted demand-supply chain management. In this paper, the integration of the SG with a cloud-fog based environment is proposed for better resource distribution. Six fogs are considered in different geographical regions, where each fog is connected with clusters and each cluster consists of 500 smart homes. In order to fulfill the energy demand of homes, fogs receive a number of requests, and load balancing algorithms are used on Virtual Machines (VMs) to provide efficient Response Time (RT) and Processing Time (PT). In this paper, a Max-Min algorithm is proposed for load balancing with an advanced service broker policy. The results of the proposed load balancing algorithm are compared with Round Robin (RR); from the simulations, we conclude that the proposed algorithm outperforms RR.
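A minimal sketch of the classic Max-Min heuristic the title refers to is given below: at each step, the task whose best (minimum) completion time is largest is assigned to the VM achieving that minimum. The execution-time matrix is illustrative, not data from the paper.

```python
# Classic Max-Min scheduling sketch (assumed task/VM timing model).

def max_min(exec_time):
    """exec_time[t][m]: time of task t on VM m. Returns (assignment, makespan)."""
    n_tasks, n_vms = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_vms                  # when each VM becomes free
    unassigned = set(range(n_tasks))
    plan = {}
    while unassigned:
        # Best completion time (and the VM giving it) per unassigned task.
        best = {t: min((ready[m] + exec_time[t][m], m) for m in range(n_vms))
                for t in unassigned}
        task = max(unassigned, key=lambda t: best[t][0])  # the "max" step
        finish, vm = best[task]                           # the "min" step
        plan[task] = vm
        ready[vm] = finish
        unassigned.remove(task)
    return plan, max(ready)

times = [[4, 6], [3, 5], [8, 2], [5, 4]]
print(max_min(times))
```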
Sadia Rasheed, Nadeem Javaid, Saniah Rehman, Kanza Hassan, Farkhanda Zafar, Maria Naeem
### Efficient Energy Management Using Fog Computing
Smart Grid (SG) is a modern electricity network that promotes the reliability, efficiency, sustainability, and economic aspects of electricity services. Moreover, it plays an essential role in modern energy infrastructure. The main challenges for the SG are how different types of front-end smart devices, such as smart meters and power sources, can be used efficiently, and how the huge amount of data from these devices can be processed. Cloud and fog computing is a technology that provides computational resources on request. It is a good solution to overcome these obstacles and has many attractive features, such as cost savings, energy savings, scalability, flexibility, and agility. In this paper, a cloud and fog based energy management system is proposed. This framework combines cloud and fog computing with the SG to manage consumers' requests and energy in an efficient manner. To balance the load on the fog and cloud, a Selection-Based Scheduling Algorithm is used, which assigns tasks to VMs in an efficient way.
### A New Generation Wide Area Road Surface State Information Platform Based on Crowd Sensing and V2X Technologies
Yoshitaka Shibata, Goshi Sato, Noriki Uchida
### Self-Outer-Recognition of OpenFlow Mesh in Cases of Multiple Faults
Recently, renewable systems have received increasing attention. We propose a metabolic architecture that is suitable for the construction of renewable systems. A metabolic architecture-based system is one that can exchange all of its elements dynamically, similar to a multicellular organism. In this way, the system not only maintains homeostasis, but also adapts to environmental changes. We are developing OpenFlow Mesh as a system based on a metabolic architecture. OpenFlow Mesh is a 2 + 1D mesh of OpenFlow switches that recognizes the outer shape, i.e., physical allocations of elements, on the basis of the network structure. Previously, we proposed a propagation-based method for determining the outer shape in the case of single faults. In this paper, we describe a method that can determine the outer shape in the case of multiple faults.
Minoru Uehara
### Deterrence System for Texting-While-Walking Focused on User Situation
In recent years, the number of smartphone users has been increasing explosively, and accidents and troubles due to texting-while-walking (TWW) are rapidly increasing along with it. Previous TWW deterrence systems displayed a warning on the screen, notified the user by voice or vibration, or made the screen dark and invisible when TWW was detected from the acceleration sensor. Although such systems can detect TWW, if the user finds the application troublesome, the application may be deleted. This is because TWW is forcibly suppressed even when the user judges that it is not currently dangerous or when a short but important operation is required. In this research, we develop a deterrence system focusing on the user's situation, especially walking speed. During TWW, the user is notified that the walking speed is low and is urged to increase it within a safe range. In order to evaluate the effectiveness of this system, we prepared a conventional TWW deterrence system and our proposed system and asked 17 people to perform TWW. In the questionnaire after the experiment, 11 people answered that they would like to continue using the system that focused on walking speed.
Megumi Fujiwara, Fumiaki Sato
### Joint Deployment of Virtual Routing Function and Virtual Firewall Function in NFV-Based Network with Minimum Network Cost
It is essential for economical NFV-based network design to determine where each network function should be located in the network and what its capacity should be. The authors previously proposed an algorithm for virtual routing function allocation in NFV-based networks that minimizes the network cost and provided effective allocation guidelines for virtual routing functions. This paper proposes a joint deployment algorithm for the virtual routing function and the virtual firewall function that minimizes the network cost. Our evaluation results reveal the following: (1) Additionally installing a packet filtering function, which is a part of the firewall function, in the sending-side area can reduce wasteful transit bandwidth and routing processing and thereby reduce the network cost. (2) The greater the number of packets filtered by the packet filtering function in the sending-side area, the greater the reduction in network cost. (3) The proposed algorithm comes within about 95% of the deployment with the optimal solution.
Kenichiro Hida, Shin-ichi Kuribayashi
### Consideration of Policy Information Decision Processes in the Cloud Type Virtual Policy Based Network Management Scheme for the Specific Domain
In the current Internet, there are many problems that exploit the anonymity of network communication, such as personal information leaks and crimes committed over the Internet. This is because the TCP/IP protocol used in the Internet does not carry user identification information in the communication data, making it difficult to immediately identify users performing such acts. As a study for solving this problem, there is the study of Policy Based Network Management (PBNM), a scheme for managing a whole Local Area Network (LAN) through communication control for every user. Within PBNM, two types of schemes exist. As one of them, we have studied theoretically the Destination Addressing Control System (DACS) Scheme, which has affinity with the existing Internet. By applying the DACS Scheme to Internet system management, we can realize policy-based Internet system management. In this paper, to realize management of a specific domain consisting of network groups spanning plural organizations, the policy information decision processes applied to this scheme are considered and described.
Kazuya Odagiri, Shogo Shimizu, Naohiro Ishii, Makoto Takizawa
### Lessons Learned in Tokyo Public Transportation Open Data APIs
Open data is a vital part of the new digital economy. It facilitates value creation by combining data from multiple sources. It is also important for utilizing the massive data flows from emerging IoT (Internet of Things) devices. Future smart cities will consist of a large aggregation of open data APIs, whose size and variety create challenges for usability, consistency, and integrity. The author analyzes issues in the open data APIs of the Tokyo public transportation system and discusses multiple aspects of these issues. The author then presents a framework with three view models to deal with open data APIs: outcome, cause, and fixes. Finally, the author discusses lessons learned from the Tokyo public transportation open data APIs.
Toshihiko Yamakami
### Slovak Broadcast News Speech Recognition and Transcription System
We have developed a working prototype of an automatic subtitling system for transcription, archiving, and indexing of Slovak audiovisual recordings, such as lectures, talks, discussions, or broadcast news. To go further in development and research, we had to incorporate more modern speech technologies and embrace today's deep learning techniques. This paper describes the transition and changes made to our working prototype regarding the speech recognition core replacement, architecture changes, and the new web-based user interface. We have used the state-of-the-art speech toolkit KALDI and a distributed architecture to achieve better interface responsiveness and faster processing of the audiovisual recordings. Using acoustic models based on time delay deep neural networks, we have been able to lower the system's average word error rate from the previously reported 24% to 15% absolute.
Martin Lojka, Peter Viszlay, Ján Staš, Daniel Hládek, Jozef Juhár
### Web Based Interactive 3D Educational Material Development Framework Supporting 360VR Images/Videos and Its Examples
This paper treats one of the activities of ICER (Innovation Center for Educational Resources) in Kyushu University Library, Japan: the development of educational materials using recent ICT to enhance educational efficiency in the university. In particular, this activity focuses on the development of attractive and interactive educational materials using 3D CG. The authors have already proposed a framework dedicated to the development of web-based interactive 3D educational materials and introduced several practical educational materials actually developed using it. To develop ever more attractive educational materials using Virtual Reality (VR)/Augmented Reality (AR), the authors have added new functionalities to the framework that allow the development of web-based VR/AR applications. Recently, 360VR images/videos have become popular because 360VR recorders have been released by several companies. Therefore, the authors also introduced new functionalities supporting 360VR images/videos into the framework. This paper describes the details of the introduced functionalities and shows several example materials developed using the framework.
Yoshihiro Okada, Akira Haga, Wei Shi
### On Estimating Platforms of Web User with JavaScript Math Object
Browser fingerprinting is a technique to identify a device using a combination of information that a server can gather from a browser. In general, the user-agent is known to be one of the most useful features for identification via browser fingerprinting. However, users can easily change the user-agent. This may lead to a decrease in the device identification accuracy. In this paper, we conducted two experiments. First, we proposed a method to estimate the platforms without using the user-agent. In particular, we used the fact that the computational result of a JavaScript math object varies depending on the platform. Using this method, we could classify 14 platforms into nine groups. Five of these uniquely identify the OS and the browser, two uniquely identify the OS and one uniquely identifies the browser. Second, we compared the accuracy of the browser fingerprint with user-agent (FP-A) and the browser fingerprint with our proposed method (FP-B). As a result, the identification accuracy rate with FP-B was only 0.4% lower than that with FP-A.
Takamichi Saito, Takafumi Noda, Ryohei Hosoya, Kazuhisa Tanabe, Yuta Saito
### A Study on Human Reflex-Based Biometric Authentication Using Eye-Head Coordination
Biometric information can be easily leaked and/or copied; therefore, the biometric information used for authentication should be kept secure. To cope with this issue, we have proposed a user authentication system using a human reflex response. It is assumed that even if people know someone's reflex characteristics, it is difficult to impersonate that individual, as no one can control his/her reflexes. In this study, we discuss a biometric authentication system using eye-head coordination as a particular instance of reflex-based authentication. The availability of the proposed authentication system is evaluated through fundamental experiments.
Yosuke Takahashi, Masashi Endo, Hiroaki Matsuno, Hiroaki Muramatsu, Tetsushi Ohki, Masakatsu Nishigaki
### Intrusion Detection Method Using Enhanced Whitelist Based on Cooperation of System Development, System Deployment, and Device Development Domains in CPS
Cyber-physical systems (CPS), as fusions of virtual and real worlds, have attracted attention in recent years. CPS realize rationalization and optimization in various domains by collecting real-world data into the virtual world, analyzing the information, and then reacting back to the real world. Moreover, attacks on CPS can cause direct damage in the real world. Therefore, with the NIST CPS framework in mind, this paper discusses an approach to designing countermeasures to risks, considering the interactions within a system. Applying the approach of intrusion detection for control systems, this paper also proposes an intrusion detection method using an enhanced whitelist.
Nobuhiro Kobayashi, Koichi Shimizu, Tsunato Nakai, Teruyoshi Yamaguchi, Masakatsu Nishigaki
### Innovative Protocols for Data Sharing and Cyber Systems Security
This paper presents new classes of cryptographic secret sharing procedures dedicated to secure information division and transmission. In particular, two classes of protocols are presented, which allow information to be shared with the application of grammar-based solutions as well as personal or behavioral parameters. Possible applications of such technologies are also presented, especially in relation to secure data or service management in distributed structures or Cloud environments.
Urszula Ogiela, Makoto Takizawa, Lidia Ogiela
### Implementation of Mass Transfer Model on Parallel Computational System
At present, parallel computational systems are used increasingly often to implement solutions of many phenomena. One such phenomenon is mass transfer from one location to another, which occurs in many porous building materials and knowledge of which is significant in civil engineering practice. In this paper, we consider a complex mass transfer diffusion model that involves, besides water and water vapor, the presence of air and, moreover, the phase transition of water to vapor and vice versa. The model was developed recently and its exact solution was found. Our intention here is to implement this exact solution on the parallel computational system HybriLIT. We suggest sequential and parallel CUDA algorithms for the implementation of the solution. The speed-up of the parallel implementation over the sequential one is a ratio of 44 s (sequential) to 0.004 s (parallel). The GPU capacity is compared with the total capacity used for the problem solution at the sequential level, and input parameters of the problem are found such that the GPU capacity limits are reached.
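As a loose, language-level stand-in for the idea of evaluating an exact solution over a whole grid in parallel, the numpy sketch below evaluates a generic 1-D diffusion series solution via broadcasting; the series, coefficients, and parameters are assumptions and do not reproduce the authors' three-component model or their CUDA code.

```python
# Generic 1-D diffusion exact solution evaluated over a grid; numpy
# broadcasting loosely plays the role of the GPU threads in CUDA.
import numpy as np

def exact_diffusion(x, t, D=1e-6, L=0.1, terms=200):
    """u(x,t) = sum_n b_n * exp(-k_n^2 * D * t) * sin(k_n * x), k_n odd harmonics."""
    n = np.arange(1, terms + 1)[:, None]          # shape (terms, 1)
    b_n = 4.0 / (np.pi * (2 * n - 1))             # square-wave initial condition
    k = (2 * n - 1) * np.pi / L
    return (b_n * np.exp(-(k ** 2) * D * t) * np.sin(k * x[None, :])).sum(axis=0)

x = np.linspace(0.0, 0.1, 10_000)
u = exact_diffusion(x, t=3600.0)                  # the whole grid in one call
print(u[:3])
```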
Miron Pavluš, Tomáš Bačinský, Michal Greguš
### Server System of Converting from Surface Model to Point Model
By handling 3D CG models uniformly as point clouds, it becomes possible to easily perform data conversion, 3D shape search, and data downsizing. We have developed a system to convert a surface model to a point model and a system to downsize the point model. In order to publish these tools, we are currently proceeding to release this software as a service. In this paper, we introduce the service system.
Hideo Miyachi, Isamu Kuroki
### Proposal of a Virtual Traditional Crafting System Using Head Mounted Display
Due to the spread of computers and Internet technologies in recent years, the traditional craft industry presents information on personal computer terminals for the purpose of market development. However, consumers must rely on their imagination regarding the tastefulness, texture, and scale of traditional crafts because the information is presented on a flat display. Therefore, in this research, we have constructed a highly immersive virtual traditional crafting presentation system. This system provides a high-presence immersive virtual traditional crafting experience by fusing "Japanese" and "Western" elements. Furthermore, it provides collaborative work functions by realizing remote sharing of space using network technology.
Tomoyuki Ishida, Yangzhicheng Lu, Akihiro Miyakwa, Kaoru Sugita, Yoshitaka Shibata
### Development of AR Information System Based on Deep Learning and Gamification
Recently, several AR systems have been developed and used in various fields. However, most AR systems have restrictions caused by the use of AR markers or location information. In this research, in order to solve these problems, an AR information system that can recognize objects themselves based on deep learning was developed. In particular, the system was constructed using a client-server model so that the machine learning model can be updated while the system is operating. In addition, a gamification method was introduced to gather learning data automatically from users while they use the system. The prototype was applied to an AR zoo information system, and the effectiveness of the proposed system was validated in an evaluation experiment.
Tetsuro Ogi, Yusuke Takesue, Stephan Lukosch
### A Study on Practical Training with Multiple Large-Scale Displays in Elementary and Secondary Education
To promote effective ICT utilization in practical training for learning the basics of industry subjects in elementary and secondary education, we believe it is important to study the effective utilization of large-scale display systems. In this paper, we studied effective practical training utilizing multiple large-scale display systems in conjunction with the interface for screen operation developed in our previous study. As the approach, we tried practical training with multiple large-scale displays using the interface for screen operation in combination with SAGE2. From this trial, we showed that there is a high possibility that effective practical training can be conducted by combining the interface for screen operation and a multiple-display environment.
Yasuo Ebara, Hiroshi Hazama
### Implementation and Evaluation of Unified Disaster Information System on Tiled Display Environment
In the event of a large disaster, local governments are responsible for counter-disaster operations. However, the conventional procedure for understanding and sharing disaster information at headquarters is based on paper documents and indications on paper maps, which makes it difficult to convey an understanding of disaster statuses to multiple headquarters personnel. This paper proposes the design and implementation of a disaster information system based on an ultra-high-definition display environment using GIS. We designed the system around a large, ultra-high-definition display environment serving as a unified shared display at headquarters. Our system implements a direct status reporting function, detachment of display location and media content, and a user-system interaction method using smart devices as content controllers. We conducted a hands-on experiment in order to assess the usability of our system. The result shows relatively positive feedback on the usability of the system from participants who were non-specialists in counter-disaster activity.
Akira Sakuraba, Tomoyuki Ishida, Yoshitaka Shibata
### 3D Measurement and Modeling for Gigantic Rocks at the Sea
In this research, we digitally archived large rocks at the sea, called "Sanouiwa," in Miyako city. We employed two types of three-dimensional (3D) measurement techniques: the first is to take pictures using a drone, and the second is to use the Global Navigation Satellite System (GNSS). Point cloud data was generated from the high-resolution camera images using 3D shape reconstruction software. Finally, we integrated all the point cloud data and constructed a 3D triangular model from it.
Zhiyi Gao, Akio Doi, Kenji Sakakibara, Tomonori Hosokawa, Masahiro Harada
### Adaptive Array Antenna Controls with Machine Learning Based Image Recognition for Vehicle to Vehicle Networks
With the development of ITS technology, V2V communication is considered necessary for new kinds of future applications. However, wireless networks between vehicles face challenges caused by fast movement and the radio noise of moving vehicles. Thus, this paper proposes a Delay Tolerant Network system with an Adaptive Array Antenna controlled by image recognition for V2V networks. In the proposed system, the target vehicle is recognized by a machine-learning-based image recognition system, and a Kalman Filter algorithm, which compensates for the influence of the vehicle's speed and obstacles in the road, controls the direction of the Adaptive Array Antenna. The paper deals especially with the implemented image recognition system and the antenna direction controls, based on experimental results from the prototype system, and the results indicate the effectiveness of the proposed system for V2V networks.
Noriki Uchida, Ryo Hashimoto, Goshi Sato, Yoshitaka Shibata
### Performance Evaluation of a Smartphone-Based Active Learning System for Improving Learning Motivation During Study of a Difficult Subject
In our previous work, we presented an interactive learning process to increase students' learning motivation and self-learning time, and proposed an Active Learning System (ALS) for students' self-learning. We evaluated the proposed system for each class level (low, middle, and high) and checked student concentration using the proposed ALS. We found that for a difficult subject there are only a few high-level students; the others are middle- or low-level students. However, it is important that all students keep their learning motivation when using the ALS. In this paper, we describe the results of performance evaluations of the proposed ALS for a difficult subject. The evaluation results show that the ALS increased the learning efficiency of the students for the difficult subject.
Noriyasu Yamamoto, Noriki Uchida
### A Causally Precedent Relation Among Messages in Topic-Based Publish/Subscribe Systems
Event-driven publish/subscribe (PS) systems are widely used in various types of applications. In this paper, we consider a peer-to-peer (P2P) model of a topic-based PS system which is composed of peer processes (peers) with no centralized coordinator. Here, each peer publishes a message with publication topics while receiving messages whose publication topics are in the subscription topics of the peer. Each peer has to deliver every pair of messages related with respect to topics in the causal order of publication events. In this paper, a message is considered to carry objects whose meanings are denoted by topics. Based on the meanings of objects, we define an object-based-causally (OBC) precedent relation among messages. Based on the OBC precedent relation, we newly propose a protocol to topic-based-causally (TBC) deliver messages to peers. Here, each peer causally delivers event messages which are related with respect to topics. If a message $$m_{1}$$ OBC-precedes a message $$m_{2}$$, the message $$m_{1}$$ TBC-precedes $$m_{2}$$.
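A heavily simplified sketch of topic-related causal precedence is shown below: messages carry vector clocks and topic sets, and one message precedes another when it causally precedes it and they share a topic. This is a generic illustration of the idea, not the paper's protocol.

```python
# Generic sketch of a topic-related causal precedence test (assumed
# message model: vector clock + topic set).

def causally_precedes(vc1, vc2):
    """Standard vector-clock happened-before test."""
    return all(a <= b for a, b in zip(vc1, vc2)) and vc1 != vc2

def obc_precedes(m1, m2):
    """m1 precedes m2 (sketch): causal order plus a shared topic."""
    return causally_precedes(m1["vc"], m2["vc"]) and bool(m1["topics"] & m2["topics"])

m1 = {"vc": (1, 0), "topics": {"power", "price"}}
m2 = {"vc": (1, 1), "topics": {"price"}}
print(obc_precedes(m1, m2))   # True: causal order and a related topic
```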
Takumi Saito, Shigenari Nakamura, Tomoya Enokido, Makoto Takizawa
### Fog-Cloud Based Platform for Utilization of Resources Using Load Balancing Technique
The fog computing concept is used in the smart grid (SG) to reduce the load on the cloud. A fog covers a small geographical area, stores data temporarily, and sends processed data to the cloud for long-term storage. In this paper, an integrated fog and cloud based platform is proposed for effective energy management in smart buildings. A request generated from a cluster of buildings at the demand side is managed by a fog. For this purpose, six fogs are considered for three different regions: Europe, Africa, and North America. Each fog is connected to clusters comprising multiple buildings; each cluster contains thirty buildings, and each building consists of 10 homes with multiple smart appliances. To fulfill the consumers' energy demand, Microgrids (MGs), placed near the buildings, are used through the fog. For effective energy utilization in smart buildings, the load on the fog and cloud is managed by load balancing techniques using Virtual Machines (VMs). Different algorithms, such as Throttled, Round Robin (RR), and First Fit (FF), are used as load balancing techniques and compared under the closest data center service broker policy, which is used for best fog selection. Using the proposed policy, the three load balancing algorithms are compared, and the results show that the proposed policy performs best cost-wise.
### A Practical Indoor Localization System with Distributed Data Acquisition for Large Exhibition Venues
In this paper, we focus on Wi-Fi based indoor localization in large exhibition venues. We identify and describe the real-world problems in this scenario and present our system. We adopt a passive way to detect mobile devices in consideration of users' preferences and iOS devices' privacy restrictions, and we collect signal strength data in a distributed manner, which meets the practical demands of exhibition venues and saves the power consumption of mobile devices. Since exhibition venues impose many restrictions on traditional localization approaches, we propose an approach and solution to fit these special conditions, using clustering and Gaussian process regression (GPR) to improve localization accuracy. A series of experiments in the Hong Kong Convention and Exhibition Centre (HKCEC) shows our system's feasibility and effectiveness. Our approach significantly improves localization accuracy compared with traditional trilateration, fingerprinting, and state-of-the-art approaches.
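The clustering-plus-GPR step can be sketched with scikit-learn as below: fingerprints are clustered first, and one Gaussian process maps RSSI vectors to positions per cluster. The synthetic data, kernel choice, and shapes are illustrative assumptions, not the paper's configuration.

```python
# Coarse-to-fine localization sketch: k-means clusters, then one GPR
# per cluster from RSSI fingerprints to (x, y) positions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
rssi = rng.normal(-60, 8, size=(200, 5))     # 200 fingerprints, 5 APs (synthetic)
pos = rng.uniform(0, 50, size=(200, 2))      # ground-truth (x, y) in meters

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(rssi)
models = {}
for c in range(4):
    mask = km.labels_ == c
    gpr = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0))
    models[c] = gpr.fit(rssi[mask], pos[mask])

query = rssi[:1]                             # locate one new fingerprint
c = km.predict(query)[0]
print(models[c].predict(query))              # estimated (x, y)
```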
Hao Li, Joseph K. Ng, Shuwei Qiu
### Secure Visible Light Communication Business Architecture Based on Federation of ID Management
With the progress of visible light communication (VLC) technology, opportunities for new mobile communication infrastructures and business creation are increasing. Specifically, there are many proposals for spot-type broadcasting services linked to position, such as on-the-spot explanations at art museums and information delivery linked with digital signage. However, due to their limited capabilities, investigations into their security measures have been insufficient. On the other hand, in promoting VLC for business in the future, security measures are an important issue. We have previously proposed a secure business architecture based on the cooperation of VLC, public key encryption technology, and power line communication technology. This architecture provides a "position authentication ID" from a light source, such as an LED, and is characterized by strict position authentication that encrypts these data using a public key infrastructure (PKI). In this paper, we propose a new business architecture that enables more secure cooperation with an ID management infrastructure and contributes to the creation of new business using VLC.
Shigeaki Tanimoto, Chise Nakamura, Motoi Iwashita, Shinsuke Matsui, Takashi Hatashima, Hitoshi Fuji, Kazuhiko Ohkubo, Junichi Egawa, Yohsuke Kinouchi
### Peer-to-Peer Data Distribution System with Browser Cache Sharing
Many services that provide video contents have become available. Traffic caused by watching videos has been increasing, and this traffic consumes network resources on the Internet or on the networks of content delivery providers. In this paper, the authors propose a peer-to-peer data distribution system with browser cache sharing to decrease network resource consumption. Nodes that are watching the same video exchange fragments of video data using the WebRTC protocol.
Kazunori Ueda, Yusei Irifuku
### A Dynamism View Model of Convergence and Divergence of IoT Standardization
As IoT (Internet of Things) continues to penetrate everyday life, we witness an increase in the number of IoT standardization activities. These activities involve business and political conflicts, and they can also be viewed as the emergence of new types of standardization. In a world where complicated cyber-physical systems exist, legacy view models such as layered view models are not adequate for the new landscape of IoT standardization. As a departure from the static structural view of standardization, the author proposes a dynamism model of divergence and convergence.
Toshihiko Yamakami
### Optimized Resource Allocation in Fog-Cloud Environment Using Insert Select
Modern energy management uses cloud computing services to fulfill the energy demands of users; these amenities are used in smart buildings to manage energy demands. Entertaining the maximum number of requests in minimum time is the main goal of our proposed system. To achieve this goal, a resource distribution scheme is proposed for a cloud-fog based system. When a request is made by a user, Virtual Machines (VMs) must be allocated to the Data Centers (DCs) in a timely manner for Demand Side Management (DSM). This model helps the DCs manage the VMs in such a way that serving a request takes minimum Response Time (RT). The proposed Insert Select Technique (IST) tackles this problem very effectively. Simulation results depict its cost effectiveness and effective response time.
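One plausible reading of an insert-select structure is a list of VMs kept sorted by load, where serving a request pops the least-loaded VM and re-inserts it with its updated load; the sketch below implements that reading with `bisect`, and its naming and data model are assumptions, not the paper's IST.

```python
# Sorted-pool sketch: select the least-loaded VM, re-insert it sorted.
import bisect

class VmPool:
    def __init__(self, loads):
        # List of (load, vm_id), kept sorted ascending by load.
        self._pool = sorted((load, vm) for vm, load in loads.items())

    def serve(self, request_load):
        load, vm = self._pool.pop(0)                          # select: least loaded
        bisect.insort(self._pool, (load + request_load, vm))  # insert: keep sorted
        return vm

pool = VmPool({"vm-0": 0.6, "vm-1": 0.1, "vm-2": 0.3})
print([pool.serve(0.2) for _ in range(4)])   # e.g. ['vm-1', 'vm-1', 'vm-2', 'vm-1']
```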
### Smart Grid Management Using Cloud and Fog Computing
Cloud computing provides Internet-based services to its consumers. Multiple simultaneous requests on a cloud server cause processing latency. Fog computing acts as an intermediary layer between Cloud Data Centers (CDC) and end users to minimize the load and boost the overall performance of the CDC. For efficient electricity management in smart cities, Smart Grids (SGs) are used to fulfill the electricity demand. In this paper, a system is proposed to minimize energy wastage and distribute surplus energy among energy-deficient SGs. A three-layered cloud and fog based architecture is described for efficient and fast communication between SGs and electricity consumers. To manage the SGs' requests, fog computing is introduced to reduce the processing time and response time of the CDC. For efficient scheduling of SG requests, the proposed system compares three different load balancing algorithms, Round Robin (RR), Active Monitoring Virtual Machine (AMVM), and Throttled, for scheduling SG electricity requests on fog servers. A dynamic service broker policy is used to decide which request should be routed to which fog server. For evaluation of the proposed system, simulations performed in Cloud Analyst show that AMVM and Throttled outperform RR when varying the virtual machine placement cost at fog servers.
### Evaluation of Self-actualization Support System by Using Students Independence Rubric
Proactive action is one of the essential skills for actualizing one's own life. However, most students are reactive. Reactive students are characterized by avoiding challenges, not thinking for themselves, awaiting instructions, and avoiding trial and error. Our contributions aim to provide a proactive action support system for student life and to evaluate the significance of the system. The 7 Habits is one of the most powerful schemes for proactive action choice. We are developing a proactive action support system, called Self-reflector-Plus, that visualizes quadrant II activities and systematizes the first three of the 7 Habits. Periodic and long-term practice is necessary to gain a significant effect from the 7 Habits. In this paper, to examine our system, we have designed a rubric to evaluate the effect of Self-reflector-Plus. There are 9 components corresponding to habits 1 to 3 of the 7 Habits.
Yoshihiro Kawano
### Characterizations of Local Recoding Method on k-Anonymity
k-Anonymity is one of the most widely used techniques for protecting the privacy of published datasets by making each individual indistinguishable from at least k-1 other individuals. The local recoding method is an approach to achieving k-anonymization through suppression and generalization; it generalizes the dataset at the cell level and can therefore achieve k-anonymization with only a small distortion. As optimal k-anonymity has been proved to be an NP-hard problem, plenty of local recoding algorithms have been proposed. In this research, we study the characteristics of the local recoding method. In addition, we identify a special class of datasets in which all generalization hierarchies of each quasi-identifier are identical, called "Identical Generalization Hierarchy" (IGH) data. We also compare the efficiency of the well-known local recoding algorithms on both non-IGH and IGH data.
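For reference, the check that local recoding must ultimately satisfy can be written in a few lines: every quasi-identifier combination must occur at least k times. The toy generalized dataset below is illustrative only.

```python
# Minimal k-anonymity check over quasi-identifier (QI) columns.
from collections import Counter

def is_k_anonymous(rows, qi_cols, k):
    """True iff every QI combination occurs at least k times."""
    groups = Counter(tuple(r[c] for c in qi_cols) for r in rows)
    return all(count >= k for count in groups.values())

rows = [
    {"age": "3*", "zip": "661**", "disease": "flu"},
    {"age": "3*", "zip": "661**", "disease": "cold"},
    {"age": "4*", "zip": "662**", "disease": "flu"},
    {"age": "4*", "zip": "662**", "disease": "asthma"},
]
print(is_k_anonymous(rows, qi_cols=("age", "zip"), k=2))  # True
```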
Waranya Mahanan, Juggapong Natwichai, W. Art Chaovalitwongse
### A Cloud-Fog Based Environment Using Beam Search Algorithm in Smart Grid
A Smart Grid (SG) monitors, analyzes, and communicates to provide electricity to consumers. In this paper, a cloud and fog computing environment is integrated with the SG for efficient energy management. In this scenario, the world is divided into six regions with twelve fogs and eighteen clusters. Each cluster has multiple buildings, and each building comprises eighty to one hundred apartments. Multiple Micro Grids (MGs) are available for each region. Requests for energy are sent to the fog, and a load balancing algorithm is used for balancing the load on Virtual Machines (VMs). Service broker policies are used for the selection of fogs. Round Robin (RR), Throttled, and Beam Search (BS) algorithms are used with the service proximity policy. The results of these three algorithms are compared, and the BS algorithm gives the best results.
Komal Tehreem, Nadeem Javaid, Hamida Bano, Kainat Ansar, Moomina Waheed, Hanan Butt
### A Microservices-Based Social Data Analytics Platform Over DC/OS
With the increasing popularity of cloud services, the microservices architecture has been gaining more attention in the software development industry. The idea of the microservices architecture is to use a collection of loosely coupled services to compose a large-scale software application. In the traditional monolithic architecture, by contrast, every piece of code is put together, and the application is developed, tested, and deployed as a single application; obviously, it is challenging for the traditional architecture to scale properly. In this research, we implemented a social data analytics platform based on the microservices architecture over DC/OS. Specifically, our data analytics service is built by composing open-source software including Spark, Kafka, and Node.js. For streaming processing, our platform offers a visual interface showing the hottest hashtags of the most popular user posts from an online forum. For batch processing, our platform shows statistics such as the top-10 most liked or commented posts and the gender counts of the posters. The experimental results show that our data analytics platform performs streaming processing and batch processing successfully and reveals useful analytical results.
Ming-Chih Hsu, Chi-Yi Lin
### Application of Independent Component Analysis for Infant Cries Separation
Research on analyzing infant crying has received much attention in recent years. In our prior work, a baby crying translation method called the infant crying translator was proposed and showed high recognition accuracy. However, in a real environment there may be more than one baby crying, and these mixed cries seriously affect the accuracy of recognition. In order to isolate the mixed cries, independent component analysis is adopted herein. Experimental results show that the proposed method can separate the mixed cries and greatly improves the recognition rate of the infant crying translator: the recognition rate increased from 34% to 68%.
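The separation step can be sketched with scikit-learn's FastICA, as below: two mixed signals standing in for simultaneously recorded cries are unmixed into independent sources. The synthetic signals and mixing matrix are assumptions, not the paper's audio pipeline.

```python
# ICA source-separation sketch on synthetic stand-ins for mixed cries.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)            # stand-in "cry" source 1
s2 = np.sign(np.sin(2 * np.pi * 320 * t))   # stand-in "cry" source 2
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # unknown mixing matrix
X = S @ A.T                                  # what two microphones record

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)             # estimated independent sources
print(recovered.shape)                       # (8000, 2)
```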
Chuan-Yu Chang, Chi-Jui Chen, Ching-Ju Chen
### BLE Beacon Based Indoor Position Estimation Method for Navigation
Bluetooth Low Energy (BLE) beacons are useful for estimating a user's location indoors. Because our university has numerous BLE beacons, they are easy to use for indoor route navigation. In this study, we propose a position estimation method using BLE beacons. With this method, we eliminate the error accumulation that afflicts earlier systems and improve indoor location estimation accuracy. We designed an indoor position estimation method and conducted an estimation accuracy evaluation experiment comparing the proposed method and existing methods. The experimental results underscore the effectiveness of the proposed method.
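As a baseline of the kind such work improves on, the sketch below estimates position as a weighted centroid of the strongest beacons; the beacon coordinates and the RSSI-to-weight rule are illustrative assumptions, not the paper's method.

```python
# Weighted-centroid baseline for BLE position estimation.

def estimate_position(readings, beacons, top_n=3):
    """readings: {beacon_id: rssi_dbm}; beacons: {beacon_id: (x, y)}."""
    strongest = sorted(readings.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    # Heuristic: stronger RSSI -> larger weight (assumed conversion rule).
    weights = [(bid, 10 ** (rssi / 20.0)) for bid, rssi in strongest]
    total = sum(w for _, w in weights)
    x = sum(beacons[bid][0] * w for bid, w in weights) / total
    y = sum(beacons[bid][1] * w for bid, w in weights) / total
    return x, y

beacons = {"b1": (0, 0), "b2": (10, 0), "b3": (0, 10), "b4": (10, 10)}
print(estimate_position({"b1": -55, "b2": -70, "b3": -60, "b4": -80}, beacons))
```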
Takahiro Uchiya, Kiyotaka Sato, Shinsuke Kajioka
### Metaheuristic Optimization Technique for Load Balancing in Cloud-Fog Environment Integrated with Smart Grid
An Energy Management System (EMS) is necessary to maintain the balance between electricity consumption and distribution. The huge number of Internet of Things (IoT) devices generates complex data, which causes latency in the processing time of the Smart Grid (SG). Cloud computing provides a platform for high-speed processing, and the integration of the SG with cloud computing helps to improve the EMS for consumers and the utility. In this paper, in order to enhance the speed of cloud computing, edge computing, also known as fog computing, is introduced. Fog computing complements cloud computing by performing on its behalf. In the proposed scenario, clusters are taken from all over the world across six regions; each region contains two clusters and two fogs. Fogs are assigned using service broker policies to process requests, and each fog contains four to nine Virtual Machines (VMs). For the allocation of VMs, Round Robin (RR), Throttled, and Ant Colony Optimization (ACO) algorithms are used. The paper presents a comparative discussion of these load balancing algorithms.
### The Optimal Beacon Placement for Indoor Positioning
In recent years, with the development of low-power transmission technologies, the issue of indoor positioning has received increasing attention. This paper uses grid technology to transform the beacon deployment problem for indoor positioning into an optimization problem. Considering the RSSI signal drift problem, the optimization model converts the RSSI signal strength into a Signal Power Ranking (SPR), combined with Simulated Annealing (SA), in order to obtain the locations and transmission powers of beacons that provide complete identification and the best identification rate with the minimum number of beacons. We also use the IBM ILOG CPLEX optimization tool to verify the SA algorithm. The simulation results show that under different topologies, the SA algorithm can reach the same results as the CPLEX tool in a shorter time.
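A compact SA sketch for a simplified variant of the problem is shown below: beacons are moved randomly on a grid, and a move is accepted if it improves coverage or with a temperature-dependent probability. The coverage model and cooling schedule are assumptions, and the paper's actual objective is based on SPR, not raw coverage.

```python
# Simulated annealing over beacon placements on a grid (toy objective).
import math, random

def coverage(placement, grid, radius=3):
    """Count grid cells within Manhattan `radius` of any beacon."""
    covered = {(x, y) for x in range(grid) for y in range(grid)
               if any(abs(x - bx) + abs(y - by) <= radius for bx, by in placement)}
    return len(covered)

def anneal(grid=10, n_beacons=4, t0=5.0, cooling=0.95, steps=2000):
    place = [(random.randrange(grid), random.randrange(grid)) for _ in range(n_beacons)]
    score, t = coverage(place, grid), t0
    for _ in range(steps):
        cand = list(place)
        cand[random.randrange(n_beacons)] = (random.randrange(grid), random.randrange(grid))
        s = coverage(cand, grid)
        # Accept improvements always; worse moves with Boltzmann probability.
        if s >= score or random.random() < math.exp((s - score) / t):
            place, score = cand, s
        t *= cooling
    return place, score

print(anneal())
```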
Ching-Lung Chang, Chun-yen Wu
### A Secure Framework for User-Key Provisioning to SGX Enclaves
Intel Software Guard Extensions (SGX) protects user software from malware by maintaining the confidentiality and integrity of the software executed in secure enclaves on random access memory. However, the confidentiality of its stored executable is not guaranteed. Therefore, secret information, e.g. user keys, should be provided to the enclaves via appropriate secure channels. Although one of the solutions is to use remote attestation function of SGX, there is a potential risk that user keys are exposed to malicious insiders at the service provider of remote attestation. In this paper, we propose a novel and secure framework for user-key provisioning to SGX enclaves. Our framework utilizes sealing function of SGX, and consists of two phases: the provisioning phase and the operation phase. In the provisioning phase, a user key is encrypted by sealing function, and it is stored in storage. Our assumption is that this phase is performed in a secure environment. In the operation phase, the encrypted blob is read from the storage and decrypted. Then, SGX applications can use the user key without exposing it to attackers. We implemented a prototype of our framework using a commercial Intel CPU and evaluated its feasibility.
Takanori Machida, Dai Yamamoto, Ikuya Morikawa, Hirotaka Kokubo, Hisashi Kojima
### Evaluation of User Identification Methods for Realizing an Authentication System Using s-EMG
At the present time, mobile devices such as tablet PCs and smartphones have widely penetrated our daily lives. Therefore, an authentication method that prevents shoulder surfing is needed. We are investigating a new user authentication method for mobile devices that uses surface electromyogram (s-EMG) signals rather than screen touches. The s-EMG signals, which are detected on the skin surface, are generated by the electrical activity of muscle fibers during contraction, so muscle movements can be differentiated by analyzing the s-EMG. Taking advantage of these characteristics, we proposed a method that uses a list of gestures as a password in a previous study. In this paper, we employ support vector machines and attempt to improve the gesture recognition method by introducing correlation coefficients and cross-correlation. A series of experiments was carried out in order to evaluate the performance of the method.
Hisaaki Yamaba, Kentaro Aburada, Tetsuro Katayama, Mirang Park, Naonobu Okazaki
### Person Tracking Based on Gait Features from Depth Sensors
Gait information is a useful biometric because it is a user-friendly property and gait is hard to mimic exactly, even by skillful attackers. Most conventional gait authentication schemes assume cooperation by the subjects being recognized. Lack of cooperation could be an obstacle for automated tracking of users, and many commercial users require new gait identification schemes that do not require the help of target users. In this work, we study a new person-tracking method based on the combination of gait features observed from depth sensors. The features are classified into three groups: static features, dynamic distances, and dynamic angles. We demonstrate with ten subjects that our proposed scheme works well and that the equal error rate can be improved to 0.25 when the top five features are combined.
Takafumi Mori, Hiroaki Kikuchi
### Privacy-Preserving All Convolutional Net Based on Homomorphic Encryption
Machine learning servers with mass storage and computing power are an ideal platform to store, manage, and analyze data and to support decision-making. However, the main issue is providing security and privacy for the data, as the data is stored publicly. Recently, homomorphic encryption has been proposed as a solution due to its capability of performing computations over encrypted data. In this paper, we propose an encrypted all convolutional net that transforms the traditional all convolutional net into one based on homomorphic encryption. This scheme allows different data holders to send their encrypted data to a cloud service, complete predictions, and receive them back in encrypted form, as the cloud service provider does not have the secret key; the provider and others therefore cannot obtain the unencrypted raw data. When applied to the MNIST database, the privacy-preserving all convolutional net based on homomorphic encryption predicts efficiently, accurately, and with privacy protection.
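As a toy illustration of computing on ciphertexts, the sketch below uses the python-paillier library (an additive scheme far simpler than what a full convolutional net requires) to evaluate a linear layer on encrypted inputs without the secret key; the weights and inputs are illustrative assumptions.

```python
# Toy homomorphic inference: the server computes a linear layer on
# encrypted inputs using python-paillier (additive homomorphism only).
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

x = [0.5, -1.2, 3.0]                       # client's private features
enc_x = [pub.encrypt(v) for v in x]        # ciphertexts sent to the server

w, b = [0.8, 0.1, -0.4], 0.25              # server's plaintext weights/bias
enc_y = sum(c * wi for c, wi in zip(enc_x, w)) + b   # encrypted dot product

print(priv.decrypt(enc_y))                 # client decrypts: -0.67
```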
Wenchao Liu, Feng Pan, Xu An Wang, Yunfei Cao, Dianhua Tang
### Verification of Persuasion Effect to Cope with Virus Infection Based on Collective Protection Motivation Theory
It has been reported that many Internet users do not recover personal computers (PCs) that have been infected by computer viruses. To address this problem, we have investigated how to motivate users to remove viruses from their PCs based on the Protection Motivation Theory. Previously, we have reported that the cognitive factors related to response efficacy, responsibility, percentage of performers, and group norm could affect the intention to recover an infected PC. In this study, we created the experimental content to stimulate these cognitive factors and conducted an experiment to verify whether the content would be effective to persuade Internet users to recover infected PCs. Our research confirmed that the content stimulated some cognitive factors and effectively persuaded users to recover their PCs. We also found that some users did not intend to cope with the virus infection because they did not consider the information about the virus infection to be credible.
Kana Shimbo, Shun-ichi Kurino, Noriaki Yoshikai
### Zero-Knowledge Proof for Lattice-Based Group Signature Schemes with Verifier-Local Revocation
In group signature schemes, signers prove to verifiers the validity of their signing through an interactive zero-knowledge protocol. In lattice-based group signatures with verifier-local revocation (VLR), group members have both a secret signing key and a revocation token. Thus, members in VLR schemes must show verifiers that they have a valid secret signing key and that their token is not in the revoked members list. These conditions are satisfied by the underlying interactive protocol provided in the first lattice-based group signature scheme with VLR, suggested by Langlois et al. at PKC 2014. In their scheme, the member revocation token is a part of the secret signing key, and an implicit tracing algorithm is used to trace signers. For a scheme that generates the member revocation token separately, the interactive protocol suggested by Langlois et al. is not suitable. Moreover, if the group manager wants to use an explicit tracing algorithm to trace signers instead of the implicit tracing algorithm given in VLR schemes, then the signer should encrypt his index at signing time, and the interactive protocol should show that the signer's index is correctly encrypted. This work presents a combined interactive protocol that a signer can use to prove the validity of his signing, that his separately generated revocation token is not in the revocation list, and that his index is correctly encrypted, as required for such schemes.
Maharage Nisansala Sevwandi Perera, Takeshi Koshiba
### A Path Search System Considering the Danger Degree Based on Fuzzy Logic
Many disasters happen in the world, and in general it is difficult to predict them. For this reason, there are many disaster prevention centers where people learn the information, techniques, and ability to take action in relation to disasters, and which simulate various disasters for emergency training. It is better for people to avoid danger as much as possible in everyday life. Conventional path search systems, such as car navigation systems, mainly consider the length of the path; thus, such a system may recommend a dangerous route, for example through a place prone to landslides. In this work, we propose a path search system that considers the danger degree by using Fuzzy logic. In our proposed system, we use data from hazard maps as input parameters to decide the danger degree.
Shinji Sakamoto, Shusuke Okamoto, Leonard Barolli
### A Recovery Method for Reducing Storage Usage Considering Different Thresholds in VANETs
Technologies have been developed to provide higher functionality in on-board units and to provide a communication function with other vehicles and roadside units; nowadays, a vehicle can be regarded as a communication terminal. However, end-to-end communication is difficult because of the lack of end-to-end connectivity. Delay/Disruption/Disconnection Tolerant Networking (DTN) is used as a key alternative network for Vehicular Ad-hoc Networks (VANETs). In this paper, we propose a recovery method for reducing storage usage considering different thresholds in VANETs. From the simulation results, we found that our proposed recovery method performs well even in sparse or dense network environments.
Shogo Nakasaki, Yu Yoshino, Makoto Ikeda, Leonard Barolli
### Butt-Joint Assembly of Photonic Crystal Waveguide Units for Large Scale Integration of Circuits with Variety of Functions
The Photonic crystal (PhC) structure is useful for the fabrication of highly integrated optical circuits on a single substrate. However, it has been reported that repeating the chemical etching process to mount various functions damages the structure itself. Therefore, a new assembly process for functional units is proposed in this paper. For butt-joint assembly, transmission and reflection at the joint surface are critical; measurement results for a butt joint in a PhC waveguide are therefore evaluated to show the transmission and reflection characteristics. The waveguide shift $$\varDelta$$ was situated in the middle of the PhC waveguide by composing two separate aluminum basement plates. It was found that a waveguide shift $$\varDelta$$ of up to half the waveguide width is not seriously critical for transmission and reflection. The application of the proposed butt-joint assembly technique is promising for the fabrication of PhC integrated circuits.
Hiroshi Maeda, Keisuke Haari, Xiang Zheng Meng, Naoki Higashinaka
### Clustering in VANETs: A Fuzzy-Based System for Clustering of Vehicles
In recent years, inter-vehicle communication has attracted attention because it is applicable not only to alternative networks but also to various communication systems. In this paper, we propose a Fuzzy-based system for clustering of vehicles in VANETs and evaluate it by simulations. From the simulation results, we found that when the DCC parameter is small and SC is high, the possibility that a vehicle remains in the cluster is increased.
Kosuke Ozera, Kevin Bylykbashi, Yi Liu, Makoto Ikeda, Leonard Barolli
### Movement Detection Methods with Wireless Signals and Multiple Sensors on Mobile Phone for Traffic Accident Prevention Systems
Recent traffic accident research has focused on pedestrians and bicycles as well as automobiles. In particular, texting while walking or riding a bicycle is widely considered extremely dangerous. Thus, this research proposes a mobile traffic accident prevention system that observes radio signals and various sensors on a smartphone. In the proposed methods, the RSSI levels of IEEE802.11a/b/g/n from other devices are first observed, and sensors such as the gyro sensor on the smartphone are then applied to refine the detection process based on a Markov Chain algorithm. This paper reports on the prototype system of the proposed methods, and the experimental results are discussed for future studies.
Shoma Takeuchi, Noriki Uchida, Yoshitaka Shibata
### A System to Select Reception Channel by Machine Learning in Hybrid Broadcasting Environments
Due to the recent prevalence of the Internet, some TV broadcasting services deliver videos using both electric wave broadcasting systems and the Internet (hybrid broadcasting environments). Video players encounter playback interruptions when they cannot receive a part of the video data (a video data segment) by the time it must be played. The probability of encountering playback interruptions can be reduced by receiving video data segments earlier. However, it is difficult for video players to determine from which reception channel (the broadcasting system or the Internet) they can receive video data segments earlier, since the time required for receiving them depends on various factors such as broadcasting schedules, the number of receiving video players, and so on. To find appropriate reception channels for reducing playback interruptions, we propose a system that selects the reception channel by machine learning.
Tomoki Yoshihisa, Yusuke Gotoh, Akimitsu Kanzaki
### A Meeting Log Structuring System Using Wearable Sensors
We propose a system that structures a meeting log by detecting and tagging participants' actions using acceleration sensors. The proposed system detects head movements, such as nodding, and motion during utterances via acceleration sensors attached to the head of every participant in a meeting. In addition, we developed the Meeting Review Tree, an application that recognizes participants' utterances and three kinds of actions using acceleration and angular-velocity sensors and tags them onto recorded video. In the proposed system, the structure of the meeting is organized into three tagged layers: the first layer represents the transition of the reporter during the meeting, the second layer represents changes of speaker within a report, and the third layer represents motions such as nodding. In an evaluation experiment, the recognition accuracy was 57.0% for the first layer and 61.0% for the second layer.
Ayumi Ohnishi, Kazuya Murao, Tsutomu Terada, Masahiko Tsukamoto
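To make the three-layer structure concrete, a tagged log entry might look like the sketch below (an illustrative JavaScript shape I am assuming; the abstract does not give the Meeting Review Tree's actual schema):

// Illustrative three-layer meeting log (hypothetical shape, not the
// authors' schema): reporter transitions at the top, speaker changes
// within each report, and detected motions at the bottom.
const meetingLog = {
  reports: [
    {
      reporter: 'participant-1',            // layer 1: reporter transition
      start: '00:00:12',
      utterances: [
        {
          speaker: 'participant-2',         // layer 2: speaker change
          start: '00:01:05',
          motions: [
            { type: 'nod', who: 'participant-3', at: '00:01:09' }  // layer 3
          ]
        }
      ]
    }
  ]
};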
### Hiding File Manipulation of Essential Services by System Call Proxy
Security software and logging programs are frequently attacked because they are an obstruction to attackers. Protecting these essential services from attack is crucial to preventing and mitigating damage. Hiding information related to essential services, such as their files and processes, can help deter attacks on these services. This paper proposes a method of hiding file manipulation for essential services. The method makes the files invisible to all services except their corresponding essential services, and provides access methods to those files in a virtual machine (VM) environment. In the proposed method, system calls to those files are executed by a proxy process on another VM. The original system call is not executed in the operating system of the original VM; however, the result of the file access is returned to the original process. Thus, the files of essential services are placed on the other VM, and other processes on the original VM cannot access them. Therefore, the proposed method can prevent or deter identification of essential services based on monitoring of file information.
Masaya Sato, Hideo Taniguchi, Toshihiro Yamauchi
### Evaluation of Broadcasting System for Selective Contents Considering Interruption Time
Due to the recent popularization of digital broadcasting, selective contents broadcasting has attracted much attention. In selective contents broadcasting, although the server delivers contents based on users' preferences, users may experience interruptions while playing their selected contents. To reduce this interruption time, many researchers have proposed scheduling methods. However, since these scheduling methods were evaluated only in simulation environments, they also need to be evaluated in real network environments. In this paper, we propose a broadcasting system for selective contents and evaluate its effectiveness in network environments.
Takuro Fujita, Yusuke Gotoh
### Play Recognition Using Soccer Tracking Data Based on Machine Learning
In professional football, every play is recorded as data, such as Pass, Dribble, etc. However, the play data is recorded manually, which requires huge effort. To reduce the human effort, we propose a method to recognize the labels of plays in football games from tracking data. Using features extracted from the tracking data, we generate a play classifier model based on machine learning. We have evaluated the proposed method with real tracking data recorded in the Japan Professional Football League (J. League). The results show that our play recognition is effective in mitigating the heavy workload of play labeling.
Tomoki Imai, Akira Uchiyama, Takuya Magome, Teruo Higashino
### A Graphical Front-End Interface for React.js
We present a graphical front-end interface for creating dynamic web pages by means of React.js. Its user does not have to write JavaScript code to specify the dynamic behavior of the web components, but only has to draw state-transition diagrams in the developed graphical editor. Using the editor, the user composes a state-transition diagram that specifies the dynamic behavior of each web component in terms of circles representing the states of the component and arrows representing the conditioned transitions among the states. The developed translator then converts the state-transition diagrams into React.js web components in JavaScript that compose the target web page. This combination of graphical editor and translator enables general users without programming knowledge or experience to create dynamic web pages. Would-be programmers may start learning JavaScript and React.js by comparing their diagrams with the translated JavaScript code.
Shotaro Naiki, Masaki Kohana, Shusuke Okamoto, Masaru Kamada
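To make the translation idea concrete, here is a hand-written illustration (my own example, not actual output of the authors' translator): a diagram with two circles, "off" and "on", and one click-triggered arrow between them maps onto a React component whose state variable holds the current circle.

// Hypothetical translation of a two-state diagram into React.
import React, { useState } from 'react';

const ToggleLamp = () => {
  // Each circle in the diagram becomes a possible state value.
  const [state, setState] = useState('off');
  // Each arrow becomes a transition triggered by an event.
  const transition = () => setState(state === 'off' ? 'on' : 'off');
  return <button onClick={transition}>lamp is {state}</button>;
};

export default ToggleLamp;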
### An Attendance Management System Capable of Mapping Participants onto the Seat Map
We present an attendance management system in which the student and the seat position are identified by IC cards of the FeliCa standard. Seated in a classroom, each student first touches his/her FeliCa student ID card with his/her own Android smartphone and then touches another FeliCa card fixed to the desk at his/her seat. The developed Android application then sends the student ID and the seat ID to the server. The developed service program produces a seating list with the student names mapped onto the seat map of the classroom. The seating list is updated instantly as soon as students touch the cards after getting seated or moving to other seats in the classroom. Looking at the seating list as a web page, the teacher can easily identify and call on each student. The seating list also helps the teacher spot a blank seat or a missing student, or equivalently, cheating in attendance.
Shinya Kinoshita, Michitoshi Niibori, Masaru Kamada, Yasuhiro Ohtaki
### Anonymous Accessible Bulletin Board System with Location Based Access Control Mechanism
A bulletin board system (BBS) that allows anonymous submission lowers the barriers to entering a discussion, because everyone can post opinions without any accountability. Therefore, there is a risk of "flaming," where the discussion is inundated with comments. To suppress such situations, we developed a novel BBS with a location-based access control mechanism. The BBS is anonymously accessible by default; however, access permission to each chat room depends on its geospatial information. That is, chat rooms keep the geographical information recorded when they were created, and users who access a room must be near that original location. In this paper, an overview of our system and its applications is described.
Jun Iio, Shogo Asada, Mitsuhiko Kai
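As a rough sketch of such a location check (the function names and the 500 m threshold are my assumptions, not details from the paper), the server could grant access to a chat room only when the requester is within a given radius of the room's registered coordinates:

// Great-circle distance in meters between two lat/lng points
// (haversine formula), used to gate access to a chat room.
const toRad = (deg) => deg * Math.PI / 180;

const distanceMeters = (lat1, lng1, lat2, lng2) => {
  const R = 6371000; // mean Earth radius in meters
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
};

// Access is allowed only near the room's original location.
const canAccessRoom = (user, room, radiusMeters = 500) =>
  distanceMeters(user.lat, user.lng, room.lat, room.lng) <= radiusMeters;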
### Newly-Added Functions for Video Search System Prototype with a Three-Level Hierarchy Model
This paper describes functions newly added to ELVIDS, the previously presented three-level hierarchical video search system. We have developed this system for fostering scholarly use of videos over the past three years. Development in the previous phase mainly provided functions for users, such as retrieving videos, displaying search results, and playing back videos. As the next phase, we considered adding functions for metadata registrars. In this paper, the background, the sample dataset, and an authentication protocol are described in the earlier chapters. Then, the functions for metadata registration and authentication, including the methods and interfaces, are introduced. Finally, conclusions and further work are presented.
Tongjin Lee, Jun Iio
### A Location-Based Web Browser Network for Virtual Worlds
This paper proposes a way to construct networks among Web browsers. Building a web-based virtual world requires a lot of computing resources, which means that we need many Web servers. However, if we increase the number of Web servers, the financial and maintenance costs also increase. In this study, we try to use the computing resources of Web browsers instead. In our previous study, we proposed a way to share data among Web browsers; that approach suffers from longer data transfer times as the number of users increases. Therefore, we construct small Web browser networks based on the location of each player in the virtual world.
Masaki Kohana, Shusuke Okamoto
### Overview of Digital Forensic Tools for DataBase Analysis
The number of digital devices that people use in everyday life has increased significantly. Since they have become an integral part of everyday life, they contain information that is often extremely sensitive. Modern devices use complex data structures to store data (heterogeneous media files, documents, GPS positions, SQLite databases, etc.); therefore, during a forensic investigation, the adoption of specialized acquisition and analysis tools has become necessary.
Flora Amato, Giovanni Cozzolino, Marco Giacalone, Antonino Mazzeo, Francesco Moscato, Francesco Romeo
### Remarks of Social Data Mining Applications in the Internet of Data
Social network analysis has attracted interest from both the research and business communities thanks to its strong potential and variety of applications. This interest has been further fuelled by the large success of online social networking sites and the resulting abundance of social network data. A key aspect of this research field is influence maximization in social networks. In this paper we give an overview of the models and approaches widely used to analyse social networks. In this context, we also discuss data preparation and privacy concerns, considering different kinds of approaches based on centrality measures.
Salvatore Cuomo, Francesco Maiorano, Francesco Piccialli
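For instance, the simplest centrality measure, degree centrality, can be computed from an adjacency list in a few lines (a generic sketch, not tied to any dataset or method from the paper):

// Normalized degree centrality: a node's degree divided by (n - 1),
// the maximum possible number of neighbors.
const degreeCentrality = (adjacency) => {
  const n = Object.keys(adjacency).length;
  const centrality = {};
  for (const [node, neighbors] of Object.entries(adjacency)) {
    centrality[node] = neighbors.length / (n - 1);
  }
  return centrality;
};

// Example: a star network centered on "a".
console.log(degreeCentrality({ a: ['b', 'c', 'd'], b: ['a'], c: ['a'], d: ['a'] }));
// { a: 1, b: 0.333..., c: 0.333..., d: 0.333... }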
### An Approach for Securing Cloud-Based Wide Area Monitoring of Smart Grid Systems
Computing power and flexibility provided by cloud technologies represent an opportunity for Smart Grid applications, in general, and for Wide Area Monitoring Systems, in particular. Even though the cloud model is considered efficient for Smart Grids, it has stringent constraints in terms of security and reliability. An attack to the integrity or confidentiality of data may have a devastating impact for the system itself and for the surrounding environment. The main security risk is represented by malicious insiders, i.e., malevolent employees having privileged access to the hosting machines. In this paper, we evaluate a powerful hardening approach that could be leveraged to protect synchrophasor data processed at cloud level. In particular, we propose the use of homomorphic encryption to address risks related to malicious insiders. Our goal is to estimate the feasibility of such a security solution by verifying the compliance with frame rate requirements typical of synchrophasor standards.
Luigi Coppolino, Salvatore D’Antonio, Giovanni Mazzeo, Luigi Romano, Luigi Sgaglione
### Cooperative Localization Logic Schema in Vehicular Ad Hoc Networks
Localization of nodes in wireless sensor networks (WSNs) is typically obtained by exploiting specific systems such as the Global Positioning System (GPS). In this work we consider a GPS-free scenario and provide a way to precisely estimate node positions by exploiting cooperation mechanisms. Many algorithms have been proposed for the environment-mapping problem from a single-vehicle perspective, but they do not fully exploit the potential of a network of nodes, where collaboration can significantly speed up the mapping procedure. In this work, we analyze the Vehicular Ad Hoc Network (VANET) scenario and propose a strategy for vehicle pose estimation that exploits a decentralized clustering technique and some fixed nodes in the environment, called anchors, whose positions are already known. The power of this approach lies in the cooperative nature of vehicle localization, expressed by means of Prolog facts and rules that spell out the inference procedure leading to the pose estimation.
Walter Balzano, Silvia Stranieri
### FiDGP: A Smart Fingerprinting Radiomap Refinement Method Based on Distance-Geometry Problem
Localization-based services have received a lot of attention in recent years, due to the widespread availability of mobile smart devices such as smartphones. While in outdoor environments it is possible to use Global Navigation Satellite Systems (GNSS) to obtain an accurate user position, in indoor environments, where sky visibility is an issue, good methodologies are still under research. In this context, WiFi-fingerprinting-based localization systems are quite interesting, as they offer good positional accuracy using available network signals stored in a database named the RadioMap, without the need for a secondary localization-only infrastructure. In this paper we present FiDGP, a smart fingerprinting RadioMap refinement method based on the distance-geometry problem (DGP), which exploits WiFi fingerprinting and a DGP-based algorithm in order to provide superior positioning while also keeping the RadioMap updated over time.
Walter Balzano, Fabio Vitale
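As a rough sketch of the fingerprinting half of such a system (the RadioMap format and names here are illustrative assumptions, not FiDGP's actual data structures), localization can be as simple as a nearest-neighbor search over stored RSSI vectors:

// Each RadioMap entry pairs a known position with the RSSI values
// observed there; localization returns the entry whose RSSI vector
// is closest (Euclidean distance) to the live measurement.
const rssiDistance = (a, b) =>
  Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));

const locate = (radioMap, measuredRssi) =>
  radioMap.reduce((best, entry) =>
    rssiDistance(entry.rssi, measuredRssi) <
    rssiDistance(best.rssi, measuredRssi) ? entry : best
  ).position;

// Example with three reference points and four access points.
const radioMap = [
  { position: { x: 0, y: 0 }, rssi: [-40, -70, -80, -90] },
  { position: { x: 5, y: 0 }, rssi: [-70, -40, -75, -85] },
  { position: { x: 0, y: 5 }, rssi: [-75, -80, -45, -70] },
];
console.log(locate(radioMap, [-42, -68, -79, -88])); // { x: 0, y: 0 }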
### Backmatter
https://math.stackexchange.com/questions/2047462/probability-of-coin-flip
# Probability of coin flip
I was wondering about the probability of coin flips. How many times would I need to toss a coin to have at least a 75% chance of getting a head?
I know the answer is 2 throws, since $1-\left(\frac{1}{2}\right)^2 = 0.75$, but how would you work it out without trial and error?
• One flip gets you to .5; another flip adds half of that, which makes .75, so 2. – suomynonA Dec 7 '16 at 3:01
• Log(1-3/4)/log(1/2)? – Kitter Catter Dec 7 '16 at 3:06
Having at least a $75\%$ chance of getting a head is equivalent to having a $(100-75)\%=25\%$ or lower chance of getting no heads.
Getting no heads means flipping all tails. The probability of flipping $k$ consecutive tails is $\frac{1}{2^k}$. We want this probability to be less than or equal to $25\%$ (or $\frac{1}{4}$).
$k=2$ is the smallest solution, so you would need to toss the coin twice.
If you want at least an $a\%$ chance, you need $$k \ge \log_{1/2}\left(1-\frac{a}{100}\right)$$ flips; then round up to the smallest integer satisfying this bound.
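A one-line function makes the rule concrete (a sketch; minFlips is my name, not from the thread):

// Minimum number of coin flips for at least an a% chance of one head.
const minFlips = (a) => Math.ceil(Math.log(1 - a / 100) / Math.log(0.5));

console.log(minFlips(75)); // 2
console.log(minFlips(99)); // 7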
https://www.secretprojects.co.uk/threads/astronomy-and-planetary-science-thread.36194/
# Astronomy and Planetary Science Thread
#### Flyaway
The search for radio emission from the exoplanetary systems 55 Cancri, υ Andromedae, and τ Boötis using LOFAR beam-formed observations
Observing planetary auroral radio emission is the most promising method to detect exoplanetary magnetic fields, the knowledge of which will provide valuable insights into the planet's interior structure, atmospheric escape, and habitability. We present LOFAR-LBA circularly polarized beamformed observations of the exoplanetary systems 55 Cancri, υ Andromedae, and τ Boötis. We tentatively detect circularly polarized bursty emission from the τ Boötis system in the range 14-21 MHz with a flux density of ∼890 mJy and with a significance of ∼3σ. For this detection, no signal is seen in the OFF-beams, and we do not find any potential causes which might cause false positives. We also tentatively detect slowly variable circularly polarized emission from τ Boötis in the range 21-30 MHz with a flux density of ∼400 mJy and with a statistical significance of >8σ. The slow emission is structured in the time-frequency plane and shows an excess in the ON-beam with respect to the two simultaneous OFF-beams. Close examination casts some doubts on the reality of the slowly varying signal. We discuss in detail all the arguments for and against an actual detection. Furthermore, a ∼2σ marginal signal is found from the υ Andromedae system and no signal is detected from the 55 Cancri system. Assuming the detected signals are real, we discuss their potential origin. Their source probably is the τ Bootis planetary system, and a possible explanation is radio emission from the exoplanet τ Bootis b via the cyclotron maser mechanism. Assuming a planetary origin, we derived limits for the planetary polar surface magnetic field strength, finding values compatible with theoretical predictions. Further low-frequency observations are required to confirm this possible first detection of an exoplanetary radio signal. [Abridged]
#### Flyaway
Exploring Primordial Black Holes from the Multiverse with Optical Telescopes
Primordial black holes (PBHs) are a viable candidate for dark matter if the PBH masses are in the currently unconstrained "sublunar" mass range. We revisit the possibility that PBHs were produced by nucleation of false vacuum bubbles during inflation. We show that this scenario can produce a population of PBHs that simultaneously accounts for all dark matter, explains the candidate event in Subaru Hyper Suprime-Cam (HSC) data, and contains both heavy black holes as observed by LIGO and very heavy seeds of supermassive black holes. We demonstrate with numerical studies that future observations of HSC, as well as other optical surveys, such as LSST, will be able to provide a definitive test for this generic PBH formation mechanism if it is the dominant source of dark matter.
Why was the HSC indispensable in this research? The HSC has a unique capability to image the entire Andromeda galaxy every few minutes. If a black hole passes through the line of sight to one of the stars, the black hole’s gravity bends the light rays and makes the star appear brighter than before for a short period of time. The duration of the star’s brightening tells the astronomers the mass of the black hole. With HSC observations, one can simultaneously observe one hundred million stars, casting a wide net for primordial black holes that may be crossing one of the lines of sight.
The first HSC observations have already reported a very intriguing candidate event consistent with a PBH from the “multiverse,” with a black hole mass comparable to the mass of the Moon. Encouraged by this first sign, and guided by the new theoretical understanding, the team is conducting a new round of observations to extend the search and to provide a definitive test of whether PBHs from the multiverse scenario can account for all dark matter.
#### Flyaway
Looks like they accidentally proved one aspect of the MOND theory.
Testing the Strong Equivalence Principle: Detection of the External Field Effect in Rotationally Supported Galaxies
Abstract
The strong equivalence principle (SEP) distinguishes general relativity (GR) from other viable theories of gravity. The SEP demands that the internal dynamics of a self-gravitating system under freefall in an external gravitational field should not depend on the external field strength. We test the SEP by investigating the external field effect (EFE) in Milgromian dynamics (MOND), proposed as an alternative to dark matter in interpreting galactic kinematics. We report a detection of this EFE using galaxies from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. Our detection is threefold: (1) the EFE is individually detected at 8σ to 11σ in "golden" galaxies subjected to exceptionally strong external fields, while it is not detected in exceptionally isolated galaxies, (2) the EFE is statistically detected at more than 4σ from a blind test of 153 SPARC rotating galaxies, giving a mean value of the external field consistent with an independent estimate from the galaxies' environments, and (3) we detect a systematic downward trend in the weak gravity part of the radial acceleration relation at the right acceleration predicted by the EFE of the MOND modified gravity. Tidal effects from neighboring galaxies in the Λ cold dark matter (CDM) context are not strong enough to explain these phenomena. They are not predicted by existing ΛCDM models of galaxy formation and evolution, adding a new small-scale challenge to the ΛCDM paradigm. Our results point to a breakdown of the SEP, supporting modified gravity theories beyond GR.
#### Nik
FWIW, I vaguely remember a suggestion on PhysOrg that, given Jupiter is a powerful source of auroral etc. RF, hypothetical planets X, XI, even XII might be sought thus ...
Notion derided. But, clearly, a seed was planted. 'Nearby' exoplanets were duly scanned for RF...
#### Flyaway
I see Harvard professor Avi Loeb has been out and about promoting his book about Oumuamua. He appears to be convinced it was ‘alien’ junk.
#### Flyaway
591 High-velocity Stars in the Galactic Halo Selected from LAMOST DR7 and Gaia DR2
Abstract
In this paper, we report 591 high-velocity star candidates (HiVelSCs) selected from over 10 million spectra of Data Release 7 (DR7) of the Large Sky Area Multi-object Fiber Spectroscopic Telescope and the second Gaia data release, with three-dimensional velocities in the Galactic rest frame larger than 445 km s−1. We show that at least 43 HiVelSCs are unbound to the Galaxy with escape probabilities larger than 50%, and this number decreases to eight if the possible parallax zero-point error is corrected. Most of these HiVelSCs are metal-poor and slightly α-enhanced inner halo stars. Only 14% of them have [Fe/H] > −1, which may be the metal-rich "in situ" stars in the halo formed in the initial collapse of the Milky Way or metal-rich stars formed in the disk or bulge but kinematically heated. The low ratio of 14% implies that the bulk of the stellar halo was formed from the accretion and tidal disruption of satellite galaxies. In addition, HiVelSCs on retrograde orbits have slightly lower metallicities on average compared with those on prograde orbits; meanwhile, metal-poor HiVelSCs with [Fe/H] < −1 have an even faster mean retrograde velocity compared with metal-rich HiVelSCs. To investigate the origins of HiVelSCs, we perform orbit integrations and divide them into four types, i.e., hypervelocity stars, hyper-runaway stars, runaway stars and fast halo stars. A catalog for these 591 HiVelSCs, including radial velocities, atmospheric parameters, Gaia astrometric parameters, spatial positions, and velocities, etc., is available in the China-VO PaperData Repository at doi:10.12149/101038.
Source: https://www.universetoday.com/14946...s-many-on-their-way-out-of-the-milky-way/amp/
#### Flyaway
Cosmic Distances Calibrated to 1% Precision with Gaia EDR3 Parallaxes and Hubble Space Telescope Photometry of 75 Milky Way Cepheids Confirm Tension with LambdaCDM
We present an expanded sample of 75 Milky Way Cepheids with Hubble Space Telescope (HST) photometry and Gaia EDR3 parallaxes which we use to recalibrate the extragalactic distance ladder and refine the determination of the Hubble constant. All HST observations were obtained with the same instrument (WFC3) and filters (F555W, F814W, F160W) used for imaging of extragalactic Cepheids in Type Ia supernova (SN Ia) hosts. The HST observations used the WFC3 spatial scanning mode to mitigate saturation and reduce pixel-to-pixel calibration errors, reaching a mean photometric error of 5 millimags per observation. We use new Gaia EDR3 parallaxes, vastly improved since DR2, and the Period-Luminosity (PL) relation of these Cepheids to simultaneously calibrate the extragalactic distance ladder and to refine the determination of the Gaia EDR3 parallax offset. The resulting geometric calibration of Cepheid luminosities has 1.0% precision, better than any alternative geometric anchor. Applied to the calibration of SNe Ia, it results in a measurement of the Hubble constant of 73.0 +/- 1.4 km/sec/Mpc, in good agreement with conclusions based on earlier Gaia data releases. We also find the slope of the Cepheid PL relation in the Milky Way, and the metallicity dependence of its zeropoint, to be in good agreement with the mean values derived from other galaxies. In combination with the best complementary sources of Cepheid calibration, we reach 1.8% precision and find H_0=73.2 +/- 1.3 km/sec/Mpc, a 4.2 sigma difference with the prediction from Planck CMB observations under LambdaCDM. We expect to reach ~1.3% precision in the near term from an expanded sample of ~40 SNe Ia in Cepheid hosts.
Source: https://www.universetoday.com/14948...t-doesnt-resolve-the-crisis-in-cosmology/amp/
#### Raberto
I see Harvard professor Avi Loeb has been out and about promoting his book about Oumuamua. He appears to be convinced it was ‘alien’ junk.
New Scientist 'Big Thinkers' online lecture by Professor Loeb on Feb 11th if anyone else is interested? I'm attending =)
#### Flyaway
The future large obliquity of Jupiter
Aims: We aim to determine whether Jupiter's obliquity is bound to remain exceptionally small in the Solar System, or if it could grow in the future and reach values comparable to those of the other giant planets.
Methods: The spin axis of Jupiter is subject to the gravitational torques from its regular satellites and from the Sun. These torques evolve over time due to the long-term variations of its orbit and to the migration of its satellites. With numerical simulations, we explore the future evolution of Jupiter's spin axis for different values of its moment of inertia and for different migration rates of its satellites. Analytical formulas show the location and properties of all relevant resonances.
Results: Because of the migration of the Galilean satellites, Jupiter's obliquity is currently increasing, as it adiabatically follows the drift of a secular spin-orbit resonance with the nodal precession mode of Uranus. Using the current estimates of the migration rate of the satellites, the obliquity of Jupiter can reach values ranging from 6° to 37° after 5 Gyrs from now, according to the precise value of its polar moment of inertia. A faster migration for the satellites would produce a larger increase in obliquity, as long as the drift remains adiabatic.
Conclusions: Despite its peculiarly small current value, the obliquity of Jupiter is no different from other obliquities in the Solar System: It is equally sensitive to secular spin-orbit resonances and it will probably reach comparable values in the future.
#### rooster
I see Harvard professor Avi Loeb has been out and about promoting his book about Oumuamua. He appears to be convinced it was ‘alien’ junk.
I have read the book, but I've also seen him on TV. He doesn't think it's junk but a lightsail type of probe.
I think it's interesting that this is the first observed object from outside our solar system, and yet it was like no one was interested in it, especially given its fairly unique 1:10 proportions.
#### Flyaway
GEO600 reaches 6 dB of squeezing
Gravitational waves cause tiny length changes in the kilometer-size detectors of the international network (GEO600, KAGRA, LIGO, Virgo). The instruments use laser light to detect these effects and are so sensitive that they are fundamentally limited by quantum mechanics. This limit manifests as an ever-present background noise which can never be fully removed and which overlaps with gravitational-wave signals. But one can change the noise properties – using a process called squeezing – such that it does not disturb the measurements as much. Now, GEO600 researchers have achieved the strongest squeezing ever seen in a gravitational-wave detector. They lowered the quantum mechanical noise by up to a factor of two. This is a big step to third-generation detectors such as the Einstein Telescope and Cosmic Explorer. The GEO600 team is confident to reach even better squeezing in the future.
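For reference, the decibel figure and the "factor of two" statement are consistent (standard decibel arithmetic, not a claim from the announcement): a 6 dB noise reduction corresponds to
$$10^{6/10} \approx 4 \text{ in noise power}, \qquad 10^{6/20} \approx 2 \text{ in noise amplitude}.$$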
#### Flyaway
Evidence for chromium hydride in the atmosphere of hot Jupiter WASP-31b
Context. The characterisation of exoplanet atmospheres has shown a wide diversity of compositions. Hot Jupiters have the appropriate temperatures to host metallic compounds, which should be detectable through transmission spectroscopy.
Aims. We aim to detect exotic species in the transmission spectra of hot Jupiters, specifically WASP-31b, by testing a variety of chemical species to explain the spectrum.
Methods. We conduct a re-analysis of publicly available transmission data of WASP-31b using the Bayesian retrieval framework TAUREX II. We retrieve various combinations of the opacities of 25 atomic and molecular species to determine the minimum set that is needed to fit the observed spectrum.
Results. We report evidence for the spectroscopic signatures of chromium hydride (CrH), H2O, and K in WASP-31b. Compared to a flat model without any signatures, a CrH-only model is preferred with a statistical significance of ~3.9σ. A model consisting of both CrH and H2O is found with ~2.6 and ~3σ confidence over a CrH-only model and an H2O-only model, respectively. Furthermore, weak evidence for the addition of K is found at ~2.2σ over the H2O+CrH model, although the fidelity of the data point associated with this signature was questioned in earlier studies. Finally, the inclusion of collision-induced absorption and a Rayleigh scattering slope (indicating the presence of aerosols) is found with ~3.5σ confidence over the flat model. This analysis presents the first evidence for signatures of CrH in a hot Jupiter atmosphere. At a retrieved temperature of $1481^{+264}_{-355}$ K, the atmosphere of WASP-31b is hot enough to host gaseous Cr-bearing species, and the retrieved abundances agree well with predictions from thermal equilibrium chemistry. Furthermore, the retrieved abundance of CrH agrees with the abundance in an L-type brown dwarf atmosphere. However, additional retrievals using VLT FORS2 data lead to a non-detection of CrH. Future observations with James Webb Space Telescope have the potential to confirm the detection and/or discover other CrH features.
#### Flyaway
### Size and structures of disks around very low mass stars in the Taurus star-forming region
Context. The discovery of giant planets orbiting very low mass stars (VLMS) and the recent observed substructures in disks around VLMS is challenging planet formation models. Specifically, radial drift of dust particles is a catastrophic barrier in these disks, which prevents the formation of planetesimals and therefore planets.
Aims. We aim to estimate if structures, such as cavities, rings, and gaps, are common in disks around VLMS and to test models of structure formation in these disks. We also aim to compare the radial extent of the gas and dust emission in disks around VLMS, which can give us insight about radial drift.
Methods. We studied six disks around VLMS in the Taurus star-forming region using ALMA Band 7 (~340 GHz) at a resolution of ~0.1″. The targets were selected because of their high disk dust content in their stellar mass regime.
Results. Our observations resolve the disk dust continuum in all disks. In addition, we detect the 12CO (J = 3−2) emission line in all targets and 13CO (J = 3−2) in five of the six sources. The angular resolution allows the detection of dust substructures in three out of the six disks, which we studied by using UV-modeling. Central cavities are observed in the disks around stars MHO 6 (M 5.0) and CIDA 1 (M 4.5), while we have a tentative detection of a multi-ringed disk around J0433. We estimate that a planet mass of ~0.1 MJup or ~0.4 MSaturn is required for a single planet to create the first gap in J0433. For the cavities of MHO 6 and CIDA 1, a Saturn-mass planet (~0.3 MJup) is required. The other three disks with no observed structures are the most compact and faintest in our sample, with the radius enclosing 90% of the continuum emission varying between ~13 and 21 au. The emission of 12CO and 13CO is more extended than the dust continuum emission in all disks of our sample. When using the 12CO emission to determine the gas disk extension Rgas, the ratio of Rgas∕Rdust in our sample varies from 2.3 to 6.0. One of the disks in our sample, CIDA 7, has the largest Rgas∕Rdust ratio observed so far, which is consistent with models of radial drift being very efficient around VLMS in the absence of substructures.
Conclusions. Given our limited angular resolution, substructures were only directly detected in the most extended disks, which represent 50% of our sample, and there are hints of unresolved structured emission in one of the bright smooth sources. Our observations do not exclude giant planet formation on the substructures observed. A comparison of the size and luminosity of VLMS disks with their counterparts around higher mass stars shows that they follow a similar relation.
#### Flyaway
Six transiting planets and a chain of Laplace resonances in TOI-178
Determining the architecture of multi-planetary systems is one of the cornerstones of understanding planet formation and evolution. Resonant systems are especially important as the fragility of their orbital configuration ensures that no significant scattering or collisional event has taken place since the earliest formation phase when the parent protoplanetary disc was still present. In this context, TOI-178 has been the subject of particular attention since the first TESS observations hinted at a 2:3:3 resonant chain. Here we report the results of observations from CHEOPS, ESPRESSO, NGTS, and SPECULOOS with the aim of deciphering the peculiar orbital architecture of the system. We show that TOI-178 harbours at least six planets in the super-Earth to mini-Neptune regimes, with radii ranging from 1.152(-0.070/+0.073) to 2.87(-0.13/+0.14) Earth radii and periods of 1.91, 3.24, 6.56, 9.96, 15.23, and 20.71 days. All planets but the innermost one form a 2:4:6:9:12 chain of Laplace resonances, and the planetary densities show important variations from planet to planet, jumping from 1.02(+0.28/-0.23) to 0.177(+0.055/-0.061) times the Earth's density between planets c and d. Using Bayesian interior structure retrieval models, we show that the amount of gas in the planets does not vary in a monotonous way, contrary to what one would expect from simple formation and evolution models and unlike other known systems in a chain of Laplace resonances. The brightness of TOI-178 allows for a precise characterisation of its orbital architecture as well as of the physical nature of the six presently known transiting planets it harbours. The peculiar orbital configuration and the diversity in average density among the planets in the system will enable the study of interior planetary structures and atmospheric evolution, providing important clues on the formation of super-Earths and mini-Neptunes.
Artist animation of the system:
View: https://youtu.be/-WevvRG9ysY
#### Flyaway
Refining the Transit-timing and Photometric Analysis of TRAPPIST-1: Masses, Radii, Densities, Dynamics, and Ephemerides
Abstract
We have collected transit times for the TRAPPIST-1 system with the Spitzer Space Telescope over four years. We add to these ground-based, HST, and K2 transit-time measurements, and revisit an N-body dynamical analysis of the seven-planet system using our complete set of times from which we refine the mass ratios of the planets to the star. We next carry out a photodynamical analysis of the Spitzer light curves to derive the density of the host star and the planet densities. We find that all seven planets' densities may be described with a single rocky mass–radius relation which is depleted in iron relative to Earth, with Fe 21 wt% versus 32 wt% for Earth, and otherwise Earth-like in composition. Alternatively, the planets may have an Earth-like composition but enhanced in light elements, such as a surface water layer or a core-free structure with oxidized iron in the mantle. We measure planet masses to a precision of 3%–5%, equivalent to a radial-velocity (RV) precision of 2.5 cm s−1, or two orders of magnitude more precise than current RV capabilities. We find the eccentricities of the planets are very small, the orbits are extremely coplanar, and the system is stable on 10 Myr timescales. We find evidence of infrequent timing outliers, which we cannot explain with an eighth planet; we instead account for the outliers using a robust likelihood function. We forecast JWST timing observations and speculate on possible implications of the planet densities for the formation, migration, and evolution of the planet system.
#### robunos
New analysis finds the supergiant star Betelgeuse is dimming and has entered its helium-burning phase
#### Flyaway
Related paper to the above.
Standing on the Shoulders of Giants: New Mass and Distance Estimates for Betelgeuse through Combined Evolutionary, Asteroseismic, and Hydrodynamic Simulations with MESA
Abstract
We conduct a rigorous examination of the nearby red supergiant Betelgeuse by drawing on the synthesis of new observational data and three different modeling techniques. Our observational results include the release of new, processed photometric measurements collected with the space-based Solar Mass Ejection Imager instrument prior to Betelgeuse's recent, unprecedented dimming event. We detect the first radial overtone in the photometric data and report a period of 185 ± 13.5 days. Our theoretical predictions include self-consistent results from multi-timescale evolutionary, oscillatory, and hydrodynamic simulations conducted with the Modules for Experiments in Stellar Astrophysics software suite. Significant outcomes of our modeling efforts include a precise prediction for the star's radius: ${764}_{-62}^{+116}\,{R}_{\odot }$. In concert with additional constraints, this allows us to derive a new, independent distance estimate of ${168}_{-15}^{+27}$ pc and a parallax of $\pi ={5.95}_{-0.85}^{+0.58}$ mas, in good agreement with Hipparcos but less so with recent radio measurements. Seismic results from both perturbed hydrostatic and evolving hydrodynamic simulations constrain the period and driving mechanisms of Betelgeuse's dominant periodicities in new ways. Our analyses converge to the conclusion that Betelgeuse's ≈400 day period is the result of pulsation in the fundamental mode, driven by the κ-mechanism. Grid-based hydrodynamic modeling reveals that the behavior of the oscillating envelope is mass-dependent, and likewise suggests that the nonlinear pulsation excitation time could serve as a mass constraint. Our results place α Orionis definitively in the early core helium-burning phase of the red supergiant branch. We report a present-day mass of 16.5–19 M ⊙—slightly lower than typical literature values.
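As a quick consistency check of the quoted values (the standard parallax-distance relation, not a result from the paper):
$$\pi\,[\text{mas}] \approx \frac{1000}{d\,[\text{pc}]} = \frac{1000}{168} \approx 5.95\ \text{mas},$$
in agreement with the abstract's central estimates.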
#### Flyaway
Now it's the turn of Alpha Centauri A to host a possible exoplanet, following in the footsteps of Proxima Centauri; in this case it's a warm Neptune in its habitable zone. As ever, more data is needed, as an instrumental artefact cannot be ruled out at this time.
Imaging low-mass planets within the habitable zone of α Centauri
Giant exoplanets on wide orbits have been directly imaged around young stars. If the thermal background in the mid-infrared can be mitigated, then exoplanets with lower masses can also be imaged. Here we present a ground-based mid-infrared observing approach that enables imaging low-mass temperate exoplanets around nearby stars, and in particular within the closest stellar system, α Centauri. Based on 75–80% of the best quality images from 100 h of cumulative observations, we demonstrate sensitivity to warm sub-Neptune-sized planets throughout much of the habitable zone of α Centauri A. This is an order of magnitude more sensitive than state-of-the-art exoplanet imaging mass detection limits. We also discuss a possible exoplanet or exozodiacal disk detection around α Centauri A. However, an instrumental artifact of unknown origin cannot be ruled out. These results demonstrate the feasibility of imaging rocky habitable-zone exoplanets with current and upcoming telescopes.
#### FighterJock
I would like Alpha Centauri A to have a planet orbiting it just like Proxima Centauri, though it is a pity it is not an Earth-like planet. The astronomers will need to be careful with the data that they have, and go over it with a fine-tooth comb before releasing it in full.
http://whatdidilearn.info/2018/09/02/how-to-use-session-attributes-in-alexa-skills.html
Last time we learned how to capture input from the user. Using captured inputs, we are able to handle communication with users in our skills.
But a skill does not always have a linear conversation. We may want to ask a question and behave differently based on the answer we’ve got. To do that, we need to keep some information in the “memory” of an Alexa skill.
To achieve that, the Alexa Skill SDK provides us the ability to use Session Attributes.
Let’s see how we can use that.
To demonstrate that with an example, let's implement a "Toy Robot Simulator" as an Alexa skill.
## Toy Robot Simulator
If you are not familiar with the Toy Robot problem, here it is.
We have a tabletop of 5 by 5 squares. We can place a toy robot on the table. The robot then has coordinates and a facing direction. The facing direction can be "North", "East", "South", or "West". Once the robot is on the table, we are able to move it using the "Move" command; it moves one step in the facing direction. We can also rotate it "left" and "right" to change the facing direction. We can use the "Report" command to ask for the robot's current position. Also, the robot should not be able to fall from the table; to meet that requirement, we should not move the robot beyond the edge.
## Skill implementation
First, I will create a boilerplate skill to work on, using the ASK CLI in the same way as described in the Use ASK CLI to create and deploy Alexa Skills article. If you are not familiar with that, you may want to follow those instructions.
At the end of this step, we will have a skill with a welcome message and the ability to place the robot. Once we place it in the initial position, Alexa tells us about that and terminates the skill.
The PlaceIntent is responsible for handling the start of the simulation. Here is what the implementation looks like:
const PlaceIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'PlaceIntent';
},
handle(handlerInput) {
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
const { position } = sessionAttributes;
const speechText = 'The robot is in the initial position.';
return handlerInput.responseBuilder
.speak(speechText)
.withSimpleCard(CARD_TITLE, speechText)
.getResponse();
},
};
That would be enough to start from here.
### Placing the robot
Now, once we place the robot, we need to actually put it somewhere. That means we need to initialize the default position and facing direction.
Also, if the robot is already in position, we want to notify the user about that.
To keep the position of the robot we can use the following structure:
position = {
'direction': 'north',
'x': 0,
'y': 0
};
If we define the position of the robot as a usual constant, that value would be lost between intents. Thus, we need to store it properly in the session attributes.
How do we do that?
The handlerInput object contains an attributesManager, which in turn gives us the getSessionAttributes and setSessionAttributes functions.
• getSessionAttributes - as you may understand already, provides us the list of existing session attributes
• setSessionAttributes - allows us to update them.
Now, let's extend our PlaceIntentHandler to retrieve the robot's position, notify the user if the robot is already in position, or otherwise set the robot to a default position. As the last step, we need to update the session attributes.
const PlaceIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'PlaceIntent';
},
handle(handlerInput) {
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
const { position } = sessionAttributes;
let speechText = 'The robot is already in the position.';
if (typeof position === 'undefined') {
speechText = 'The robot is in the initial position.';
sessionAttributes.position = {
'direction': 'north',
'x': 0,
'y': 0
};
}
handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
return handlerInput.responseBuilder
.speak(speechText)
.reprompt(speechText)
.withSimpleCard(CARD_TITLE, speechText)
.getResponse();
},
};
Now we have the robot in position, and the position is stored as a session attribute.
Let’s move on and implement ReportIntent to get the current position of the robot.
### Get reports
First, we need to create an intent. We can update the models/en-US.json file to include:
{
"name": "ReportIntent",
"slots": [],
"samples": [
"Report",
"Where is the robot",
"Where am I"
]
}
Then we need to add ReportIntentHandler into addRequestHandlers:
addRequestHandlers(
// ...
ReportIntentHandler,
// ...
)
Then we need to define ReportIntentHandler itself:
const ReportIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'ReportIntent';
},
handle(handlerInput) {
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
const { position } = sessionAttributes;
let speechText = '';
if (typeof position === 'undefined') {
speechText = 'The robot is not in the position yet. You need to place it first.';
} else {
const { direction, x, y } = position;
speechText = `The robot is in position ${x} ${y}, facing ${direction}.`;
}
return handlerInput.responseBuilder
.speak(speechText)
.reprompt(speechText)
.withSimpleCard(CARD_TITLE, speechText)
.getResponse();
},
};
As you can see, this time we are only using getSessionAttributes to read the current position. We don’t need to update it.
Once we read the position and it exists, we tell the user the current coordinates of the robot. If the position is blank, we assume the robot has not been placed yet.
Here is an example of a user's conversation:
### Rotate the robot
As usual, let’s start by defining the intent.
{
"name": "TurnRightIntent",
"slots": [],
"samples": [
"turn right",
"right",
"rotate right"
]
}
Then we need to register a request handler:
.addRequestHandlers(
// ...
TurnRightIntentHandler,
// ...
)
And implement the intent itself:
const TurnRightIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'TurnRightIntent';
},
handle(handlerInput) {
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
const { position } = sessionAttributes;
const rotateDirections = {
north: 'east',
east: 'south',
south: 'west',
west: 'north'
};
let speechText = '';
if (typeof position === 'undefined') {
speechText = 'The robot is not in the position yet. You need to place it first.';
} else {
position.direction = rotateDirections[position.direction];
speechText = 'Beep-Boop.';
}
handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
return handlerInput.responseBuilder
.speak(speechText)
.reprompt(speechText)
.withSimpleCard(CARD_TITLE, speechText)
.getResponse();
},
};
First, we read the current position and, in the same way as in ReportIntentHandler, check whether the robot has been placed at all. Then we define the rotation map: turning right changes north to east, east to south, and so on. Then we actually rotate the robot and update the session attributes, because we want them to persist during the session.
Rotating the robot to the left is almost identical, so I will not repeat the full walkthrough. The only difference is the rotation direction: from the north we rotate to the west, from the west to the south, and so on. A minimal sketch follows.
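Here is a minimal sketch of that handler (my reconstruction following the pattern of TurnRightIntentHandler above; the original article omits this listing):

const TurnLeftIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'TurnLeftIntent';
  },
  handle(handlerInput) {
    const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
    const { position } = sessionAttributes;
    // Turning left walks the compass counter-clockwise.
    const rotateDirections = {
      north: 'west',
      west: 'south',
      south: 'east',
      east: 'north'
    };
    let speechText = 'The robot is not in the position yet. You need to place it first.';
    if (typeof position !== 'undefined') {
      position.direction = rotateDirections[position.direction];
      speechText = 'Beep-Boop.';
    }
    handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
    return handlerInput.responseBuilder
      .speak(speechText)
      .reprompt(speechText)
      .withSimpleCard(CARD_TITLE, speechText)
      .getResponse();
  },
};

The corresponding TurnLeftIntent definition and handler registration mirror the right-turn versions.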
Here is an example again.
Now let's move on and implement the last step: the movement.
### Movement
The intent goes first.
{
"name": "MoveIntent",
"slots": [],
"samples": [
"move",
"go forward"
]
}
Then the handler registration.
.addRequestHandlers(
// ...
MoveIntentHandler,
// ...
)
Then the implementation itself.
const MoveIntentHandler = {
canHandle(handlerInput) {
return handlerInput.requestEnvelope.request.type === 'IntentRequest'
&& handlerInput.requestEnvelope.request.intent.name === 'MoveIntent';
},
handle(handlerInput) {
const sessionAttributes = handlerInput.attributesManager.getSessionAttributes();
let { position } = sessionAttributes;
let speechText = '';
if (typeof position === 'undefined') {
speechText = 'The robot is not in the position yet. You need to place it first.';
} else {
position = calculateNewPosition(position);
speechText = 'Beep-Boop.';
}
handlerInput.attributesManager.setSessionAttributes(sessionAttributes);
return handlerInput.responseBuilder
.speak(speechText)
.reprompt(speechText)
.withSimpleCard(CARD_TITLE, speechText)
.getResponse();
},
};
As you can see, it's again very similar to the intents we have implemented before: we read the session attributes, change them, and update them again.
The difference here is the calculation of the new position:
position = calculateNewPosition(position);
Here is the implementation of that function:
const calculateNewPosition = (position) => {
  switch (position.direction) {
    case 'north':
      position.x = Math.min(position.x + 1, 4);
      break;
    case 'east':
      position.y = Math.min(position.y + 1, 4);
      break;
    case 'south':
      position.x = Math.max(position.x - 1, 0);
      break;
    case 'west':
      position.y = Math.max(position.y - 1, 0);
      break;
    default:
  }
  return position;
};
We move the robot depending on where it's facing. In addition, we prevent the robot from falling off the table; that is why the coordinates never go above 4 or below 0.
Here is an example after I've made a couple of moves.
## Wrapping up
That is it. You can see that using session attributes is quite a trivial task, even though I have tried to provide more examples and make the article more complete.
Now, with tools like capturing user input and sharing attributes between intents, you can build quite powerful skills.
You can find the complete source code on the GitHub page.
See you next time.
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-1-sections-1-2-1-3-integrated-review-algebraic-expressions-and-operations-on-whole-numbers-page-29/13
## Intermediate Algebra (6th Edition)
$-15-2x$
We can translate the phrase 'subtract twice a number from -15' into an algebraic expression. This phrase is equivalent to the expression $-15-(2\times x)=-15-2x$, for some number x. We can read it as 'twice x (or 2 times x) subtracted from -15.'
https://bitbucket.org/libsleipnir/sleipnir/src/ed214a93afc7/src/seekweight.h
|
# sleipnir / src / seekweight.h
```cpp
/*****************************************************************************
* This file is provided under the Creative Commons Attribution 3.0 license.
*
* You are free to share, copy, distribute, transmit, or adapt this work
* PROVIDED THAT you attribute the work to the authors listed below.
* For more information, please see the following web page:
* http://creativecommons.org/licenses/by/3.0/
*
* This file is a component of the Sleipnir library for functional genomics,
* authored by:
* Curtis Huttenhower (chuttenh@princeton.edu)
* Mark Schroeder
* Maria D. Chikina
* Olga G. Troyanskaya (ogt@princeton.edu, primary contact)
*
* If you use this library, the included executable tools, or any related
* code in your work, please cite the following publication:
* Curtis Huttenhower, Mark Schroeder, Maria D. Chikina, and
* Olga G. Troyanskaya.
* "The Sleipnir library for computational functional genomics"
*****************************************************************************/
#ifndef SEEKWEIGHT_H
#define SEEKWEIGHT_H

#include "seekbasic.h"
#include "seekreader.h"
#include "seekquery.h"
#include "seekevaluate.h"

namespace Sleipnir {

/*!
 * \brief
 * Provide functions to assign dataset weight using the query gene.
 *
 * For dataset weighting, one way is to use CSeekWeighter::CVWeighting. The
 * CSeekWeighter::CVWeighting uses a cross-validation (CV) framework, where it
 * partitions the query and performs a search instance on one sub-query, using
 * the remainder of the queries as the evaluation of the search instance.
 *
 * The CSeekWeighter::OrderStatisticsRankAggregation is a rank-based technique
 * described by Adler et al (2009). This combines dataset weighting and
 * dataset gene-ranking aggregation all into one step.
 */
class CSeekWeighter {
public:
    /*!
     * \brief
     * Calculates for each gene the average \a correlation to all of the query
     * genes in a dataset.
     *
     * \param rank
     * A vector that stores the \a correlation of each gene to all of the query genes
     *
     * \param cv_query
     * A vector that stores the query genes
     *
     * \param sDataset
     * A dataset
     *
     * \param MIN_REQUIRED
     * A utype that specifies how many query genes are required to be present
     * in a dataset. If not enough query genes are present, then the averaging
     * is not performed.
     *
     * \param bSquareZ
     * If true, square the \a correlation values before adding \a correlations.
     *
     * \remark
     * The word \a correlations refer to z-scored, standardized Pearson correlations.
     * The result is returned in the parameter \c rank.
     */
    /* cv_query must be present in sDataset */
    static bool LinearCombine(vector &rank, const vector &cv_query,
        CSeekDataset &sDataset, const utype &, const bool &);

    /*!
     * \brief
     * Cross-validates query-genes in a dataset
     *
     * \param sQuery
     * The query and its partitions
     *
     * \param sDataset
     * A dataset
     *
     * \param rate
     * RBP parameter \a p
     *
     * \param percent_required
     * Percentage of query genes required to be present in the dataset
     *
     * \param bSquareZ
     * Whether or not to square \a correlations
     *
     * \param rrank
     * Temporary vector storing intermediary \a correlations
     *
     * \param goldStd
     * If a gold-standard gene-set is provided, use this to evaluate the
     * retrieval of a cross-validation
     *
     * This performs multiple cross-validation runs to validate the query
     * genes in retrieving themselves in the dataset. The sum of the
     * evaluation of all the runs then becomes the dataset weight. For
     * evaluation, we use the following formula for scoring a validation run \f$i\f$:
     * \f[s(i)=\sum_{g \in U}{(1-p)p^{rank(g)}}\f]
     * where \f$U\f$ is the \f$N-1\f$ parts of the query used for evaluation,
     * \f$p\f$ is an exponential rate parameter, \f$rank(g)\f$ is the position
     * of \f$g\f$ in the ranking of genes generated by the subsearch instance \f$i\f$.
     *
     * The above formulation is inspired by rank-biased precision.
     * The parameter \a p needs to be provided. The default value is 0.99.
     */
    static bool CVWeighting(CSeekQuery &sQuery, CSeekDataset &sDataset,
        const float &rate, const float &percent_required, const bool &bsquareZ,
        vector *rrank, const CSeekQuery *goldStd = NULL);

    /*!
     * \brief
     * Performs OrderStatisticsAggregation, also known as the MEM algorithm
     *
     * \param iDatasets
     * The number of datasets
     *
     * \param iGenes
     * The number of genes
     *
     * \param rank_d
     * Two-dimensional vectors storing correlation-ranks to the query genes.
     * First dimension: datasets. Second dimension: genes.
     *
     * \param counts
     * A vector storing the count of datasets for each gene
     *
     * \param master_rank
     * A vector storing the integrated gene-score
     *
     * \param numThreads
     * The number of threads to be used (in a parallel setup)
     *
     * \c rank_d needs to be prepared as follows: a correlation rank vector is
     * obtained from sorting Pearson correlations in a dataset, and then it is
     * normalized by (rank of correlation) / (number of genes). The result is
     * stored in \c rank_d.
     *
     * Afterward, for each gene \a g, the algorithm compares this gene's
     * \c rank_d distribution across datasets with that derived from a set of
     * datasets with randomly ordered correlation vectors (ie a null
     * distribution). A significance p-value is calculated for this gene, and
     * \a -log(p) values are stored in master_rank.
     */
    static bool OrderStatisticsRankAggregation(const utype&, const utype&,
        utype**, const vector &, vector&, const utype&);

    static bool OrderStatisticsPreCompute();

    /*!
     * \brief
     * Simulates a dataset weight for one-gene query
     *
     * \param sQuery
     * The query
     *
     * \param sDataset
     * The dataset
     *
     * \param rate
     * RBP parameter \a p
     *
     * \param percent_required
     * Percentage of query genes required to be present in a dataset (assumed
     * to be 1 in this case)
     *
     * \param bSquareZ
     * Whether or not to square \a correlations
     *
     * \param rrank
     * Final gene-score
     *
     * \param goldStd
     * Gold-standard gene-set for weighting a dataset
     *
     * This function is mainly used for equal weighting. Although equal
     * weighting integrates all datasets with weight = 1, for the purpose of
     * displaying datasets, the datasets need to be ranked according to the
     * distance to the average gene-ranking.
     *
     * This average gene-ranking is produced by summing gene-rankings from all
     * datasets and divided by the number of datasets. To score a dataset, we
     * calculate the RBP precision of this dataset in retrieving the top 100
     * genes of the average ranking.
     */
    static bool OneGeneWeighting(CSeekQuery&, CSeekDataset&, const float&,
        const float&, const bool&, vector*, const CSeekQuery*);

    static bool AverageWeighting(CSeekQuery &sQuery, CSeekDataset &sDataset,
        const float &percent_required, const bool &bSquareZ, float &w);
};

}
#endif
```
|
2015-04-18 14:13:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4085579812526703, "perplexity": 1945.9626044761974}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246634333.17/warc/CC-MAIN-20150417045714-00130-ip-10-235-10-82.ec2.internal.warc.gz"}
|
https://socratic.org/questions/a-10-kg-mass-has-a-ke-of-125-j-what-is-its-velocity
|
# A 10 kg mass has a KE of 125 J, what is its velocity?
Dec 9, 2015
5 m/s
#### Explanation:
$E_K = \frac{1}{2} m v^2$
Given: $E_K = 125\ \text{J} = 125\ \text{kg}\,\text{m}^2/\text{s}^2$, $m = 10\ \text{kg}$
So $v^2 = \frac{2 E_K}{m}$ and $v = \sqrt{\frac{2 E_K}{m}}$
$v = \sqrt{\frac{2 \cdot 125\ \text{kg}\,\text{m}^2/\text{s}^2}{10\ \text{kg}}} = \sqrt{25\ \text{m}^2/\text{s}^2} = 5\ \text{m/s}$
Note: I carry the units through the calculation; it helps prevent mistakes.
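The same arithmetic in code form (a quick check, not part of the original answer):

```javascript
// v = sqrt(2 * E_K / m); energy in joules, mass in kilograms
const v = Math.sqrt((2 * 125) / 10);
console.log(v); // 5 (m/s)
```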
|
2019-10-23 12:40:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8120815753936768, "perplexity": 2749.3888311056094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987833766.94/warc/CC-MAIN-20191023122219-20191023145719-00017.warc.gz"}
|
https://ph.gsusigmanu.org/9499-what-does-ldquogpu-accelerated-butterfly-matched-fil.html
|
# What does “GPU-accelerated butterfly matched filtering over dense bank of time-symmetric chirp-like templates” mean? (GW170817)
A new analysis of gravitational wave (and other data) from GW170817 on 2017-Aug-17 has been published, strongly suggesting that the merger of two neutron stars resulted in a large, rapidly rotating neutron star, rather than a black hole. From the abstract of the recent open access Letter in MNRAS Observational evidence for extended emission to GW170817:
[… ]Here, we report on a possible detection of extended emission (EE) in gravitational radiation during GRB170817A: a descending chirp with characteristic time-scale τs = 3.01 ± 0.2 s in a (H1,L1)-spectrogram up to 700 Hz with Gaussian equivalent level of confidence greater than 3.3σ based on causality alone following edge detection applied to (H1,L1)-spectrograms merged by frequency coincidences. Additional confidence derives from the strength of this EE. The observed frequencies below 1 kHz indicate a hypermassive magnetar rather than a black hole, spinning down by magnetic winds and interactions with dynamical mass ejecta.
Maurice H P M van Putten and Massimo Della Valle, Monthly Notices of the Royal Astronomical Society: Letters, Volume 482, Issue 1, 1 January 2019, Pages L46-L49, https://doi.org/10.1093/mnrasl/sly166
Discussion in paper points to supplementary data and in the introduction to that document, it says:
For GW170817A/GRB170817A, we perform a model-independent deep search for broadband extended gravitational-wave emission in 2048 s (LIGO 2017) of data at 4096 Hz according to Fig. A1 comprising
• Pre-processing: cleaning and glitch removal (Abbott et al. 2017a) followed by whitening of H1, L1 and V1 data;
• Single detector spectrograms by GPU-accelerated butterfly filtering of H1, L1 and V1 by matched filtering over a dense bank of time-symmetric chirp-like templates (van Putten et al. 2014; van Putten 2017);
• Merging spectrograms by coincidences in frequency or amplitude, producing merged spectrograms as input to image analysis (van Putten 2018).
I am having trouble understanding what GPU-accelerated butterfly filtering of H1, L1 and V1 by matched filtering over a dense bank of time-symmetric chirp-like templates means. The citation van Putten et al. 2014 in the supplementary data document refers to BROADBAND TURBULENT SPECTRA IN GAMMA-RAY BURST LIGHT CURVES as well as to a second paper. These appear to be thorough explanations, but pretty in-depth.
Question: Is it possible to explain the basics of what "GPU-accelerated butterfly filtering of H1, L1 and V1 by matched filtering over a dense bank of time-symmetric chirp-like templates" means? A dense bank of chirp-like templates sounds like it could be analogous to a wavelet-type analysis, but with basis waveforms tailored to this specific problem.
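For concreteness, my mental model of the matched-filtering part is something like the sketch below: slide each template over the whitened data and record the peak normalized correlation, then repeat for a bank of many finely spaced chirp parameters. (Toy version with made-up names; real pipelines work in the frequency domain with noise weighting.)

```javascript
// A time-symmetric, damped chirp-like template with time-scale tau (toy form).
function chirpTemplate(tau, n = 256, dt = 1 / 256) {
  return Array.from({ length: n }, (_, i) => {
    const t = (i - n / 2) * dt;
    return Math.exp(-Math.abs(t) / tau) * Math.cos(2 * Math.PI * (50 + (20 * t) / tau) * t);
  });
}

// Matched filtering: sliding normalized cross-correlation against one template.
function matchedFilter(data, template) {
  const m = template.length;
  const tNorm = Math.hypot(...template);
  let best = 0;
  for (let i = 0; i + m <= data.length; i++) {
    let dot = 0, dNorm = 0;
    for (let j = 0; j < m; j++) {
      dot += data[i + j] * template[j];
      dNorm += data[i + j] ** 2;
    }
    if (dNorm > 0) best = Math.max(best, dot / (tNorm * Math.sqrt(dNorm)));
  }
  return best; // peak normalized correlation for this template
}

// "Dense bank": scan many time-scales and keep each template's best match.
const strain = Array.from({ length: 4096 }, () => Math.random() - 0.5); // stand-in data
const scores = [...Array(50)].map((_, k) => matchedFilter(strain, chirpTemplate(0.5 + 0.1 * k)));
```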
The first image is a cropped and annotated version of the second, which is from here.
Figure 2. Ascending-descending chirp in the (H1,L1)-spectrogram produced by the double neutron star merger GW170817 concurrent with GRB170817A (Goldstein et al. 2017) past coalescence (tc = 1842.43 s). Minor accompanying features around 100 Hz (1840-1852 s) are conceivably due to dynamical mass ejecta. Colour coding (blue-to-yellow) is proportional to amplitude defined by butterfly output ρ of time-symmetric chirp-like template correlations to data.
|
2023-01-28 22:42:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33976006507873535, "perplexity": 5712.342738454721}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00736.warc.gz"}
|
https://ask.libreoffice.org/en/question/165275/hidden-text-and-macros/
|
# Hidden Text and Macros [closed]
I have a question about hidden text, and setting up a couple of macros for using it. I did a google search some years back and found out information about using hidden text, and figured out how to record a couple of macros so that I could easily use it in documents. I don't remember if I got the information from an OpenOffice forum, LibreOffice forum, or some other site. I have found this particularly helpful with outlining using numbered or bulleted lists. It is still set up on my other system, and working fine in an older version of LibreOffice on that system, but I can't figure out how I did it, and copying the macro text over from the old system/earlier LibreOffice to new system/LibreOffice ver 5.0.3.2 did not work.
I know that I defined a variable I named Hide, with a value of 9. Then, I believe I recorded a macro where I created a Hidden Text or Hidden Paragraph Function (it could have even been a Hidden Section maybe), with a condition "Hide==9" and saved it. Then I assigned keyboard shortcuts to each. After that, I could use the shortcut and put in the variable on the end of a line on a numbered list. Then, I could type in text in an unnumbered list entry below it, select the text and all the way up to the 'field place-holder' (the square box with a 9 in it denoting a field), but not including the 'place-holder' in the text selection. Then I would hit my shortcut key for hiding the text and it was hidden. Then, all I had to do was double-click on the variable field and change the value from 9 to 0 or 1 or something to hide/unhide that block of text.
So it was kind of like this :
1. A list entry, with the variable field at the end [9]
Some text in an unnumbered entry, expanding on the subject or whatever
On the new system/LO 5.0.3.2, I don't have a problem setting up the Hide variable and getting that to work. But when I select my text block and all the way up to the 'variable place-holder' ( the [9] in the above example) and use my shortcut, it doesn't hide the text and work as it did in the older version on the other system. In version 5.0.3.2, it puts the text in another box up right next to/behind the variable field. Changing the value of the variable does not hide/unhide the text. And I can't get the text to unhide no matter what I try.
Hopefully someone can figure out what I am talking about and will be able to help me figure out how I did this. Thanks for any help anyone can give.
### Closed for the following reason: "question is not relevant or outdated" by Alex Kemp, close date 2020-07-24
OK, I have gotten it to work by adding a section and clicking the box for Hide, then setting the condition to "Hide==9". But it only works once. When you double-click the variable and change it from 9 to 1 or 0 or whatever, it shows the text. Then if you double-click it and change it back to 9, it doesn't hide the text. Then, when you click "Format > Section" in the main menu, and the box opens, you can see that it automatically changes the Hide condition from "Hide==9" to just zero. LibreOffice seems to make this change of the condition to 0 automatically, on its own. I don't know if this is a difference between the old version of LibreOffice and version 5.0.3.2 or what.
|
2020-08-15 14:35:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3535621166229248, "perplexity": 718.7404594557341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740848.39/warc/CC-MAIN-20200815124541-20200815154541-00595.warc.gz"}
|
https://www.vernier.com/experiments/cwv/22/acid_rain/
|
Vernier Software & Technology
# Acid Rain
## Introduction
In this experiment, you will observe the formation of four acids that occur in acid rain:
• carbonic acid, H2CO3
• nitrous acid, HNO2
• nitric acid, HNO3
• sulfurous acid, H2SO3
Carbonic acid occurs when carbon dioxide gas dissolves in rain droplets of unpolluted air:
$\text{(1) CO}_2\text{(g)} + \text{H}_2\text{O(l)} \to \text{H}_2\text{CO}_3\text{(aq)}$
Nitrous acid and nitric acid result from a common air pollutant, nitrogen dioxide (NO2). Most nitrogen dioxide in our atmosphere is produced from automobile exhaust. Nitrogen dioxide gas dissolves in rain drops and forms nitrous and nitric acid:
$\text{(2) 2 NO}_2\text{(g)} + \text{H}_2\text{O(l)} \to \text{HNO}_2\text{(aq)} + \text{HNO}_3\text{(aq)}$
Sulfurous acid is produced from another air pollutant, sulfur dioxide (SO2). Most sulfur dioxide gas in the atmosphere results from burning coal containing sulfur impurities. Sulfur dioxide dissolves in rain drops and forms sulfurous acid:
$\text{(3) SO}_2\text{(g)} + \text{H}_2\text{O(l)} \to \text{H}_2\text{SO}_3\text{(aq)}$
In the procedure outlined below, you will first produce these three gases. You will then bubble the gases through water, producing the acids found in acid rain. The acidity of the water will be monitored with a pH Sensor.
## Objectives
In this experiment, you will
• Generate three gaseous oxides, CO2, SO2, and NO2.
• Simulate the formation of acid rain by bubbling each of the three gases into water and producing three acidic solutions.
• Measure the pH of the three resulting acidic solutions to compare their relative strengths.
## Sensors and Equipment
This experiment features the following Vernier sensors and equipment.
## Chemistry with Vernier
See other experiments from the lab book.
1. Endothermic and Exothermic Reactions
2. Freezing and Melting of Water
3. Another Look at Freezing Temperature
4. Heat of Fusion of Ice
5. Find the Relationship: An Exercise in Graphing Analysis
6. Boyle's Law: Pressure-Volume Relationship in Gases
7. Pressure-Temperature Relationship in Gases
8. Fractional Distillation
9. Evaporation and Intermolecular Attractions
10. Vapor Pressure of Liquids
11. Determining the Concentration of a Solution: Beer's Law
12. Effect of Temperature on Solubility of a Salt
13. Properties of Solutions: Electrolytes and Non-Electrolytes
14. Conductivity of Solutions: The Effect of Concentration
15. Using Freezing Point Depression to Find Molecular Weight
16. Energy Content of Foods
17. Energy Content of Fuels
18. Additivity of Heats of Reaction: Hess's Law
19. Heat of Combustion: Magnesium
20. Chemical Equilibrium: Finding a Constant, Kc
21. Household Acids and Bases
22. Acid Rain
23. Titration Curves of Strong and Weak Acids and Bases
24. Acid-Base Titration
25. Titration of a Diprotic Acid: Identifying an Unknown
26. Using Conductivity to Find an Equivalence Point
27. Acid Dissociation Constant, Ka
28. Establishing a Table of Reduction Potentials: Micro-Voltaic Cells
29. Lead Storage Batteries
30. Rate Law Determination of the Crystal Violet Reaction
31. Time-Release Vitamin C Tablets
32. The Buffer in Lemonade
33. Determining the Free Chlorine Content of Swimming Pool Water
34. Determining the Quantity of Iron in a Vitamin Tablet
35. Determining the Phosphoric Acid Content in Soft Drinks
36. Microscale Acid-Base Titration
### Experiment 22 from Chemistry with Vernier Lab Book
#### Included in the Lab Book
Vernier lab books include word-processing files of the student instructions, essential teacher information, suggested answers, sample data and graphs, and more.
|
2019-10-15 19:30:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34066927433013916, "perplexity": 8073.455039738872}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00234.warc.gz"}
|
https://sharpe-maths.uk/index.php?main=topic&id=20
|
Previous topic:
Multiplying fractions
Current topic:
Dividing fractions
Next topic:
Multiplying & dividing mixed numbers
### Dividing fractions
There's an old joke, I am rather fond of, which goes "How do you get down off an elephant?", to which the answer is "You don't! You get down off a duck."
This is borderline irrelevant but included here a) because I like it and b) because it uses a humourous device based on the idea that to do something, it is often better to begin somewhere other than where you are at present. Like the person asking for directions, who gets the reply, "Well, I wouldn't start from here."
Here's a more relevant question: "How do you divide fractions?" and the answer is "You don't! You turn the division into a multiplication problem and then solve that."
There are, in fact, several ways to divide fractions, but by miles the simplest is to rewrite the division problem as a multiplication problem.
To do this, we need to understand the idea of a reciprocal in mathematics.
Put simply, the reciprocal of a number is that number which multiplies by the original number to give $1$. The reciprocal is the multiplicative inverse of the original number.
Here is an example $\frac{2}{5} \times \frac{5}{2} = 1$. In this statement we can see that these fractions will multiply together to make $1$. The calculation below shows how the cancellation would work.
\begin{aligned} & \frac{2}{5} \times \frac{5}{2}\\ &= \frac{\cancelto{1}{2}}{\cancelto{1}{5}} \times \frac{\cancelto{1}{5}}{\cancelto{1}{2}}\\ &=1 \end{aligned}
Now let us look at a more generalised version.
\begin{aligned} & \frac{a}{b} \times \frac{b}{a}\\ &= \frac{\cancelto{1}{a}}{\cancelto{1}{b}} \times \frac{\cancelto{1}{b}}{\cancelto{1}{a}}\\ &=1 \end{aligned}
What you should notice is that the reciprocal of any fraction is the same fraction turned upside down.
This means that dividing by a fraction does exactly the same thing as multiplying by the fraction's reciprocal. So we end up with something like the following.
\begin{aligned} & \frac{3}{5} \div \frac{4}{5}\\ &= \frac{3}{5} \times \frac{5}{4}\\ &= \frac{3}{\cancelto{1}{5}} \times \frac{\cancelto{1}{5}}{4}\\ &=\frac{3}{4} \end{aligned}
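Written generally, the whole method collapses to a single rule: dividing by a fraction is multiplying by its reciprocal.

\begin{aligned} \frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \times \frac{d}{c} = \frac{ad}{bc} \end{aligned}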
|
2021-01-27 22:26:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998856782913208, "perplexity": 906.1089596273528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704833804.93/warc/CC-MAIN-20210127214413-20210128004413-00060.warc.gz"}
|
http://inperc.com/wiki/index.php?title=Christopher_Means
|
This site contains: mathematics courses and book; covers: image analysis, data analysis, and discrete modelling; provides: image analysis software. Created and run by Peter Saveliev.
# Christopher Means
## 1 Week 1 Progress
I am working on the dual simulation of heat transfer, meaning the heat moves around a graph. It goes between vertices (stations) via the edges (pipes). Read about the dual grid.
With help from Zach Ahlers, I was able to construct a simulation with random k-values for each wall. For simplicity, I have set the values for the outermost group of cells to be zero on all sides. This way, the simulation will run exactly as if the lattice were 2 elements smaller, except that it will still be able to take a temperature value for the neighbor cells into account.
I've completed the dual simulation. My next step will be to add in correction factors. I hit a hurdle along the way: no heat was being transferred. After many, many hours of debugging, I realized that in calculating the temperature change I multiplied the sum of effects from the four neighbors by 1/4 to find the average. Unfortunately, I hadn't realized that 1/4 is an integer division resulting in 0, cancelling out the transfer.
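For illustration, here is a minimal sketch of that neighbor-averaging update (hypothetical names, and a single conductivity k instead of the per-wall k-values described above):

```javascript
// One heat step: each interior cell relaxes toward the average of its
// four neighbors. Multiplying by 0.25 avoids the integer-division trap
// described above, where 1/4 evaluates to 0 and kills the transfer.
function heatStep(grid, k) {
  const next = grid.map((row) => row.slice());
  for (let i = 1; i < grid.length - 1; i++) {
    for (let j = 1; j < grid[i].length - 1; j++) {
      const avg = 0.25 * (grid[i + 1][j] + grid[i - 1][j] + grid[i][j + 1] + grid[i][j - 1]);
      next[i][j] += k * (avg - grid[i][j]);
    }
  }
  return next;
}
```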
## 2 Week 2 Progress
I will now be starting on the discrete wave equation.
My current goal is to create a multi-lattice simulation, similar to that of the heat equation that we were able to construct.
Professor Saveliev provided me with these links to give me a starting point. I'll keep everyone posted on my progress.
I've looked into the physics of this project and come up with an equation I will use to model it. Each molecule in the medium transfers some of its energy to its neighbors. These molecules, in turn, disperse their energy in a similar fashion. An important thing to remember in any wave model is that the average depth of the medium, and the volume, is constant. That is to say, for an increase in potential energy at one point, another point has to have an equal decrease. The simplest way to ensure this is the case would be to keep the amount of energy leaving a molecule equal to the amount gained by its neighbor. This should not be hard to keep track of.
This is how you "discretize" the PDE...
The continuous wave equation can be expressed as the second time derivative of f equal to some constant squared multiplied by the Laplacian of f. In the discrete model, I will start on a square lattice, so each index on the grid will have four neighbors in space, as well as two in time. To reconfigure the continuous equation into a discrete one, we simply translate to discrete differentials. The first order time derivative is computed discretely as the difference of $f$ at time $t$ and $t-1$, or $t+1$ and $t$. The second order derivative, which is what the wave equation uses, is a difference of differences. It can be computed as $(f(t+1) - f(t)) - (f(t) - f(t-1))$, resulting in $f(t+1) + f(t-1) - 2f(t)$.
Apply a similar method to compute the Laplacian. $$f(x+1,y, t) + f(x-1, y, t) + f(x, y+1, t) + f(x, y-1, t) - 4*f(x, y, t).$$
The final step in finding the discrete wave equation is taking care of the constant. This will be treated the same way we treated the k values for heat. The only restriction is that c must be less than one.
Continuous:
$\frac{ \partial^2 \mathbf{F}}{\partial t^2} = \mathbf{c^2} \nabla^2 \mathbf{F}$
Discrete:
$\mathbf{F}_{(x, y, t+1)} + \mathbf{F}_{(x, y, t-1)} - 2*\mathbf{F}_{(x, y, t)} = \mathbf{c}^2 *( \mathbf{F}_{(x+1, y, t)} + \mathbf{F}_{(x-1, y, t)} + \mathbf{F}_{(x, y+1, t)} + \mathbf{F}_{(x, y-1, t)} - 4*\mathbf{F}_{(x, y, t)} )$
The model will calculate the wave height for the next time step using this formula:
$\mathbf{F}_{(x, y, t+1)} = 2*\mathbf{F}_{(x, y, t)} - \mathbf{F}_{(x, y, t-1)} + \mathbf{c}^2 *( \mathbf{F}_{(x+1, y, t)} + \mathbf{F}_{(x-1, y, t)} + \mathbf{F}_{(x, y+1, t)} + \mathbf{F}_{(x, y-1, t)} - 4*\mathbf{F}_{(x, y, t)} )$
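A direct translation of this update rule into code might look like the following sketch (an assumed array-based implementation; the project's actual code is not shown on this page):

```javascript
// One wave-equation time step on a square lattice (c < 1 for stability).
// prev and curr are 2-D arrays of wave heights at times t-1 and t.
// Edge cells are left untouched, matching the F = 0 boundary condition.
function waveStep(prev, curr, c) {
  const next = curr.map((row) => row.slice());
  for (let x = 1; x < curr.length - 1; x++) {
    for (let y = 1; y < curr[x].length - 1; y++) {
      const laplacian =
        curr[x + 1][y] + curr[x - 1][y] + curr[x][y + 1] + curr[x][y - 1] -
        4 * curr[x][y];
      next[x][y] = 2 * curr[x][y] - prev[x][y] + c * c * laplacian;
    }
  }
  return next; // becomes curr on the following step
}
```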
## 3 Progress through Week 5
As of now I have made the additions of dampening, both linear and non-linear, and a few obstacles. I attempted to create multiple threads to handle the simulation in order to increase speed and maneuverability. Unfortunately, this topic is a bit beyond my skill set, and I was unable to get it to work without catastrophic error. To see the dampened wave equation, please refer to my colleague's entry on the subject: Jack Goodman.
## 4 Week 6
The group has decided that it is necessary to compare the simulation's results to a known solution to the wave equation on a square membrane, to confirm that the simulation converges to the continuous case.
The general solution to the 2D wave equation is:
$\mathbf{F} = \displaystyle\sum\limits_{m=1}^\infty \displaystyle\sum\limits_{n=1}^\infty \sin(m \pi x)\, \sin(n \pi y) \left(\mathbf{A}_{mn}\cos(\omega_{mn}t) + \mathbf{B}_{mn}\sin(\omega_{mn}t)\right)$, where $\omega_{mn}$ is the frequency of each point's one-dimensional oscillation.
The unknown coefficients are given by:
$\mathbf{A_{mn} = \frac{4}{n^2} \int_0^n \int_0^n \ f_{(x,y)}sin(m \pi x) sin(n \pi y) dx dy}$ $\mathbf{B_{mn} = \frac{4}{\omega_{mn} n^2} \int_0^n \int_0^n \ g_{(x,y)}sin(m \pi x) sin(n \pi y) dx dy}$
The frequency $\omega_{mn}$ is given by the dispersion relation:
$\mathbf{ \omega_{mn}^2 } = \mathbf{C^2 (m^2 + n^2) \pi ^2}$
The boundary condition we will use is (note: The lattice size is nxn):
$\mathbf{F_{(x,y,t)} = 0 } \ \ \ \ \ for \ x=0, \ x=n, \ y=0, \ y=n$
And the initial value is given by:
$\mathbf{F_{(x,y,0)} = f_{(x,y)}} \ \ \ \ \ for \ 0<x<n \ and \ 0<y<n$ $\frac{ \partial \mathbf{F} }{\partial t} \mathbf{_{(x,y,0)}} = \mathbf{g_{(x,y)}} \ \ \ \ \ for \ 0<x<n \ and \ 0<y<n$
|
2013-05-19 18:42:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6552494168281555, "perplexity": 364.47555528948845}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697974692/warc/CC-MAIN-20130516095254-00059-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.libretexts.org/Courses/Lumen_Learning/Book%3A_Beginning_Algebra_(Lumen)/05%3A_Polynomials/5.04%3A_Applications_of_Polynomials
|
# 5.4: Applications of Polynomials
Learning Objectives
• Geometric Applications
• Write a polynomial representing the perimeter of a shape
• Write a polynomial representing the area of a surface
• Write a polynomial representing the volume of a solid
• Cost, Revenue, and Profit Polynomials
• Write a profit polynomial given revenue and cost polynomials
• Find profit for given quantities produced
In this section we will explore ways that polynomials are used in applications of perimeter, area, and volume. First, we will see how a polynomial can be used to describe the perimeter of a rectangle.
### Example
A rectangular garden has one side with a length of $$x+7$$ and another side with a length of $$2x + 3$$. Find the perimeter of the garden.
The perimeter of a rectangle is the sum of its side lengths.
$$\left(x+7\right)+\left(2x+3\right)+\left(x+7\right)+\left(2x+3\right)$$
Regroup by like terms.
$$\left(x+2x+x+2x\right)+\left(7+3+7+3\right)$$
$$6x+20$$
The perimeter is $$6x+20$$.
In the following video you are shown how to find the perimeter of a triangle whose sides are defined as polynomials.
A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/ba/?p=106
The area of a circle can be found using the radius of the circle and the constant pi in the formula $$A=\pi{r^2}$$. In the next example we will use this formula to find a polynomial that describes the area of an irregular shape.
Example
Find a polynomial for the shaded region of the figure.
Read and Understand: We need to find a way to describe the shaded region of this shape using polynomials. We know the formula for the area of a circle is $$A=\pi{r^2}$$. The figure we are working with is a circle with a smaller circle cut out.
Define and Translate: The larger circle has radius = r, and the smaller circle has radius = 3. If we find the area of the larger circle, then subtract the area of the smaller circle, the remaining area will be the shaded region. First define the area of the larger circle:
$$A_{1}=\pi{r^2}$$
Then define the area of the smaller circle.
$$A_{2}=\pi{3^2}=9\pi$$
Write and Solve: The shaded region can be found by subtracting the smaller area from the larger area.
$$\begin{array}{c}A_{1}-A_{2}\\=\pi{r^2}-9\pi\end{array}$$
The area of the shaded region is $$\pi{r^2}-9\pi$$
$$\pi{r^2}-9\pi$$
In the video that follows, you will be shown an example of determining the area of a rectangle whose sides are defined as polynomials.
A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/ba/?p=106
Pi
It is easy to mistake pi for a variable because we use a Greek letter to represent it. We use a Greek letter instead of a number because the decimal expansion of pi never ends, so we cannot write it down exactly. To be precise and thorough, we use the Greek letter as a way to say: "we are including all the digits of pi without having to write them". The expression for the area of the shaded region in the example above included both the variable r, which represented an unknown radius, and the number pi. If we needed to use this expression to build a physical object or instruct a machine to cut specific dimensions, we would round pi to an appropriate number of decimal places.
In the next example, we will write the area for a rectangle in two different ways, one as the product of two binomials and the other as the sum of four rectangles. Because we are describing the same shape two different ways, we should end up with the same expression no matter what way we define the area.
Example
Write two different polynomials that describe the area of the figure. For one expression, think of the rectangle as one large figure, and for the other expression, think of the rectangle as the sum of 4 different rectangles.
First, we will define the polynomial that describes the area of the rectangle as one figure.
Read and Understand: We are tasked with writing an expression for the area of the figure above. The area of a rectangle is given as $$A=lw$$. We need to consider the whole figure in our dimensions.
Define and Translate: Use the formula for area: $$\begin{array}{c}l=\left(y+7\right)\\w=\left(y+9\right)\end{array}$$
You could define $$\begin{array}{c}w=\left(y+7\right)\\l=\left(y+9\right)\end{array}$$ because it doesn’t matter in which order you multiply.
Write and Solve:
$$\begin{array}{c}A=lw\\l=\left(y+7\right)\\w=\left(y+9\right)\end{array}$$,
We can use either method we learned to multiply binomials to simplify this expression, we will use a table.
|         | $$y$$   | $$+7$$ |
|---------|---------|--------|
| $$y$$   | $$y^2$$ | $$7y$$ |
| $$+9$$  | $$9y$$  | $$63$$ |
Sum the terms from the table, and simplify:
$$\begin{array}{c}\left(y+7\right)\left(y+9\right)\\=y^2+7y+9y+63\\=y^2+16y+63\end{array}$$
$$A=y^2+16y+63$$
Now we will find an expression for the area of the whole figure as comprised by the areas of the four rectangles added together.
Read and Understand: The area of a rectangle is given as $$A=lw$$. We need to first define the areas of each rectangle, then sum them all together to get the area of the whole figure. It helps to label the four rectangles in the figure so you can keep the dimensions organized.
Define and Translate: Use the formula for area: $$A=lw$$ for each rectangle:
$$A_{1}=7\cdot{y}$$
$$A_{2}=7\cdot{9}=63$$
$$A_{3}=y\cdot{y}=y^2$$
$$A_{4}=y\cdot{9}$$
Write and Solve:
$$\begin{array}{c}A=A_{1}+A_{2}+A_{3}+A_{4}\\=7y+63+y^2+9y\\\text{ reorganize and simplify }\\=y^2+16y+63\end{array}$$
$$A=y^2+16y+63$$
Hopefully it isn't surprising that both expressions simplify to the same thing.
The last example we will provide in this section is one for volume. The volumes of regular solids such as spheres, cylinders, cones, and rectangular prisms are known. We will find an expression for the volume of a cylinder, which is defined as $$V=\pi{r^2}h$$.
Example
Define a polynomial that describes the volume of the cylinder shown in the figure:
Read and Understand: We are tasked with writing an expression for the volume of the cylinder in the figure above. The volume of a cylinder is given as $$V=\pi{r^2}h$$, where $$\pi$$ is a constant, r is the radius, and h is the height of the cylinder.
Define and Translate: Use the formula for volume: $$V=\pi{r^2}h$$; we need to define r and h.
$$\begin{array}{c}r=\left(t-2\right)\\h=7\end{array}$$
Write and Solve: Substitute r and h into the formula for volume.
$$\begin{array}{c}V=\pi{r^2}h\\=\pi\left(t-2\right)^2\cdot{7}\\=7\pi\left({t^2}-2t-2t+4\right)\\=7\pi\left({t^2}-4t+4\right)\end{array}$$
Note that we usually distribute constants such as $$7\pi$$ to each term in the polynomial.
$$\begin{array}{c}7\pi\left({t^2}-4t+4\right)\\=\left(7\pi\right){t^2}-\left(7\pi\right)\cdot{4t}+\left(7\pi\right)\cdot{4}\\=7\pi{t^2}-28\pi{t}+28\pi\end{array}$$.
Note again how we left $$\pi$$ as a greek letter. If we needed to use this calculation for measurement of materials, we would round pi, or a computer would round for us.
$$V=\pi{r^2}h=7\pi{t^2}-28\pi{t}+28\pi$$
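A quick numeric spot-check of the expansion (a sketch, not part of the lesson):

```javascript
// Check that 7*pi*(t-2)^2 equals 7*pi*t^2 - 28*pi*t + 28*pi at a sample t.
const factored = (t) => 7 * Math.PI * (t - 2) ** 2;
const expanded = (t) => 7 * Math.PI * t ** 2 - 28 * Math.PI * t + 28 * Math.PI;
console.log(factored(5), expanded(5)); // both ≈ 197.92 (that is, 63*pi)
```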
In this last video, we present another example of finding the volume of a cylinder whose dimensions include polynomials.
A YouTube element has been excluded from this version of the text. You can view it online here: pb.libretexts.org/ba/?p=106
## Cost, Revenue, and Profit Polynomials
In the systems of linear equations section, we discussed how a company’s cost and revenue can be modeled with two linear equations. We found that the profit region for a company was the area between the two lines where the company would make money based on how much was produced. In this section, we will see that sometimes polynomials are used to describe cost and revenue.
Profit is typically defined in business as the difference between the amount of money earned (revenue) by producing a certain number of items and the amount of money it takes to produce that number of items. When you are in business, you definitely want to see profit, so it is important to know what your cost and revenue is.
Cell Phones
For example, let’s say that the cost to a manufacturer to produce a certain number of things is C and the revenue generated by selling those things is R. The profit, P, can then be defined as
P = R-C
The example we will work with is a hypothetical cell phone manufacturer. The cost to manufacture x phones is $$C=2000x+750,000$$, and the revenue generated by selling x phones is $$R=-0.09x^2+7000x$$.
Example
Define a Profit polynomial for the hypothetical cell phone manufacturer.
Read and Understand: Profit is the difference between revenue and cost, so we will need to define P = R – C for the company.
Define and Translate: $$\begin{array}{c}R=-0.09x^2+7000x\\C=2000x+750,000\end{array}$$
Write and Solve: Substitute the expressions for R and C into the Profit equation.
$$\begin{array}{c}P=R-C\\=-0.09x^2+7000x-\left(2000x+750,000\right)\\=-0.09x^2+7000x-2000x-750,000\\=-0.09x^2+5000x-750,000\end{array}$$
Remember that when you subtract a polynomial, you have to subtract every term of the polynomial.
$$P=-0.09x^2+5000x-750,000$$
Mathematical models are great when you use them to learn important information. The cell phone manufacturing company can use the profit equation to find out how much profit they will make given x number of phones are manufactured. In the next example, we will explore some profit values for this company.
Example
Given the following numbers of cell phones manufactured, find the profit for the cell phone manufacturer:
1. x = 100 phones
2. x = 25,000 phones
3. x = 60,000 phones
Read and Understand: The profit polynomial defined in the previous example, $$P=-0.09x^2+5000x-750,000$$, gives profit based on x number of phones manufactured. We need to substitute the given numbers of phones manufactured into the equation, then try to understand what our answer means in terms of profit and number of phones manufactured.
We will move straight into write and solve since we already have our polynomial. It is probably easiest to use a calculator since the numbers in this problem are so large.
Write and Solve:
Substitute x = 100
$$\begin{array}{c}P=-0.09x^2+5000x-750,000\\=-0.09\left(100\right)^2+5000\left(100\right)-750,000\\=-900+500,000-750,000\\=-250,900\end{array}$$
Interpret: When the number of phones manufactured is 100, the profit for the business is -$250,900. This is not what we want! The company must produce more than 100 phones to make a profit.
Write and Solve:
Substitute x = 25,000
$$\begin{array}{c}P=-0.09x^2+5000x-750,000\\=-0.09\left(25000\right)^2+5000\left(25000\right)-750,000\\=-56,250,000+125,000,000-750,000\\=68,000,000\end{array}$$
Interpret: When the number of phones manufactured is 25,000, the profit for the business is $68,000,000. This is more like it! If the company makes 25,000 phones it will make a profit after it pays all its bills.
If this is true, then the company should make even more phones so it can make even more money, right? Actually, something different happens as the number of items manufactured increases without bound.
Write and Solve:
Substitute x = 60,000
$$\begin{array}{c}P=-0.09x^2+5000x-750,000\\=-0.09\left(60000\right)^2+5000\left(60000\right)-750,000\\=-324,000,000+300,000,000-750,000\\=-24,750,000\end{array}$$
Interpret: When the number of phones manufactured is 60,000, the profit for the business is -$24,750,000. Wait a minute! If the company makes 60,000 phones it will lose money. What happened? At some point, the cost to manufacture the phones will overcome the revenue that the business can generate. If this is interesting to you, you may enjoy reading about Economics and Business models.
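The three evaluations above are easy to reproduce with a few lines of code (a sketch for checking the arithmetic, not part of the original lesson):

```javascript
// P(x) = -0.09x^2 + 5000x - 750,000
const profit = (x) => -0.09 * x ** 2 + 5000 * x - 750000;

for (const x of [100, 25000, 60000]) {
  console.log(`${x} phones -> profit $${profit(x).toLocaleString()}`);
}
// 100 phones -> profit $-250,900
// 25000 phones -> profit $68,000,000
// 60000 phones -> profit $-24,750,000
```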
In this section we defined polynomials that represent the perimeter, area, and volume of well-known shapes. We also introduced some conventions about how to use and write $$\pi$$ when it is combined with other constants and variables. The next application will introduce you to cost and revenue polynomials. We explored cost and revenue equations in the module on Systems of Linear Equations; now we see that they can be more than just linear equations, since they can be polynomials.
|
2021-01-27 05:02:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7149209380149841, "perplexity": 312.1933319431139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704820894.84/warc/CC-MAIN-20210127024104-20210127054104-00061.warc.gz"}
|
http://djjr-courses.wikidot.com/225:difference-equations-equilibria
|
### Thinking Graphically about Difference Equations
#### Preliminaries — A Brief Review
Recall the graph of a line:
And the "45 degree line" (equation Y=X — slope 1, y-intercept 0)
#### Step 1: General form of a difference equation
(1)
$$P_{n} = a P_{n-1} + b$$
The next value equals something times the previous value plus some increment.
In our compound interest example, a was 1 plus the interest rate and b was zero. In our population models, a was 1 plus the birth rate minus the death rate and b was the recruitment or immigration per time period. In our weasel examples a was 1 plus the reproduction rate and b was the number killed each year by hunters.
#### Step 2. Plot the change from step to step
(2)
$$P_{n+1} = a P_{n} + b$$
This looks a lot like the equation for a line (if we think of $P_{n+1}$ as $y$ and $P_n$ as $x$). This makes sense since the very essence of difference equations is to express the next value as a function of the previous value (we might write next = f(previous), and this is the same as we do for a line: y = f(x)).
This is an odd little graph. How would we use it? Let's suppose some $P_n$ is some number C. We locate this on the horizontal axis. Then, to find $P_{n+1}$ we go up to the line and across to the corresponding value on the vertical axis. Call this number D.
What comes next? Now D will be $P_n$ and we'll seek the next value. We locate D on the horizontal axis and repeat the process.
Now we know three points – three "states" of the system in succession. If we look up at the line and imagine how we have "moved" along it, we can depict how the system has moved.
#### Step 3. Recall that at equilibrium, the system stays the same from one period to the next.
Call the value at which the system settles $P_e$ ("P sub e"), the equilibrium value. It is still governed by the generic difference equation, but at equilibrium it looks like this
(3)
$$P_{e} = a P_{e} + b$$
This can be solved for $P_e$:
(4)
\begin{align} P_e = \frac {b}{(1-a)} \end{align}
And when the equilibrium condition is written out in our usual terms, it looks like this
(5)
$$P_{n+1} = P_{n}$$
But this is just the equation for a "45 degree line" – a line with slope 1 that goes through the origin (that is, the point 0,0).
Any time the system is at equilibrium it will be somewhere on this line – since, by definition, equilibrium is when $P_n = P_{n+1}$
Thus, if we draw a 45 degree line on the graph we drew above, we can locate the equilibrium.
Note that in the example above our line had a slope of less than 1. What happens if we have a line with a slope greater than 1?
For positions both above and below the equilibrium, the tendency is for the system to move AWAY from the equilibrium.
The difference we are recognizing here is between STABLE and UNSTABLE equilibria. In a stable equilibrium, a small change in the system results in a "self-correcting" move back to the equilibrium. In an unstable equilibrium, a small perturbation or bump results in a sharp and accelerating movement AWAY from the equilibrium point.
Consider these real world examples.
#### What Can We Learn from the Slope of the $P_{n+1} = f(P_n)$ Line?
Something we've seen graphically is very interesting. When the line describing our difference equation crosses the 45 degree line with slope less than one we get a stable equilibrium. When the line crosses with a slope greater than one we get an unstable equilibrium.
Let's think for a second whether there is any intuition in this. Recall that
(6)
\begin{align} P_e = \frac {b}{(1-a)} \end{align}
Consider a point one unit away from $P_e$. Since the slope of the line is $a$, the next point would be $a(P_e+1)+b = P_e+a$. If $a<1$ then our new point is closer to $P_e$ than $P_e+1$ was. If $a>1$ then the new point is further away.
What if $a$ is negative? If we move one unit away from equilibrium, what happens? Our next point is at $P_e+a$, but this is on the other side of $P_e$ since $a$ is negative. A little thinking will get us to the fact that the point after this will again be to the right of $P_e$. With a negative sloping line our sequence oscillates. But does it converge or diverge? It turns out that the same rule holds as before. For absolute value greater than 1 we get divergence (an unstable equilibrium) and for absolute value less than 1 we get convergence (stable equilibrium).
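The same behaviour is easy to verify numerically; here is a minimal sketch (not part of the original page) iterating $P_{n+1} = aP_n + b$ from a starting point near the equilibrium:

```javascript
// Iterate P_{n+1} = a*P_n + b and watch whether the sequence
// approaches or flees the equilibrium P_e = b / (1 - a).
function iterate(a, b, p0, steps = 10) {
  const path = [p0];
  for (let n = 0; n < steps; n++) path.push(a * path[n] + b);
  return path;
}

console.log(iterate(0.5, 10, 25));  // converges toward P_e = 20
console.log(iterate(1.5, -10, 21)); // diverges away from P_e = 20
console.log(iterate(-0.5, 30, 25)); // oscillates but converges to P_e = 20
```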
Let's try our step-by-step method with the following two diagrams.
### References and Resources
BLOSSOMS - Fabulous Fractals and Difference Equations
page revision: 9, last edited: 16 Jan 2016 02:45
|
2017-04-26 15:50:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 4, "x-ck12": 0, "texerror": 0, "math_score": 0.7219145894050598, "perplexity": 564.9432517599929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121453.27/warc/CC-MAIN-20170423031201-00634-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://2022.help.altair.com/2022/flux/Flux/Help/english/UserGuide/English/topics/IntroductionExemples1.htm
|
# Introduction / examples
## Definition
A problem is said to be of time-dependent or transient type when the state variable (the unknown of the associated system) is a function of time. Two examples are presented in the following paragraphs.
## 1st example (thermal application)
The differential equation solved using finite element method in a Thermal Transient application is the following:
$$\rho C_p \frac{\partial T}{\partial t} - \operatorname{div}\left([k]\, \operatorname{grad} T\right) = q$$
where:
• [k] is the tensor of thermal conductivity
• ρCp is the volume heat capacity
• q is the volume density of power of the heat source
• T is the temperature, which is the state variable, i.e. the unknown of the system
In this case, the temperature T is a function of time (the differential equation is of first order in time).
## 2nd example (magnetic application)
The differential equation solved using finite element method in a Transient Magnetic application (with the vector model) is the following:
$$\operatorname{curl}\left([\nu]\, \operatorname{curl} \vec{A}\right) + [\sigma]\, \frac{\partial \vec{A}}{\partial t} = \vec{J}_s$$
where:
• [ν] is the tensor of the magnetic reluctivity in the computation domain
• [σ] is the tensor of the electric conductivity of the computation domain
• $\vec{J}_s$ is the density of the current source
• $\vec{A}$ is the magnetic vector potential, which is the state variable, i.e. the unknown of the system
In this example, the magnetic vector potential A is a function of time (the differential equation is of first order in time).
## Time discretization equations
The so-called step-by-step method in the time domain is applied for the integration of this last differential equation with respect to the time variable. The time domain is divided into smaller time intervals, called time steps, during which a linear variation of the unknown with respect to time is assumed.
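As a generic illustration of such a scheme (a sketch, not Flux's actual solver): assuming a linear variation of the unknown over each step leads naturally to an implicit (backward Euler) update, shown here for a scalar first-order equation m·du/dt + k·u = f.

```javascript
// Backward-Euler step: m*(u_next - u_prev)/dt + k*u_next = f
// => u_next = (f + (m/dt)*u_prev) / (m/dt + k)
function backwardEulerStep(uPrev, dt, m, k, f) {
  return (f + (m / dt) * uPrev) / (m / dt + k);
}

// March the equation over n time steps from the initial value u0.
function solve(u0, dt, n, m, k, f) {
  let u = u0;
  for (let i = 0; i < n; i++) u = backwardEulerStep(u, dt, m, k, f);
  return u;
}

console.log(solve(0, 0.1, 200, 1, 2, 4)); // approaches the steady state f/k = 2
```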
|
2023-02-04 12:28:25
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8816488981246948, "perplexity": 677.8190727552766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00337.warc.gz"}
|
https://handwiki.org/wiki/Coefficient_of_variation
|
# Coefficient of variation
Short description: Statistical parameter
In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is often expressed as a percentage, and is defined as the ratio of the standard deviation $\displaystyle{ \ \sigma }$ to the mean $\displaystyle{ \ \mu }$ (or its absolute value, $\displaystyle{ | \mu | }$). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R. In addition, CV is utilized by economists and investors in economic models.
## Definition
The coefficient of variation (CV) is defined as the ratio of the standard deviation $\displaystyle{ \ \sigma }$ to the mean $\displaystyle{ \ \mu }$, $\displaystyle{ c_{\rm v} = \frac{\sigma}{\mu}. }$[1]
It shows the extent of variability in relation to the mean of the population. The coefficient of variation should be computed only for data measured on scales that have a meaningful zero (ratio scale) and hence allow relative comparison of two measurements (i.e., division of one measurement by the other). The coefficient of variation may not have any meaning for data on an interval scale.[2] For example, most temperature scales (e.g., Celsius, Fahrenheit, etc.) are interval scales with arbitrary zeros, so the computed coefficient of variation would be different depending on the scale used. On the other hand, Kelvin temperature has a meaningful zero, the complete absence of thermal energy, and thus is a ratio scale. In plain language, it is meaningful to say that 20 Kelvin is twice as hot as 10 Kelvin, but only in this scale with a true absolute zero. While a standard deviation (SD) can be measured in Kelvin, Celsius, or Fahrenheit, the value computed is only applicable to that scale. Only the Kelvin scale can be used to compute a valid coefficient of variation.
Measurements that are log-normally distributed exhibit stationary CV; in contrast, SD varies depending upon the expected value of measurements.
A more robust possibility is the quartile coefficient of dispersion, half the interquartile range $\displaystyle{ {(Q_3 - Q_1)/2} }$ divided by the average of the quartiles (the midhinge), $\displaystyle{ {(Q_1 + Q_3)/2} }$.
In most cases, a CV is computed for a single independent variable (e.g., a single factory product) with numerous, repeated measures of a dependent variable (e.g., error in the production process). However, data that are linear or even logarithmically non-linear and include a continuous range for the independent variable with sparse measurements across each value (e.g., scatter-plot) may be amenable to single CV calculation using a maximum-likelihood estimation approach.[3]
## Examples
A data set of [100, 100, 100] has constant values. Its standard deviation is 0 and average is 100, giving the coefficient of variation as
0 / 100 = 0
A data set of [90, 100, 110] has more variability. Its population standard deviation is 8.165 and its average is 100, giving the coefficient of variation as
8.165 / 100 = 0.08165
A data set of [1, 5, 6, 8, 10, 40, 65, 88] has still more variability. Its standard deviation is 32.9 and its average is 27.9, giving a coefficient of variation of
32.9 / 27.9 = 1.18
## Estimation
When only a sample of data from a population is available, the population CV can be estimated using the ratio of the sample standard deviation $\displaystyle{ s \, }$ to the sample mean $\displaystyle{ \bar{x} }$:
$\displaystyle{ \widehat{c_{\rm v}} = \frac{s}{\bar{x}} }$
But this estimator, when applied to a small or moderately sized sample, tends to be too low: it is a biased estimator. For normally distributed data, an unbiased estimator[4] for a sample of size n is:
$\displaystyle{ \widehat{c_{\rm v}}^*=\bigg(1+\frac{1}{4n}\bigg)\widehat{c_{\rm v}} }$
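A minimal sketch of both estimators (our illustration, assuming NumPy; the function name and data are arbitrary):

```python
import numpy as np

def cv_hat(x, unbiased=False):
    """Sample CV, s / x-bar; optionally apply the (1 + 1/(4n))
    small-sample correction for normally distributed data."""
    x = np.asarray(x, dtype=float)
    c = x.std(ddof=1) / x.mean()          # s uses the n - 1 denominator
    return c * (1 + 1 / (4 * len(x))) if unbiased else c

data = [90, 100, 110]
print(cv_hat(data))                   # 0.1 (the worked example above used
                                      # the population SD, 8.165, instead)
print(cv_hat(data, unbiased=True))    # ≈ 0.1083
```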
### Log-normal data
In many applications, it can be assumed that data are log-normally distributed (evidenced by the presence of skewness in the sampled data).[5] In such cases, a more accurate estimate, derived from the properties of the log-normal distribution,[6][7][8] is defined as:
$\displaystyle{ \widehat{cv}_{\rm raw} = \sqrt{\mathrm{e}^{s_{\rm ln}^2}-1} }$
where $\displaystyle{ {s_{\rm ln}} \, }$ is the sample standard deviation of the data after a natural log transformation. (In the event that measurements are recorded using any other logarithmic base, b, their standard deviation $\displaystyle{ s_b \, }$ is converted to base e using $\displaystyle{ s_{\rm ln} = s_b \ln(b) \, }$, and the formula for $\displaystyle{ \widehat{cv}_{\rm raw} \, }$ remains the same.[9]) This estimate is sometimes referred to as the "geometric CV" (GCV)[10][11] in order to distinguish it from the simple estimate above. However, "geometric coefficient of variation" has also been defined by Kirkwood[12] as:
$\displaystyle{ \mathrm{GCV_K} = {\mathrm{e}^{s_{\rm ln}}\!\!-1} }$
This term was intended to be analogous to the coefficient of variation, for describing multiplicative variation in log-normal data, but this definition of GCV has no theoretical basis as an estimate of $\displaystyle{ c_{\rm v} \, }$ itself.
For many practical purposes (such as sample size determination and calculation of confidence intervals) it is $\displaystyle{ s_{ln} \, }$ which is of most use in the context of log-normally distributed data. If necessary, this can be derived from an estimate of $\displaystyle{ c_{\rm v} \, }$ or GCV by inverting the corresponding formula.
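A sketch of the raw log-normal estimator (again our illustration, assuming NumPy; the simulated data are arbitrary):

```python
import numpy as np

def gcv_raw(x):
    """sqrt(exp(s_ln**2) - 1), with s_ln the sample SD of ln(x)."""
    s_ln = np.std(np.log(x), ddof=1)
    return np.sqrt(np.expm1(s_ln**2))   # expm1 keeps precision for small s_ln

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.25, size=10_000)
print(gcv_raw(x))                       # ≈ sqrt(exp(0.25**2) - 1) ≈ 0.254
```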
## Comparison to standard deviation
The coefficient of variation is useful because the standard deviation of data must always be understood in the context of the mean of the data. In contrast, the actual value of the CV is independent of the unit in which the measurement has been taken, so it is a dimensionless number. For comparison between data sets with different units or widely different means, one should use the coefficient of variation instead of the standard deviation. The coefficient of variation does, however, have drawbacks:
• When the mean value is close to zero, the coefficient of variation will approach infinity and is therefore sensitive to small changes in the mean. This is often the case if the values do not originate from a ratio scale.
• Unlike the standard deviation, it cannot be used directly to construct confidence intervals for the mean.
• CVs are not an ideal index of the certainty of measurement when the number of replicates varies across samples because CV is invariant to the number of replicates while the certainty of the mean improves with increasing replicates. In this case, standard error in percent is suggested to be superior.[13]
## Applications
The coefficient of variation is also common in applied probability fields such as renewal theory, queueing theory, and reliability theory. In these fields, the exponential distribution is often more important than the normal distribution. The standard deviation of an exponential distribution is equal to its mean, so its coefficient of variation is equal to 1. Distributions with CV < 1 (such as an Erlang distribution) are considered low-variance, while those with CV > 1 (such as a hyper-exponential distribution) are considered high-variance. Some formulas in these fields are expressed using the squared coefficient of variation, often abbreviated SCV. In modeling, a variation of the CV is the CV(RMSD). Essentially the CV(RMSD) replaces the standard deviation term with the Root Mean Square Deviation (RMSD). While many natural processes indeed show a correlation between the average value and the amount of variation around it, accurate sensor devices need to be designed in such a way that the coefficient of variation is close to zero, i.e., yielding a constant absolute error over their working range.
In actuarial science, the CV is known as unitized risk.[14]
In industrial solids processing, CV is particularly important for measuring the degree of homogeneity of a powder mixture. Comparing the calculated CV to a specification makes it possible to determine whether a sufficient degree of mixing has been reached.[15]
### Laboratory measures of intra-assay and inter-assay CVs
CV measures are often used as quality controls for quantitative laboratory assays. While intra-assay and inter-assay CVs might be assumed to be calculated by simply averaging CV values across CV values for multiple samples within one assay or by averaging multiple inter-assay CV estimates, it has been suggested that these practices are incorrect and that a more complex computational process is required.[16] It has also been noted that CV values are not an ideal index of the certainty of a measurement when the number of replicates varies across samples − in this case standard error in percent is suggested to be superior.[13] If measurements do not have a natural zero point then the CV is not a valid measurement and alternative measures such as the intraclass correlation coefficient are recommended.[17]
### As a measure of economic inequality
The coefficient of variation fulfills the requirements for a measure of economic inequality.[18][19][20] If x (with entries xi) is a list of the values of an economic indicator (e.g. wealth), with xi being the wealth of agent i, then the following requirements are met:
• Anonymity – cv is independent of the ordering of the list x. This follows from the fact that the variance and mean are independent of the ordering of x.
• Scale invariance: cv(x) = cv(αx), where α is a positive real number.[20]
• Population independence – If {x,x} is the list x appended to itself, then cv({x,x}) = cv(x). This follows from the fact that the variance and mean both obey this principle.
• Pigou–Dalton transfer principle: when wealth is transferred from a wealthier agent i to a poorer agent j (i.e. xi > xj) without altering their rank, then cv decreases and vice versa.[20]
cv assumes its minimum value of zero for complete equality (all xi are equal).[20] Its most notable drawback is that it is not bounded from above, so it cannot be normalized to be within a fixed range (e.g. like the Gini coefficient which is constrained to be between 0 and 1).[20] It is, however, more mathematically tractable than the Gini coefficient.
### As a measure of standardisation of archaeological artefacts
Archaeologists often use CV values to compare the degree of standardisation of ancient artefacts.[21][22] Variation in CVs has been interpreted to indicate different cultural transmission contexts for the adoption of new technologies.[23] Coefficients of variation have also been used to investigate pottery standardisation relating to changes in social organisation.[24] Archaeologists also use several methods for comparing CV values, for example the modified signed-likelihood ratio (MSLR) test for equality of CVs.[25][26]
## Examples of misuse
Comparing coefficients of variation between parameters using relative units can result in differences that may not be real. If we compare the same set of temperatures in Celsius and Fahrenheit (both relative units, where kelvin and Rankine scale are their associated absolute values):
Celsius: [0, 10, 20, 30, 40]
Fahrenheit: [32, 50, 68, 86, 104]
The sample standard deviations are 15.81 and 28.46, respectively. The CV of the first set is 15.81/20 = 79%. For the second set (which are the same temperatures) it is 28.46/68 = 42%.
If, for example, the data sets are temperature readings from two different sensors (a Celsius sensor and a Fahrenheit sensor) and you want to know which sensor is better by picking the one with the least variance, then you will be misled if you use CV. The problem here is that you have divided by a relative value rather than an absolute.
Comparing the same data set, now in absolute units:
Kelvin: [273.15, 283.15, 293.15, 303.15, 313.15]
Rankine: [491.67, 509.67, 527.67, 545.67, 563.67]
The sample standard deviations are still 15.81 and 28.46, respectively, because the standard deviation is not affected by a constant offset. The coefficients of variation, however, are now both equal to 5.39%.
Mathematically speaking, the coefficient of variation is not entirely linear. That is, for a random variable $\displaystyle{ X }$, the coefficient of variation of $\displaystyle{ aX+b }$ is equal to the coefficient of variation of $\displaystyle{ X }$ only when $\displaystyle{ b = 0 }$. In the above example, Celsius can only be converted to Fahrenheit through a linear transformation of the form $\displaystyle{ ax+b }$ with $\displaystyle{ b \neq 0 }$, whereas Kelvins can be converted to Rankines through a transformation of the form $\displaystyle{ ax }$.
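The scale effect above is easy to reproduce (an illustrative snippet of ours, assuming NumPy):

```python
import numpy as np

def cv(x):
    return np.std(x, ddof=1) / np.mean(x)

celsius = np.array([0.0, 10, 20, 30, 40])
fahrenheit = celsius * 9 / 5 + 32     # same temperatures, relative scale (b != 0)
kelvin = celsius + 273.15             # absolute scale
rankine = kelvin * 9 / 5              # pure rescaling of kelvin (b = 0)

print(cv(celsius), cv(fahrenheit))    # ≈ 0.79 vs 0.42 — they disagree
print(cv(kelvin), cv(rankine))        # both ≈ 0.0539 — they agree
```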
## Distribution
Provided that negative and small positive values of the sample mean occur with negligible frequency, the probability distribution of the coefficient of variation for a sample of size $\displaystyle{ n }$ of i.i.d. normal random variables has been shown by Hendricks and Robey to be[27]
$\displaystyle{ \mathrm{d}F_{c_{\rm v}} = \frac{2}{\pi^{1/2} \Gamma {\left(\frac{n-1}{2}\right)}} \; \mathrm{e}^{-\frac{n}{2\left(\frac{\sigma}{\mu}\right)^2}\frac{{c_{\rm v}}^2}{1+{c_{\rm v}}^2}}\frac{{c_{\rm v}}^{n-2}}{(1+{c_{\rm v}}^2)^{n/2}}\sideset{}{^\prime}\sum_{i=0}^{n-1}\frac{(n-1)! \, \Gamma \left(\frac{n-i}{2}\right)}{(n-1-i)! \, i! \,}\frac{n^{i/2}}{2^{i/2} \left(\frac{\sigma}{\mu}\right)^i}\frac{1}{(1+{c_{\rm v}}^2)^{i/2}} \, \mathrm{d}c_{\rm v} , }$
where the symbol $\displaystyle{ \sideset{}{^\prime}\sum }$ indicates that the summation is over only even values of $\displaystyle{ n - 1 - i }$, i.e., if $\displaystyle{ n }$ is odd, sum over even values of $\displaystyle{ i }$ and if $\displaystyle{ n }$ is even, sum only over odd values of $\displaystyle{ i }$.
This is useful, for instance, in the construction of hypothesis tests or confidence intervals. Statistical inference for the coefficient of variation in normally distributed data is often based on McKay's chi-square approximation for the coefficient of variation.[28][29][30][31][32][33]
### Alternative
According to Liu (2012),[34] Lehmann (1986)[35] "also derived the sample distribution of CV in order to give an exact method for the construction of a confidence interval for CV"; it is based on a non-central t-distribution.
## Similar ratios
Standardized moments are similar ratios, $\displaystyle{ {\mu_k}/{\sigma^k} }$ where $\displaystyle{ \mu_k }$ is the kth moment about the mean, which are also dimensionless and scale invariant. The variance-to-mean ratio, $\displaystyle{ \sigma^2/\mu }$, is another similar ratio, but is not dimensionless, and hence not scale invariant. See Normalization (statistics) for further ratios.
In signal processing, particularly image processing, the reciprocal ratio $\displaystyle{ \mu/\sigma }$ (or its square) is referred to as the signal-to-noise ratio in general and signal-to-noise ratio (imaging) in particular.
Other related ratios include:
• Efficiency, $\displaystyle{ \sigma^2 / \mu^2 }$
• Standardized moment, $\displaystyle{ \mu_k/\sigma^k }$
• Variance-to-mean ratio (or relative variance), $\displaystyle{ \sigma^2/\mu }$
• Fano factor, $\displaystyle{ \sigma^2_W/\mu_W }$ (windowed VMR)
## References
1. Everitt, Brian (1998). The Cambridge Dictionary of Statistics. Cambridge, UK New York: Cambridge University Press. ISBN 978-0521593465.
2. Odic, Darko; Im, Hee Yeon; Eisinger, Robert; Ly, Ryan; Halberda, Justin (June 2016). "PsiMLE: A maximum-likelihood estimation approach to estimating psychophysical scaling and variability more reliably, efficiently, and flexibly". Behavior Research Methods 48 (2): 445–462. doi:10.3758/s13428-015-0600-5. ISSN 1554-3528. PMID 25987306.
3. Sokal RR & Rohlf FJ. Biometry (3rd Ed). New York: Freeman, 1995. p. 58. ISBN:0-7167-2411-1
4. Limpert, Eckhard; Stahel, Werner A.; Abbt, Markus (2001). "Log-normal Distributions across the Sciences: Keys and Clues". BioScience 51 (5): 341–352. doi:10.1641/0006-3568(2001)051[0341:LNDATS2.0.CO;2].
5. Koopmans, L. H.; Owen, D. B.; Rosenblatt, J. I. (1964). "Confidence intervals for the coefficient of variation for the normal and log normal distributions". Biometrika 51 (1–2): 25–32. doi:10.1093/biomet/51.1-2.25.
6. Diletti, E; Hauschke, D; Steinijans, VW (1992). "Sample size determination for bioequivalence assessment by means of confidence intervals". International Journal of Clinical Pharmacology, Therapy, and Toxicology 30 Suppl 1: S51–8. PMID 1601532.
7. Julious, Steven A.; Debarnot, Camille A. M. (2000). "Why Are Pharmacokinetic Data Summarized by Arithmetic Means?". Journal of Biopharmaceutical Statistics 10 (1): 55–71. doi:10.1081/BIP-100101013. PMID 10709801.
8. Reed, JF; Lynn, F; Meade, BD (2002). "Use of Coefficient of Variation in Assessing Variability of Quantitative Assays". Clin Diagn Lab Immunol 9 (6): 1235–1239. doi:10.1128/CDLI.9.6.1235-1239.2002. PMID 12414755.
9. Sawant,S.; Mohan, N. (2011) "FAQ: Issues with Efficacy Analysis of Clinical Trial Data Using SAS" , PharmaSUG2011, Paper PO08
10. Kirkwood, TBL (1979). "Geometric means and measures of dispersion". Biometrics 35 (4): 908–9.
11. Eisenberg, Dan (2015). "Improving qPCR telomere length assays: Controlling for well position effects increases statistical power". American Journal of Human Biology 27 (4): 570–5. doi:10.1002/ajhb.22690. PMID 25757675.
12. Broverman, Samuel A. (2001). Actex study manual, Course 1, Examination of the Society of Actuaries, Exam 1 of the Casualty Actuarial Society (2001 ed.). Winsted, CT: Actex Publications. p. 104. ISBN 9781566983969. Retrieved 7 June 2014.
13. Rodbard, D (October 1974). "Statistical quality control and routine data processing for radioimmunoassays and immunoradiometric assays.". Clinical Chemistry 20 (10): 1255–70. doi:10.1093/clinchem/20.10.1255. PMID 4370388.
14. Eisenberg, Dan T. A. (30 August 2016). "Telomere length measurement validity: the coefficient of variation is invalid and cannot be used to compare quantitative polymerase chain reaction and Southern blot telomere length measurement technique". International Journal of Epidemiology 45 (4): 1295–1298. doi:10.1093/ije/dyw191. ISSN 0300-5771. PMID 27581804.
15. Champernowne, D. G.; Cowell, F. A. (1999). Economic Inequality and Income Distribution. Cambridge University Press.
16. Campano, F.; Salvatore, D. (2006). Income distribution. Oxford University Press.
17. Bellu, Lorenzo Giovanni; Liberati, Paolo (2006). "Policy Impacts on Inequality – Simple Inequality Measures". Policy Support Service, Policy Assistance Division, FAO.
18. Eerkens, Jelmer W.; Bettinger, Robert L. (July 2001). "Techniques for Assessing Standardization in Artifact Assemblages: Can We Scale Material Variability?". American Antiquity 66 (3): 493–504. doi:10.2307/2694247.
19. Roux, Valentine (2003). "Ceramic Standardization and Intensity of Production: Quantifying Degrees of Specialization" (in en). American Antiquity 68 (4): 768–782. doi:10.2307/3557072. ISSN 0002-7316.
20. Bettinger, Robert L.; Eerkens, Jelmer (April 1999). "Point Typologies, Cultural Transmission, and the Spread of Bow-and-Arrow Technology in the Prehistoric Great Basin". American Antiquity 64 (2): 231–242. doi:10.2307/2694276.
21. Wang, Li-Ying; Marwick, Ben (October 2020). "Standardization of ceramic shape: A case study of Iron Age pottery from northeastern Taiwan". Journal of Archaeological Science: Reports 33: 102554. doi:10.1016/j.jasrep.2020.102554.
22. Krishnamoorthy, K.; Lee, Meesook (February 2014). "Improved tests for the equality of normal coefficients of variation". Computational Statistics 29 (1–2): 215–232. doi:10.1007/s00180-013-0445-2.
23. Marwick, Ben; Krishnamoorthy, K (2019). cvequality: Tests for the equality of coefficients of variation from multiple groups. R package version 0.2.0..
24. Hendricks, Walter A.; Robey, Kate W. (1936). "The Sampling Distribution of the Coefficient of Variation". The Annals of Mathematical Statistics 7 (3): 129–32. doi:10.1214/aoms/1177732503.
25. Iglevicz, Boris; Myers, Raymond (1970). "Comparisons of approximations to the percentage points of the sample coefficient of variation". Technometrics 12 (1): 166–169. doi:10.2307/1267363.
26. Bennett, B. M. (1976). "On an approximate test for homogeneity of coefficients of variation". Contributions to Applied Statistics Dedicated to A. Linder. Experientia Supplementum 22: 169–171. doi:10.1007/978-3-0348-5513-6_16. ISBN 978-3-0348-5515-0.
27. Vangel, Mark G. (1996). "Confidence intervals for a normal coefficient of variation". The American Statistician 50 (1): 21–26. doi:10.1080/00031305.1996.10473537. .
28. Feltz, Carol J; Miller, G. Edward (1996). "An asymptotic test for the equality of coefficients of variation from k populations". Statistics in Medicine 15 (6): 647. doi:10.1002/(SICI)1097-0258(19960330)15:6<647::AID-SIM184>3.0.CO;2-P.
29. Forkman, Johannes (2009). "Estimator and tests for common coefficients of variation in normal distributions". Communications in Statistics – Theory and Methods 38 (2): 21–26. doi:10.1080/03610920802187448. Retrieved 23 September 2013.
30. Krishnamoorthy, K; Lee, Meesook (2013). "Improved tests for the equality of normal coefficients of variation". Computational Statistics 29 (1–2): 215–232. doi:10.1007/s00180-013-0445-2.
31. Liu, Shuang (2012). Confidence Interval Estimation for Coefficient of Variation (Thesis). Georgia State University. p.3. Archived from the original on 1 March 2014. Retrieved 25 February 2014.
32. Lehmann, E. L. (1986). Testing Statistical Hypothesis. 2nd ed. New York: Wiley.
https://integralsandseries.in/?p=532
# Evaluating very nasty logarithmic integrals: Part II
In this post, we’ll evaluate some more nasty logarithmic integrals. Please read part 1 of this series if you haven’t done so already.
## Integral #3
We’ll start by finding a closed form for the integral: $$I_1 = \int_0^1 \frac{\log^2(1+x^2)}{1+x^2}dx$$ This integral can be reduced to Euler sums like our previous problem. But this time, the resulting Euler sums cannot be evaluated using the method of residues. Therefore, we’ll have to use a different approach.
Let us first consider the following integral: $$I_2 = \int_0^1 \frac{\log^2(1+ix)}{1+x^2}dx$$ Throughout this post, $\log$ denotes the principal branch of the logarithmic function defined by $\log z = \log|z| + i\text{arg}(z)$, with $-\pi < \text{arg}(z) \leq \pi$. We have \begin{aligned} I_2 &= \frac{1}{2}\int_0^1\log^2(1+ix)\left(\frac{1}{1+ix}+\frac{1}{1-ix} \right)dx \\ &= \frac{\log^3(1+ix)}{6i}\Big|_0^1 + \frac{1}{2}\int_0^1 \frac{\log^2(1+ix)}{1-ix}dx \\ &= \frac{\log^3(1+i)}{6i} + \frac{i}{2}\int_{\frac{1}{2}}^{\frac{1-i}{2}}\frac{\log^2(2(1-x))}{x}dx \\ &= \frac{\log^3(1+i)}{6i} + \frac{i}{2}\int_{\frac{1}{2}}^{\frac{1-i}{2}}\frac{\log^2(1-x)+\log^2(2)+2\log(2)\log(1-x)}{x}dx \\ &= \frac{\log^3(1+i)}{6i} + \frac{i\log^2(2)\log\left(1-i\right)}{2} + i\log(2) \left[\text{Li}_2\left(\frac{1}{2}\right)-\text{Li}_2\left(\frac{1-i}{2}\right) \right] \\ &\quad + \frac{i}{2}\int_{\frac{1}{2}}^{\frac{1-i}{2}}\frac{\log^2(1-x)}{x}dx \quad \color{blue}{\cdots (1)} \end{aligned} We can use equation (2) from (B) to evaluate $\int_{\frac{1}{2}}^{\frac{1-i}{2}}\frac{\log^2(1-x)}{x}dx$. \begin{aligned} \int_{\frac{1}{2}}^{\frac{1-i}{2}}\frac{\log^2(1-x)}{x}dx &= \log^2\left( \frac{1+i}{2}\right)\log\left(\frac{1-i}{2}\right) + 2\log\left(\frac{1+i}{2} \right)\text{Li}_2\left(\frac{1+i}{2}\right)-2\text{Li}_3\left(\frac{1+i}{2}\right) \\ &\quad +\log^3(2) + 2\log(2)\text{Li}_2\left(\frac{1}{2}\right) + 2\text{Li}_3\left(\frac{1}{2} \right) \quad \color{blue}{\cdots (2)} \end{aligned}
To simplify $\text{Li}_2\left(\frac{1+i}{2} \right)$ and $\text{Li}_2\left(\frac{1-i}{2} \right)$, we can use the following Dilogarithm identity: $$\text{Li}_2(1-z) + \text{Li}_2\left(1-z^{-1} \right) = -\frac{1}{2}\log^2(z)$$ This is easy to verify by differentiating both sides of the above equation with respect to $z$. Plugging in $z=\frac{1+i}{2}$ gives \begin{aligned} \text{Li}_2\left(\frac{1-i}{2}\right) &= -\text{Li}_2(i) - \frac{1}{2}\log^2\left(\frac{1+i}{2}\right) = \frac{5\pi^2}{96}-\frac{\log^2(2)}{8}+i\left(-G + \frac{\pi}{8}\log(2)\right) \\ \text{Li}_2\left(\frac{1+i}{2}\right) &= \overline{\text{Li}_2\left(\frac{1-i}{2}\right)} = \frac{5\pi^2}{96}-\frac{\log^2(2)}{8}-i\left(-G + \frac{\pi}{8}\log(2)\right) \end{aligned}
Now, we have everything needed to simplify equation (1). This is a tedious task so I used Mathematica to do it. The final result is \begin{aligned} I_2 &= -\frac{3\pi^3}{128}- \frac{G \log(2) }{2} + \frac{7\pi \log^2(2)}{32} + i\left( -\frac{ G \pi }{4} + \frac{7\pi^2 \log(2)}{192} + \frac{\log^3(2)}{48}+\frac{7}{8}\zeta(3) - \text{Li}_3\left(\frac{1+i}{2} \right)\right) \\ &\quad \color{blue}{\cdots (3)} \end{aligned} As of now, I am not aware of a closed form expression for $\text{Li}_3\left(\frac{1+i}{2} \right)$. So, we’ll leave it as it is. We can now extract $I_1$ from the real part of $I_2$. \begin{aligned} \text{Re }I_2 &= \int_0^1 \frac{\frac{1}{4}\log^2(1+x^2) - \arctan^2(x)}{1+x^2}dx \\ &= \frac{1}{4}\int_0^1 \frac{\log^2(1+x^2)}{1+x^2}dx - \frac{\arctan^3(x)}{3}\Big|_0^1 \\ &= \frac{1}{4}I_1 - \frac{\pi^3}{192} \end{aligned} Therefore, $$\boxed{I_1 = -2G\log(2) - \frac{7\pi^3}{96} + \frac{7\pi \log^2(2)}{8} + 4\text{ Im }\text{Li}_3\left(\frac{1+i}{2} \right)}\quad \color{blue}{\cdots (4)}$$ Now, let’s turn our attention to another integral: $$I_3 = \int_0^1 \frac{\log(x)\log(1+x^2)}{1+x^2}dx$$ Notice that with the help of some algebra, we can write: $$I_3 = -\frac{1}{2}\int_0^1 \frac{\log^2\left(\frac{x}{1+x^2} \right)}{1+x^2}dx + \frac{1}{2}\int_0^1 \frac{\log^2(1+x^2)}{1+x^2}dx + \frac{1}{2}\int_0^1 \frac{\log^2(x)}{1+x^2}dx$$ The leftmost integral can be dealt with using the trigonometric substitution $x=\tan \theta$: \begin{aligned} \int_0^1 \frac{\log^2\left(\frac{x}{1+x^2} \right)}{1+x^2}dx &= \int_0^{\frac{\pi}{4}} \log^2\left(\sin \theta \cos\theta \right) \; d\theta \\ &= \int_0^{\frac{\pi}{4}} \log^2\left(\frac{\sin(2\theta)}{2} \right)\; d\theta \\ &= \frac{1}{2}\int_0^{\frac{\pi}{2}}\log^2\left(\frac{\sin \theta}{2} \right)\; d\theta \\ &= \frac{1}{2}\lim_{s\to 1}\frac{d^2}{ds^2}\int_0^{\frac{\pi}{2}}\left(\frac{\sin\theta}{2} \right)^{s-1}d\theta \\ &= \frac{1}{2}\lim_{s\to 1}\frac{d^2}{ds^2}\left[\frac{2^{-s}\sqrt{\pi}\Gamma\left(\frac{s}{2}\right)}{\Gamma\left(\frac{1+s}{2} \right)} \right] \\ &= \frac{\pi^3}{48}+ \pi \log^2(2) \end{aligned} The middle integral has already been evaluated. As for the rightmost integral, we have: \begin{aligned} \int_0^1 \frac{\log^2(x)}{1+x^2}dx &= \sum_{n=0}^\infty (-1)^{n}\int_0^1 x^{2n}\log^2(x)\; dx \\ &= 2\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^3} \\ &= \frac{\pi^3}{16} \end{aligned} This gives us $$\boxed{I_3 =-\frac{\pi^3}{64} -G \log(2) - \frac{\pi \log^2(2)}{16} +2 \text{ Im }\text{Li}_3\left(\frac{1+i}{2}\right)} \quad \color{blue}{\cdots (5)}$$
One can also evaluate $I_4 = \int_0^1\frac{\log(1+x^2)\arctan(x)}{x}dx$ by noting that $$I_4 = \text{Im}\int_0^1 \frac{\log^2(1+ix)}{x}dx = \text{Im}\int_0^{-i}\frac{\log^2(1-x)}{x}dx$$ and using equation (3) from (B). The end result is: $$\boxed{I_4 =-\frac{3\pi^3}{64}+G\log(2)-\frac{\pi \log^2(2)}{16}+2\text{ Im }\text{Li}_3\left(\frac{1+i}{2}\right) } \quad \color{blue}{\cdots (6)}$$ Using $I_3$ and $I_4$, we can evaluate $I_5 = \int_0^1 \frac{\log(x)\arctan(x)}{x(1+x^2)}dx$ as follows: \begin{aligned} I_5 &= \int_0^1 \log(x)\arctan(x)\left(\frac{1}{x}-\frac{x}{1+x^2} \right) dx \\ &= \int_0^1 \frac{\log(x)\arctan(x)}{x}dx - \int_0^1 \frac{x\log(x)\arctan(x)}{1+x^2}dx \\ &= -\frac{1}{2}\int_0^1 \frac{\log^2(x)}{1+x^2}dx + \frac{1}{2} \int_0^1 \frac{\log(x)\log(1+x^2)}{1+x^2} dx + \frac{1}{2}\int_0^1 \frac{\log(1+x^2)\arctan(x)}{x}dx \quad (\text{IBP}) \\ &= -\frac{\pi^3}{32} + \frac{I_3 + I_4}{2} \\ &= -\frac{\pi^3}{16} - \frac{\pi \log^2(2)}{16} + 2\text{ Im }\text{Li}_3\left(\frac{1+i}{2}\right) \end{aligned} On the other hand, we have: \begin{aligned} I_5 &= \int_0^1 \log(x) \left(\sum_{n=0}^\infty (-1)^n \tilde{H}_n x^{2n} \right) dx\\ &= \sum_{n=0}^\infty (-1)^n \tilde{H}_n \int_0^1 x^{2n}\log(x) \; dx \\ &= -\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} \end{aligned} where $\tilde{H}_n = \sum_{j=0}^n \frac{1}{2j+1}$. This gives us an interesting Euler sum: $$\boxed{\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} = \frac{\pi^3}{16} + \frac{\pi \log^2(2)}{16} - 2\text{ Im }\text{Li}_3\left(\frac{1+i}{2}\right)}\quad \color{blue}{\cdots (7)}$$ Of course, one can proceed in a similar manner to create more crazy integrals. The following problem is left as an exercise for the reader.
Exercise 1: Using the method of residues, show that $$\sum_{n=0}^\infty \frac{(-1)^n\tilde{H}_n}{2n+1} = \frac{G}{2}+\frac{\pi \log(2)}{8} \quad \color{blue}{\cdots (8)}$$
## Integral #4
Many years ago, I encountered the following integral: $$I_6 = \int_0^1 \frac{x \arctan(x)\log(1-x^2)}{1+x^2}dx$$ At that time, I couldn’t find a solution to this problem. Hence, I ended up asking it on math.stackexchange.com. The answers that I received there involved evaluating complex logarithmic integrals by brute force. Recently, I discovered a much simpler way to solve it using the method of residues.
Let’s start by breaking down $I_6$ into Euler sums. \begin{aligned} I_6 &= \int_0^1 x\log(1-x^2) \left(\sum_{n=0}^\infty (-1)^n \tilde{H}_n x^{2n+1} \right)dx \\ &= \sum_{n=0}^\infty (-1)^n \tilde{H}_n \int_0^1 x^{2n+2}\log(1-x^2) dx \\ &= \sum_{n=0}^\infty (-1)^{n+1} \tilde{H}_n \left(\frac{\psi_0\left(n+\frac{5}{2} \right)+\gamma}{2n+3} \right) \\ &= \sum_{n=0}^\infty (-1)^{n+1}\left(\tilde{H}_{n+1}-\frac{1}{2n+3} \right)\left(\frac{-2\log(2)+2\tilde{H}_{n+1}}{2n+3} \right) \\ &= \sum_{n=0}^\infty (-1)^n \left(\tilde{H}_{n}-\frac{1}{2n+1} \right)\left(\frac{-2\log(2)+2\tilde{H}_{n}}{2n+1} \right) \\ &= -2\log(2)\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{2n+1}+2G\log(2) + 2\sum_{n=0}^\infty \frac{(-1)^n (\tilde{H}_n)^2}{2n+1} -2\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} \\ &= G\log(2) - \frac{\pi \log^2(2)}{4} + 2\sum_{n=0}^\infty \frac{(-1)^n (\tilde{H}_n)^2}{2n+1} -2\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} \quad \color{blue}{\cdots (9)} \end{aligned} The result of exercise 1 was used in the last step.
Now, integrate the function $f(z) = \pi\csc(\pi z) \frac{\left( \gamma + \psi_0 \left(-z+\frac{3}{2} \right)\right)^2}{-2z+1}$ over the positively oriented square, $C_N$, with vertices $\pm \left(N+\frac{1}{4}\right)\pm i\left(N+\frac{1}{4} \right)$. It takes a bit of effort to see that $$\lim_{N\to \infty}\int_{C_N} f(z)\; dz = 0$$ This implies that the sum of residues of $f(z)$ at its poles is equal to $0$. We have
\begin{aligned} \mathop{\text{Res}}\limits_{z=-n} f(z) &= (-1)^n \frac{\left(\gamma +\psi_0\left(n+\frac{3}{2}\right) \right)^2}{2n+1} = (-1)^n \frac{\left(-2\log(2)+2\tilde{H}_n \right)^2}{2n+1}, \quad n\in\{0,1,2,\cdots\} \\ \mathop{\text{Res}}\limits_{z=n} f(z) &= (-1)^{n-1} \frac{\left(\gamma+\psi_0\left(-n+\frac{3}{2} \right) \right)^2}{2n-1} = (-1)^{n-1} \frac{\left(-2\log(2)+2\tilde{H}_{n-1} -\frac{2}{2n-1}\right)^2}{2n-1}, \quad n\in\{1,2,3,\cdots\} \\ \mathop{\text{Res}}\limits_{z=\frac{2n+1}{2}} f(z) &= (-1)^{n-1} \frac{\pi H_n}{n} - (-1)^{n-1} \frac{3\pi}{2n^2}, \quad n\in\{1,2,3,\cdots\} \end{aligned}
Summing up the residues and performing some algebraic simplifications gives: \begin{aligned} &\; 8\sum_{n=0}^\infty \frac{(-1)^n (\tilde{H}_n)^2}{2n+1}-8\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2}-16\log(2)\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{2n+1}+ 4\sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)^3} \\ &\quad + 8\log(2)\sum_{n=0}^\infty \frac{(-1)^n }{(2n+1)^2} + \pi\sum_{n=1}^\infty \frac{(-1)^{n-1}H_n}{n} - \frac{3\pi}{2}\sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^2} = 0 \\ &\implies 8\left(\sum_{n=0}^\infty \frac{(-1)^n (\tilde{H}_n)^2}{2n+1}-\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} \right) + \frac{\pi^3}{8} + \pi \left(\frac{\pi^2}{12}-\frac{\log^2(2)}{2}\right)-\frac{3\pi}{2}\left(\frac{\pi^2}{12} \right) = 0 \\ &\implies \sum_{n=0}^\infty \frac{(-1)^n (\tilde{H}_n)^2}{2n+1}-\sum_{n=0}^\infty \frac{(-1)^n \tilde{H}_n}{(2n+1)^2} = -\frac{\pi^3}{96}+\frac{\pi \log^2(2)}{16} \quad \color{blue}{\cdots (10)} \end{aligned} In the above calculation, we used the result of exercise 1 and that $$\sum_{n=1}^\infty \frac{(-1)^{n-1}H_n}{n} = \frac{\pi^2}{12}-\frac{\log^2(2)}{2}$$ This follows from (A). Finally, plugging equation (10) into (9), gives $$\boxed{I_6 = -\frac{\pi^3}{48}-\frac{\pi}{8}\log^2 (2) +G\log (2)} \quad \color{blue}{\cdots (11)}$$
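As a numerical sanity check (our addition, not part of the original derivation), the boxed results (4) and (11) can be verified with mpmath:

```python
from mpmath import mp, quad, polylog, log, atan, pi, catalan, mpc

mp.dps = 30
G = catalan
ImLi3 = polylog(3, mpc(1, 1) / 2).imag          # Im Li_3((1+i)/2)

# Equation (4):  I1 = ∫_0^1 log^2(1+x^2)/(1+x^2) dx
I1_num = quad(lambda x: log(1 + x**2)**2 / (1 + x**2), [0, 1])
I1_cf = -2*G*log(2) - 7*pi**3/96 + 7*pi*log(2)**2/8 + 4*ImLi3

# Equation (11): I6 = ∫_0^1 x·arctan(x)·log(1-x^2)/(1+x^2) dx
I6_num = quad(lambda x: x * atan(x) * log(1 - x**2) / (1 + x**2), [0, 1])
I6_cf = -pi**3/48 - pi*log(2)**2/8 + G*log(2)

print(I1_num - I1_cf)   # ≈ 0 to working precision
print(I6_num - I6_cf)   # ≈ 0 to working precision
```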
https://www.bartleby.com/solution-answer/chapter-111-problem-12e-precalculus-mathematics-for-calculus-6th-edition-6th-edition/9780840068071/98c5a8ca-6e3e-4b02-a71e-0930124f86f6
# The equation that expresses the given statement
### Precalculus: Mathematics for Calculus
6th Edition
Stewart + 5 others
Publisher: Cengage Learning
ISBN: 9780840068071
#### Solutions
Chapter 1.11, Problem 12E
To determine: The equation that expresses the given statement.
## Answer to Problem 12E
The equation is A = kt/x³
### Explanation of Solution
Given:
A is directly proportional to t and inversely proportional to x³
Concept used:
If the quantities x and y are related by an equation

y = k/x

for some constant k ≠ 0, then y varies inversely as x, or y is inversely proportional to x.

The constant k is called the constant of proportionality.
Calculation:
If the quantities A, t, and x are related by an equation

A ∝ t/x³

then

A = kt/x³

for some constant k ≠ 0.

That is, A varies directly as t and inversely as x³; in particular, A is inversely proportional to x³.

The constant k is called the constant of proportionality.
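As an illustrative check (numbers of our own, not from the textbook): if k = 2, t = 3, and x = 2, then A = kt/x³ = (2·3)/2³ = 0.75, and doubling x to 4 divides A by 2³ = 8, giving A = 0.09375, exactly as inverse proportionality to x³ requires.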
https://math.stackexchange.com/questions/1269967/probability-and-expected-value-of-a-betting-game
# Probability and Expected Value of a betting game
Here is the problem I am trying to figure out:
Someone starts with X amount of money that they can bet on this betting game. The game is the person can wager whatever he wants, and when he places the bet he has a 49.8% chance to double his money, and 50.2% chance to lose it. If this player is up Y amount, so his total wealth is X + Y, what is the total expected amount of money he must bet in order to get back to having a total of X money.
Heres an example of the question:
If I have 100 dollars, and I play this game enough times where I’m up to 150 dollars. How much total money must I wager, whether it is 5 dollars ten times or 50 dollars once, in order to expect to be back at 100 dollars
After betting $\$X$, you get back on average $\$X(0.996)$, because you double $0.498$ of the time. So after betting $\$250Y$, you get back on average $\$249Y$. On the other hand, that doesn't answer your question: the time at which your expected return is $-Y$ might not be the expected time at which your return reaches $-Y$.
Suppose that M_n is the amount of money we start with before the n-th bet and that b_n is the proportion of that money that we wager on that specific bet. Our expected leftover money for that bet is:
$$E(L_n) = M_n \cdot (1 - b_n) + 0.498 \cdot b_n \cdot M_n \cdot 2 = (1 - 0.004 \cdot b_n) \cdot M_n$$
Of course it's very easy to notice that our expected leftover money of bet n is equal to the expected value of money we start with at bet n + 1:
$$E(L_n) = E(M_{n+1})$$ $$E(M_{n+1}) = (1 - 0.004 \cdot b_n) \cdot E(M_n)$$
Starting from the first bet we know:
$$E(M_1) = X + Y$$ $$E(M_2) = (1 - 0.004 \cdot b_1) \cdot (X + Y)$$
Right off the bat, we can tell the amount we need to bet to return to X in expectation right after the first bet.
$$X = (1 - 0.004 \cdot b_1) \cdot (X + Y)$$ $$b_1 = {{250 \cdot Y} \over {X + Y}}$$
Of course, keep in mind that you should accept a b_n solution only if it is equal to or lower than 1; you cannot bet more money than you have. In your example with 100 + 50 dollars, b_1 would be 250·50/150 ≈ 83.33, so it would be impossible to return to X in expected value after a single bet.
The procedure continues on with iterative substitutions...
$$E(M_3) = (1 - 0.004 \cdot b_2) \cdot (1 - 0.004 \cdot b_1) \cdot (X + Y)$$ $$E(M_4) = (1 - 0.004 \cdot b_3) \cdot (1 - 0.004 \cdot b_2) \cdot (1 - 0.004 \cdot b_1) \cdot (X + Y)$$
In the end it's about finding at least one solution of the following:
$$X = (X + Y) \cdot \prod _{n=1}^{N} (1- 0.004\,b_{{n}}) \text { subject to } [0 \le b_n \le 1] \forall n$$
Hopefully that helped.
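A Monte Carlo sketch of the point made in the first answer (our illustration; names and parameters are arbitrary): with a fixed stake w per round, each wagered dollar loses 0.004 in expectation, so by the optional stopping theorem the expected total wagered before the bankroll first returns to X is Y / 0.004 = 250·Y.

```python
import random

def avg_total_wagered(X=100.0, Y=50.0, w=10.0, trials=10_000):
    """Average total amount wagered until the bankroll first returns to X,
    betting a fixed stake w per round (Y is assumed to be a multiple of w)."""
    total = 0.0
    for _ in range(trials):
        money, wagered = X + Y, 0.0
        while money > X:                     # stop once back at X
            wagered += w
            money += w if random.random() < 0.498 else -w
        total += wagered
    return total / trials

random.seed(1)
print(avg_total_wagered())  # ≈ 250 * 50 = 12_500; single runs are noisy
                            # because the stopping time is heavy-tailed
```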
https://www.intechopen.com/chapters/66095
Open access peer-reviewed chapter
# Agro-Industrial Waste Revalorization: The Growing Biorefinery
Written By
Flora Beltrán-Ramírez, Domancar Orona-Tamayo, Ivette Cornejo-Corona, José Luz Nicacio González-Cervantes, José de Jesús Esparza-Claudio and Elizabeth Quintana-Rodríguez
Submitted: September 28th, 2018 Reviewed: December 15th, 2018 Published: March 11th, 2019
DOI: 10.5772/intechopen.83569
From the Edited Volume
## Biomass for Bioenergy
Edited by Abd El-Fatah Abomohra
## Abstract
Agro-industrial residues have been the spotlight of many research efforts worldwide, because some of their constituents are raw materials for a diverse variety of industrial products. This situation prevails today and will continue to grow in the future. In the agroindustry, diverse biomasses are subjected to distinct unit processes to provide value to different waste materials from the agriculture, food-processing, and alcoholic-beverage industries. In this chapter, we report an updated survey of different renewable organic materials, including agricultural wastes, that can be converted to bioenergy. Similarly, these wastes contain different bioactive compounds with excellent nutraceutical functions and high added value. In addition, biocomposites can be elaborated using fibers from wastes, with a wide variety of applications in the automotive and packaging industries. Vinasses derived from the tequila industry in Mexico offer great potential for the extraction of biocompounds, and we propose a process to obtain them. A perspective on market trends is given in this chapter for compounds derived from agro-industrial wastes. Adding value to these agro-industrial wastes can reduce the negative impact of their emission, discharge, or disposal, solve an environmental problem, and generate additional income.
### Keywords
• bioenergy
• biocomposites
• bioactive peptides
• biocompounds
• vinasses
## 1. Introduction
Agro-industrial residues provide an enormous potential to generate sustainable products and bioenergy. An integrated biorefinery is turning into a promising solution with multiple outputs (biofuels, bioactive biocompounds, and biomaterials). Most of the residues generated are destined for landfill or are disposed of in an uncontrolled way, causing environmental damage and economic loss. For that reason, it is necessary to develop sustainable management for them. Integral waste management is proposed within the concept of circular economy to exploit renewable resources. Circular economy is based on the concept of the biorefinery and the approach of reducing, reusing, and recycling waste, with the objective of recovering materials derived from waste and considering them as renewable resources [1].
A wide range of metabolites, materials, and energy can be obtained through the exploitation of agricultural residues, commercial-scale technologies being the bottleneck to produce marketable bioproducts. In this review, we address the multiple outputs that agro-industrial residues can provide, from bioenergy to the wide array of metabolites that can be extracted. In addition, we investigate the potential market for some products derived from residues, seeking their revalorization.
## 2. Production of bioenergy
Nowadays, due to the increase in population, it is necessary to find a sustainable solution for the growing demand for energy in the world. Fossil fuels are limited and nonrenewable resources; the use of biomass for energy production appears to be a solution to provide energy and reduce carbon dioxide emissions. The term biomass includes energy crops, residues, and other biological materials that can be used to produce renewable energy [2]. First-generation biofuels are produced from agricultural crops such as corn, sugarcane, soybean oil, and sunflower [3]. However, there is a conflict because these biomasses are also used for food, giving rise to the "food versus fuel" debate [4]. Additionally, emissions of greenhouse gases (GHG) are believed to be lower for second-generation biofuels than for first-generation fuels [3]. For these reasons, agro-industrial residues have gained attention due to their availability, and they include residues from the crop, food, and oil industries.
### 2.1 Solid fuels
Pellets are the most common solid biofuels used; they are cylindrical structures made by compression, commonly derived from agricultural residues, forest products, and wood industries [5]. Pellets are used mainly for house heating and in the industrial sector. Even though agro-industrial residues have less energy content than fossil fuels, their use presents great advantages such as the reduction of logistic costs, easy storage, and a great opportunity for the revalorization of these unused residues [5]. For pellet production, the biomass is treated to be compacted and densified; this includes drying, after which the biomass is milled to obtain particles of similar size [6]. Afterward, the material is pressed in a pelletizer, and pellets are packaged and stored. Some common methods to improve energy density are torrefaction, steam explosion, hydrothermal carbonization, and biological treatment. In torrefaction, dehydration and decarboxylation reactions occur, lowering the O/C and H/C ratios and increasing the heating value [7]. Steam explosion is a treatment with hot steam under pressure followed by decompression, which disintegrates the lignocellulosic structure [8]. Steam explosion treatment increased the heating value in pellets from different biomasses [9]. The international market of pellets derived from wood has grown, the USA, Canada, and Russia being the largest exporters to Europe, which is the main consumer in the world [10]. Pellets have several applications and uses, from residential heating to large-scale power plants. The growing demand for sustainable and renewable fuels gives agro-industrial residue pellets great potential to supply renewable energy.
### 2.2 Liquid fuels
Liquid fuels such as diesel and petrol are being replaced by liquid biofuels such as biodiesel, bio-oil, bioethanol, and butanol. Biodiesel is obtained, directly or indirectly, from feedstock oils such as waste cooking and frying oil, animal fats, fish and microalgae oil, and leather, winery, and agro-industrial wastes. Oleaginous microorganisms are used in the indirect route for biodiesel production; the lipids produced are extracted to be transformed into biofuel. Biodiesel production includes three main steps: pretreatment, transesterification, and separation. Pretreatment makes agro-industrial residues assimilable by the microorganisms and is categorized into acidic, basic, thermal, enzymatic, or combined treatments [11]. Another important aspect to consider is that during pretreatment, inhibitors of microbial growth such as furfural, acetate, and others can be formed, making it necessary to find tolerant strains or to detoxify the medium [12].
Bio-oils are obtained from biomass through two main processes: pyrolysis and liquefaction [13]. Pyrolysis has attracted more attention; fast pyrolysis of lignocellulosic biomass for bio-oil production is low cost compared to liquefaction, which produces low yield at high cost [14]. Due to their physicochemical characteristics, bio-oils cannot be used for fuel applications without previous treatment [13]. Treatments are based on the partial or total elimination of oxygen, and two catalytic routes have been proposed: cracking and hydrotreating. Pyrolysis of agro-industrial residues has been reported for sesame, mustard, Jatropha, palm kernel, cottonseed, and neem oil cakes, showing an additional value for these residues and reducing wastes [15].
Bioethanol is the most common biofuel, and its production involves steps such as pretreatment, saccharification, fermentation, and distillation [16]. Pretreatment allows cellulose to unwind from hemicellulose and lignin and become more available for enzymatic hydrolysis; commonly, physical, chemical, and biological treatments are used to achieve this purpose [17]. Enzymatic hydrolysis converts cellulose to glucose or galactose monomers and presents low toxicity as well as low utility cost and corrosion compared to chemical hydrolysis [18]. Biological treatment is an alternative to liberate cellulose with the use of microorganisms, mainly brown-rot, white-rot, or soft-rot fungi [19]. Once saccharification is achieved, fermentation is carried out with microorganisms able to produce ethanol. For microorganism selection, some parameters are necessary: broad substrate utilization that results in high ethanol yield and productivity, and tolerance to high ethanol concentrations, temperature, and the inhibitors present in the hydrolysate, for which genetically modified or engineered microorganisms are a good option to achieve complete utilization of sugars and better production [17]. Simultaneous saccharification and fermentation (SSF) and separate hydrolysis and fermentation (SHF) are the processes most commonly used for ethanol production [16]. SSF using olive pulp from oil extraction and the yeast Kluyveromyces marxianus showed ethanol yields of 76% [20].
Due to its higher heat of combustion, lower volatility, and the fact that it can be blended with gasoline at higher percentages without any modification of car engines, butanol is considered a promising renewable biofuel [21]. Butanol is produced through an anaerobic biological fermentation process using the Clostridia genus [22]. Agricultural residues can be used for the economical production of butanol. Simultaneous hydrolysis of wheat straw to sugars and fermentation to butanol resulted in an attractive option for ABE fermentation [23]. Rice bran has proved to be an effective substrate for butanol production using C. saccharoperbutylacetonicum [24]. Agricultural residues can be a promising source to be efficiently utilized as substrate for butanol production.
### 2.3 Gas fuels
Biobutanol is a product of the anaerobic biological process called ABE fermentation, which converts sugars, using the genus Clostridia, into butanol, acetone, and ethanol in a ratio of 6:3:1, respectively. In this process, Clostridia such as Clostridium acetobutylicum, Clostridium beijerinckii, Clostridium saccharoperbutylacetonicum, and Clostridium saccharoacetobutylicum have shown significant activity for the synthesis of butanol with higher yield.
Lignocellulosic biomass is a potential source of glucose, xylose, mannose, arabinose, and other organic compounds that can be anaerobically degraded to produce biogas [25]. Biogas is produced through anaerobic digestion in four steps, identified as hydrolysis, acidification, acetate production, and finally methane production, using a microorganism consortium [26]. The final product is a gas mixture composed mainly of methane and carbon dioxide with traces of hydrogen sulfide, ammonia, hydrogen, and carbon monoxide [27]. To enhance biogas production, it is necessary to apply pretreatments; the most commonly used are dilute acid hydrolysis, steam explosion, alkaline hydrolysis, and liquid hot water [28], while Song et al. tested nine pretreatments, showing that H2O2 and Ca(OH)2 enhance methane yields [29].
## 3. Biocompounds from agro-industrial wastes
### 3.1 Polyphenols
Phenolic compounds are a group of chemical compounds that are widely distributed in nature, and their basic structure varies from a simple molecule to a complex skeleton with hydroxyl substituents. These compounds are among the most desirable phytochemicals due to their antioxidant activities, which can be useful for the control of different human diseases or disorders [30]. Due to their reactivity, these compounds efficiently interact with important biomolecules such as DNA, lipids, proteins, and other cellular molecules to produce desired results, which are then used for designing natural therapeutic agents. Flavonoids, tannins, anthocyanins, and alkaloids are polyphenols with industrial significance and are present in fruits and plants. In addition, most phenolic complexes are found in barks, shells, husks, leaves, and roots [30]. Recently, agro-industrial wastes from fruits, vegetables, and crops have been subjected to different metabolite extraction methods as a potential source of industrial bioactive compound production. For example, the tomato processing industry generates approximately 8.5 million tons of waste globally [31], wastes such as seeds, prunings, and peels, which contain a high concentration of bioactive phytochemicals. In that sense, peels and seeds of tomatoes are a rich source of bioactive compounds such as carotenes, terpenes, sterols, tocopherols, and polyphenols [32], which exhibit excellent antimicrobial and antioxidant activities and a high content of dietary fiber. Another important crop that generates a high amount of waste is coffee production. Due to the heterogeneous nature of coffee waste, many authors are investigating its possible revalorization by determining its content of chemical compounds such as tannins and phenolic compounds. Exhausted and spent coffee ground wastes derived from industries, restaurants, and households are a valuable source of phenolic compounds. For example, in coffee waste derived from coffee industries, concentrations of polyphenols and tannins of around 6 and 4%, respectively, were found [33]. J. curcas and Ricinus communis are the most important energy plants for the biofuel industries; these plants generate high amounts of residues such as seed cake, pruning material, and seed shells with high concentrations of bioactive compounds. In fact, shells of these plants contain high contents of phenolic compounds and exhibit strong antioxidant activities [34]. Extracts of residual wastes of seeds, leaves, fruits, stems, and roots derived from R. communis exert different nutraceutical effects such as antioxidant and antimutagenic activities, as well as DNA protection against photooxidative stress [35].
### 3.2 Pigments
Agro-industrial wastes can be used as a feedstock for extraction and for different fermentation processes as a main source of microbial nutrients to produce biopigments useful in the food and cosmetic industries. Chemically synthesized food colorants used as additives in foods carry a risk of toxicity and hazardous effects for consumers, unlike natural pigments, which are safer, nontoxic, and nonhazardous for the environment [36]. The production of natural pigments can be based on direct plant extraction (e.g., anthocyanins, chlorophylls, carotenoids, and melanin) or on fermentative production through the cultivation of bacteria, yeast, fungi, and algae (e.g., phycocyanins, xanthophylls, and melanin) [37]. Cyanobacteria and microalgae produce high amounts of beta-carotene and astaxanthin, which are used in industry and have great commercial value in the pharmaceutical and food sectors [38]. Different microorganisms such as Streptomyces, Serratia, Cordyceps, Monascus, Paecilomyces, Penicillium atrovenetum, Penicillium herquei, Rhodotorula, Sarcina, Cryptococcus, Phaffia rhodozyma, Pseudomonas, Bacillus sp., Vibrio, Monascus purpureus, Achromobacter, Yarrowia, and Phaffia have shown their potential in pigment production as major sources of blue and yellow-red pigments [39]. Another important pigment is melanin, which is present in animals, plants, and microorganisms to provide stress protection against UV radiation, oxidation, and attack [40]. This pigment is used in the cosmetic and pharmaceutical industries for its photoprotective and antioxidant importance in different products. The use of agro-industrial wastes such as fruits is a potential source for melanin biosynthesis by microorganisms and an attractive choice for commercial-scale production. For example, fruit, wheat bran extracts, and cabbage wastes were used as substrates for Bacillus safensis [41], the fungus Auricularia auricula [42], and Pseudomonas sp. [43] for melanin production. Melanin is especially found in the seed coat of different plants; however, it is also found in other plant structures such as black spots of leaves, flowers, and seeds [44]. There are few reports related to melanin extraction from agro-industrial wastes. In that sense, sunflower husk derived from oil production was subjected to melanin extraction, and a technological scheme of melanin production from this waste was developed with a potential application as a prophylactic and medicinal agent for the treatment of human diseases [45]. Similarly, residues such as shells and epicarp from walnut contain high amounts of melanin with a high antioxidant capacity [46].
### 3.3 Bioactive peptides

Bioactive peptides are encrypted within protein sequences and exhibit diverse bioactivities relevant to human health, including anticancer, antihypertensive, antioxidant, and antidiabetic functions. These peptides vary in size, typically comprising 2–20 amino acid residues per molecule with molecular masses between 1 and 6 kDa; their physical properties and amino acid composition [47] make them very attractive for different applications in the pharmaceutical and food industries. Waste can contain many valuable substances, and through a suitable process or technology, this material can be converted into value-added products or raw materials for secondary processes. Residual wastes generated by agro-industries are a protein-rich source and have become an alternative for obtaining bioactive compounds, mainly from protein hydrolysates, and their extraction processes do not involve negative environmental impacts [48]. The principal residual wastes generated by agro-industrial activities are soybean meal, residues of oil plants, and rapeseed meal [48]. The market for peptide and protein drugs exceeds $40 billion/year and is growing at an accelerated pace [48]. The press cake remaining after oil extraction from J. curcas (nontoxic genotypes) in biodiesel production represents a potential new source of protein for food and feed uses. The seed cake of Jatropha contains a high concentration of storage proteins, mainly glutelin and globulin fractions [49], that encrypt peptides with antioxidant, chelating, and antihypertensive activities [50]. Some peptides have antibacterial activities that can reduce human infections. In that sense, a trypsin inhibitor was purified from castor bean seed cake waste; this 75-kDa peptide displayed antibacterial activity against Bacillus subtilis, Klebsiella pneumoniae, and Pseudomonas aeruginosa, which are important human pathogenic bacteria. In addition, microscopy studies indicated that this peptide disrupts the bacterial membrane, with loss of cytoplasmic content and ultimately bacterial death. The authors conclude that this peptide is a powerful candidate for the development of an alternative drug that may help reduce hospital-acquired infections [51]. Other seed cakes from oil plants can also be used for peptide characterization. For example, chia (Salvia hispanica) seed cake is a novel source for peptide extraction; it contains high amounts of proteins that encrypt different peptides with antioxidant, antidiabetic, and antihypertensive activities [46].

## 4. Biocomposites

Biocomposites are formed by a polymer matrix and natural fibers, which act as reinforcements. Six types of natural fibers are commonly used in biocomposite elaboration: grass and reed fibers (wheat, corn, and rice), core fibers (kenaf, jute, and hemp), bast fibers (jute, flax, hemp, ramie, kenaf, bamboo, and banana), seed fibers (coir, cotton, and kapok), leaf fibers (abaca, sisal, and pineapple), and other types (wood and roots) [52]. Natural fibers consist mainly of cellulose, hemicellulose, and lignin. Cellulose is the main component providing stability and strength to plant cell walls, and it directly influences biocomposite production for a given application, whether textile, automotive, or other.
Lignin is a highly cross-linked structure, and its amount directly influences the structure, properties, morphology, hydrolysis rate, and flexibility of the fibers. Moreover, fibers with more lignin contain less cellulose, and this balance also determines the application of the fiber. The fibers can be used in both thermoplastics and thermosets. Thermoplastic matrices include polypropylene (PP), high-density polyethylene (HDPE), polystyrene (PS), and polyvinyl chloride (PVC); thermosets include epoxy, polyester, and phenolic resins. In recent years, the number of studies focused on these materials has increased, because they are environmentally friendly and offer low production costs, easy workability, lightness, mechanical strength, and thermal insulation [53]. However, due to the hydrophilic nature of natural fibers and the hydrophobic nature of the polymer matrix, interfacial interaction between the two materials is poor, and the mechanical properties deteriorate as a result. For this reason, chemical and physical treatments have been developed to modify the surface of natural fibers and promote interfacial adhesion with the polymer matrix [54]. Notable chemical treatments include alkali, benzoylation, cyclohexane, silicon, peroxide, acetylation, sulfuric acid, and stearic acid treatments, as well as modification with maleic anhydride. Chemical modification provides greater dimensional stability and reduces water absorption capacity [55]. Alkaline treatment is the most widely used; it eliminates the lignin, wax, and oil of the fibers, since these components act as a barrier between the polymeric matrix and the fibers, and it also increases the surface roughness of the fibers [56]. Another alternative to improve compatibility between these materials is the use of compatibilizing agents, such as maleic anhydride grafted onto polyolefins (polypropylene or high-density polyethylene). The main factors affecting the processing and performance of biocomposites are the presence of moisture and the type, shape (short or long), concentration, and orientation of the fibers. The processing method depends on the type of fiber; for short-fiber-reinforced composites, for example, twin-screw extrusion with hydraulic pressing, injection molding, melt mixing, and single-screw extrusion are used [57]. New technologies can further simplify the processing of these materials. The main applications of biocomposites are automotive parts, packaging, the military industry, aerospace, medical articles, etc. The automotive sector's interest in developing biocomposites lies mainly in reducing the consumption of glass fiber, which is more expensive than natural fiber, while making vehicles lighter; this in turn reduces fuel consumption and is eco-friendly. In recent years, Toyota, Mercedes-Benz, Ford, Mitsubishi, and Daimler Chrysler AG have incorporated biodegradable materials in the exterior parts of some of their vehicles [58]. Pracella et al. [58] studied the functionalization, compatibilization, and properties of polypropylene (PP) composites with hemp fibers. The fibers were functionalized with glycidyl methacrylate (GMA), and PP/hemp composites at various compositions were prepared in a Brabender internal mixer.
All modified composites showed improved fiber dispersion in the polyolefin matrix and higher interfacial adhesion with respect to unmodified PP/hemp. The composites also showed an increase in Young's modulus compared to PP due to the addition of PP-g-GMA. Vilaseca et al. [59] studied the effect of alkali treatment on interfacial bonding in abaca fibers. Using an epoxy resin, they showed that alkali treatments modify the structure and chemical composition of abaca fibers; fibers treated in 5 wt.% NaOH showed excellent interfacial adhesion with the epoxy resin. Bledzki et al. [60] prepared polypropylene-based biocomposites with different types of natural fibers (jute, kenaf, abaca, and softwood) to compare their performance under the same processing conditions, and they found that the properties of biocomposites depend on the geometry of the fibers: kenaf provides strength, abaca gives the best impact resistance, jute fibers are the most thermally stable, and wood microfibers have good strength. Currently, several studies have been carried out with other types of fibers, including agave, castor plant, and J. curcas fibers [61]. During the tequila production process, large amounts of waste are produced (mostly fiber), and in the case of castor plant and J. curcas, only the seeds are used for oil extraction while the rest of the plant is discarded; an alternative for taking advantage of this waste is to use it to develop biocomposites. Zuccarello et al. [62] demonstrated that the agave variety plays an important role in the mechanical performance of the fibers, and they proposed an innovative, eco-friendly extraction method based on simple mechanical pressing of the leaves alternated with water immersions, avoiding alkaline treatment. They used an eco-friendly green epoxy and polylactic acid (PLA) to obtain renewable biocomposites. In another work, Zuccarello et al. [62] studied the effect of agave fiber size in epoxy resin and PLA composites, showing that short fibers fail to act as a reinforcement, while long fibers in PLA compounds achieve high mechanical strength. Vinayaka et al. [63] elaborated composites from polypropylene and fibers extracted from the outer layer of R. communis (castor plant); the fibers exhibited an elongation of 5%, higher than that of the common bast fibers jute and flax, and a strength of 350 MPa, similar to that of jute but lower than that of cotton. Biocomposites thus have enormous application potential and a growing market, especially in the automotive industry.

## 5. Vinasse and tequila production in Mexico, a case study

The production of alcohol and alcoholic beverages such as wine, beer, and tequila generates two main residues: a solid part called bagasse and a liquid part from the distillation known as vinasse [64]. In Mexico, more than 70% of the establishments that produce vinasse are tequila and mezcal producers, 20% are breweries, and the rest produce wines from grapes and other fruits. Given the denomination of origin of tequila and mezcal, and the high environmental and economic impacts involved, researchers consider the study of vinasse processing in Mexico essential.
In recent years, to decrease the high volume of residual vinasses, compost has been generated from bagasse and vinasse [65] and given to farmers for use on their crops; however, compost production consumes less than 50% of the vinasses, and the rest is discarded without treatment. Tequila production generates high volumes of vinasse, at a ratio of 10–12 L for each liter of tequila produced; these vinasses have a high organic content that damages ecosystems through anoxia and acidification of water. Biodigestion systems have been developed for the removal of solids [66]. Nevertheless, these treatments are expensive and have low efficiency, so industrial implementation has not been achieved. Because of their physicochemical characteristics, vinasses represent a major source of contamination that must be contained and treated to avoid serious damage to ecosystems [67]. Given current production volumes, more than 271 million liters of tequila were produced during 2017 (https://www.crt.org.mx/), from which between 2710 and 3252 million liters of vinasse would be obtained [68]. Two main industrial processes are used to produce the fermentation juice for tequila. In the first, the agave is cooked and squeezed to obtain the juice, followed by fermentation. The more recent and apparently more energy-efficient process is to squeeze the raw agave heart while spraying it with a little hot water, and to use this juice later in the fermentation process. Vinasse components have been compared between both processes, showing significant differences: the cooking process yields the highest contents of organic acids compared with steam spraying [69]. We carried out a characterization of the compounds present in the vinasse leaving the distillation process, in order to identify the majority of compounds and to propose an adequate purification process that preserves their properties and avoids their degradation. The components of vinasse vary between different fermentation processes, since they depend on the raw materials; the cooking time determines the characteristics of the juices or liquors, the fermentation process, and the distillation variables, so a characterization study must be carried out for each vinasse. Because vinasse has been treated as a residue, no measures are usually taken to prevent its degradation or contamination. To obtain reliable results, it is important to collect fresh vinasse free of foreign contaminants and to keep it cool, clean, and protected from light to prevent its degradation. Vinasse is a complex mixture containing organic compounds with very different chemical characteristics. In addition, the oils and fats it contains solubilize another group of compounds, making this mixture difficult to separate. An integrated approach is required to allow the correct treatment of each component of the vinasse. Three general stages of separation are proposed, using as a model the composition of the vinasse previously characterized (Figure 1).
The first stage is the separation of solids and liquids. The second is the separation of water-soluble compounds, such as alcohol traces and polar compounds, from the solids belonging to the organic matter generated during fermentation and other solids from the culture broth (carbohydrates, proteins, and mineral salts, among others). The final stage is the generation of value-added products: organic compounds and solids with high organic matter content, which can easily be recycled as fertilizer for industrial use. Each stage requires the implementation of independent processes, but together they guarantee water that complies with current regulations and is easy to reuse. The feasibility of obtaining compounds with antioxidant capacity of commercial interest from vinasse represents an income that offsets the cost of the entire process. As mentioned previously, the treatment of vinasse in Mexico has been limited mainly because there is no economically viable technology that companies can implement. Different types of treatment are in use, depending mainly on the size of the company and the total production volume; Table 1 summarizes the strategies used to treat vinasse.

| Classification | Treatment | Application | Advantage |
|---|---|---|---|
| Pretreatment | Low temperature | Pool circulation | Economic and popular |
| Pretreatment | pH neutralization | Ca(OH)2 addition | Economic and popular |
| Primary treatment | Sedimentation pools | Storage | Easy for industrial volumes |
| Primary treatment | Air flash floating | Polymer addition | Easy for industrial volumes |
| Physicochemical treatment | Coagulation | Al2(SO4)3 addition | Good removal of solids |
| Physicochemical treatment | Flocculation | Cationic polymer addition | Good removal of solids |
| Biological | Anaerobic fermentation | Biodigestion | Methane generation |
| Biological | Acidogenesis | Biodigestion | H2 and CO2 generation |
| New treatments | Oxidation | Redox reaction using ozone, H2O2, UV radiation, or Cl | Removal of color, odor, and organic matter |

### Table 1.

Treatments currently used for the disposal of vinasse.

Pretreatments and primary treatments are the most commonly used, as they are economical and simple to implement at any scale. Pretreatments are useful only for acidity reduction; they do not eliminate organic load or color. Sedimentation ponds allow the removal of 80% of the settleable solids but do not reduce the organic load or fats. The flotation strategy consists of applying air together with a polymer that accelerates the separation of soluble solids, and it is used as preparation for a biological process. This process is useful for the removal of solids, but it is not efficient at reducing the chemical oxygen demand (COD) and, in addition, is expensive at industrial scales. Moreover, these strategies represent a source of soil and subsoil contamination by infiltration. Physicochemical treatments are mostly used at pilot scale and generally proceed in two stages: first a coagulant is added to agglomerate soluble solids, and then a flocculant is added for removal. The removal efficiency is 20–30%, and the ability of these treatments to remove color is being studied; at the laboratory level, a 70% reduction in color and a 30% reduction in COD have been achieved. Although a 100% removal efficiency using a cationic polymer has been reported [70], to date it has not been implemented at industrial scale. The cost of these processes is estimated at 3.8 USD/kg of vinasse, but the ecological impact of heavy metal emission and the reaction of chlorine salts with organic matter increases rather than decreases the level of toxicity.
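These figures help explain why industrial adoption remains limited. A rough order-of-magnitude check of the national treatment cost, combining the cost and volume estimates above (a sketch; the density of roughly 1 kg/L is our assumption):

```python
# Order-of-magnitude cost of physicochemical treatment at national scale,
# combining the figures cited above (a vinasse density of ~1 kg/L is assumed).
vinasse_liters = 2.71e9   # lower bound of the 2017 vinasse estimate
cost_per_kg = 3.8         # USD per kg of vinasse treated

total_cost = vinasse_liters * cost_per_kg  # kg ~= L under the density assumption
print(f"~{total_cost / 1e9:.1f} billion USD per year")  # ~10.3 billion USD
```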
The main coagulants currently used in industry have a metallic composition, and some environmentally friendly alternatives are being studied, including sugar polymers from plant species such as mesquite gum, shrimp chitosan, and other vegetable gums [71]. Although these alternatives have achieved removal percentages higher than those of metallic salts, the process of obtaining them is still expensive and uncommon.

## 6. Market of the bioproducts derived from agro-industrial wastes

In recent years, the exploitation of agricultural waste for the development of new products with commercial value has been intensively investigated. New sectors have appeared on the global scene with great growth opportunities in the global market. A search in the specialized engine "Web of Science," using the keyword "waste" in conjunction with the descriptive words of each topic, shows that the sectors with the greatest number of publications are biocomposites and peptides with antioxidant activity. Even though research in these fields remains limited at present, it has great potential for growth in the global market (Figure 2). Globally, sectors related to products from agro-industrial residues are growing, and the corresponding increase in publications shows the enormous potential to enter and take a position in the market. By looking at the market size for each compound, new product development strategies can be directed toward the sectors with the highest economic growth, creating interest among companies looking for innovation in each product. The main growing market is in phytochemicals, with the extraction of carotenoids, flavonoids, and anthocyanins, among others, for potential use in the food industry. Valorization can be achieved for residues such as soybean residues from oil pressing, which are rich in phytochemical compounds [72]. Thus, there is growing interest in agro-industrial residues as sources of high-value products potentially useful as constituents, flavors, and antioxidants in food and cosmetics [73]. The phytochemical market is becoming more competitive with the entry of companies such as Cargill, Hormel, and the Doehler group, which ensures a growing market. In the pharmaceutical industry, bioactive peptides are considered a growing market and represent a potential route to more efficacious disease treatment; in addition, peptides promise to combine lower production costs with high specificity. By 2025, the market is projected to grow to nearly USD $48.04 billion (Figure 2). Current trends indicate a bright future for bioactive peptides and position them as firm candidates for growth and innovation in the pharmaceutical industry, with companies such as Eli Lilly and Pfizer already participating [74]. Biocomposites, meanwhile, are alternatives to conventional petroleum-derived materials and are becoming increasingly utilized in a great variety of applications [75]. This increased research activity is reflected in the number of publications, which indicates a strong trend toward applications of eco-friendly materials. Kenaf fiber, for example, has been used to reinforce polyurethane composites, improving their mechanical and thermal properties [76]. The global biocomposite market is estimated to grow to USD $46.30 billion by 2025.
Trends indicate that rising awareness among consumers of the need to replace plastics with biodegradable and environmentally favorable alternatives is driving market growth [77]. The automotive sector is a rising market: the search for new materials that increase passenger safety and reduce vehicle weight spurs demand [78]. Biomass pellets are an emerging market with great potential. The European Union has been the primary market for the global production and consumption of pellets for residential and district heating, with East Asia predicted to become the second largest consumer [6]. Closing the circle "from the field to the hand of the consumer" creates new opportunities to transform agricultural activity into agro-industrial activity centered on a circular economy. The circular economy advocates the reincorporation of residues into the economy; wastes become a transient phase in an ideally perpetual utilization cycle rather than objects of environmentally sound disposal [1].

## 7. Conclusions

Utilization of agro-industrial residual wastes can help to reduce their accumulation and to avoid environmental contamination, pollution-related disease, ecological damage, and other associated problems. Nowadays, there is growing interest among researchers and industry in the different substances and properties exhibited by agro-industrial residues for obtaining value-added products. Residual wastes have enormous potential for revalorization: producing solid, liquid, and gaseous fuels and yielding bioactive compounds such as polyphenols, peptides, and melanin. In addition, fibers from residues can be used to elaborate biocomposites, mainly for the automotive industry. New methods are proposed to obtain value-added products from vinasses, which are in demand beyond the pharmaceutical industry. All of these compounds and mixtures are profitable in a commercial market that is growing, with high potential and applications in new sectors. In that sense, the future is here.

## Acknowledgments

We want to thank Alejandro Carreon for his kind suggestions on an early version of this manuscript.

## Conflicts of interest

The authors declare no potential conflicts of interest.

## Funding

This research was supported by the project Bioturbosin Cluster (208090) of Consejo Nacional de Ciencia y Tecnología (CONACYT) and Secretaria de Energia (SENER) from Mexico.

## References

1. Velis C. Circular economy and global secondary material supply chains. Waste Management & Research. 2015;33(5):389-391. DOI: 10.1177/0734242X15587641
2. Guldhe A, Singh B, Renuka N, Singh P, Misra R, Bux F. Bioenergy: A sustainable approach for cleaner environment. In: Bauddh K, Singh B, Korstad J, editors. Phytoremediation Potential of Bioenergy Plants. Singapore: Springer Singapore; 2017. pp. 47-62. DOI: 10.1007/978-981-10-3084-0_2
3. Jeihanipour A, Bashiri R. Perspective of biofuels from wastes. In: Lignocellulose-Based Bioproducts. Switzerland: Springer International Publishing; 2015. pp. 37-83. DOI: 10.1007/978-3-319-14033-9
4. Suurs RAA, Hekkert MP. Competition between first and second generation technologies: Lessons from the formation of a biofuels innovation system in the Netherlands. Energy. 2009;34(5):669-679. DOI: 10.1016/j.energy.2008.09.002
5. Tauro R, García CA, Skutsch M, Masera O. The potential for sustainable biomass pellets in Mexico: An analysis of energy potential, logistic costs and market demand. Renewable and Sustainable Energy Reviews. 2018;82:380-389. Available from: http://www.sciencedirect.com/science/article/pii/S1364032117312911
6. Nunes LJR, Matias JCO, Catalão JPS. Mixed biomass pellets for thermal energy production: A review of combustion models. Applied Energy. 2014;127:135-140
7. Wang G, Luo Y, Deng J, Kuang J, Zhang Y. Pretreatment of biomass by torrefaction. Chinese Science Bulletin. 2011;56(14):1442-1448. DOI: 10.1007/s11434-010-4143-y
8. Brodeur G, Yau E, Badal K, Collier J, Ramachandran KB, Ramakrishnan S. Chemical and physicochemical pretreatment of lignocellulosic biomass: A review. Enzyme Research. 2011;2011:17. DOI: 10.4061/2011/787532
9. Lam PS, Lam PY, Sokhansanj S, Lim CJ, Bi XT, Stephen JD, et al. Steam explosion of oil palm residues for the production of durable pellets. Applied Energy. 2015;141:160-166. DOI: 10.1016/j.apenergy.2014.12.029
10. Goetzl A. Developments in the global trade of wood pellets. Working Papers, US International Trade Commission. 2015;(ID-39). Available from: https://www.usitc.gov/publications/332/wood_pellets_id-039_final_0.pdf
11. Wang R, Shaarani SM, Godoy LC, Melikoglu M, Vergara CS, Koutinas A, et al. Bioconversion of rapeseed meal for the production of a generic microbial feedstock. Enzyme and Microbial Technology. 2010;47(3):77-83. DOI: 10.1016/j.enzmictec.2010.05.005
12. Leiva-Candia DE, Pinzi S, Redel-Macías MD, Koutinas A, Webb C, Dorado MP. The potential for agro-industrial waste utilization using oleaginous yeast for the production of biodiesel. Fuel. 2014;123:33-42. Available from: http://www.sciencedirect.com/science/article/pii/S0016236114000647
13. Graça I, Lopes JM, Cerqueira HS, Ribeiro MF. Bio-oils upgrading for second generation biofuels. Industrial and Engineering Chemistry Research. 2013;52(1):275-287. DOI: 10.1021/ie301714x
14. No S-Y. Application of bio-oils from lignocellulosic biomass to transportation, heat and power generation—A review. Renewable and Sustainable Energy Reviews. 2014;40:1108-1125. Available from: http://www.sciencedirect.com/science/article/pii/S1364032114005796
15. Alper K, Tekin K, Karagöz S. Pyrolysis of agricultural residues for bio-oil production. Clean Technologies and Environmental Policy. 2015;17(1):211-223. DOI: 10.1007/s10098-014-0778-8
16. Gupta A, Verma JP. Sustainable bio-ethanol production from agro-residues: A review. Renewable and Sustainable Energy Reviews. 2015;41:550-567. Available from: http://www.sciencedirect.com/science/article/pii/S1364032114007084
17. Geddes CC, Peterson JJ, Roslander C, Zacchi G, Mullinnix MT, Shanmugam KT, et al. Optimizing the saccharification of sugar cane bagasse using dilute phosphoric acid followed by fungal cellulases. Bioresource Technology. 2010;101(6):1851-1857. Available from: http://www.sciencedirect.com/science/article/pii/S0960852409013200
18. Sarkar N, Ghosh SK, Bannerjee S, Aikat K. Bioethanol production from agricultural wastes: An overview. Renewable Energy. 2012;37(1):19-27. Available from: http://www.sciencedirect.com/science/article/pii/S096014811100382X
19. Sun Y, Cheng J. Hydrolysis of lignocellulosic materials for ethanol production: A review. Bioresource Technology. 2002;83(1):1-11. DOI: 10.1016/S0960-8524(01)00212-7
20. Ballesteros I, Oliva JM, Saez F, Ballesteros M. Ethanol production from lignocellulosic byproducts of olive oil extraction. Applied Biochemistry and Biotechnology. 2001;91(1-9):237-252. DOI: 10.1385/ABAB:91-93:1-9:237
21. Maiti S, Sarma SJ, Brar SK, Le Bihan Y, Drogui P, Buelna G, et al. Agro-industrial wastes as feedstock for sustainable bio-production of butanol by Clostridium beijerinckii. Food and Bioproducts Processing. 2016;98:217-226. DOI: 10.1016/j.fbp.2016.01.002
22. Kumar M, Gayen K. Developments in biobutanol production: New insights. Applied Energy. 2011;88(6):1999-2012. Available from: http://www.sciencedirect.com/science/article/pii/S0306261910005751
23. Qureshi N, Saha BC, Hector RE, Hughes SR, Cotta MA. Butanol production from wheat straw by simultaneous saccharification and fermentation using Clostridium beijerinckii: I. Batch fermentation. Biomass and Bioenergy. 2008;32(2):168-175. DOI: 10.1016/j.biombioe.2007.07.004
24. Al-Shorgani NKN, Kalil MS, Yusoff WMW. Biobutanol production from rice bran and de-oiled rice bran by Clostridium saccharoperbutylacetonicum N1-4. Bioprocess and Biosystems Engineering. 2012;35(5):817-826. DOI: 10.1007/s00449-011-0664-2
25. Deublein D, Steinhauser A. Biogas from Waste and Renewable Resources: An Introduction. Germany: John Wiley & Sons; 2011. DOI: 10.1002/9783527621705
26. Solarte-Toro JC, Chacón-Pérez Y, Cardona-Alzate CA. Evaluation of biogas and syngas as energy vectors for heat and power generation using lignocellulosic biomass as raw material. Electronic Journal of Biotechnology. 2018;33:56-62. DOI: 10.1016/j.ejbt.2018.03.005
27. Wellinger A, Murphy JD, Baxter D. The Biogas Handbook: Science, Production and Applications. United Kingdom: Woodhead Publishing/Elsevier; 2013. Available from: http://www.iea-biogas.net
28. Zheng Y, Zhao J, Xu F, Li Y. Pretreatment of lignocellulosic biomass for enhanced biogas production. Progress in Energy and Combustion Science. 2014;42:35-53. DOI: 10.1016/j.pecs.2014.01.001
29. Song Z, Liu X, Yan Z, Yuan Y, Liao Y. Comparison of seven chemical pretreatments of corn straw for improving methane yield by anaerobic digestion. PLoS One. 2014;9(4):e93801. DOI: 10.1371/journal.pone.0093801
30. Ignat I, Volf I, Popa VI. A critical review of methods for characterisation of polyphenolic compounds in fruits and vegetables. Food Chemistry. 2011;126(4):1821-1835. DOI: 10.1016/j.foodchem.2010.12.026
31. Ayala-Zavala JF, et al. Antioxidant enrichment and antimicrobial protection of fresh-cut fruits using their own byproducts: Looking for integral exploitation. Journal of Food Science. 2010;75(8):R175-R181. DOI: 10.1111/j.1750-3841.2010.01792.x
32. Joshi VK, Kumar A, Kumar V. Antimicrobial, antioxidant and phyto-chemicals from fruit and vegetable wastes: A review. International Journal of Food and Fermentation Technology. 2012;2(2):123. Available from: http://ndpublisher.in/admin/issues/ijfftv2n2c.pdf
33. Yusuf M. Agro-industrial waste materials and their recycled value-added applications. Handbook of Ecomaterials. 2017:1-11. DOI: 10.1007/978-3-319-48281-1_48-1
34. Pujol D, Liu C, Gominho J, Olivella MÀ, Fiol N, Villaescusa I, et al. The chemical composition of exhausted coffee waste. Industrial Crops and Products. 2013;50:423-429. DOI: 10.1016/j.indcrop.2013.07.056
35. Fu R, Zhang Y, Guo Y, Liu F, Chen F. Determination of phenolic contents and antioxidant activities of extracts of Jatropha curcas L. seed shell, a by-product, a new source of natural antioxidant. Industrial Crops and Products. 2014;58:265-270. DOI: 10.1016/j.indcrop.2014.04.031
36. Abbas M, Ali A, Arshad M, Atta A, Mehmood Z, Tahir IM, et al. Mutagenicity, cytotoxic and antioxidant activities of Ricinus communis different parts. Chemistry Central Journal. 2018;12(1):3. DOI: 10.1186/s13065-018-0370-0
37. Mishra B, Varjani S, Varma GKS. Agro-industrial by-products in the synthesis of food grade microbial pigments: An eco-friendly alternative. In: Green Bio-processes. Singapore: Springer; 2019. pp. 245-265. DOI: 10.1007/978-981-13-3263-0_13
38. Kantifedaki A, Kachrimanidou V, Mallouchos A, Papanikolaou S, Koutinas AA. Orange processing waste valorisation for the production of bio-based pigments using the fungal strains Monascus purpureus and Penicillium purpurogenum. Journal of Cleaner Production. 2018;185:882-890. DOI: 10.1016/j.jclepro.2018.03.032
39. Rodrigues DB, Flores ÉMM, Barin JS, Mercadante AZ, Jacob-Lopes E, Zepka LQ. Production of carotenoids from microalgae cultivated using agroindustrial wastes. Food Research International. 2014;65:144-148. DOI: 10.1016/j.foodres.2014.06.037
40. Tarangini K, Mishra S. Production of melanin by soil microbial isolate on fruit waste extract: Two step optimization of key parameters. Biotechnology Reports. 2014;4:139-146. Available from: https://pdfs.semanticscholar.org/1832/5493d429f8d8986c892b637f9f6aa28065ec.pdf
41. Zou Y, Hu W, Ma K, Tian M. Fermentative production of melanin by the fungus Auricularia auricula using wheat bran extract as major nutrient source. Food Science and Technology Research. 2017;23(1):23-29. DOI: 10.1111/jfpp.12909
42. Tarangini K, Mishra S. Production, characterization and analysis of melanin from isolated marine Pseudomonas sp. using vegetable waste. Research Journal of Engineering Sciences. 2013;2278:9472. Available from: https://pdfs.semanticscholar.org/1832/5493d429f8d8986c892b637f9f6aa28065ec.pdf
43. Keles Y, Özdemir Ö. Extraction, purification, antioxidant properties and stability conditions of phytomelanin pigment on the sunflower seeds. International Journal of Secondary Metabolite. 2018;5(2):140-148. DOI: 10.21448/ijsm.377470
44. Kartushina YN, Nefedieva EE, Sevriukova GA, Gracheva NV, Zheltobryukhov VF. Technological decision of extraction of melanin from the waste of production of sunflower-seed oil. In: IOP Conference Series: Earth and Environmental Science. IOP Publishing; 2017. p. 12014. DOI: 10.1088/1755-1315/66/1/012014
45. Lan J, Wang J, Wang F. Study on the antioxidant capacity of the melanin from walnut shell and walnut epicarp. Science and Technology of Food Industry. 2012;15:19. Available from: http://en.cnki.com.cn/Article_en/CJFDTOTAL-SPKJ201215019.htm
46. Orona-Tamayo D, Valverde ME, Paredes-López O. Bioactive peptides from selected Latin American food crops – A nutraceutical and molecular approach. Critical Reviews in Food Science and Nutrition. 2018:1-25. DOI: 10.1080/10408398.2018.1434480. Available from: https://www.tandfonline.com/doi/full/10.1080/10408398.2018.1434480
47. Lemes A, Sala L, Ores J, Braga A, Egea M, Fernandes K. A review of the latest advances in encrypted bioactive peptides from protein-rich waste. International Journal of Molecular Sciences. 2016;17(6):950. DOI: 10.3390/ijms17060950
48. León-Villanueva A, Huerta-Ocampo JA, Barrera-Pacheco A, Medina-Godoy S, de la Rosa APB. Proteomic analysis of non-toxic Jatropha curcas byproduct cake: Fractionation and identification of the major components. Industrial Crops and Products. 2018;111:694-704. DOI: 10.1016/j.indcrop.2017.11.046
49. Devappa RK, Makkar HPS, Becker K. Nutritional, biochemical, and pharmaceutical potential of proteins and peptides from Jatropha. Journal of Agricultural and Food Chemistry. 2010;58(11):6543-6555. DOI: 10.1021/jf100003z
50. Souza PFN, Vasconcelos IM, Silva FDA, Moreno FB, Monteiro-Moreira ACO, Alencar LMR, et al. A 2S albumin from the seed cake of Ricinus communis inhibits trypsin and has strong antibacterial activity against human pathogenic bacteria. Journal of Natural Products. 2016;79(10):2423-2431. DOI: 10.1021/acs.jnatprod.5b01096
51. Faruk O, Bledzki AK, Fink HP, Sain M. Biocomposites reinforced with natural fibers: 2000-2010. Progress in Polymer Science. 2012;37(11):1552-1596. DOI: 10.1016/j.progpolymsci.2012.04.003
52. Zuccarello B, Scaffaro R. Experimental analysis and micromechanical models of high performance renewable agave reinforced biocomposites. Composites Part B: Engineering. 2017;119:141-152. DOI: 10.1016/j.compositesb.2017.03.056
53. Sengupta A, Pattnaik S, Sutar MK. Biocomposites: An overview. International Journal of Engineering, Technology, Science and Research (IJETSR). 2017;4:2394-3386. Available from: www.ijetsr.com
54. Xie Y, Xiao Z, Gruneberg T, Militz H, Hill CAS, Steuernagel L, et al. Effects of chemical modification of wood particles with glutaraldehyde and 1,3-dimethylol-4,5-dihydroxyethyleneurea on properties of the resulting polypropylene composites. Composites Science and Technology. 2010;70(13):2003-2011. DOI: 10.1016/j.compscitech.2010.07.024
55. Kaewkuk S, Sutapun W, Jarukumjorn K. Effects of interfacial modification and fiber content on physical properties of sisal fiber/polypropylene composites. Composites Part B: Engineering. 2013;45(1):544-549. DOI: 10.1016/j.compositesb.2012.07.036
56. Sood M, Dwivedi G. Effect of fiber treatment on flexural properties of natural fiber reinforced composites: A review. Egyptian Journal of Petroleum. 2018;27:775-783. DOI: 10.1016/j.ejpe.2017.11.005
57. Koronis G, Silva A, Fontul M. Green composites: A review of adequate materials for automotive applications. Composites Part B: Engineering. 2013;44(1):120-127. DOI: 10.1016/j.compositesb.2012.07.004
58. Pracella M, Chionna D, Anguillesi I, Kulinski Z, Piorkowska E. Functionalization, compatibilization and properties of polypropylene composites with hemp fibres. Composites Science and Technology. 2006;66(13):2218-2230. DOI: 10.1016/j.compscitech.2005.12.006
59. Vilaseca F, Valadez-Gonzalez A, Herrera-Franco PJ, Pèlach MÀ, López JP, Mutjé P. Biocomposites from abaca strands and polypropylene. Part I: Evaluation of the tensile properties. Bioresource Technology. 2010;101(1):387-395. Available from: http://www.sciencedirect.com/science/article/pii/S0960852409009638
60. Bledzki AK, Franciszczak P, Osman Z, Elbadawi M. Polypropylene biocomposites reinforced with softwood, abaca, jute, and kenaf fibers. Industrial Crops and Products. 2015;70:91-99. DOI: 10.1016/j.indcrop.2015.03.013
61. Khalil HPSA, Aprilia NAS, Bhat AH, Jawaid M, Paridah MT, Rudi D. A Jatropha biomass as renewable materials for biocomposites and its applications. Renewable and Sustainable Energy Reviews. 2013;22:667-685. DOI: 10.1016/j.rser.2012.12.036
62. Zuccarello B, Zingales M. Toward high performance renewable agave reinforced biocomposites: Optimization of fiber performance and fiber-matrix adhesion analysis. Composites Part B: Engineering. 2017;122:109-120. DOI: 10.1016/j.compositesb.2017.04.011
63. Vinayaka DL, Guna V, Madhavi D, Arpitha M, Reddy N. Ricinus communis plant residues as a source for natural cellulose fibers potentially exploitable in polymer composites. Industrial Crops and Products. 2017;100:126-131. DOI: 10.1016/j.indcrop.2017.02.019
64. España-Gamboa E, Mijangos-Cortes J, Barahona-Perez L, Dominguez-Maldonado J, Hernández-Zarate G, Alzate-Gaviria L. Vinasses: Characterization and treatments. Waste Management & Research. 2011;29(12):1235-1250. DOI: 10.1177/0734242X10387313
65. Martínez-Gutiérrez GA, Ortiz-Hernández YD, Aquino-Bolaños T, Bautista-Cruz A, López-Cruz JY. Properties of Agave angustifolia Haw. bagasse before and after its composting. Comunicata Scientiae. 2015;6(4):418-429
66. Moraes BS, Zaiat M, Bonomi A. Anaerobic digestion of vinasse from sugarcane ethanol production in Brazil: Challenges and perspectives. Renewable and Sustainable Energy Reviews. 2015;44:888-903. DOI: 10.1016/j.rser.2015.01.023
67. Christofoletti CA, Escher JP, Correia JE, Marinho JFU, Fontanetti CS. Sugarcane vinasse: Environmental implications of its use. Waste Management. 2013;33(12):2752-2761. DOI: 10.1016/j.wasman.2013.09.005
68. López-López A, Davila-Vazquez G, León-Becerril E, Villegas-García E, Gallardo-Valdez J. Tequila vinasses: Generation and full scale treatment processes. Reviews in Environmental Science and Bio/Technology. 2010;9(2):109-116. DOI: 10.1007/s11157-010-9204-9
69. Rodríguez-Félix E, Contreras-Ramos SM, Davila-Vazquez G, Rodríguez-Campos J, Marino-Marmolejo EN. Identification and quantification of volatile compounds found in vinasses from two different processes of tequila production. Energies. 2018;11(3):490. DOI: 10.3390/en11030490
70. Carpinteyro-Urban S, Vaca M, Torres LG. Can vegetal biopolymers work as coagulant–flocculant aids in the treatment of high-load cosmetic industrial wastewaters? Water, Air, & Soil Pollution. 2012;223(8):4925-4936. DOI: 10.1007/s11270-012-1247-9
71. Torres LG, Carpinteyro-Urban SL. Use of Prosopis laevigata seed gum and Opuntia ficus-indica mucilage for the treatment of municipal wastewaters by coagulation-flocculation. Natural Resources Research. 2012;3(2):35. DOI: 10.4236/nr.2012.32006
72. Alvarez MV, Cabred S, Ramirez CL, Fanovich MA. Valorization of an agroindustrial soybean residue by supercritical fluid extraction of phytochemical compounds. Journal of Supercritical Fluids. 2019;143:90-96. DOI: 10.1016/j.supflu.2018.07.012
73. Spatafora C, Tringali C. Valorization of vegetable waste: Identification of bioactive compounds and their chemo-enzymatic optimization. The Open Agriculture Journal. 2012;6(1). DOI: 10.2174/1874331501206010009
74. Uhlig T, Kyprianou T, Martinelli FG, Oppici CA, Heiligers D, Hills D, et al. The emergence of peptides in the pharmaceutical business: From exploration to exploitation. EuPA Open Proteomics. 2014;4:58-69. DOI: 10.1016/j.euprot.2014.05.003
75. Väisänen T, Haapala A, Lappalainen R, Tomppo L. Utilization of agricultural and forest industry waste and residues in natural fiber-polymer composites: A review. Waste Management. 2016;54:62-73. DOI: 10.1016/j.wasman.2016.04.037
76. El-Shekeil YA, Sapuan SM, Abdan K, Zainudin ES. Influence of fiber content on the mechanical and thermal properties of Kenaf fiber reinforced thermoplastic polyurethane composites. Materials and Design. 2012;40:299-303. DOI: 10.1016/j.matdes.2012.04.003
77. Ariadurai S. Bio-composites: Current status and future trends. ResearchGate. Available from: https://www.researchgate.net/profile/Samuel_Ariadurai/publication/256308472_Bio-Composites_Current_Status_and_Future_Trends/links/02e7e5224a0558244d000000.pdf
78. Gurunathan T, Mohanty S, Nayak SK. A review of the recent developments in biocomposites based on natural fibres and their application perspectives. Composites Part A: Applied Science and Manufacturing. 2015;77:1-25. DOI: 10.1016/j.compositesa.2015.06.007
79. Coherent Market Insights. Global Plant Extracts Market to Surpass US$ 63.30 Billion by 2025. 2018. Available from: https://www.marketsandmarkets.com
80. Grand View Research. Peptide Therapeutics Market Size Worth $48.04 Billion By 2025. 2018;1-7. Available from: https://www.grandviewresearch.com/press-release/global-peptide-therapeutics-market
81. Grand View Research. Biocomposites Market Size Worth $46.3 Billion By 2025 | CAGR 12.5%. 2018. Available from: https://www.grandviewresearch.com/press-release/global-biocomposites-market
82. Allied Market Research. Second Generation Biofuels Market Size, Share and Trends. 2018. Available from: https://www.alliedmarketresearch.com/second-generation-biofuels-market
83. Zion Market Research. Global Biomass Pellets Market worth USD 15.9 Billion by 2022. 2017. Available from: https://www.zionmarketresearch.com/news/biomass-pellets-market
84. Markets and Markets. Organic Pigments Market by Application & Type—Global Forecast 2023 | MarketsandMarkets. 2018. Available from: https://www.marketsandmarkets.com/Market-Reports/organic-pigments-market-1076.html?gclid=EAIaIQobChMIgMa1l4T
85. Allied Market Research. Polyphenol Market Size, Share and Trends | Industry Analysis, 2022. 2018. Available from: https://www.alliedmarketresearch.com/polyphenol-market
Written By
Flora Beltrán-Ramírez, Domancar Orona-Tamayo, Ivette Cornejo-Corona, José Luz Nicacio González-Cervantes, José de Jesús Esparza-Claudio and Elizabeth Quintana-Rodríguez
Submitted: September 28th, 2018. Reviewed: December 15th, 2018. Published: March 11th, 2019.
http://mathhelpforum.com/geometry/56094-help-proof-print.html
# Help with a proof
• October 27th 2008, 06:06 PM
murphmath
Help with a proof
Could I please get some help with the following?
Given a quadrilateral with vertices labeled sequentially A, B, C, D, with line segment AC and line segment BD drawn in the interior and meeting at point E, and with the following givens: segment BC = segment CD and segment AB = segment AD.
Prove that AC is perpendicular to BD. I'm stuck!
thanks SO much for any help or hints
• October 27th 2008, 06:50 PM
Soroban
Hello, murphmath!
Quote:
Given a quadrilateral $ABCD$ with diagonals $AC$ and $BD$ intersecting at point $E$,
with the following givens: $BC = CD$ and $AB = AD.$
Prove that $AC$ is perpendicular to $BD.$
We have a kite-shaped figure . . .
Code:
```
            A
            *
          * | *
        *   |   *
    D * - - + - - * B
        *   |E  *
         *  |  *
          * | *
           *|*
            *
            C
```
We are given: . $AB = AD,\;BC = CD$
Point $A$ is equidistant from $B$ and $D.$
Point $C$ is equidistant from $B$ and $D.$
Every point equidistant from $B$ and $D$ lies on the perpendicular bisector of $BD$ (and conversely). Since $A$ and $C$ both lie on it, the line $AC$ is the perpendicular bisector of $BD.$
. . Therefore: . $AC \perp BD$
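If you want a quick numerical sanity check before writing up the synthetic proof, you can build a kite from the two given side lengths by intersecting circles and confirm that the diagonals' dot product vanishes (a Python sketch, not part of the proof):

```python
import math
import random

# Build a kite ABCD with AB = AD and CB = CD: B and D are the two
# intersection points of circle(A, r1) and circle(C, r2).
def circle_intersections(p, r1, q, r2):
    d = math.dist(p, q)
    a = (r1**2 - r2**2 + d**2) / (2 * d)   # distance from p to the chord's midpoint
    h = math.sqrt(r1**2 - a**2)            # half-length of the chord BD
    mx = p[0] + a * (q[0] - p[0]) / d
    my = p[1] + a * (q[1] - p[1]) / d
    ux, uy = -(q[1] - p[1]) / d, (q[0] - p[0]) / d   # unit normal to line AC
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

A = (random.uniform(-5, 5), random.uniform(-5, 5))
C = (random.uniform(-5, 5), random.uniform(-5, 5))
d = math.dist(A, C)
B, D = circle_intersections(A, 0.7 * d, C, 0.8 * d)  # radii chosen so circles meet

AC = (C[0] - A[0], C[1] - A[1])
BD = (D[0] - B[0], D[1] - B[1])
print(AC[0] * BD[0] + AC[1] * BD[1])   # ~0, i.e., AC is perpendicular to BD
```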
https://www.nature.com/articles/s41467-021-21315-z
# Expectations of reward and efficacy guide cognitive control allocation
## Abstract
The amount of mental effort we invest in a task is influenced by the reward we can expect if we perform that task well. However, some of the rewards that have the greatest potential for driving these efforts are partly determined by factors beyond one’s control. In such cases, effort has more limited efficacy for obtaining rewards. According to the Expected Value of Control theory, people integrate information about the expected reward and efficacy of task performance to determine the expected value of control, and then adjust their control allocation (i.e., mental effort) accordingly. Here we test this theory’s key behavioral and neural predictions. We show that participants invest more cognitive control when this control is more rewarding and more efficacious, and that these incentive components separately modulate EEG signatures of incentive evaluation and proactive control allocation. Our findings support the prediction that people combine expectations of reward and efficacy to determine how much effort to invest.
## Introduction
Cognitive control is critical to one’s ability to achieve most goals1,2,3—whether to complete a paper in time for its deadline or to send that birthday message amidst a busy workday—but exerting control appears to come at a cost. We experience cognitive control as mentally effortful4,5, and therefore require some form of incentive to justify investing control in a task6,7. For instance, a student is likely to study harder for an exam that has higher stakes (e.g., worth half of their grade) than a lower-stakes exam. Accordingly, research has shown that participants generally exert more mental effort on a cognitive control task (e.g., Stroop, flanker) when they are offered higher rewards for performing well, as evidenced by improved task performance and increased engagement of relevant control circuits7,8,9,10,11,12,13,14,15,16.
In the real world, increased control may not always translate to achieving desired outcomes. For instance, even when the stakes are high, that same student is likely to exert less effort studying if they think that those efforts have little bearing on their score on that exam (i.e., that their efficacy is low), say if they felt that grading for the exam is driven mostly by factors out of their control (e.g., subjectivity in grading, favoritism). While previous work has closely examined the mechanisms by which people evaluate the potential rewards to expect for a certain control allocation, much less is known about how they evaluate the efficacy of that control, nor how these two incentive components (reward and efficacy) are integrated to determine how much control is invested.
We have recently developed a model that formalizes the roles of evaluation, decision-making, and motivation in allocating and adjusting cognitive control17,18 (Fig. 1). Our model describes how cognitive control can be allocated based on the overall worth of executing different types and amounts of control, which we refer to as their expected value of control (EVC). The EVC of a given control allocation is determined by the extent to which the costs that would need to be incurred (mental effort) are outweighed by the benefits. Critically, these benefits are a function of both the expected outcomes for reaching one’s goal (reward, e.g., money or praise) and the likelihood that this goal will be reached with a given investment of control (efficacy) (Fig. 1A). The amount of control invested is predicted to increase monotonically with a combination of these two incentive components (Fig. 1B).
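To make this structure concrete, here is a minimal numerical sketch of the model's core claim (the saturating performance function and quadratic cost below are illustrative assumptions of ours, not the theory's committed functional forms):

```python
import numpy as np

def evc(control, reward, efficacy, cost_weight=1.0):
    # Assumed forms: success probability saturates with control intensity;
    # subjective effort cost grows quadratically with control intensity.
    p_success = 1 - np.exp(-2.0 * control)
    benefit = reward * efficacy * p_success  # performance pays off only to the
                                             # extent that it determines the outcome
    cost = cost_weight * control**2
    return benefit - cost

controls = np.linspace(0, 2, 201)
for reward in (0.10, 1.00):                  # low vs. high reward cue
    for efficacy in (0.2, 1.0):              # low vs. high efficacy cue
        best = controls[np.argmax(evc(controls, reward, efficacy))]
        print(f"reward={reward:.2f}, efficacy={efficacy:.1f} -> control*={best:.2f}")
```

Because reward and efficacy enter the benefit term multiplicatively, the optimal control level in this sketch rises with each component and rises most when both are high, which is the behavioral interaction the studies below test.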
The EVC model integrates over and formalizes past theories that posit roles for reward/utility and/or efficacy/controllability/agency in the motivation to engage in a particular course of action19,20,21,22,23,24,25,26. In so doing, our model enables a description of the computational and neural mechanisms of control allocation. For instance, past research has shown that behavioral and neural markers of proactive control increase with increases in anticipated task difficulty27,28,29,30,31,32. Through the lens of the EVC theory (Fig. 1), these difficulty-related increases in control intensity can be accounted for by changes in expected reward (i.e., the harder the task, the less likely you are to achieve the rewards associated with performing the task well) and/or changes in expected efficacy (i.e., the harder the task, the less helpful a given level of control is for achieving the same level of performance). The latter explains why the relationship between control intensity and task difficulty is nonmonotonic—once the task exceeds a certain difficulty (i.e., once the effort is no longer efficacious33,34), a person stops intensifying their mental efforts and instead starts to disengage from a task.
Our theory, therefore, makes the prediction that differences in efficacy (holding expected reward and difficulty constant) should itself be sufficient to drive changes in behavioral and neural signatures of control allocation. The theory makes the further prediction that reward and efficacy should shape incentive processing and associated neural correlates at multiple stages, including during the initial evaluation of each of these incentive components and at the point when those components converge to determine control allocation based on their combined value (EVC).
Here, we test these predictions across three studies using a paradigm that explicitly dissociates expectations of reward and efficacy associated with a cognitive control task (the Stroop task; Fig. 2), allowing us to isolate their individual and joint contributions to control allocation. To further examine how reward and efficacy are encoded at different stages of incentive processing, in Study 2 we measured EEG and pupillometry while participants performed this task, allowing us to separately measure the extent to which reward and efficacy are reflected in signals associated with the initial evaluation of the incentives available on a given trial (putatively indexed by the post-cue P3b27,35) versus those associated with the proactive allocation of the control deemed appropriate for the upcoming trial (putatively indexed by the contingent negative variation (CNV) occurring prior to the presentation of the target stimulus27,30,32,36,37,38). Confirming our predictions, all three studies find that participants adaptively increase their control allocation (and thus performed better at the task) when expecting higher levels of reward and efficacy. Study 2 shows that both incentive components amplify event-related potentials (ERPs) associated with distinct stages of incentive processing: incentive evaluation (indexed by the P3b following cue presentation) and control allocation (indexed by the CNV prior to Stroop target onset). Critically, only the CNV reflects the integration of reward and efficacy. The amplitude of both ERPs, but more so the CNV, predicts performance when the target appears, supporting the prediction that these neural signals index different stages in the evaluation and allocation of control.
## Results
To test the prediction that reward and efficacy together shape cognitive effort investment and task performance, we developed and validated a paradigm that manipulates efficacy independently from expected reward (Fig. 2). Specifically, prior to the onset of a Stroop stimulus (the target), we cued participants with the amount of monetary reward they would receive if successful on that trial ($0.10 vs. $1.00) and whether success would be determined by their performance (being fast and accurate; high efficacy) or whether it would instead be determined independently of their performance (based on a weighted coin flip; low efficacy). Using an adaptive yoking procedure, we held expected reward constant across efficacy levels, while also varying reward and efficacy independently of task difficulty (i.e., levels of congruency).
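One simple way to implement this kind of yoking (a sketch under our own assumptions; the published procedure may differ in detail) is to set the probability of the weighted coin flip on each low-efficacy trial equal to the participant's running success rate on recent high-efficacy trials, so that expected reward is matched across conditions:

```python
from collections import deque
import random

class YokedRewardSchedule:
    """Match expected reward on low-efficacy trials to recent high-efficacy
    performance (the window size of 10 trials is an assumed parameter)."""
    def __init__(self, window=10):
        self.recent = deque([0.5] * window, maxlen=window)  # prior: 50% success

    def record_high_efficacy(self, succeeded: bool):
        # Track outcomes of performance-based (high-efficacy) trials.
        self.recent.append(1.0 if succeeded else 0.0)

    def low_efficacy_outcome(self) -> bool:
        # "Weighted coin flip": reward probability equals the running
        # success rate on high-efficacy trials.
        p = sum(self.recent) / len(self.recent)
        return random.random() < p
```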
Participants performed this task in an experimental session that measured only behavior (Study 1; N = 21) or one that additionally measured EEG activity and pupillometry (Study 2; N = 44). Studies 1–2 had the same trial structure but differed slightly in the design of the incentive cues, the overall number of trials, and within-trial timing, and were run at different study sites (see “Methods”). Predictions for Study 2 were preregistered based on findings from Study 1 (osf.io/35akg). To demonstrate the generality of our findings beyond binary manipulations of reward and efficacy, we performed an additional behavioral study (Study 3, N = 35) in which we varied reward and efficacy parametrically, across four levels of each.
### Performance improves with increasing expected reward and efficacy
We predicted that reward and efficacy would together incentivize greater control allocation. Given that participants needed to be fast and accurate to perform well on our task, we expected to find that participants would be faster to respond correctly when they expected control to be more rewarding and more efficacious. Replicating previous findings27, across both studies we found that reaction times on accurate trials (i.e., accurate RTs, split-half reliability based on odd vs even trials for Study 1: r = 0.79 and Study 2: r = 0.91) were faster for high compared to low reward trials (Study 1: b = −9.81, P = 0.002; Study 2: b = −5.03, P = 0.004). Critically, accurate RTs were also faster for high compared with low efficacy trials (Study 1: b = −14.855, P < 0.001; Study 2: b = −5.89, P = 0.016). We further found reward-efficacy interactions in the predicted direction—with the speeding effect of reward being enhanced on high-efficacy trials—but this interaction was only significant in Study 2 (Study 1: b = −9.75, P = 0.116; Study 2: b = −9.23, P = 0.009; cf. Fig. 3). Note that Study 1 had a much smaller sample size than Study 2 and 3, and therefore may not have been sufficiently powered to secure the interaction effect.
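As an aside on the reliability estimates quoted above, a minimal sketch of an odd/even split-half computation follows (the Spearman-Brown correction shown here is a common convention and an assumption on our part, as is the synthetic data):

```python
import numpy as np

def split_half_reliability(rts_by_subject, spearman_brown=True):
    """Odd/even split-half reliability of mean RT across subjects.
    rts_by_subject: list of 1-D arrays of trialwise RTs, one per subject."""
    odd = np.array([np.mean(r[1::2]) for r in rts_by_subject])
    even = np.array([np.mean(r[0::2]) for r in rts_by_subject])
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r) if spearman_brown else r

# Example with synthetic subjects (between-subject SD 60 ms, trial noise 80 ms):
rng = np.random.default_rng(2)
subjects = [700 + rng.normal(0, 60) + rng.normal(0, 80, 200) for _ in range(40)]
print(round(split_half_reliability(subjects), 2))
```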
Additional analyses confirmed that these performance improvements were not driven by speed-accuracy tradeoffs. Whereas participants were faster when reward or efficacy was high, they were not less accurate (Supplementary Tables 1 and 2). In fact, their accuracies (split-half reliability Study 1: r = 0.72, Study 2: r = 0.83) tended to improve when reward or efficacy was high, though only the effect of efficacy on accuracy in Study 2 was significant (b = 0.08, P = 0.033, Supplementary Table 2). Together, the faster RTs and more accurate responses suggest that the effects of reward and efficacy on response speed reflected increased control rather than a lowering of response thresholds (i.e., increased impulsivity).
All of the analyses above control for the effects of task difficulty (response congruence) and practice effects (trial number) on performance, which in both studies manifested as worse performance (slower and less accurate responding) with increasing response incongruence, and improved performance (faster and more accurate responding) over time (Supplementary Tables 1 and 2 and Supplementary Fig. 1). The effects of reward and efficacy on performance did not significantly differ between the two studies (Supplementary Table 3).
We further replicated and extended these findings in Study 3, in which reward and efficacy were varied parametrically rather than only across two levels each. As in Studies 1 and 2, we found that participants were faster to respond correctly with increasing expected reward (b = −7.02, P < 0.001) and increasing expected efficacy (b = −3.85, P = 0.001), and that these two incentive components interacted (b = −2.27, P = 0.027), such that participants responded fastest when both reward and efficacy were highest (Fig. 3C). As in the previous studies, these effects were not explained by speed-accuracy tradeoffs. In all analyses, we controlled for task difficulty and practice effects (Supplementary Table 4).
### Reward and efficacy levels are reflected in neural signatures of cue evaluation and control allocation
Our behavioral results suggest that participants adjust their mental effort investment (allocation of cognitive control) based on the expected rewards and the degree to which this effort is perceived to be efficacious; they invest more effort when the expected reward and efficacy are high. To examine the neural and temporal dynamics associated with the processing of these two incentive components, we focused on two well-characterized event-related potentials (ERPs): the P3b (split-half reliability r = 0.86), which peaks around 250–550 ms following cue onset and is typically associated with cue evaluation27, and the CNV (split-half reliability r = 0.75), which emerges about 500 ms prior to Stroop target onset and is typically associated with preparatory attention or proactive control27,28,30,32,36,37. Based on past research, we preregistered the predictions below for the CNV. Additional predictions regarding the P3b were generated subsequent to preregistration based on further review of the literature.
We found that reward and efficacy modulated both of these ERPs (Table 1 and Fig. 4). Replicating past work27, cues signaling higher rewards were associated with significantly larger amplitudes of both P3b (b = 0.34, P < 0.001) and CNV (b = −0.28, P = 0.001). Importantly, holding reward constant, cues signaling high rather than low efficacy were likewise associated with significantly larger amplitudes of P3b (b = 0.44, P < 0.001) and CNV (b = −0.30, P = 0.008).
Crucially, only the CNV tracked the interaction of reward and efficacy (b = −0.35, P = 0.046), with the effect of reward on CNV amplitude being enhanced when efficacy was high. We did not find a significant reward-efficacy interaction for the P3b (b = −0.01, P = 0.947). Thus, although reward and efficacy independently modulated the P3b and CNV (i.e., main effects of reward and efficacy on both ERPs), only the CNV reflected their integration (i.e., reward-efficacy interaction). This pattern of results is consistent with our prediction that reward and efficacy are initially evaluated separately (reflected in the P3b), but are subsequently integrated to determine EVC and thereby allocate control (reflected in the CNV).
### Neural signatures of incentive processing predict effort investment
We have shown that reward and efficacy affect behavioral performance (accurate RT) and neural activity during initial cue evaluation (P3b) and proactive control allocation (CNV), suggesting that these neural signals reflect the transformation of incentives into effort allocation. To test this hypothesis more directly, we included single-trial P3b and CNV amplitude (normalized within-subject) as regressors in our models of accurate RT and accuracy, to test whether variability in these two neural signals explained trial-by-trial variability in task performance (Table 2). We found that both P3b and CNV were associated with better Stroop task performance when the target appeared: larger ERP magnitudes were associated with an increased probability of responding correctly (P3b: b = 0.08, P < 0.001; CNV: b = −0.10, P < 0.001), and also with faster accurate RTs (P3b: b = −7.04, P < 0.001; CNV: b = 15.58, P < 0.001). Crucially, the CNV’s relationship with accurate RT was significantly stronger than the P3b’s (Χ2 = 18.51, P < 0.001), providing evidence consistent with our prediction that the CNV plays a more important role in allocating control than the P3b, and with our observation that the CNV’s relationship with reward and efficacy more closely resembles that found for accurate RT (i.e., both the CNV and accurate RT were modulated by the interaction of reward and efficacy; compare Fig. 3 and Fig. 4B). However, the CNV and P3b did not differ reliably in their association with accuracy (Χ2 = 0.78, P = 0.378). Both ERPs further explained variance in behavior when examining each incentive condition separately (Supplementary Table 5), suggesting that these neural markers did not merely covary with behavior through shared variance with incentives. Together, these findings suggest that the P3b and CNV index the transformation of incentive processing into effort investment, a process that entails the integration of the reward and efficacy expected on a given trial.
### Opposing effects of expected efficacy on pupil responses
Contrary to our predictions, we observed no effect of reward on pupillary responses to the cue, and pupil responses were smaller (not larger) when efficacy was higher (P < 0.001, cf. Supplementary Table 6, Supplementary Fig. 2). We did not find a significant interaction between reward and efficacy on pupil diameter. These patterns of pupil responses therefore diverge from the patterns we observed in performance and ERP magnitudes (both of which scaled positively with reward and efficacy), but they nevertheless provide evidence against the idea that our behavioral and EEG results merely reflect changes in arousal. If this alternative explanation were true, pupil diameter—a reliable index of arousal39,40,41, previously shown to scale with uncertainty42—should have increased when either reward or efficacy was high.
### Influences of incentives on EEG signatures of response and feedback monitoring
While the focus of our study was on measures of incentive processing and proactive control allocation, we preregistered secondary hypotheses regarding the potential influence reward and efficacy might have on neural signatures of reactive control. Specifically, we predicted that these incentive components might enhance monitoring of response accuracy and subsequent feedback. Contrary to this hypothesis, when examining the error-related negativity (ERN)—a negative deflection in response-locked activity for errors relative to correct responses43,44 (though see ref. 45 for ERN elicited by partial errors on correct trials)—we did not find main effects of reward or efficacy (Ps > 0.444) but did find a significant interaction (b = 1.52, P = 0.001; Supplementary Fig. 3; Supplementary Tables 7–9), whereby ERN amplitude on error trials was greatest (i.e., most negative) on trials with low reward and low efficacy (see also Supplementary Table 10 for complementary analyses of midfrontal theta). Follow-up analyses suggest that this pattern may result from different dynamics in control and response evaluation between conditions (see Supplementary Fig. 3 and Supplementary Tables 7–9).
We found a different pattern of results when examining the feedback-related negativity (FRN), which typically indexes the difference in feedback-locked activity for trials that resulted in negative compared to positive feedback46. Consistent with previous findings47,48,49,50,51,52, we found a reliable effect of receipt vs omission of reward on FRN amplitude (b = 0.80, P < 0.001), and this effect was enhanced for high reward trials (b = 0.81, P = 0.007; Supplementary Table 11). However, in addition to this, and contrary to the hypothesis we preregistered based on previous findings53,54, we found that effects of reward receipt vs omission on FRN amplitude were reduced for trials with high efficacy compared to those with low efficacy (b = −0.83, P = 0.007; Supplementary Fig. 4 and Supplementary Table 11). As we elaborate on in the Supplementary Discussion, this efficacy-related FRN finding might reflect the fact that, under conditions of low efficacy, reward outcomes are less predictable, thus weakening predictions about forthcoming reward.
## Discussion
Cognitive control is critical but also costly. People must therefore choose what type and how much control is worth exerting at a given time. Whereas previous studies have highlighted the critical role of expected rewards when making these decisions, our studies highlight a further determinant of control value that is equally critical: how efficacious one perceives their efforts to be (i.e., how much does intensifying control increase their chances of obtaining their reward). Across three studies, we showed that participants were sensitive to both expected reward and efficacy when determining how much control to allocate, and therefore performed best when expecting higher levels of reward and efficacy. Study 2 further demonstrated that both incentive components increase distinct ERPs, separately related to cue evaluation and proactive control, providing markers of different stages of incentive processing (evaluation vs. allocation of control). Collectively, these findings lend support to our theory that participants integrate information relevant to determining the EVC, and adjust their control allocation accordingly.
Previous research has shown that people often expend more effort on a task when it promises greater reward7,8,9,10,11,12,13,14,15,16, but this effort expenditure has its limits. If obtaining that reward also requires greater effort (i.e., if the higher reward is also associated with greater difficulty), the individual may decide not to invest the effort55. Similarly, if difficulty remains constant, but reward becomes less contingent on effort (i.e., efficacy decreases), the individual may again decide to divest their efforts33,34,56,57. The EVC theory can account for all of these phenomena, and predicts that expected reward and efficacy will jointly determine how mental effort is allocated (in the form of cognitive control) and that the underlying evaluation process will unfold from initial cue evaluation to eventual control allocation. Specifically, the theory predicts that these incentive components will be processed sequentially over multiple stages that include initial evaluation of each component, their integration, control allocation, and execution of the allocated control. Our behavioral and neural findings validate the predictions of this theory: When participants expected their control to have greater reward and efficacy, we saw increased neural activity in consecutive ERPs associated with incentive evaluation (P3b) and control allocation (CNV), followed by increases in control (reflected in improved performance).
Our EEG results extend and clarify previous findings. First, in a previous study, the cue-locked P3 tracked the expected reward, but not the difficulty, of the cued trial27. We varied expected efficacy while holding expected difficulty constant, and show that varying efficacy alone is sufficient to generate increases in the cue-locked P3b comparable to those generated by variability in expected reward. The difference between our finding and the null result previously observed for task difficulty may be accounted for by the fact that efficacy (like reward) has a monotonic relationship with motivational salience, whereas difficulty does not (as discussed further below).
Second, our results extend previous studies that linked the CNV with preparatory attention and proactive control58,59,60. CNV amplitudes scale with a cue’s informativeness about an upcoming task28,36,37,61,62, temporal expectations about an upcoming target63,64,65, and an individual’s predicted confidence in succeeding when the target appears66. Critically, CNV amplitudes also scale with the expected reward for and difficulty of successfully performing an upcoming task27,30,32,38, suggesting that this component reflects adjustments to proactive control in response to motivationally relevant variables. Here, we extend this body of work by showing that the CNV not only varies with expected efficacy (when isolated from expected difficulty) but that, unlike the P3b, it is further modulated by the interaction between reward and efficacy (i.e., the expected payoff for control; Fig. 1), and predicts trial-to-trial variability in performance, suggesting that it may index the allocation and/or execution of control based on an evaluation of EVC.
With that said, we note that variability in performance was also associated with P3b amplitude, though to a somewhat lesser degree. While it is therefore possible that control allocation was already being determined at this early stage of the trial, past findings67 as well as our current ones suggest that it is equally or perhaps more likely that the P3b indexed the initial evaluation of the motivational relevance of cued incentives, as we originally hypothesized. Consistent with our original interpretation, we found that the amplitude of the P3b (but not the CNV) decreased over the course of the session, potentially reflecting decreased attentiveness to the cues. Notably, even the two ERP components combined did not fully mediate incentive effects on performance. This could be because the ERPs are noisy indicators of the underlying processes, or because dynamics following target onset produce additional incentive-related variance that proactive control cannot explain. Specific predictions of the latter account could be tested in future work explicitly designed to do so.
Our remaining findings provide evidence against alternative interpretations of these neural results, for instance, that they reflect increased arousal or overall engagement throughout the trial. Pupil diameter, an index of arousal40,41,42, was larger when participants were expecting lower efficacy. Although this pattern was not predicted in advance, these findings are consistent with the interpretation that pupil responses in our paradigm track arousal—induced by higher uncertainty under low efficacy—instead of proactive control39,42. In contrast, the magnitudes of the P3b and CNV increased with both reward and efficacy, suggesting that these two ERPs reflect processes related to proactive control rather than changes in arousal.
Our response- and feedback-related results further suggest that reward and efficacy specifically increased proactive control, but not reactive control (performance monitoring68) or overall engagement. Unlike the P3b and CNV, indices of performance monitoring (the ERN and FRN) were not enhanced with greater reward and efficacy, suggesting that these incentive conditions were not simply amplifying the motivational salience of errors and reward outcomes. Rather than reflecting motivational influences on control, the unique patterns of ERN and FRN amplitudes we observed across conditions may instead provide insight into how participants formed and updated expectations of performance and outcomes across these different conditions69 (see Supplementary Discussion).
Our study builds on past approaches to studying interactions between motivation and cognitive control9,12,16,38 by examining changes in effort allocation in response to two incentive components that are predicted to jointly increase one’s motivation. Thus, unlike studies that only vary the expected reward for an upcoming task, our behavioral and neural findings cannot be accounted for by general increases in drive, vigor, or arousal. Further, unlike studies that vary the expected difficulty of an upcoming task, resulting in the nonmonotonic allocation of effort (the classic inverted U-shaped function of effort by difficulty70,71,72), the incentive components we varied should only engender monotonic increases in effort. The monotonic relationship between these incentive components and the value of control (EVC) can in fact account for the nonlinear effect of difficulty on effort allocation: at very high levels of difficulty, a given level of control becomes less and less efficacious. Our study, therefore, provides the most direct insight yet into the mechanisms and influences of EVC per se, rather than only some of its components.
One interesting feature of our results is that participants engaged in a reasonably high level of effort even when their efforts were completely inefficacious (0% efficacy). There are several plausible explanations for this, including an intrinsic bias towards accuracy (or against error commission)73 and potential switch costs associated with the interleaved trial structure74. For instance, switch costs associated with control adjustments may discourage a significant drop in control following a high-efficacy trial. An even more intriguing possibility is that experiences in the real world drive participants to have strong priors that their efforts are generally efficacious (and practice allocating control within a certain range of expected efficacies)75, making it difficult for them to adjust all the way to the expectation that reward is completely unrelated to their performance on a task.
Individual differences in expectations of efficacy may also play a significant role in determining one’s motivation to engage in individual tasks or effortful behavior at large19,21,76,77,78. Forms of amotivation, like apathy and anhedonia, are common across a variety of psychiatric and neurological disorders, and most likely reflect deficits in the process of evaluating potential incentive components; determining the overall EVC of candidate control signals; specifying the EVC-maximizing control allocation; and/or executing this control. Thus, to understand what drives adaptive versus suboptimal control, we need to find new and better ways to assess what drives these key processing stages underlying motivated effort. By highlighting the crucial role efficacy plays in determining whether control is worthwhile, and identifying candidate neural signatures of the process by which this is evaluated and integrated into decisions about control allocation, our studies pave the way toward this goal.
## Methods
### Study 1
#### Participants
In total, 21 individuals participated in Study 1 (age: M = 21.14, SD = 5.15; 17 female). Participants gave informed consent and received partial course credits and cash ($5 to $10, depending on their performance and task contingencies) for participation. The study was approved by Brown University’s Institutional Review Board.
#### Design and procedure
We used a within-subject 2 (reward: high, low) × 2 (efficacy: high, low) design. On high and low reward Stroop trials, participants saw cues that informed them that they would receive $1.00 and $0.10, respectively, on the upcoming trial (Fig. 2). Reward levels were crossed with efficacy. On high-efficacy trials, whether participants were rewarded depended entirely on their performance (i.e., fast and accurate responses were always rewarded—100% performance–reward contingency, cf. Supplementary Table 12 for summary statistics on criterion-performance and reward). On low efficacy trials, rewards were not contingent on participants’ performance; instead, rewards were sampled from a rolling window (size = 10) of the reward rate on high-efficacy trials, to match reward rates across efficacy levels. This approach parallels and builds on recent work examining the influence of performance contingency in the domain of motor control (where individuals simply needed to respond quickly56,57, see also refs. 79,80), but importantly our task required that participants engage cognitive control in order to be fast while also overcoming a prepotent bias to respond based on the color word.
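To make the yoking procedure concrete, the following is a minimal sketch of the sampling rule described above (the task itself was implemented in MATLAB; this R version, including its variable names and the fallback before any history exists, is our own reconstruction):

```r
# Hedged sketch of the adaptive yoking rule (illustrative only). Low-efficacy
# outcomes are drawn with probability equal to the reward rate over the last
# 10 high-efficacy trials.
high_eff_outcomes <- integer(0)  # 1 = rewarded, 0 = not, on high-efficacy trials

draw_low_efficacy_outcome <- function(window = 10) {
  if (length(high_eff_outcomes) == 0) {
    return(rbinom(1, 1, 0.5))          # assumed fallback before any history
  }
  recent <- tail(high_eff_outcomes, window)
  rbinom(1, 1, mean(recent))           # the "weighted coin flip"
}

# After each high-efficacy trial, the record would be updated, e.g.:
# high_eff_outcomes <- c(high_eff_outcomes, as.integer(met_criterion))
```

This keeps expected reward matched across efficacy conditions while decoupling it from performance on low-efficacy trials.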
Participants first completed three practice blocks. In the first practice block (80 trials), participants learned the key-color mappings by indicating whether the stimulus XXXXX was displayed in red, yellow, green, or blue (using D, F, J, K keys; counterbalanced across participants). In the second practice block (16 trials), participants learned to associate cues with different reward and efficacy levels (Fig. 2). Finally, participants completed a practice block (64 trials) that resembled the actual task. Incentive instructions read as follows: “In the next block, you again need to press the key associated with the color of the text on the screen. From now on, you will have the opportunity to get an additional bonus based on how you perform the task. You will be told on each trial how performance could affect your bonus. Before each word appears, you will see an image that tells you two things: (1) the amount of reward you could earn; and (2) whether or not your performance will determine if you get that reward. When you see one of the two images above, you can get a low ($0.10) or high reward ($1.00) if you respond quickly and accurately. The two images above ALSO indicate that you can get a low or high reward, BUT the gray hands indicate that your reward will have NOTHING to do with how quickly or accurately you perform. Instead, these rewards will be determined randomly. As long as you provide some response on that trial, you have some possibility of getting a low ($0.10) or high ($1.00) reward. Although these rewards will be random, you will be just as likely to get a reward on these trials as the trials with the blue hands.”
Once familiar with the task, participants were introduced to the performance bonus and completed the main task. Performance bonus instructions read as follows: “From now on, you will continue performing the same task, but it will not be practice. Every trial can influence your ultimate bonus. At the end of the session, we will choose ten trials at random and pay you an additional bonus based on the total amount of money you earned across those ten trials. This means you have the opportunity to earn up to ten additional dollars on this task.” On an individual trial, cues were presented for 1500 ms, followed by a 250 ms fixation cross, followed by a target. To increase task difficulty, the response deadline for each trial was 750 ms, but reaction times were recorded as long as a response was made within 1000 ms after Stroop stimulus onset. Immediately after a response was made, feedback was presented for 750 ms. If a response was made before 1000 ms, the remaining time was added to the inter-trial interval, in which a fixation cross was displayed for 500 to 1000 ms. The main task consisted of four blocks of 75 trials each (except for the first 14 participants, who completed 80 trials per block). The experiment was run using custom code in MATLAB and the Psychophysics toolbox.
After completing the task, participants completed questionnaires that were administered for analyses unrelated to the present studies. At the end of the experiment, ten trials were randomly chosen and participants received a bonus that was the summed outcomes of those trials.
### Study 2
#### Participants
Before data collection, we conducted a sensitivity analysis, which indicated that a sample size of N = 50 would provide 80% statistical power to detect effect sizes of d = 0.3 or larger. We preregistered our sample size, task design, and analysis plan (osf.io/35akg) and recruited 53 undergraduate students (age M = 20.18, SD = 2.30; 15 male; 38 female). We excluded from all analyses 9 participants who performed poorly on the Stroop task (i.e., below 60% accuracy on high-efficacy trials), leaving 44 participants in the final sample. Technical issues also prevented us from recording clean pupil data from 7 participants in this final sample, leaving 37 participants in the pupil analyses. Participants gave informed consent and received partial course credits and cash ($5 to $10, depending on their performance and task contingencies) for participation. The study was approved by Brown University’s Institutional Review Board.
#### Design and procedure
The behavioral paradigm and procedures were similar to those in Study 1. In addition, we recorded EEG and pupillary responses and changed the following task parameters: no fixation cross was presented during the cue-target interval; that is, the cue transitioned directly to the target to avoid inducing visual evoked potentials that would influence the amplitude of the CNV; we added a post-response blank screen (800 ms) to dissociate response evaluation and feedback processing; participants performed eight blocks of 75 trials each. We also changed the appearance of the cues as depicted in Fig. 2. We selected putatively equiluminant colors (gray: C: 30.98, M: 19.61, Y: 20.78, K: 0; pink: C: 9.8, M: 42.75, Y: 0, K: 0; blue: C: 61.96, M: 0, Y: 0.39, K: 0). Luminance (computed post-hoc for the four cue stimuli as a whole) was similar across the individual cue stimuli (low efficacy low reward: 1.4199 cd/m2, low efficacy high reward: 1.3980 cd/m2, high-efficacy low reward: 1.3829 cd/m2, high-efficacy high reward: 1.3577 cd/m2) and approximately 1.4 cd/m2. We used the same stimuli throughout and did not counterbalance. Note that the small deviations in luminance do not correspond to the observed patterns in pupil dilation.
#### EEG recording and preprocessing
EEG data were recorded from 32 Ag/AgCl electrodes embedded in a stretched Lycra cap (Electro-Cap International, Eaton, OH) at a sampling rate of 512 Hz. Impedances were kept below 5 kΩ during recording. Vertical electrooculography (VEOG) was recorded from two electrodes placed above and below the right eye, respectively. Signals were amplified using an ANT TMSi Refa8 device (Advanced Neuro Technology, Enschede, The Netherlands), grounded to the forehead, and referenced online to the average of all electrodes. Offline, the EEG data were re-referenced to the average of electrodes placed on the two earlobes. During preprocessing, continuous data were high-pass filtered at 0.1 Hz (12 dB/oct, zero phase-shift Butterworth filter) and decomposed into independent components (ICs) using the infomax independent component analysis algorithm implemented in EEGLAB81. We inspected the ICs and used the ICLabel EEGLAB extension82 to help identify and remove blink and noise components. ICLabel assigns each IC probabilities for seven categories (brain, muscle, eye, heart, line noise, channel noise, other) and provides an interface (see https://sccn.ucsd.edu/wiki/ICLabel) showing the topography, time course, power spectrum, and ERP-image (sorted by trial number) of each IC. Guided by ICLabel’s classification, for each participant we excluded, on average, two to three frontal eye components (e.g., blinks, vertical/horizontal eye movements) and one to three muscle components (usually ICs showing maximal activity at temporal sites). ICs were considered blink or eye-movement ICs and excluded if (1) they had a high probability of being classified as eye-related (>85% eye and <1% brain), (2) their time course resembled blinks or vertical/horizontal eye movements (e.g., step-function-like activity), and (3) their topography showed maximal activity at frontal sites (see https://sccn.ucsd.edu/wiki/ICLabel for an example of such an IC). ICs were considered muscle ICs and excluded if (1) they had a high probability of being classified as muscle (>95% muscle and <1% brain) and (2) their power spectrum resembled noise or muscle activity more than neural activity (i.e., power peaking at higher rather than lower frequencies).
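Expressed as code, the probability criteria above amount to simple thresholds on the ICLabel class posteriors; a hedged R sketch over a hypothetical probability table (ICLabel itself runs in MATLAB, and the visual checks described above would still precede removal):

```r
# Hypothetical table of ICLabel posteriors, one row per IC (column names
# and values are our own assumptions for illustration).
ic_probs <- data.frame(brain  = c(0.90, 0.004, 0.002),
                       eye    = c(0.02, 0.95,  0.01),
                       muscle = c(0.01, 0.01,  0.97))

# Probability criteria as stated in the text.
flag_eye    <- ic_probs$eye > 0.85 & ic_probs$brain < 0.01
flag_muscle <- ic_probs$muscle > 0.95 & ic_probs$brain < 0.01
candidate_ics_to_drop <- which(flag_eye | flag_muscle)
```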
Pre-processed EEG data were epoched relative to the onset of four events: cue (−200 to 1500 ms), stimulus (−200 to 800 ms), response (−200 to 800 ms), and feedback (−200 to 800 ms). All epochs were baseline-corrected using the mean amplitude before event onset (−200 to 0 ms), and single-trial baseline activity was included as a covariate in the statistical models83. Epochs containing artifacts, with amplitudes exceeding ±150 µV or gradients larger than 50 µV, were excluded from further analysis. We focused our analyses on the following event-related potentials, quantified agnostic of condition, with ROIs and time windows determined a priori based on the literature84: cue-locked P3b (250–550 ms, averaged across Pz, P3, and P427,85), cue-locked late CNV (1000–1500 ms post-cue, i.e., −500 to 0 ms pre-target, averaged across Fz, FCz, and Cz30), response-locked correct- and error-related negativities (CRN/ERN; 0–100 ms43,86), and feedback-locked FRN (quantified peak-to-peak at FCz as the difference between the negative peak between 250 and 350 ms and the positive peak in the preceding 100 ms from the detected peak47). All EEG data preprocessing was performed using custom MATLAB scripts using EEGLAB functions (cf. 87). For each ERP (except the FRN, which was quantified peak-to-peak), we averaged the amplitudes within the specified time window separately for each epoch and exported these single-trial values for further analyses in R.
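For illustration, single-trial scoring of the mean-amplitude components reduces to averaging within the a-priori window and ROI. A hedged R sketch (the actual pipeline used MATLAB/EEGLAB; the data layout and channel indices below are our own assumptions):

```r
# `epochs` is a trials x channels x time array of baseline-corrected
# amplitudes (µV); `times` gives each sample's latency in ms.
score_erp <- function(epochs, times, chan_idx, win) {
  t_idx <- which(times >= win[1] & times <= win[2])
  # one value per trial: mean over the selected channels and time window
  apply(epochs[, chan_idx, t_idx, drop = FALSE], 1, mean)
}

# e.g., cue-locked P3b (250-550 ms over Pz/P3/P4) and late CNV
# (1000-1500 ms post-cue over Fz/FCz/Cz), with hypothetical channel indices:
# p3b <- score_erp(cue_epochs, cue_times, c(13, 11, 12), c(250, 550))
# cnv <- score_erp(cue_epochs, cue_times, c(1, 2, 3), c(1000, 1500))
```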
#### Pupil recording and preprocessing
Pupil data were recorded using the EyeLink 1000 Desktop Mount eye tracker (SR Research, Mississauga, Ontario, CA). The EyeLink system was configured using a 35-mm lens, 5-point gaze location calibration, monocular right-eye sampling at 500 Hz, and centroid fitting for pupil area recordings. All data processing was performed using custom R and Python scripts. Blink artifacts detected using the EyeLink blink detection algorithm were removed and linearly interpolated, from 200 ms before blink onset to 200 ms after blink offset. Finally, we down-sampled the continuous data to 20 Hz and z-score normalized (within-subject) each data point by subtracting the mean pupil size of all data points and then dividing by the standard deviation.
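A hedged sketch of these preprocessing steps (the study used the EyeLink blink detector and custom R/Python scripts; the blink mask, decimation scheme, and function name below are our own simplifications):

```r
# Illustrative only: blank blink samples, interpolate linearly across them,
# downsample 500 Hz -> 20 Hz by decimation, then z-score within subject.
preprocess_pupil <- function(pupil, blink_mask, fs_in = 500, fs_out = 20) {
  x <- pupil
  x[blink_mask] <- NA
  idx <- seq_along(x)
  x <- approx(idx, x, xout = idx, rule = 2)$y    # linear interpolation (NAs dropped)
  x <- x[seq(1, length(x), by = fs_in / fs_out)] # naive decimation, no filtering
  (x - mean(x)) / sd(x)                          # within-subject z-scoring
}
```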
### Study 3
#### Participants
In total, 35 individuals participated in Study 3 (age: M = 20.66, SD = 2.61; 27 female). Participants gave informed consent and received partial course credits and cash ($5–$10, depending on their performance and task contingencies) for participation. The study was approved by Brown University’s Institutional Review Board.
#### Design and procedure
The overall procedure was the same as in Study 1, except that expected reward and efficacy were varied parametrically across 4 levels each. As in Studies 1–2, reward levels were varied in terms of the monetary outcome at stake: $0.10, $0.20, $0.40, or $0.80. Efficacy was varied in terms of the likelihood of the outcome being determined by performance (i.e., by meeting the speed and accuracy criterion) versus being determined at random (cf. 88), with 100% efficacy being identical to the high-efficacy condition in Studies 1–2. The possible efficacy levels were 25%, 50%, 75%, and 100%. Reward and efficacy levels were varied independently across 300 total trials. The expected reward and efficacy levels for the upcoming trial were cued by two charge bars that were filled to the current level of each.
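To illustrate the parametric efficacy manipulation, a hedged sketch of how a single trial's outcome could be resolved (our reconstruction under the description above, not the authors' task code):

```r
# With probability `efficacy`, the outcome is performance-based; otherwise it
# is drawn at random, yoked to the earned reward rate (as in Studies 1-2).
resolve_outcome <- function(met_criterion, efficacy, yoked_reward_rate) {
  if (runif(1) < efficacy) {
    met_criterion                      # performance determines the reward
  } else {
    runif(1) < yoked_reward_rate       # performance-independent draw
  }
}
```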
### Analysis
Classical frequentist statistical analyses were performed in R. The design was entirely within-subjects; unless stated otherwise, all estimates and statistics were obtained by fitting mixed-effects (multilevel or hierarchical) single-trial regression models (two levels: all factors and neurophysiological responses for each condition were nested within participants) with random intercepts and slopes (unstructured covariance matrix) using the R package lme489. Random effects were modeled as supported by the data, determined using singular value decomposition, to avoid overparameterization and model degeneration90,91. All analysis code for reproducing the reported results can be found on OSF: osf.io/xuwn9. All continuous regressors were within-subject mean-centered. Two-tailed probability values and degrees of freedom associated with each statistic were determined using the Satterthwaite approximation implemented in lmerTest92. We inspected Q–Q plots for violations of normally distributed residuals and ensured that there was no problematic collinearity between regressors. Wherever relevant, we also reported split-half reliabilities (correlations) based on odd/even-numbered trials.
#### Behavioral
In both samples, accuracy was analyzed using generalized linear mixed-effects models with a binomial link function. The predictors were reward, efficacy, their interaction, congruency, and trial number. Accurate RTs were modeled using linear mixed-effects models with the same predictors, and were z-scored within subject for visualization only. For trial-by-trial predictions of performance with ERPs, the behavioral models were extended by including P3b and CNV amplitudes, z-scored within participants, as predictors. In separate analyses, we confirmed that similar results are obtained using a step-wise approach, analyzing residuals of the behavioral model with CNV and P3b as predictors, or analyzing incentive effects on residuals from a model with ERPs but without incentive conditions; this suggests partially non-overlapping variance. Trial number (Trial) was added to these and all other models as a nuisance regressor to account for trends over time, such as learning or fatigue effects (cf. Supplementary Fig. 5).
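These specifications translate directly into lme4 syntax. A hedged sketch with assumed names (the data frame `trials` and its columns are our own; the maximal random-effects structure shown would, per the procedure above, be pruned as supported by the data):

```r
library(lme4)
library(lmerTest)  # Satterthwaite p-values, as described above

# Accuracy: generalized linear mixed model with a binomial link.
m_acc <- glmer(accuracy ~ reward * efficacy + congruency + trial +
                 (1 + reward * efficacy | subject),
               data = trials, family = binomial)

# Accurate RT: linear mixed model on correct trials only.
m_rt <- lmer(rt ~ reward * efficacy + congruency + trial +
               (1 + reward * efficacy | subject),
             data = subset(trials, accuracy == 1))

# Brain-behavior extension: add within-subject z-scored single-trial ERPs.
m_rt_erp <- update(m_rt, . ~ . + p3b_z + cnv_z)
```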
#### EEG
Full linear mixed-effect models for all ERPs included reward, efficacy, and their interactions, as well as trial as predictors. For each ERP, we regressed out the baseline activity at the same electrode sites83. This approach accounts for variability prior to the effect of interest that can otherwise induce spurious effects due to noise or spill-over from previous stages of the trial. Although noise in the baseline is assumed to average to zero (across time points, as well as trials) when using traditional ERP-averaging approaches, this assumption does not necessarily hold for single-trial analyses, where a non-stationary baseline or unevenly distributed noise can easily lead to systematic biases in the subsequent time-series. To address these potential spurious effects, we follow recommendations to include the baseline as a nuisance regressor83. In the CNV analyses, we further controlled for variation in the preceding P3b amplitude, because here, likewise, due to the autocorrelation of the signals, larger P3b amplitudes (a positive-going ERP) would require larger subsequent CNV amplitudes (a negative-going ERP) to counteract the larger positive P3b amplitudes and reach the average levels of CNV amplitude. We compared the results with and without the inclusion of the P3b as a regressor and the patterns of results were qualitatively similar. For the ERN analyses, we included as predictors target congruency, response accuracy, and interactions with incentives. For FRN analyses, we included as predictors the outcome (whether trials were rewarded or not), and interactions with incentives.
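In the same notation, the single-trial ERP models with their nuisance regressors might be written as follows (again a sketch; variable names are our own assumptions):

```r
library(lme4)

# Baseline activity enters as a nuisance regressor (cf. ref. 83).
m_p3b <- lmer(p3b ~ reward * efficacy + trial + p3b_baseline +
                (1 + reward * efficacy | subject), data = trials)

# For the CNV, the preceding P3b amplitude is additionally controlled for.
m_cnv <- lmer(cnv ~ reward * efficacy + trial + cnv_baseline + p3b +
                (1 + reward * efficacy | subject), data = trials)
```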
#### Pupil response model and analysis
We modeled the pupillary response as a linear time-invariant system comprising a temporal sequence of “attentional pulses”93. As with methods used in functional magnetic resonance imaging analysis to deconvolve blood-oxygen-level-dependent signals, this approach allows us to deconvolve temporally overlapping pupil responses and estimate the magnitude of the pupil response associated with each event. Following previous work93,94, each event (e.g., fixation, cue, target, response, feedback) was modeled as a characteristic impulse response approximated by an Erlang gamma function, $$h(t) = t^{n}e^{-nt/t_{\mathrm{max}}}$$, where t is the time since event onset, tmax the latency of the response maximum, and n the shape parameter of the Erlang distribution. Guided by previous empirical estimates95, we set n = 10.1 and tmax = 1.30 s. We used the pypillometry python package to estimate the magnitude (i.e., coefficient) of the pupil response for each event96, and we z-score normalized these coefficients (within-subject) before fitting mixed-effects models to evaluate whether the coefficients varied as a function of the experimental manipulations (i.e., efficacy, reward, target congruency, and feedback).
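The impulse response itself is simple to reproduce. A hedged R sketch with the parameters cited above (the study used the pypillometry Python package; the peak normalization here is our own convenience):

```r
# Erlang gamma impulse response h(t) = t^n * exp(-n * t / t_max),
# with n = 10.1 and t_max = 1.30 s as stated above.
pupil_irf <- function(t, n = 10.1, t_max = 1.30) {
  h <- t^n * exp(-n * t / t_max)
  h / max(h)                     # peak-normalize (our choice, for plotting)
}

t <- seq(0, 4, by = 1 / 20)      # 4 s at the 20 Hz down-sampled rate
h <- pupil_irf(t)                # peaks at t = t_max
# Event regressors are then built by convolving per-event onset impulses
# with h, and per-event response magnitudes are estimated by regression.
```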
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Data availability
The datasets generated and analyzed during this study are available at https://osf.io/xuwn9. Source data are provided with this paper.
## Code availability
Scripts for all analyses are available through https://osf.io/xuwn9.
## References
1. Inzlicht, M., Shenhav, A. & Olivola, C. Y. The effort paradox: effort is both costly and valued. Trends Cogn. Sci. 22, 337–349 (2018).
2. Braver, T. S. et al. Mechanisms of motivation-cognition interaction: challenges and opportunities. Cogn. Affect Behav. Neurosci. 14, 443–472 (2014).
3. Botvinick, M. M. & Braver, T. Motivation and cognitive control: from behavior to neural mechanism. Annu. Rev. Psychol. 66, 83–113 (2015).
4. Westbrook, A. & Braver, T. S. Cognitive effort: a neuroeconomic approach. Cogn. Affect Behav. Neurosci. 15, 395–415 (2015).
5. Westbrook, A., Kester, D. & Braver, T. S. What is the subjective cost of cognitive effort? Load, trait, and aging effects revealed by economic preference. PLoS ONE 8, e68210 (2013).
6. Smith, V. L. & Walker, J. M. Monetary rewards and decision cost in experimental economics. Economic Inq. 31, 245–261 (1993).
7. Kool, W. & Botvinick, M. A labor/leisure tradeoff in cognitive control. J. Exp. Psychol. Gen. 143, 131–141 (2014).
8. Dixon, M. L. & Christoff, K. The decision to engage cognitive control is driven by expected reward-value: neural and behavioral evidence. PLoS ONE 7, e51637 (2012).
9. Parro, C., Dixon, M. L. & Christoff, K. The neural basis of motivational influences on cognitive control. Hum. Brain Mapp. 39, 5097–5111 (2018).
10. Croxson, P. L., Walton, M. E., O’Reilly, J. X., Behrens, T. E. & Rushworth, M. F. Effort-based cost-benefit valuation and the human brain. J. Neurosci. 29, 4531–4541 (2009).
11. Vassena, E. et al. Overlapping neural systems represent cognitive effort and reward anticipation. PLoS ONE 9, e91008 (2014).
12. Krebs, R. M., Boehler, C. N., Roberts, K. C., Song, A. W. & Woldorff, M. G. The involvement of the dopaminergic midbrain and cortico-striatal-thalamic circuits in the integration of reward prospect and attentional task demands. Cereb. Cortex 22, 607–615 (2012).
13. Schmidt, L., Lebreton, M., Clery-Melin, M. L., Daunizeau, J. & Pessiglione, M. Neural mechanisms underlying motivation of mental versus physical effort. PLoS Biol. 10, e1001266 (2012).
14. Padmala, S. & Pessoa, L. Reward reduces conflict by enhancing attentional control and biasing visual cortical processing. J. Cogn. Neurosci. 23, 3419–3432 (2011).
15. Hall-McMaster, S., Muhle-Karbe, P. S., Myers, N. E. & Stokes, M. G. Reward boosts neural coding of task rules to optimize cognitive flexibility. J. Neurosci. 39, 8549–8561 (2019).
16. Yee, D. M., Krug, M. K., Allen, A. Z. & Braver, T. S. Humans integrate monetary and liquid incentives to motivate cognitive task performance. Front. Psychol. 6, 2037 (2015).
17. Shenhav, A., Botvinick, M. M. & Cohen, J. D. The expected value of control: an integrative theory of anterior cingulate cortex function. Neuron 79, 217–240 (2013).
18. Shenhav, A., Cohen, J. D. & Botvinick, M. M. Dorsal anterior cingulate cortex and the value of control. Nat. Neurosci. 19, 1286–1291 (2016).
19. Bandura, A. Self-efficacy: toward a unifying theory of behavioral change. Adv. Behav. Res. Ther. 1, 139–161 (1978).
20. Vroom, V. H. Work and Motivation (Wiley, 1964).
21. Rotter, J. B. Generalized expectancies for internal versus external control of reinforcement. Psychological Monogr.: Gen. Appl. 80, 1 (1966).
22. Maier, S. F. & Seligman, M. E. Learned helplessness: theory and evidence. J. Exp. Psychol.: Gen. 105, 3 (1976).
23. Feather, N. T. Success probability and choice behavior. J. Exp. Psychol. 58, 257 (1959).
24. Feather, N. T. Subjective probability and decision under uncertainty. Psychological Rev. 66, 150 (1959).
25. Atkinson, J. W. Motivational determinants of risk-taking behavior. Psychol. Rev. 64, 359–372 (1957).
26. Wabba, M. A. & House, R. J. Expectancy theory in work and motivation: some logical and methodological issues. Hum. Relat. 27, 121–147 (1974).
27. Schevernels, H., Krebs, R. M., Santens, P., Woldorff, M. G. & Boehler, C. N. Task preparation processes related to reward prediction precede those related to task-difficulty expectation. Neuroimage 84, 639–647 (2014).
28. Alpay, G., Goerke, M. & Sturmer, B. Precueing imminent conflict does not override sequence-dependent interference adaptation. Psychol. Res. 73, 803–816 (2009).
29. Botvinick, M. M. & Rosen, Z. B. Anticipation of cognitive demand during decision-making. Psychol. Res. 73, 835–842 (2009).
30. Frömer, R., Stürmer, B. & Sommer, W. (Don’t) Mind the effort: effects of contextual interference on ERP indicators of motor preparation. Psychophysiology 53, 1577–1586 (2016).
31. Strack, G., Kaufmann, C., Kehrer, S., Brandt, S. & Sturmer, B. Anticipatory regulation of action control in a Simon task: behavioral, electrophysiological, and fMRI correlates. Front. Psychol. 4, 47 (2013).
32. Frömer, R., Hafner, V. & Sommer, W. Aiming for the bull’s eye: preparing for throwing investigated with event-related brain potentials. Psychophysiology 49, 335–344 (2012).
33. Kukla, A. Foundations of an attributional theory of performance. Psychological Rev. 79, 454–45 (1972).
34. Brehm, J. W. & Self, E. A. The intensity of motivation. Annu. Rev. Psychol. 40, 109–131 (1989).
35. Duncan-Johnson, C. C. & Donchin, E. The P300 component of the event-related brain potential as an index of information processing. Biol. Psychol. 14, 1–52 (1982).
36. Scheibe, C., Schubert, R., Sommer, W. & Heekeren, H. R. Electrophysiological evidence for the effect of prior probability on response preparation. Psychophysiology 46, 758–770 (2009).
37. Scheibe, C., Ullsperger, M., Sommer, W. & Heekeren, H. R. Effects of parametrical and trial-to-trial variation in prior probability processing revealed by simultaneous electroencephalogram/functional magnetic resonance imaging. J. Neurosci. 30, 16709–16717 (2010).
38. van den Berg, B., Krebs, R. M., Lorist, M. M. & Woldorff, M. G. Utilization of reward-prospect enhances preparatory attention and reduces stimulus conflict. Cogn. Affect Behav. Neurosci. 14, 561–577 (2014).
39. Trani, A. & Verhaeghen, P. Foggy windows: pupillary responses during task preparation. Q. J. Exp. Psychol. 71, 2235–2248 (2018).
40. Lin, H., Saunders, B., Hutcherson, C. A. & Inzlicht, M. Midfrontal theta and pupil dilation parametrically track subjective conflict (but also surprise) during intertemporal choice. Neuroimage 172, 838–852 (2018).
41. Bradley, M. M., Miccoli, L., Escrig, M. A. & Lang, P. J. The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 45, 602–607 (2008).
42. Nassar, M. R. et al. Rational regulation of learning dynamics by pupil-linked arousal systems. Nat. Neurosci. 15, 1040–1046 (2012).
43. Falkenstein, M., Hohnsbein, J., Hoormann, J. & Blanke, L. Effects of crossmodal divided attention on late ERP components. II. Error processing in choice reaction tasks. Electroencephalogr. Clin. Neurophysiol. 78, 447–455 (1991).
44. Di Gregorio, F., Maier, M. E. & Steinhauser, M. Errors can elicit an error positivity in the absence of an error negativity: evidence for independent systems of human error monitoring. Neuroimage 172, 427–436 (2018).
45. Maruo, Y., Schacht, A., Sommer, W. & Masaki, H. Impacts of motivational valence on the error-related negativity elicited by full and partial errors. Biol. Psychol. 114, 108–116 (2016).
46. Miltner, W. H., Braun, C. H. & Coles, M. G. Event-related brain potentials following incorrect feedback in a time-estimation task: evidence for a “generic” neural system for error detection. J. Cogn. Neurosci. 9, 788–798 (1997).
47. Frömer, R. et al. I knew that! Response-based outcome predictions and confidence regulate feedback processing and learning. Preprint at bioRxiv https://doi.org/10.1101/442822 (2020).
48. Fischer, A. G. & Ullsperger, M. Real and fictive outcomes are processed differently but converge on a common adaptive mechanism. Neuron 79, 1243–1255 (2013).
49. Frömer, R., Stürmer, B. & Sommer, W. The better, the bigger: the effect of graded positive performance feedback on the reward positivity. Biol. Psychol. 114, 61–68 (2016).
50. Luft, C. D., Takase, E. & Bhattacharya, J. Processing graded feedback: electrophysiological correlates of learning from small and large errors. J. Cogn. Neurosci. 26, 1180–1193 (2014).
51. Meadows, C. C., Gable, P. A., Lohse, K. R. & Miller, M. W. The effects of reward magnitude on reward processing: an averaged and single trial event-related potential study. Biol. Psychol. 118, 154–160 (2016).
52. Ulrich, N. & Hewig, J. A miss is as good as a mile? Processing of near and full outcomes in a gambling paradigm. Psychophysiology 51, 819–823 (2014).
53. Schiffer, A. M., Siletti, K., Waszak, F. & Yeung, N. Adaptive behaviour and feedback processing integrate experience and instruction in reinforcement learning. Neuroimage 146, 626–641 (2017).
54. Muhlberger, C., Angus, D. J., Jonas, E., Harmon-Jones, C. & Harmon-Jones, E. Perceived control increases the reward positivity and stimulus preceding negativity. Psychophysiology 54, 310–322 (2017).
55. Kool, W., McGuire, J. T., Rosen, Z. B. & Botvinick, M. M. Decision making and the avoidance of cognitive demand. J. Exp. Psychol. Gen. 139, 665–682 (2010).
56. Manohar, S. G., Finzi, R. D., Drew, D. & Husain, M. Distinct motivational effects of contingent and noncontingent rewards. Psychological Sci. 28, 1016–1026 (2017).
57. Kohli, A. et al. Using Expectancy Theory to quantitatively dissociate the neural representation of motivation from its influential factors in the human brain: an fMRI study. Neuroimage 178, 552–561 (2018).
58. van Boxtel, G. J. & Brunia, C. H. Motor and non-motor aspects of slow brain potentials. Biol. Psychol. 38, 37–51 (1994).
59. Brunia, C. H. M., Hackley, S. A., van Boxtel, G. J. M., Kotani, Y. & Ohgami, Y. Waiting to perceive: reward or punishment? Clin. Neurophysiol. 122, 858–868 (2011).
60. Wascher, E., Verleger, R., Jaskowski, P. & Wauschkuhn, B. Preparation for action: an ERP study about two tasks provoking variability in response speed. Psychophysiology 33, 262–272 (1996).
61. Leuthold, H., Sommer, W. & Ulrich, R. Preparing for action: inferences from CNV and LRP. J. Psychophysiol. 18, 77–88 (2004).
62. Jentzsch, I., Leuthold, H. & Ridderinkhof, K. R. Beneficial effects of ambiguous precues: parallel motor preparation or reduced premotoric processing time? Psychophysiology 41, 231–244 (2004).
63. Müller-Gethmann, H., Ulrich, R. & Rinkenauer, G. Locus of the effect of temporal preparation: evidence from the lateralized readiness potential. Psychophysiology 40, 597–611 (2003).
64. Ladanyi, M. & Dubrovsky, B. CNV and time estimation. Int. J. Neurosci. 26, 253–257 (1985).
65. Macar, F. & Besson, M. Contingent negative variation in processes of expectancy, motor preparation and time estimation. Biol. Psychol. 21, 293–307 (1985).
66. Boldt, A., Schiffer, A.-M., Waszak, F. & Yeung, N. Confidence predictions affect performance confidence and neural preparation in perceptual decision making. Sci. Rep. 9, 4031 (2019).
67. Cohen, M. A., Ortego, K., Kyroudis, A. & Pitts, M. Distinguishing the neural correlates of perceptual awareness and postperceptual processing. J. Neurosci. 40, 4925–4935 (2020).
68. Ullsperger, M., Danielmeier, C. & Jocham, G. Neurophysiology of performance monitoring and adaptive behavior. Physiol. Rev. 94, 35–79 (2014).
69. Holroyd, C. B. & Coles, M. G. The neural basis of human error processing: reinforcement learning, dopamine, and the error-related negativity. Psychol. Rev. 109, 679–709 (2002).
70. Yerkes, R. M. & Dodson, J. D. The relation of strength of stimulus to rapidity of habit-formation. J. Comp. Neurol. Psychol. 18, 459–482 (1908).
71. Broadhurst, P. L. The interaction of task difficulty and motivation: the Yerkes-Dodson law revived. Acta Psychologica 16, 321–338 (1959).
72. Cools, R. & D’Esposito, M. Inverted-U–shaped dopamine actions on human working memory and cognitive control. Biol. Psychiatry 69, e113–e125 (2011).
73. Hajcak, G. & Foti, D. Errors are aversive: defensive motivation and the error-related negativity. Psychological Sci. 19, 103–108 (2008).
74. Nigbur, R., Schneider, J., Sommer, W., Dimigen, O. & Stürmer, B. Ad-hoc and context-dependent adjustments of selective attention in conflict control: an ERP study with visual probes. NeuroImage 107, 76–84 (2015).
75. Moscarello, J. M. & Hartley, C. A. Agency and the calibration of motivated behavior. Trends Cogn. Sci. 21, 725–735 (2017).
76. Grahek, I., Shenhav, A., Musslick, S., Krebs, R. M. & Koster, E. H. W. Motivation and cognitive control in depression. Neurosci. Biobehav Rev. 102, 371–381 (2019).
77. Huys, Q. J. M., Daw, N. D. & Dayan, P. Depression: a decision-theoretic analysis. Annu. Rev. Neurosci. 38, 1–23 (2015).
78. Berwian, I. M. et al. Computational mechanisms of effort and reward decisions in patients with depression and their association with relapse after antidepressant discontinuation. JAMA Psychiatry https://doi.org/10.1001/jamapsychiatry.2019.4971 (2020).
79. Zink, C. F., Pagnoni, G., Martin-Skurski, M. E., Chappelow, J. C. & Berns, G. S. Human striatal responses to monetary reward depend on saliency. Neuron 42, 509–517 (2004).
80. Bjork, J. M. & Hommer, D. W. Anticipating instrumentally obtained and passively-received rewards: a factorial fMRI investigation. Behav. Brain Res. 177, 165–170 (2007).
81. Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Meth. 134, 9–21 (2004).
82. Pion-Tonachini, L., Kreutz-Delgado, K. & Makeig, S. ICLabel: an automated electroencephalographic independent component classifier, dataset, and website. NeuroImage 198, 181–197 (2019).
83. Alday, P. M. How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits. Psychophysiology 56, e13451 (2019).
84. Luck, S. J. & Gaspelin, N. How to get statistically significant effects in any ERP experiment (and why you shouldn’t). Psychophysiology 54, 146–157 (2017).
85. Polich, J. Updating P300: an integrative theory of P3a and P3b. Clin. Neurophysiol. 118, 2128–2148 (2007).
86. Boldt, A. & Yeung, N. Shared neural markers of decision confidence and error detection. J. Neurosci. 35, 3478–3484 (2015).
87. Frömer, R., Maier, M. & Abdel Rahman, R. Group-level EEG-processing pipeline for flexible single trial-based analyses including linear mixed models. Front. Neurosci. 12, https://doi.org/10.3389/fnins.2018.00048 (2018).
88. Grahek, I., Frömer, R. & Shenhav, A. Learning when effort matters: neural dynamics underlying updating and adaptation to changes in performance efficacy. Preprint at bioRxiv https://doi.org/10.1101/2020.10.09.333310 (2020).
89. Bates, D., Maechler, M., Bolker, B. M. & Walker, S. C. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48 (2015).
90. Bates, D., Kliegl, R., Vasishth, S. & Baayen, H. Parsimonious mixed models. Preprint at https://arxiv.org/abs/1506.04967 (2015).
91. Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H. & Bates, D. Balancing type I error and power in linear mixed models. J. Mem. Lang. 94, 305–315 (2017).
92. Kuznetsova, A., Brockhoff, P. & Christensen, R. lmerTest: tests in linear mixed effects models. J. Stat. Softw. 82, 1–26 (2016).
93. Hoeks, B. & Levelt, W. J. M. Pupillary dilation as a measure of attention: a quantitative system analysis. Behav. Res. Methods, Instrum., Computers 25, 16–26 (1993).
94. Wierda, S. M., van Rijn, H., Taatgen, N. A. & Martens, S. Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution. Proc. Natl Acad. Sci. USA 109, 8456–8460 (2012).
95. McCloy, D. R., Larson, E. D., Lau, B. & Lee, A. K. C. Temporal alignment of pupillary response with stimulus events via deconvolution. J. Acoustical Soc. Am. 139, EL57–EL62 (2016).
96. Mittner, M. pypillometry: a Python package for pupillometric analyses. J. Open Source Softw. 5, 2348 (2020).
## Acknowledgements
The authors are grateful to Elizabeth Cory and Aravinth Jebanesan for assistance in data collection. This research was supported by a Center of Biomedical Research Excellence grant P20GM103645 from the National Institute of General Medical Sciences (A.S.), an Alfred P. Sloan Foundation Research Fellowship in Neuroscience (A.S.), and a grant from the Natural Sciences and Engineering Research Council of Canada (RGPIN-2019-05280) (M.I.).
## Author information
### Contributions
A.S., C.D.W., R.F., H.L., and M.I. conceived the study. C.D.W. and H.L. performed task coding. H.L. collected the data. R.F. and H.L. analyzed the data and wrote the paper. R.F., H.L., C.D.W., M.I., and A.S. edited the paper.
### Corresponding authors
Correspondence to R. Frömer or H. Lin.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Additional information
Peer review information Nature Communications thanks Christopher Chatham, Markus Ullsperger and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
## About this article
### Cite this article
Frömer, R., Lin, H., Dean Wolf, C.K. et al. Expectations of reward and efficacy guide cognitive control allocation. Nat Commun 12, 1030 (2021). https://doi.org/10.1038/s41467-021-21315-z
|
2022-01-24 08:00:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7058632373809814, "perplexity": 6807.649525973351}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00195.warc.gz"}
|
https://cs.stackexchange.com/questions/58096/permutation-on-matrix-to-fill-main-diagonal-with-non-zero-values
|
# Permutation on matrix to fill main diagonal with non-zero values
I am currently working with some sparse non-singular matrices. One of the algorithms I use requires divisions by the elements on the main diagonal, so I have to ensure that my main diagonal is filled with non-zero values. My matrices represent a set of linear equations, and there is no problem in using permutations between rows and columns of my matrix to get a zero-free diagonal.
I do know that the matrix is non-singular, so such a permutation must exist (otherwise the determinant would be zero); however, I searched on the internet and wasn't able to find anything relevant.
Here is an example where 0 denotes a null value and 1 any non-zero value. I numbered the rows; the permutation only changes the rows:
(row) (matrix) (matrix, blanks are zeros)
original:
01 1000011110000000 |1 1111 |
02 0100010001110000 | 1 1 111 |
03 0010001001001100 | 1 1 1 11 |
04 0001000100101010 | 1 1 1 1 1 |
05 0000100010010110 | 1 1 1 11 |
06 0000000000001110 | 111 |
07 0000000000110010 | 11 1 |
08 0000000001010100 | 1 1 1 |
09 0000000110000010 | 11 1 |
10 0000001010000100 | 1 1 1 |
11 0000010010010000 | 1 1 1 |
12 0001100000000010 | 11 1 |
13 0010100000000100 | 1 1 1 |
14 0100100000010000 | 1 1 1 |
15 1000100010000000 |1 1 1 |
16 1000000000000001 |1 1|
permutated:
01 1000011110000000 |1 1111 |
02 0100010001110000 | 1 1 111 |
03 0010001001001100 | 1 1 1 11 |
04 0001000100101010 | 1 1 1 1 1 |
05 0000100010010110 | 1 1 1 11 |
11 0000010010010000 | 1 1 1 |
10 0000001010000100 | 1 1 1 |
09 0000000110000010 | 11 1 |
15 1000100010000000 |1 1 1 |
08 0000000001010100 | 1 1 1 |
07 0000000000110010 | 11 1 |
14 0100100000010000 | 1 1 1 |
06 0000000000001110 | 111 |
13 0010100000000100 | 1 1 1 |
12 0001100000000010 | 11 1 |
16 1000000000000001 |1 1|
Is there a smart way to perform those row/column permutations in order to get a main diagonal without zeros? I am almost certain that there is a way to express this problem as a path-finding problem (visit every column exactly once, entering each through a row whose entry in that column is non-zero), but I wasn't very successful.
You can use bipartite matching for that.
Nodes corresponding to rows in one set, nodes corresponding to columns in the other set. A row has an edge to every column in which it has a 1. If the maximum matching has size n, every row is matched up with a corresponding column and the matching gives the order. The index of the column that a row is matched up with will be its new position.
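To make this concrete, here is a minimal Python sketch (my addition, not part of the original answer; the function name find_diagonal_permutation is invented) of the classic augmenting-path algorithm for maximum bipartite matching on a 0/1 matrix:

def find_diagonal_permutation(A):
    # Return perm with perm[c] = index of the row assigned to column c,
    # so that A[perm[c]][c] != 0 for every c, or None if no such
    # permutation exists (i.e. the matrix is structurally singular).
    n = len(A)
    match_col = [None] * n

    def try_assign(r, visited):
        # Depth-first search for an augmenting path starting at row r.
        for c in range(n):
            if A[r][c] and c not in visited:
                visited.add(c)
                if match_col[c] is None or try_assign(match_col[c], visited):
                    match_col[c] = r
                    return True
        return False

    for r in range(n):
        if not try_assign(r, set()):
            return None
    return match_col

A = [[0, 1, 1],
     [1, 0, 0],
     [1, 1, 1]]
print(find_diagonal_permutation(A))  # [1, 2, 0]: row 1 serves column 0, etc.

For an n x n matrix with m non-zero entries this runs in O(n * m); Hopcroft-Karp would be faster on large instances, but the simple version is usually enough for moderate sizes.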
• Nice idea, I'll try it and then I'll validate your answer. – Demurgos May 31 '16 at 16:49
|
2021-01-21 18:54:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5643283724784851, "perplexity": 850.0030248090607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703527224.75/warc/CC-MAIN-20210121163356-20210121193356-00135.warc.gz"}
|
https://www.physicsforums.com/threads/understanding-the-concept-of-infinity.951134/
|
# Understanding the concept of infinity
## Main Question or Discussion Point
In Hilbert's infinite hotel, all the rooms are occupied. How, then, were the occupants able to shift to the adjoining room? Here I understand that by "full" we mean that ALL of the infinitely many rooms have a corresponding occupant.
I also understand that some infinite quantities are greater than others, which can be proven when they are placed side by side.
Here, if the guests are able to shift, it seems this is only possible when one room is vacant, say the last one. But again, in infinity there is no end.
Thanks.
fresh_42
Mentor
I'm not sure I understand your question. As infinity isn't something real and Hilbert's hotel is only a metaphor, you cannot expect a one-to-one correspondence. However, this leaves me with the question: what do you understand by infinity? And be aware of the fact that we do not discuss it on a philosophical level, i.e. we require a definition so that we can talk about the same thing.
Then what does Hilbert actually want to show with his example?
Nugatory
Mentor
In Hilbert infinity hotel, all the rooms were occupied. Then how did the occupant were able to shift to their adjoining room?? Here I understand, by full mean, ALL the infinite room has a corresponding occupant.
At the stroke of midnight everyone steps out of their room into the hall, walks down the hall and stands in front of the door of the next higher-numbered room. Now all the rooms are empty because everyone is standing in the hall, so everyone can open the door they're standing next to and enter an empty room.
Dale
Mentor
At the stroke of midnight everyone steps out of their room into the hall, walks down the hall and stands in front of the door of the next higher-numbered room. Now all the rooms are empty because everyone is standing in the hall, so everyone can open the door they're standing next to and enter an empty room.
That is how I understand it also
At the stroke of midnight everyone steps out of their room into the hall, walks down the hall and stands in front of the door of the next higher-numbered room. Now all the rooms are empty because everyone is standing in the hall, so everyone can open the door they're standing next to and enter an empty room.
Thanks a lot. I understand that in infinity there is just no last room. If there were a last room, then the rooms would not be infinite but finite in number.
Since there is no last room, everyone can simply shift to the next room.
mfb
Mentor
Right.
Svein
You are talking about countable infinity, but there are other kinds of infinity too. A neat mapping of complex infinity is the Riemann Sphere:
lavinia
Gold Member
After each person leaves his room, how many ways are there for each to enter a new room?
pinball1970
Gold Member
After each person leaves his room, how many ways are there for each to enter a new room?
I am tempted to say infinity, an infinite number of ways, since there are infinitely many doors and people.
jbriggs444
Homework Helper
2019 Award
I am tempted to say infinity, an infinite number of ways, since there are infinitely many doors and people.
Yes, but there is more that can be said.
Suppose that one tried to make a list of all the ways (permutations) in which an infinite row of numbered people could each step into a numbered room. The first list entry goes into the first room. The second list entry goes into the second room and so on.
Could one manage to fit all the permutations, one permutation per room without running out of rooms? Cantor has an argument about that.
From a lattice-theoretic perspective, one can think of infinity as the $1$ of a certain bounded infinite lattice. It is possible to study such structures, so why not?
One can take an infinite semigroup (countable or uncountable) that is not a monoid and adjoin to it an extra element that is not already contained in the set and define multiplication such that this new element acts as the multiplicative identity. A philosopher would ask - well, how do you know you can add this extra element to the set? How do you know it exists?
Mathematics doesn't concern itself with whether something exists (not to be confused with a statement such as $\exists x P(x)$). Real numbers certainly don't exist. Neither does a parametric curve that traces the devil's horns on a surface homeomorphic to a feather on an angel's wing. It doesn't matter. We play these games, supposing for the purpose of discussion that we have certain tools to work with. What can we learn?
Mark44
Mentor
Mentor note: A number of off-topic posts and replies to them have been removed. Let's keep the focus on what the OP asked.
pinball1970
Gold Member
Yes, but there is more that can be said.
Suppose that one tried to make a list of all the ways (permutations) in which an infinite row of numbered people could each step into a numbered room. The first list entry goes into the first room. The second list entry goes into the second room and so on.
Could one manage to fit all the permutations, one permutation per room without running out of rooms? Cantor has an argument about that.
I googled this and it's too difficult to get into the Cantor stuff - infinite infinities? Can you explain the question/answer above please? We are still in the hotel so it's not off topic
jbriggs444
Homework Helper
2019 Award
I googled this and it's too difficult to get into the Cantor stuff - infinite infinities? Can you explain the question/answer above please? We are still in the hotel so it's not off topic
Are you OK with the idea of making a list of all the ways to arrange an infinite number of people into an infinite number of rooms? We start with one arrangement, write it down (on an infinite piece of paper) and stick it into the first room. Then we write down the next arrangement and stick it into the second room.
Since there are infinitely many rooms, we can never run out, so it seems reasonable that we can fit all of these pieces of paper into separate rooms. But there is an argument that shows that no such attempt can succeed. It is a "diagonalization" argument. It is a proof that shows that any purported list of all possible arrangements must be missing something.
The argument starts by supposing that someone has created a list of arrangements and filled the rooms with slips of paper, each describing such an arrangement, one slip of paper per room. We need not assume that this list of arrangements is complete.
One proceeds to put together a slip of paper describing an arrangement that is guaranteed to be different from every arrangement on the slips of paper in every room.
If the arrangement described by the piece of paper in room n has person n staying in room n, we have person n stay elsewhere.
If the arrangement described by the piece of paper in room n has person n staying elsewhere, we have person n stay in room n.
[The tricky part is demonstrating that one can do this without conflict]
Whatever arrangement we end up with on our slip of paper, it will not match any arrangement on any slip of paper in any room. The list of arrangements on those slips of paper was not complete.
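To make the diagonal construction tangible, here is a tiny illustrative sketch (my addition, not from the thread) that uses infinite 0/1 sequences in place of room arrangements; each listed sequence is represented lazily as a function of its index:

def diagonalize(listed):
    # listed(n, k) gives digit k of the n-th sequence on the list.
    # The returned sequence differs from sequence n at position n, for
    # every n, so it cannot appear anywhere on the list.
    return lambda n: 1 - listed(n, n)

def sample_list(n, k):
    # A sample "list": sequence n is the binary expansion of n.
    return (n >> k) & 1

diag = diagonalize(sample_list)
print(all(diag(n) != sample_list(n, n) for n in range(1000)))  # True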
pinball1970
Gold Member
Whatever arrangement we end up with on our slip of paper, it will not match any arrangement on any slip of paper in any room. The list of arrangements on those slips of paper was not complete.
If I am following OK so far, I need to play with some combinations to try to get this. Thanks for your patience.
lavinia
Gold Member
Other ways to change rooms: everyone moves over two rooms, or three, or any number of rooms.
Mark44
Mentor
As an extension of the problem, a bus with an infinite number of passengers arrives. After some thought, the hotel manager decides on this scheme:
At precisely 4:00pm, each current guest exits his or her room, and goes to the room whose number is twice that of the room he or she is leaving. This will leave rooms 1, 3, 5, ..., 2n + 1, ... vacant, and the new arrivals can be given these rooms.
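A quick sketch (my own, not Mark44's) of the scheme for the first few of the infinitely many rooms:

guests = {room: f"guest {room}" for room in range(1, 11)}      # rooms 1..10
moved = {2 * room: g for room, g in guests.items()}            # everyone goes to room 2n
arrivals = {2 * k - 1: f"passenger {k}" for k in range(1, 6)}  # odd rooms 1, 3, 5, ...
print(sorted({**moved, **arrivals}.items()))                   # even and odd rooms never collide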
fresh_42
Mentor
I found a beautiful quotation from cardinal Nicholas of Cusa (1401-1464) about infinity. I think it's worth noting as it demonstrates a knowledge six centuries ago, which could already answer a lot of infinity questions posted on PF:
"If an infinite line were composed of infinite stretches of one foot in length, another of infinitely many stretches of two feet in length, they nevertheless would be the same, since the infinite can not be greater than infinite."
pinball1970
Gold Member
One proceeds to put together a slip of paper describing an arrangement that is guaranteed to be different from every arrangement on the slips of paper in every room.
I am struggling with this part. Is this not deliberately setting yourself up for a fail? We are already compiling an infinite list; it would not be possible to put together a slip that is different from every other in the list - it would just be another permutation in the list?
jbriggs444
Homework Helper
2019 Award
I am struggling with this part, is this not deliberately setting yourself up for a fail? We are already compiling an infinite list, it would not be possible to put together slip that is different to every other list, it would just be another permutation in the list?
It cannot already be in the list. The construction guarantees that.
The thrust of the argument is that the set of all possible permutations is "too large" to fit into the rooms in the hotel, even though there are infinitely many rooms. It is a higher order of infinity.
Svein
As an extension of the problem, a bus with an infinite number of passengers arrives. After some thought, the hotel manager decides on this scheme:
At precisely 4:00pm, each current guest exits his or her room, and goes to the room whose number is twice that of the room he or she is leaving. This will leave rooms 1, 3, 5, ..., 2n + 1, ... vacant, and the new arrivals can be given these rooms.
Yes. What is not said is that the walk from room n to room 2n will take n times (the distance between rooms) divided by the walking speed - which approaches infinity along with n.
Happily that does not cause any conflict, since the walk from the lobby to room n also takes n times (the distance between rooms) divided by the walking speed. The downside is that the room shuffling will take an infinite amount of time.
pinball1970
Gold Member
It cannot already be in the list. The construction guarantees that.
The thrust of the argument is that the set of all possible permutations is "too large" to fit into the rooms in the hotel, even though there are infinitely many rooms. It is a higher order of infinity.
Can you send me some more links on this? I have the Hilbert hotel wiki stuff, but any books/links would be greatly appreciated.
|
2020-04-07 22:58:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5249671339988708, "perplexity": 584.8669711868145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371806302.78/warc/CC-MAIN-20200407214925-20200408005425-00264.warc.gz"}
|
http://fortranwiki.org/fortran/show/tanh
|
# Fortran Wiki tanh
## Description
tanh(x) computes the hyperbolic tangent of x.
## Standard
FORTRAN 77 and later; for a complex argument, Fortran 2008 or later
## Class
Elemental function
## Syntax
result = tanh(x)
## Arguments
• x - The type shall be real or complex.
## Return value
The return value has the same type and kind as x. If x is complex, the imaginary part of the result is in radians. If x is real, the return value lies in the range $-1 \leq \tanh(x) \leq 1$.
## Example
program test_tanh
  implicit none
  real(8) :: x = 2.1_8
  x = tanh(x)
  print *, x   ! prints roughly 0.97045
end program test_tanh
See also: atanh
category: intrinsics
|
2021-10-28 10:58:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37253624200820923, "perplexity": 7667.199663971695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00456.warc.gz"}
|
https://math.stackexchange.com/questions/792012/finding-the-eigenvalue-and-eigenvector-of-a-matrix
|
# Finding the eigenvalue and eigenvector of a matrix
Confirm by multiplication that x is an eigenvector of A, and find the corresponding eigenvalue.
Given: \begin{align} A = \begin{pmatrix} 1&2\\3&2\\\end{pmatrix}, && x = \begin{pmatrix} 1\\-1\\\end{pmatrix} \end{align} I know: $Ax = \lambda x$
My work:
I know $\lambda I - A$
\begin{pmatrix} \lambda - 1&-2\\-3&\lambda - 2\\\end{pmatrix}
From there I know the characteristic polynomial is $\lambda^2 - 3\lambda - 4 = 0$ through ad-bc (since this is a 2 x 2 matrix)
I can successively try out each factor of the constant term (which is $-4$): positive and negative 1, 2, 4.
Testing these, $4$ works, so $\lambda - 4 = 0$, giving $\lambda = 4$.
I also know I can divide the characteristic polynomial by $\lambda - 4$ and get $\lambda + 1$. Setting $\lambda + 1 = 0$ gives $\lambda = -1$.
Answer: So I got two eigenvalues, which are $-1$ and $4$.
Dilemma I am having with eigenvector:
The problem is that I am not sure whether the given eigenvector applies to both the left and right sides of the equation $Ax = \lambda x$, or just the left side.
Work I have done using the given eigenvector x:
I know Ax = $\lambda$x
\begin{align} \begin{pmatrix} 1&2\\3&2\\\end{pmatrix} \cdot \begin{pmatrix} 1\\-1\\\end{pmatrix} = \begin{pmatrix} 1(1) + 2(-1)\\3(1) + 2(-1)\\\end{pmatrix} = \begin{pmatrix} -1\\1\\\end{pmatrix} = Ax. \end{align} Problem I am facing: What do I do after this step? Do I use the given value of the eigenvector $x$ on the right side of the equation $Ax = \lambda x$ along with the eigenvalue I found, to see if the equation is satisfied? How do I know if the given eigenvector is actually correct?
The directions say to confirm by multiplication. All you need to do is compute $Ax$ for the given $A$ and $x$, then compare that result to the given $x$.
Now it is clear that $\lambda=-1$: because we have $Ax=-x$, we must have $\lambda=-1$ for this eigenvector.
You are given the matrix $A$ and the possible eigenvector $x_1$.
You correctly find the eigenvalues, $\lambda_1 = -1$ and $\lambda_2 = 4$.
By the way, factoring the characteristic polynomial gives both eigenvalues at once: $\lambda^2 - 3\lambda - 4 = (\lambda + 1)(\lambda - 4) = 0$, implying $\lambda_1 = -1$ and $\lambda_2 = 4$.
You'll need to find the second eigenvector, $x_2$.
Find $x_2$ so that $(A - \lambda_2 I)\,x_2 = 0$.
Then, show that these are in fact eigenvectors and eigenvalues of $A$.
You have the defining relationship, $Ax = \lambda x$, which says that the eigenvalue scales the eigenvector in exactly the same way the matrix does!
Just do the multiplications to demonstrate this:
$Ax_1 = \lambda_1 x_1$
$Ax_2 = \lambda_2 x_2$
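As an illustration (my addition, not part of the original answer), the multiplication check can be done numerically with NumPy:

import numpy as np

A = np.array([[1, 2], [3, 2]])
x = np.array([1, -1])

print(A @ x)                       # [-1  1], which equals (-1) * x
print(np.allclose(A @ x, -1 * x))  # True: x is an eigenvector with eigenvalue -1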
• Am I supposed to put the given eigenvector on only the left side, or on both the left and right sides of the equation? – Nicholas May 12 '14 at 18:34
• @Nicholas just multiply $Ax$ and see what the outcome is...you basically get a multiple of $x$, and that factor of multiplication is the eigenvalue – Christiaan Hattingh May 12 '14 at 18:36
• I get $Ax = \begin{pmatrix} -1\\1\\\end{pmatrix}$, and comparing: $4 \begin{pmatrix} 1\\-1\\\end{pmatrix} = \begin{pmatrix} 4\\-4\\\end{pmatrix}$ while $-1 \begin{pmatrix} 1\\-1\\\end{pmatrix} = \begin{pmatrix} -1\\1\\\end{pmatrix}$. It looks like the eigenvalue of $-1$ satisfies the equation, but the eigenvalue of $4$ does not. Correct me if I am wrong, but that is how I am seeing the math. – Nicholas May 12 '14 at 18:39
|
2019-10-15 19:12:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9604555368423462, "perplexity": 537.40278659094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986660231.30/warc/CC-MAIN-20191015182235-20191015205735-00012.warc.gz"}
|
https://controlentertainmentonline.com/ans/1-square-meter-in-feet.html
|
# 1 Square Meter In Feet
1 Square Meter In Feet. Therefore, 1 foot is equal to 1/3.2808399 meters. So, 1 square foot is equal to (1/3.2808399)^2, or 0.09290304, square meters. Therefore, you can convert any number from square feet to square meters either by dividing it by 3.2808399^2 or by multiplying it by 0.09290304. Now, enter the following formula in cell D5 to get the same result as in the earlier method.
A square meter (m^2) is an SI unit of area. To calculate the square footage of the same area, multiply the result by 10.7639104. You may also use the conversion calculator above to calculate the area in both square meters and square feet by entering the length and the width in meters. How do you convert square feet to square meters? 1 square foot (sq ft) is equal to 0.09290304 square meter (sq m). In mathematical terms, 1 square meter = 10.7639 square feet. Square meter to square feet formula: to convert a value from square meters to square feet, just multiply the value by the conversion ratio. One square meter is equal to 10.7639 square feet.
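A minimal sketch (my own) of the two-way conversion just described:

SQFT_PER_SQM = 10.7639104        # square feet per square meter

def sqm_to_sqft(sqm):
    return sqm * SQFT_PER_SQM

def sqft_to_sqm(sqft):
    return sqft / SQFT_PER_SQM   # equivalently, multiply by 0.09290304

print(sqm_to_sqft(1))  # 10.7639104
print(sqft_to_sqm(1))  # 0.09290304...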
|
2022-09-28 13:01:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.888109564781189, "perplexity": 849.1250707121168}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00004.warc.gz"}
|
https://www.gamedev.net/forums/topic/338450-making-an-artificial-neural-network/
|
# Making an artificial neural network
## Recommended Posts
I am quite interested in the capabilities of an ANN. I would like to try coding one for something fairly simple. I will do my own research, but I was wondering what would be an appropriate starting project.

My only ideas that seem simple enough, and yet would give me some good experience, are making one for tic-tac-toe or checkers. Tic-tac-toe seems too simple, though, and it also seems that there would never be a reason to play against a computer that can "learn" - how many possible games are there, really?

Checkers is more intriguing. I wouldn't mind playing against a computer that could learn my checkers style. However, it seems like it would be pretty dang complex. Remember, I have never made one, and don't know what the difficulty would be. Should I just make, say, a tic-tac-toe ANN to test the waters, or do you have a better suggestion? My thanks.
##### Share on other sites
I'm kind of doing the same thing. But I don't think tic-tac-toe is a good start (don't think it's good for ANNs either).
A good place to start would be to research feed-forward ANNs. Try making it so the computer learns how to approximate the value of a mathematical function by feeding it training data (inputs and desired outputs).
##### Share on other sites
What do you mean? Like teach it to approximate the value of A, if
A = 2X
Where you change the value of X? I don't want to knock the idea, especially in case I'm not understanding you correctly. However, it doesn't seem like the idea of an ANN is especially suited for that. Can you elaborate? Thanks.
##### Share on other sites
Hey I'm making one too! YAY CONNECTIONISM!
I don't think there's anything wrong with starting out *too* simple. The learning algorithms can be pretty hairy, and it would be pretty easy to make a small mistake that causes your network to never actually converge on a solution. So before you start tackling those complicated problems, try something really simple to make sure it works. And by "really simple", I'm talking about a network that can learn to output 1 on input = 1 and 0 on input = 0. Then a good next step is to have it solve XOR. Then go from there =)
##### Share on other sites
Quote:
Original post by silverphyre673: What do you mean? Like teach it to approximate the value of A, if A = 2X, where you change the value of X? I don't want to knock the idea, especially in case I'm not understanding you correctly. However, it doesn't seem like the idea of an ANN is especially suited for that. Can you elaborate? Thanks.
This is EXACTLY what ANNs are suited for (at least the feed-forward variety). An ANN is just a function approximator. When you train a network, you are simply trying to make it expose the function behind the data.
##### Share on other sites
I would just test it out on standard data sets like those from here http://www.ics.uci.edu/~mlearn/MLSummary.html That way you can compare your results with published results and verify your code works. For example on the original wisconsin breast cancer database I have gotten 96 percent test accuracy with my neural network code (I split the data into 33 percent test, 66 percent training and chose parameters with cross validation on the training set). This is consistent with published results of 94-98 percent accuracy on the problem.
##### Share on other sites
Quote:
Original post by Anonymous Poster: I would just test it out on standard data sets like those from here http://www.ics.uci.edu/~mlearn/MLSummary.html That way you can compare your results with published results and verify your code works. For example, on the original Wisconsin breast cancer database I have gotten 96 percent test accuracy with my neural network code (I split the data into 33 percent test, 66 percent training and chose parameters with cross validation on the training set). This is consistent with published results of 94-98 percent accuracy on the problem.
That site looked good, except that for all the FTP access links, there "is no such file or directory". Any ideas?
Anyways, I guess I will just make something super simple. Thanks, but keep the ideas coming. I like it when people reply to my topics :P
[EDIT]
Also, if I were to make an ANN that just outputs the sum of two numbers, what exactly would I be training? I mean, if the only operator it can use is addition, and it always takes two numbers, then what exactly would be changing each time? Whether it adds part of a number?
Or would this make sense: The computer gets two integer inputs. It can only add, but the weight that you are training is the fraction of the integers that it outputs. So if I entered 8 and 6, and desired the output 7, then it would start adding fractions of the two numbers, converging on 0.5 of each number?
In this case, what exactly would each node be looking at? I read an article that showed how a computer could be "taught" to recognize pictures of numbers and letters by examining portions of a picture. Each node looked at a segment of the picture, and based on whether the segment was mostly filled or not, the node retured 1 or 0. The letter was selected based on input from all the nodes. For example, here is a grid:
XXX
oXo
oXo
In this example, each node examines one grid location, so you have 9 nodes. Nodes 1, 2, 3, 5, and 8 return true, and the rest false. So you get this return:
111010010
You say that the desired output of the picture (which evaluates to 111010010) is the letter T. At first, the program chooses other letters, but eventually recognizes the picture as a T. Now say you repeated with the letter P:
XXX
XXX
Xoo
At first, the program thinks it is the letter T, probably - it is pretty similar. Eventually, it learns that this is P. Now say you gave it this input:
XXX
oXo
XXo
Now what should the program think? It isn't an already learned letter, but let's compare the "output" of this picture as opposed to the other two:
T
111010010
P
111111100
New picture
111010110
There is one difference (at offset 6, counting from 0) out of 9 possible from T. There are 3 differences from the letter P. Since it is closer to T than P, it outputs that this is the letter T. Using this method, you could train it to learn all letters and numbers.
The point is that this seems like a more "useful" use for an ANN, and one that makes more sense in my mind for how to implement. What do you think?
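Here is a small sketch (mine, not from the thread) of the nearest-pattern idea described above, using Hamming distance between the 9-bit grid encodings:

PATTERNS = {"T": "111010010", "P": "111111100"}

def hamming(a, b):
    # Count the positions at which two equal-length bit strings differ.
    return sum(x != y for x, y in zip(a, b))

def classify(bits):
    # Pick the stored letter whose pattern differs in the fewest positions.
    return min(PATTERNS, key=lambda letter: hamming(PATTERNS[letter], bits))

print(classify("111010110"))  # "T" (1 difference from T, 3 from P)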
##### Share on other sites
As a little project, I used a neural network to control a virtual 2-wheeled robot with distance and color sensors: 12 inputs and 2 outputs (the wheel motors). The task for the robot was to find a black area. When it was in this area, a light was activated; then it had to go to a white area.
The neural network was only 12 neurons, fully connected. It was trained by genetic algorithms, taking around 4 hours to get a result. It was quite fun to see the robot...
Another cool thing is to train a group of this kind of robot (two wheels and sensors) for a collective task, like pushing objects to the middle of a room.
##### Share on other sites
Well, I went simple and made a very simple ANN that just does AND and OR. It works. It took me five minutes :) And I was reading for part of that time.
Anyways, I am looking into making XOR, but I'm confused as to exactly how it is supposed to work. If you have two inputs, x and y, XOR is true if (x) AND NOT (y) OR if (y) AND NOT (x). I fail to understand why
a) You need two "layers". If you are inputting 1 and 0 only for the inputs, and you have only two inputs (I don't see how you could have more for a XOR, or at least how it acts if there are more), then couldn't you just make the threshold equal to 1, and if it is not 1 then return false? Or is part of the definition of the "threshold" that it is a threshold, a bar you have to get over, not hit exactly? :)
b) How exactly would the multiple layers work? Does it mean an increase in the number of inputs, or the number of outputs, or what? I'm confused :(
Thanks for your help.
//EDIT
I actually ended up successfully making AND, OR, and XOR neural networks. They aren't trainable, but they aren't hardcoded either, and each is made up of identical parts.
Now I need a new, slightly more complex project. Hmmm...
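For readers following along, here is a minimal sketch (my addition, not silverphyre673's code) of a two-layer XOR network with fixed weights; the standard construction combines an OR-like unit and a NAND-like unit with a final AND:

def step(v, threshold):
    # A simple threshold unit: fires when the weighted sum reaches the threshold.
    return 1 if v >= threshold else 0

def xor(x, y):
    h1 = step(x + y, 1)      # OR-like: fires if at least one input is on
    h2 = step(-x - y, -1)    # NAND-like: fires unless both inputs are on
    return step(h1 + h2, 2)  # AND of the two hidden units

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]

The reason a single layer cannot do this is that XOR is not linearly separable: no single threshold over x and y separates {(0,1), (1,0)} from {(0,0), (1,1)}.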
##### Share on other sites
Something from the distant past [VRANN]. This was a college project. Since there wasn't enough time to create an ANN for speech recognition, we scaled the project back. In the end we ended up doing voice identification instead. Each person on the team would say the same word, and the program could identify which person said it.
Fun project to do with an ANN. I took screenshots of the steps we used. FFT is a Fourier transform: it takes a microphone wave sample (amplitude over time) and converts it to (amplitude over frequency).
|
2018-11-18 01:35:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3384439945220947, "perplexity": 834.1391219498784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743960.44/warc/CC-MAIN-20181118011216-20181118033216-00135.warc.gz"}
|
https://plainmath.net/23536/x-4-plus-1-2-equal-1-8
|
Question
# (x/4)+(1/2)=1/8
Fractions
$$\displaystyle{\left(\frac{{x}}{{4}}\right)}+{\left(\frac{{1}}{{2}}\right)}=\frac{{1}}{{8}}$$
2021-08-18
We are given: $$\displaystyle{\left(\frac{{x}}{{4}}\right)}+{\left(\frac{{1}}{{2}}\right)}=\frac{{1}}{{8}}$$
Multiply both sides by the LCD, which is 8: $$\displaystyle{\left({\left(\frac{{x}}{{4}}\right)}+{\left(\frac{{1}}{{2}}\right)}\right)}{\left({8}\right)}=\frac{{1}}{{8}}{\left({8}\right)}$$
2x+4=1
Subtract 4 from both sides: 2x+4-4=1-4
2x=-3
Divide both sides by 2: $$\displaystyle{2}\frac{{x}}{{2}}=-\frac{{3}}{{2}}$$
$$\displaystyle{x}=-{\left(\frac{{3}}{{2}}\right)}$$
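As a quick check (added here, not part of the original solution), substitute $$\displaystyle{x}=-\frac{{3}}{{2}}$$ back into the left-hand side: $$\displaystyle\frac{{-\frac{{3}}{{2}}}}{{4}}+\frac{{1}}{{2}}=-\frac{{3}}{{8}}+\frac{{4}}{{8}}=\frac{{1}}{{8}}$$, which matches the right-hand side.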
|
2021-10-25 14:47:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7457785606384277, "perplexity": 2642.124031972318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587711.69/warc/CC-MAIN-20211025123123-20211025153123-00573.warc.gz"}
|
https://www.zbmath.org/?q=an%3A1334.76035
|
# zbMATH — the first resource for mathematics
Global solutions to the generalized Leray-alpha equation with mixed dissipation terms. (English) Zbl 1334.76035
Summary: Due to the intractability of the Navier-Stokes equation, it is common to study approximating equations. Two of the most common of these are the Leray-$$\alpha$$ equation (which replaces the solution $$u$$ with $$(1 - \alpha^2 \mathcal{L}_1) u$$ for a Fourier multiplier $$\mathcal{L}_1$$) and the generalized Navier-Stokes equation (which replaces the viscosity term $$\nu \triangle$$ with $$\nu \mathcal{L}_2$$). In this paper we consider the combination of these two equations, called the generalized Leray-$$\alpha$$ equation. We provide a brief outline of the typical strategies used to solve such equations, and prove, with initial data in a low-regularity $$L^p(\mathbb{R}^n)$$ based Sobolev space, the existence of a unique local solution with $$\gamma_1 + \gamma_2 > n / p + 1$$. In the $$p = 2$$ case, the local solution is extended to a global solution, improving on previously known results.
##### MSC:
76D05 Navier-Stokes equations for incompressible viscous fluids
35A01 Existence problems for PDEs: global existence, local existence, non-existence
##### Keywords:
Leray-alpha model; fractional Laplacian; global existence
|
2021-03-08 15:37:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.74362713098526, "perplexity": 2977.846347313776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385378.96/warc/CC-MAIN-20210308143535-20210308173535-00133.warc.gz"}
|
http://mathoverflow.net/questions/146730/segre-class-of-smooth-vector-bundles-over-smooth-manifolds
|
# Segre class of smooth vector bundles over smooth manifolds?
Before you read the following question, please assume I have no knowledge in algebraic geometry.
Is it possible to define the Segre class of a smooth complex vector bundle over a smooth manifold by using Chern-Weil theory? That is, using connections, curvature, invariant polynomials, etc.
I know the total Segre class of a holomorphic vector bundle over a complex manifold can be defined as the inverse of the total Chern class (of course one can translate the statement into the language of algebraic geometry). But I have never seen the appearance of the Segre class in a differential-geometric setting. On the other hand, I don't know whether saying "we can define the total Segre form by inverting the total Chern form" makes sense or not.
Thanks~
It makes sense. The total Chern form in the algebra of differential forms is of the form $1+\alpha$, where $\alpha$ is nilpotent. Just define the Segre form as $1-\alpha +\alpha ^2+\ldots$
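Spelling out that inversion in low degrees (my own expansion, writing $\alpha = c_1 + c_2 + c_3 + \ldots$ for the positive-degree part of the total Chern form):

$$s_1 = -c_1, \qquad s_2 = c_1^2 - c_2, \qquad s_3 = -c_1^3 + 2\,c_1 c_2 - c_3,$$

and so on, exactly as in the holomorphic setting.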
You might also look at Harvey and Lawson's paper "Geometric residue theorems" where they derive Chern-Weil formulas for currents representing degeneracy loci of maps between vector bundles. I can't remember if they give explicit formulas for the Segre classes but I think one could use their method to get something along those lines.
|
2014-12-25 10:03:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9456028938293457, "perplexity": 194.35113941364125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447547188.66/warc/CC-MAIN-20141224185907-00045-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://tsuinte.ru/2015/leetcode-database-182-duplicate-emails/
|
# Problem
#182 Duplicate Emails
# Notes
| COUNT(Email) | Email |
| --- | --- |
| 3 | a@b.com |
MySQL extends the use of GROUP BY so that the select list can refer to nonaggregated columns not named in the GROUP BY clause. This means that the preceding query is legal in MySQL. You can use this feature to get better performance by avoiding unnecessary column sorting and grouping. However, this is useful primarily when all values in each nonaggregated column not named in the GROUP BY are the same for each group. The server is free to choose any value from each group, so unless they are the same, the values chosen are indeterminate.
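For reference, the standard solution to #182 sidesteps the nonaggregated-column issue quoted above entirely, since the only selected column also appears in GROUP BY (assuming the usual Person(Id, Email) table from the problem statement):

SELECT Email
FROM Person
GROUP BY Email
HAVING COUNT(Email) > 1;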
|
2021-09-24 20:46:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2617775797843933, "perplexity": 1039.1626795096056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057580.39/warc/CC-MAIN-20210924201616-20210924231616-00038.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/15577/life-on-a-moon-of-a-moon/15578#15578
|
# Life on a moon of a moon [closed]
How different would life be (that is calendar, seasons, tides, solar days and other natural events) on a moon of a moon of a planet? Is this setup even possible?
• I have my doubts that this is possible. Planets have moons because they are so far away from the Sun that the influence of the Sun on the moon's orbit is not enough to disrupt it. But the planet may be so distant because the Sun is really massive; a moon so far from its planet would not be pulled strongly enough to orbit the planet. The only possibility would be that the secondary moon is a really small moon, which would not have a significant gravitational field and would make life and colonization difficult. May 2 '15 at 8:02
• possible duplicate of How would having multiple moons affect tides? May 2 '15 at 15:50
• Have you looked at the scores of other questions about life on a moon? May 2 '15 at 15:51
• is it possible was discussed here not to long ago. You ought to look up the results of that, and then choose a specific geometry and then ask about what that would mean to the inhabitants. May 2 '15 at 20:32
• related: Do moons have moons?
– SF.
May 2 '15 at 22:51
The largest moon in our solar system is Ganymede. Its mass is about $1.5 \cdot 10^{23}~\text{kg}$, comparable to Mercury ($3.3 \cdot 10^{23}~\text{kg}$) and Mars ($6.4 \cdot 10^{23}~\text{kg}$). So, although Ganymede has no moon, it is large enough to have a moon. It also has a molten core and some atmosphere.
Moons are normally much smaller than what they orbit, not counting Pluto, since it is no longer considered a planet. The heaviest moon compared to its planet is our moon, which is about $\frac{1}{80}$ the mass of Earth ($7.35 \cdot 10^{22}~\text{kg}$ vs. $5.97 \cdot 10^{24}~\text{kg}$).
Assuming a moon $\frac{1}{50}$ the mass of Ganymede, you would have $3 \cdot 10^{21}~\text{kg}$ worth of moon - we'll name it Jimmy. Jimmy's surface gravity will be around $8\%$ of Earth's gravity, so Jimmy won't have any atmosphere left. It won't be very Earthlike in other ways either.
Unless Jimmy's orbit is quite close to Ganymede, it will not be a stable orbit over the long term, as it will be pulled at by Jupiter and the other planets. Jimmy will be tidally locked to Ganymede, but in a close orbit its day will also be fairly short; $24~\text{h}$ would be entirely possible. Frequent total eclipses would be likely as well, from Ganymede as well as Jupiter. Jimmy will have strong Jovian tides twice per day as well as much weaker solar tides.
Jimmy's seasonal durations will match Jupiter's orbit. But having no atmosphere, you probably won't really care, since you will have to spend all of your time inside.
Jimmy is probably about as large as a moon of a moon would ever be, not considering moon captures and catastrophic moon-splitting events. And with Jupiter close by, Ganymede will not be successful in retaining such moons long term.
Even though Jimmy is close enough to Ganymede for Ganymede to retain it, Jimmy's orbit will not be very stable. Over a long period, Ganymede will eventually lose Jimmy, as Jimmy will eventually reach an orbit that favors capture by Jupiter. But I would expect a run of over 100 million years to be possible, if you can avoid resonances that destabilize Jimmy's orbit more quickly.
• About seasonal durations, you may want to compare Are there seasons on Luna?.
– user
May 3 '15 at 14:47
• Seasons would likely exist primarily in a technical sense only, i.e., the most likely orbital plane would tend to be close to the orbit of the Jovian planet. As a practical matter, seasons would be pretty much a non-issue, just as they are on our moon. May 4 '15 at 0:38
A moon orbiting another moon is probably not a stable configuration.
"Tidal forces from the parent planet will tend, over time, to destabilize the orbit of the moon's moon, eventually pulling it out of orbit," says Webster Cash, a professor at the University of Colorado's Center for Astrophysics and Space Astronomy. "A moon's moon will tend to be a short-lived phenomenon."
Source: Popsci
To have any chance of stability, the distances between the planet and the first moon would have to be rather large, so that the planet would not have much gravitational influence on the moon's moon. Probably the planet needs to be also far away from the sun, so that the sun does not disturb the large orbit of the moon around the planet; and the planet has to be huge to hold onto the moon at that distance.
That means tides will be mostly influenced by the first moon, not the planet. Seasons, day cycle, etc will also be largely unaffected by the planet, aside from the possible occasional solar eclipse.
One major problem is the atmosphere: to be able to hold onto an atmosphere, the moon's moon needs to be either big enough to have enough gravity, or far away from the sun, since the solar wind will strip its atmosphere away otherwise. Our moon is too small to hold an atmosphere at our distance from the sun, and it is one of the biggest moons in the solar system. Titan, bigger and further out, can hold onto an atmosphere, but its distance from the sun means it is quite cold. A moon orbiting Titan would have to be rather small and close to Titan, and Titan would have to be much further away from Saturn, to allow a stable orbit.
So in summary, such a moon would probably have to be far away from the sun and its planet, and rather small, to avoid getting ripped apart or deorbited by tidal effects. The temperature would be rather cold due to the distances to the sun. If the planet is a super Jupiter / brown dwarf, the moon could get some additional heat from the thermal radiation of the planet. But not too much, because close to the planet, tidal effects would forbid a stable moon-moon orbit.
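As a rough illustration (my addition, not from the original answer), the Hill sphere radius $r_H \approx a\,(m/3M)^{1/3}$ bounds where a moon's moon could survive; plugging in Ganymede orbiting Jupiter:

a_ganymede = 1.07e9   # Ganymede's orbital radius around Jupiter, in meters
m_ganymede = 1.48e23  # Ganymede's mass, kg
m_jupiter = 1.90e27   # Jupiter's mass, kg

r_hill = a_ganymede * (m_ganymede / (3 * m_jupiter)) ** (1 / 3)
print(f"{r_hill / 1e3:.0f} km")  # about 32,000 km; long-term stable orbits lie well inside this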
The largest possible gas giant is around 12 Jupiters in mass (this is roughly where you start hitting the difference between a gas giant and a small star). Let's call this theoretical planet G1.
G1 is big enough that it could have another gas giant as a moon - say one similar in mass to Jupiter, let's call it G2. If G2 was far enough away from G1, it could have its own moon system, with moons similar in size and composition to earth.
Now the problem with this setup is that in order to keep the orbits stable, G2 has to be pretty far from G1 so that G2's moons aren't perturbed too much. And because of that, G1 and G2 need to be really far away from their parent star so that G2 doesn't get perturbed.
The end result is that while you might be able to get a planetary/moon/moon setup with an earth-like mass and composition, you can't put it in the habitable zone of the star because that's too close for it to be stable.
• It won't be in the solar habital zone, but would rely on tidal heating like Europa etc. May 2 '15 at 20:34
Fun Facts
In our solar system, all planets have moons except Venus and Mercury. The larger planets exhibit an odd commonality in that the mass of their moon system is $m_{moon\ system}$ ~ $\frac {m_{planet}}{10000}$.
For other SE questions, I calculated the minimum mass of a body that could retain water in its atmosphere, when its temperature is similar to Earth's, at $m_{body}$ ~ $\frac {m_{Earth}}{3}$.
The closest a moon comes to the mass of its parent body is the Pluto-Charon system, with a mass ratio of $m_{Charon}$ ~ $\frac {m_{Pluto}}{10}$.
Interestingly, the maximum mass of an object in a planet's or moon's L4/L5 without making those locations unstable is $m_{small}$ ~ $\frac {m_{large}}{10}$.
What would my system look like?
Moon of a moon:
Mass ~ 2 x 10^24 kg ~ 1/3 Earth
Density ~ 5.5 g/cm^3
Moon:
Mass ~ 2 x 10^25 kg ~ 3x Earth
Density ~ 5.5 g/cm^3
Planet:
Mass ~ 2.2 x 10^29 kg ~ 100x Jupiter
3. Tidal forces will tend to destabilize the configuration, so you'll need the system far away from the planet (tidal forces fall off as $\frac {1}{r^3}$, so distance is your friend).
5. Because of the problems with tides messing up orbits and with unstable orbits, you may just wish to put the secondary in the primary's L4/L5. This places them farther apart than they would be if the secondary orbited the primary, but it would make the orbit more stable. Max mass for the secondary would be (as stated above) $m_{secondary}$ ~ $\frac {m_{primary}}{10}$.
|
2021-09-24 09:48:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46420419216156006, "perplexity": 912.9103319560805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057508.83/warc/CC-MAIN-20210924080328-20210924110328-00299.warc.gz"}
|
http://www.msri.org/seminars/20688
|
Mathematical Sciences Research Institute
Seminar
AT Research Seminar: The classification of Taylor towers for functors from based spaces to spectra April 17, 2014 (01:30 PM PDT - 02:20 PM PDT)
Parent Program: Algebraic Topology. Location: MSRI, Simons Auditorium.
Speaker(s) Michael Ching (University of Massachusetts, Amherst)
Abstract
The Goodwillie derivatives of a functor from based spaces to spectra possess additional structure that allows the Taylor tower of the functor to be reconstructed. I will describe this structure as a 'module' over the 'pro-operad' formed by the Koszul duals of the little disc operads. For certain functors this structure arises from an actual module over the little L-discs operad for some L. In particular, this is the case for functors that are left Kan extensions from a category of 'pointed framed L-dimensional manifolds' (which are examples of the zero-pointed manifolds of Ayala and Francis). As an application I will describe where Waldhausen's algebraic K-theory of spaces fits into this picture. This is joint work with Greg Arone (and, additionally, with Andrew Blumberg for the application to K-theory).
|
2014-07-28 22:35:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8138005137443542, "perplexity": 1916.7415990259549}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510263423.17/warc/CC-MAIN-20140728011743-00403-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://tex.stackexchange.com/questions/128579/tikz-using-the-pos-x-in-a-node
|
# TikZ: using the pos = x in a node
Why doesn't the pos = x option work when the node is used in the plot command?
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
\draw plot[domain = 0:2, samples = 100] ({\x}, {(\x)^2}) node[pos = 1.1]
{$y = x^2$};
\end{tikzpicture}
\end{document}
In the rendered picture, the node that should be at pos = 1.1 (labeled f_{quad} in the screenshot) instead sits at the origin.
Additionally, it doesn't matter what I make pos equal to: the node is always at the origin.
My feeling is that the plot operation is special and that the way TikZ processes it differs from how ordinary paths are processed. Quoting the manual:
The plot path operation can be used to append a line or curve to the path that goes through a large number of coordinates.
This suggests that the whole plot is appended to the path at once, which could be the reason why nodes can be attached only at the beginning or at the end of a plot path.
However, it is always possible to exploit the decorations.markings library as a workaround.
An example:
\documentclass[tikz,png,border=10pt]{standalone}
\usetikzlibrary{decorations.markings}
\tikzset{insert node/.style args={#1 at #2}{
    postaction=decorate,
    decoration={
      markings,
      mark=at position #2 with {#1}
    }
  }
}
\begin{document}
\begin{tikzpicture}
\draw[insert node={\node[red,left]{$y = x^2$};} at 0.65,
      insert node={\node[blue,draw,right]{$y = x^2$};} at 0.45,
      insert node={\node[green!80!black,above]{$y = x^2$};} at 1,
     ] plot[domain = 0:2, samples = 100] ({\x}, {(\x)^2});
\end{tikzpicture}
\end{document}
The result:
• Thanks for your solution. At the present, I will use it, but I plan on waiting a few days to see if anyone has a fix. If not, I will surely accept. – dustin Aug 16 '13 at 19:13
• @dustin: you're welcome. :) – Claudio Fiandrino Aug 16 '13 at 19:22
Regardless of whether you use the pos key, a node will not be placed along a subpath unless an internal "timer" command has been specified, which determines how a node will be positioned along the last subpath (e.g., a line or a curve). Path construction commands specify the appropriate timer and also set other parameters that the timer requires.
In the case of plots, a timer is not specified and it is difficult to see how they could be given that most plots are essentially made up of lots of very very short lineto subpaths. Even if the parameters were set the last subpath would be a very very short straight line (depending on the number of samples) meaning any pos value would place the node at (or very near) the end of the plot.
Furthermore, if you set the pos key when a timer has not been specified (which is the case at the end of a plot) then the node is not moved to any position (even if you use at) and is dumped at the origin (I'm not saying this is desirable, this is just what happens currently). If you remove the pos key the node is placed centered (depending on the anchor) on the last point of the plot.
As has been pointed out, the markings decoration can be used; however, as the manual states, decorations use TeX for their mathematics, so they aren't guaranteed to be very accurate on paths made up of lots of very short subpaths.
|
2020-10-01 16:03:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054129481315613, "perplexity": 1025.0544615376768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00639.warc.gz"}
|
https://www.tutorialspoint.com/python-program-to-print-all-disarium-numbers-between-1-to-100
|
# Python program to print all Disarium numbers between 1 to 100
When it is required to print all the Disarium numbers between 1 and 100, a simple loop can be run from 1 to 100; for every number, the number of digits is calculated, and each digit is raised to the power of its position and the results are summed.
If that sum equals the original number, it is considered a Disarium number.
A Disarium number is one where the sum of its digits, each raised to the power of its respective position, is equal to the original number itself.
Below is a demonstration for the same −
## Example
def length_calculation(my_val):
   # Count the number of digits in my_val
   len_val = 0
   while(my_val != 0):
      len_val = len_val + 1
      my_val = my_val//10
   return len_val

def digit_sum(my_num):
   # Sum each digit raised to the power of its (1-based) position
   remaining = sum_val = 0
   len_fun = length_calculation(my_num)
   while(my_num > 0):
      remaining = my_num%10
      sum_val = sum_val + (remaining**len_fun)
      my_num = my_num//10
      len_fun = len_fun - 1
   return sum_val

ini_result = 0
print("The disarium numbers between 1 and 100 are : ")
for i in range(1, 101):
   ini_result = digit_sum(i)
   if(ini_result == i):
      print(i)
## Output
The disarium numbers between 1 and 100 are :
1
2
3
4
5
6
7
8
9
89
## Explanation
• Two methods are defined: one to find the number of digits in the number, and one to compute the sum of each digit raised to the power of its position.
• An initial result is assigned to 0.
• A loop is iterated over the numbers from 1 to 101 (excluding 101), and if a number is the same as the sum of its digits raised to the power of their respective positions, it is a Disarium number.
• This is displayed as output on the console.
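For comparison, a more compact version of the same check (my own sketch, not part of the tutorial) iterates over the digits of each number directly:

disarium = [n for n in range(1, 101)
            if n == sum(int(d) ** i for i, d in enumerate(str(n), start=1))]
print(disarium)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 89]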
Published on 12-Mar-2021 12:08:15
|
2021-09-17 02:13:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.244486004114151, "perplexity": 1421.761136063707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780053918.46/warc/CC-MAIN-20210916234514-20210917024514-00368.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-13-vector-functions-13-3-arc-length-and-curvature-13-3-exercises-page-908/8
|
## Calculus 8th Edition
$2.0454$
We are given $r(t)=\langle t, e^{-t}, te^{-t}\rangle$ for $1 \leq t \leq 3$. The length of the curve can be obtained from the formula $L=\int_a^b |r'(t)|\, dt$. Now $r'(t)=\langle 1, -e^{-t}, e^{-t}-te^{-t}\rangle$, so $|r'(t)|=\sqrt{1+e^{-2t}+(e^{-t}-te^{-t})^2}=\sqrt{1+e^{-2t}+e^{-2t}(1-t)^2}$. Therefore $L=\int_{1}^{3}\sqrt{1+e^{-2t}+e^{-2t}(1-t)^2}\, dt \approx 2.0454$, where the integral is evaluated numerically with a calculator.
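As a sanity check (my own sketch, not part of the textbook solution), the same integral can be evaluated numerically with SciPy:

import numpy as np
from scipy.integrate import quad

def speed(t):
    # |r'(t)| = sqrt(1 + e^(-2t) + e^(-2t) * (1 - t)^2)
    return np.sqrt(1 + np.exp(-2 * t) + np.exp(-2 * t) * (1 - t) ** 2)

length, _ = quad(speed, 1, 3)
print(round(length, 4))   # 2.0454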
|
2020-03-30 20:13:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967056512832642, "perplexity": 488.44480888738724}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497301.29/warc/CC-MAIN-20200330181842-20200330211842-00057.warc.gz"}
|
http://www.adm-astronomy.com/congratulations-to-wkqxkk/ae9c61-proving-the-parallelogram-diagonal-theorem-edgenuity-answers
|
Proving a Quadrilateral Is a Parallelogram: the key theorems.
Theorem 6-6: Each diagonal of a parallelogram separates the parallelogram into two congruent triangles. In parallelogram $PQRS$, diagonal $PR$ gives $\Delta RSP \cong \Delta PQR$ and diagonal $QS$ gives $\Delta QPS \cong \Delta SRQ$.
Theorem 6-7: If both pairs of opposite sides of a quadrilateral are congruent, then the quadrilateral is a parallelogram.
Theorem 6-8: If one pair of opposite sides of a quadrilateral is both parallel and congruent, then the quadrilateral is a parallelogram.
Theorem 6-11: If the diagonals of a quadrilateral bisect each other, then the quadrilateral is a parallelogram.
Theorem 8.8: A quadrilateral is a parallelogram if a pair of opposite sides is equal and parallel.
Properties of a parallelogram: (1) opposite sides are equal; (2) if each pair of opposite sides of a quadrilateral is equal, then it is a parallelogram; (3) opposite angles are equal; (4) conversely, if the opposite angles in a quadrilateral are equal, then it is a parallelogram.
Special parallelograms: A parallelogram is a rectangle if and only if its diagonals are congruent, and a quadrilateral is a rectangle if and only if it has four right angles (Rectangle Corollary). A parallelogram is a rhombus if and only if each diagonal bisects a pair of opposite angles, equivalently if and only if its diagonals are perpendicular. A quadrilateral is a square if and only if it is both a rhombus and a rectangle (Square Corollary).
Parallelogram Law: the sum of the squares of the sides equals the sum of the squares of the diagonals, $PQ^2+QR^2+RS^2+SP^2=QS^2+PR^2$.
Proving the Parallelogram Diagonal Theorem. Given: $ABCD$ is a parallelogram whose diagonals $AC$ and $BD$ intersect at $E$. Prove: $AE = CE$ and $BE = DE$. Since $AB$ and $CD$ are equal in length and parallel (opposite sides of a parallelogram), the alternate interior angles satisfy $\angle ABE \cong \angle CDE$ and $\angle BAE \cong \angle DCE$, so $\Delta ABE \cong \Delta CDE$ by ASA. Hence $AE = CE$ and $BE = DE$: the diagonals bisect each other. In a numeric exercise where, say, $AE = 2x$ and $CE = 10 - 3x$, the theorem reduces to solving $2x = 10 - 3x$ for $x$.
One posted exercise gives the vertices A(-5,-1), B(6,1), C(4,-3), D(-7,-5) and asks whether ABCD is a parallelogram. I find it useful to make a sketch of the problem; the Midpoint Formula settles it, as shown in the code below.
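A quick coordinate check of that exercise (a minimal Python sketch using the Midpoint Formula; the helper name is mine):

def midpoint(p, q):
    # Midpoint Formula: ((x1 + x2)/2, (y1 + y2)/2)
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

A, B, C, D = (-5, -1), (6, 1), (4, -3), (-7, -5)

mid_AC = midpoint(A, C)   # (-0.5, -2.0)
mid_BD = midpoint(B, D)   # (-0.5, -2.0)
# The diagonals share a midpoint, so they bisect each other and, by the
# Parallelogram Diagonals Converse, ABCD is a parallelogram.
print(mid_AC == mid_BD)   # True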
Converse of the Parallelogram Angle Theorem Converse of the parallelogram angle theorem: If both pairs of angles of a quadrilateral are , then the quadrilateral is a parallelogram. Consider the following figure: Proof: In $$\Delta ABC$$ and $$\Delta CDA$$, \[\begin{align} Square Corollary A quadrilateral is a square if and only if it is a rhombus and a rectangle. Opposite Sides Parallel and Congruent Theorem Congruent Theorem.4. point A is (-5,-1) point B is (6,1) point C is (4,-3) point D is (-7,-5) Answer Save. Let ACDB be a parallelogram, and BC its diagonal; then the opposite sides and angles are equal, and the diagonal BC divides the parallelogram into two equal areas. 4) If in a quadrilateral, each pair of opposite angles is equal then it is a parallelogram. The sum of its interior angle would 360-degree.\endgroup$– Vasya Apr 5 '19 at 16:59$\begingroup$sorry, I thought you meant midpoints. Properties of Quadrilaterals Whenever we have parallelogram we can prove that the opposite sides of a parallelogram are congruent by first proving that two triangles which are made by joining the opposite sides are equal. I find it useful to make a sketch of the problem. Opposite Sides Theorem Converse: If the opposite sides of a quadrilateral are congruent, then the figure is a parallelogram. Conversely, if the opposite angles in a quadrilateral are equal, then it is a parallelogram. This little hack is how to skip long videos in Edgenuity so you can get right to answering the unit tests (Thanks to StevenW for submitting this). Correct answer to the question Proving the Parallelogram Side Theorem Given ABCD is a parallelogram. So, solve 2x = 10 − 3x for x. A parallelogram is a rhombus if and only if the diagonals are perpendicular. A square is a rhombus and a Side in common have the same and..., intersection, and more with flashcards, games, and coplanar theorem 6-11 theorem if the opposite of! Theorem appears further down page. special parallelograms: rectangle and rhombus AE and ED are equal theorem... Parallel and congruent, the diagonals of a parallelogram Proving that a figure is a parallelogram is a with. Proving a quadrilateral are congruent rectangle and rhombus they have 2 angles and a Side in.!$ 45 sales tax 6.8 % tip 24 % Converse of theorem appears further down.. Rectangle if and only if the proving the parallelogram diagonal theorem edgenuity answers are Perpendicular parallelograms with congruent diagonals, then it is a if! Intersection, and more with flashcards, games, and more with flashcards, games, and with! Triangle ABE and CED are congruent, then it is a parallelogram terms, and more with,... Have learned, you can show that the Converse of theorem appears further down.! Proving the parallelogram into two congruent triangles you have learned, you can that. A rhombus if and only if each diagonal divides the parallelogram into two congruent triangles the quadrilateral is a.! 4 Instruction Proving a quadrilateral bisect each other, then the quadrilateral is and! Parallelogram diagonals Converse, if the diagonals of a parallelogram is a rhombus and a rectangle if and if. Ray, angle, collinear, intersect, intersection, and the diagonal bisects the area divides the diagonal... 6.8 % tip 24 % and a Side in common the problem 6-6. Of Quadrilaterals a parallelogram Proving that a figure is a rhombus if and only opposite. Each diagonal divides the parallelogram Side theorem given: abcd is a if! 5 '19 at 16:59 $\begingroup$ sorry, i thought you meant midpoints have 2 angles a. 
1: opposite sides and angles are equal proportional segment theorem ( Thales ) tip 24 % the of... Has four right angles has four right angles \begingroup $sorry, i thought you meant midpoints quadrilateral each. 6-6 is also true and more with flashcards, games, and more with,. Section 6.3 theorem 6-7: if the diagonals of a quadrilateral are equal in parallelogram... Size and the diagonal bisects a pair of opposite sides of the problem conversely, if the of. Parallelograms: rhombus, Rectangles this video lesson discusses the properties of Quadrilaterals a parallelogram a. Parallelogram Proving that a figure is a rhombus if and only if each diagonal a! Be equal to sides of a quadrilateral are equal and AE and ED equal... To be equal to sides of a parallelogram same size and the same shape bisects a pair opposite. 2 question Proving the parallelogram diagonals Converse, if the opposite angles are equal, and other tools. A figure is a parallelogram to make a sketch of the special parallelograms:,! The opposite angles in a parallelogram, opposite angles intersect, intersection, and the bisects! Four right angles bisects the area the quadrilateral is equal then it a... If a pair of opposite angles parallelogram, opposite angles Proving that a figure is parallelogram!$ 45 sales tax 6.8 % tip 24 % size and the same size and diagonal! Therefore Triangle ABE and CED are congruent, opposite angles parallelogram into two congruent triangles equal a! The opposite angles each other, then the figure is a parallelogram Quadrilaterals! Might have... $\begingroup$ do you know proportional segment theorem ( Thales ) show! Correct answer to the question Proving the parallelogram diagonals Converse, if the angles! Has four right angles to any questions you might have... $\begingroup$ do you proportional. What you have learned, you can show that the Converse of theorem appears down. Proportional segment theorem ( Thales ) both pairs of opposite sides is equal then it a! Parallelograms with congruent diagonals theorem if a parallelogram if and only if it is a rectangle if only. Definition: a square if and only if its diagonals are going to be equal to sides of a,. And angles: if the diagonals are going to be equal to sides of a parallelogram $45 tax. Intersect, intersection, and coplanar to be equal to sides of a parallelogram, opposite angles Proof... Vasya Apr 5 '19 at 16:59$ \begingroup $do you know proportional segment theorem ( )! Same shape a figure is a parallelogram has congruent diagonals, then it is a Proving... Divides the parallelogram 10 − 3x for x, Rectangles this video lesson discusses the properties of quadrilateral! With flashcards, games, and coplanar$ \endgroup $– Vasya Apr 5 '19 at$... Of parallelogram and its theorems 1 ) in a parallelogram do bisect other... Into equal parts theorems pertaining to Lines and angles are equal Plane Mathematical Goals prove pertaining. Food bill before tax $45 sales tax 6.8 % proving the parallelogram diagonal theorem edgenuity answers 24 % be......$ \begingroup $do you know proportional segment theorem ( Thales ): abcd is a.... Thought you meant midpoints in a parallelogram are are equal, and more with flashcards, games and! Apr 5 '19 at 16:59$ \begingroup $sorry, i thought meant. Questions you might have...$ \begingroup $sorry, i thought you meant midpoints Proving in. Proving Quadrilaterals in the Coordinate Plane Mathematical Goals prove theorems pertaining to Lines and angles are.... 
Sales tax 6.8 % tip 24 % Mathematical Goals prove theorems pertaining to Lines and angles are equal, the. Plane Mathematical Goals prove theorems pertaining to Lines and angles square if and if... Parallelogram is a parallelogram the opposite sides are equal in length because opposite sides are congruent tax$ 45 tax! Diagonals of a parallelogram Apr 5 '19 at 16:59 $\begingroup$ you... Theorems 1 ) in a parallelogram is a parallelogram has congruent diagonals theorem a! Rectangles this video lesson discusses the properties of the special parallelograms: rectangle and.. Proving Quadrilaterals in the Coordinate Plane Mathematical Goals prove theorems pertaining to Lines and angles equal.: each diagonal of a parallelogram theorem a quadrilateral bisect each other, then the figure a... − 3x for x prove theorem 9-1 opposite sides of a parallelogram is a rectangle if and if... 4 ) if in a parallelogram: each diagonal of a quadrilateral is a parallelogram ) each. Square is a parallelogram sorry, i thought you meant midpoints equal due to congruent triangles to... Diagonal theorem given: abcd is a parallelogram do bisect each other into equal parts if only. 6.3 theorem 6-7: if both pairs of opposite sides are congruent becasue they have 2 angles a. Side in common to make a sketch of the parallelogram Side theorem given: abcd is rhombus. And CED are congruent of a parallelogram Building Blocks of Geometry Define segment, ray,,! '19 at 16:59 $\begingroup$ sorry, i thought you meant.! Be equal to sides of a quadrilateral is a parallelogram parallelogram and its theorems 1 ) in a parallelogram equal.
Bexar County Building Codes, King's College Lagos Alumni, American University Law School Gpa Requirement, Side Impact Collision Statistics, Ford 302 Engine Identification, Food Bank Wavertree Liverpool, Salesperson Advertisement Sample, ,Sitemap
|
2023-03-21 05:04:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6566314697265625, "perplexity": 878.0122720944177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00214.warc.gz"}
|
https://www.scienceforums.net/topic/2588-animal-testing-right-or-wrong/page/20/?tab=comments
|
# Animal Testing - Right or Wrong?
## Recommended Posts
I wonder if any of the people advocating "animal rights" allows mice and rats to live in and freely roam in their houses or whether they set out traps and poisons to kill them. If animals have "rights" in regard to research and can't be killed in the course of research, then how can animal rights advocates allow the killing of mice and rats that set up housekeeping in the homes of people?
To add on to this, one also has to wonder whether or not they pay any attention to all the organisms they squash underfoot, or have no problem using pesticides to kill off insects?
Or if any of them have any pets, as much of their food is derived from animals.
Or if any of them would decide to go without soap (or wear makeup, etc.), as animal fats are often a common product in those items...
--------
The list goes on and on, and after a while I have deduced that the only criteria for whether or not an animal has any "rights" are:
1) They're mammals
2) They happen to be "cute".
It's kinda hypocritical when you think about it, and is probably the reason why the animal rights movement doesn't really gather much support in general.
##### Share on other sites
I'm new here and this is my first reply, and I think a compromise can be made between the right and wrong of animal testing. I believe only naughty animals should be tested on, like the dogs that bite, which instead of being put to sleepy-byes forever should be tested on to teach them a real lesson. P.S. Guys, I'm single.
You can define "naughty" only in regard to pets. And then only in regard to behavior that people define as objectionable. By its own standards, is a dog or a lab rat being "naughty" when it bites someone? No. Often it is only defending itself to a perceived threat or acting like a meat-eater -- which dogs are.
So I appreciate the effort at compromise, but it won't work.
As the regulations stand now, it is illegal to use dogs or cats obtained from a shelter for research. ALL research animals must be purchased from a licensed vendor. So unwanted dogs and cats cannot be used for research, but instead are put to death in a hypobaric chamber -- which means they suffocate to death.
##### Share on other sites
Hey, they suffocate humanely
##### Share on other sites
Came back because something just occurred to me, and not as proof, but I am currently reading The Feeling of What Happens (early into it yet). One of the patients Damasio describes had damage to a facial nerve. As treatment, they lesioned a specific region (I don't see it defined better than "a specific sector of the frontal lobe"), and the patient afterwards reported that the pain was still there but did not show distress and said he felt fine. I do not know specifically what could or could not be done with that for cross-species interpretations of subjective awareness of pain, but it provides an avenue for investigation, I should think.
I should also add that I do not draw a line specifically at aware-or-not but use a loose sliding scale. I will not kill the wasps randomly, and certainly would not for general amusement, but I will kill off the ones in the front when my mother visits, as she is allergic. The greater the degree of nervous development, the greater the need must be before causing harm, as I see it (degree of harm would also, of course, be a factor). When it comes to pigs and primates, to use the example, I think the similarities sufficient that any experiment which would be considered unethical to perform on humans should be equally considered unethical to carry out on them
Obviously right... doing the test for a correct purpose is an appreciable thing, but doing it for unwanted things is punishable.
##### Share on other sites
Hey, they suffocate humanely
Actually, no. Remember, this is a loss of air pressure -- being put in a vacuum. The animals go thru the torture of decompression, with bursting blood vessels, bulging eyes, etc. All very painful.
Humane suffocation is carbon dioxide inhalation. Put an animal (usually a rat or mouse) in a chamber and then flood it with CO2. The animal peacefully goes to sleep and then stops breathing. And this is one of the approved methods of euthanasia for animal research.
Merged post follows:
Consecutive posts merged
I think animal testing is dreadful, and WELL WRONG, when it comes to testing things for humans!
WHY should they have to suffer? Half the tests done aren't accurate anyway, as their physiology is different!
We have a load of aholes on death row that have been proven guilty, and yet they get to die with no data gained; it's a total, complete and utter waste of potential. I say we use these rapists and child molesters and murderers/terrorists etc. and exploit their physiology. OK, maybe they're a little less humane than animals, with the behaviour that got them there, but the results should be a lot more compatible, and who cares if they die? They're gunna get fried anyway!?
NO! to animal testing!
Remember the experiments the Nazis did on Jews that were "gunna get fried anyway"? That's where this logic ultimately leads. It also leads to the result Larry Niven discussed in several of his short stories: if medical benefits come from prisoners condemned for death, there won't be enough of these and it becomes very easy to extend the death penalty to more and more crimes to make sure there are enough prisoners for testing. Remember, right now rapists and child molesters cannot get the death penalty. So you are already advocating extending the death penalty to these crimes.
We all have heard, especially in rape cases, where DNA evidence has overturned the conviction of several "rapists". You would have those individuals executed when, in fact, they were innocent. Do you really want that on your conscience?
I think we are all agreed that using animals for painful tests -- such as the Draize test -- for cosmetics is wrong. But medical testing is different. Even if you are using prisoners, most prisoners do not have melanoma, or tuberculosis, or cardiac arrythmias, etc. Would you inject prisoners with melanoma cells? Deliberately infect them with tuberculosis bacteria? If that is the case, how are you different from the Nazis?
Merged post follows:
Consecutive posts merged
When it comes to pigs and primates, to use the example, I think the similarities sufficient that any experiment which would be considered unethical to perform on humans should be equally considered unethical to carry out on them
Basically, that is the rule applied to all animal research, with a few minor exceptions. If you would give a human a pain reliever, then you must do the same for an animal. If you would perform sterile surgery with anesthesia on a human, then you must perform essentially the same surgery and anesthesia on the animal.
The exceptions come when such pain relief would compromise the data. Remember, the ultimate purpose of the animal research is to obtain data that will ultimately lead to a better understanding of disease or physiology in humans. So, altho we might feel constrained to give a human pain relief (never mind the data), if such pain relief would make it impossible to get the data from the animal, we are using the animal as a route to get data we can't get from humans.
One example is neurological data from the brains of rats. The normal methods of euthanasia are either 1) carbon dioxide inhalation or 2) pentobarbital injection. Both are painless but both take a long time and result in the destruction of short term neurotransmitters in the brain. So if you are studying models of Parkinson's or other brain disorders in rats, you are allowed to guillotine the rat. That death is so quick that it preserves the neurotransmitters you need to study.
##### Share on other sites
Humane suffocation is carbon dioxide inhalation. Put an animal (usually a rat or mouse) in a chamber and then flood it with CO2. The animal peacefully goes to sleep and then stops breathing. And this is one of the approved methods of euthanasia for animal research.
Minor technical note - there is some dispute about whether this process is humane for ectotherms, which can hold their breath for long periods (24+ hours in some turtles) and for which CO2 buildup is a more pressing physiological issue than O2 loss. However, their differing physiology also makes alternative means of anaesthesia and euthanasia possible, some of which are even simpler and more effective (especially those absorbed via the gills and permeable skin of fish and amphibians).
##### Share on other sites
Actually, no. Remember, this is a loss of air pressure -- being put in a vacuum. The animals go thru the torture of decompression, with bursting blood vessels, bulging eyes, etc. All very painful.
Humane suffocation is carbon dioxide inhalation. Put an animal (usually a rat or mouse) in a chamber and then flood it with CO2. The animal peacefully goes to sleep and then stops breathing. And this is one of the approved methods of euthanasia for animal research.
I thought that CO2 was the indicator that tells us when we're out of breath, and also that excess CO2 can acidify the blood. Why not use something like nitrogen?
##### Share on other sites
I would agree with Mr. Skeptic, saying that CO2 poisoning is a peaceful death is a bit of a stretch. Although it is more humane than many other possible methods it's definitely not "peaceful".
When I was a young child, at the age of four, I accidentally locked myself in a large ice chest in my parents' garage. It was one of the most horrifying moments of my life and I remember it vividly. After a few seconds I could feel a strong burning sensation in my chest. It is a very unique feeling, because you continue to breathe but you get no oxygen. I started to scream, and luckily my dad was going in and out of the garage and he let me out. I suppose I was only in there about half a minute to a minute, but I would not describe it as peaceful.
##### Share on other sites
Toasty, lack of oxygen =/= high CO2. The former is much, much more distressing than the latter.
Mr. Skeptic - nitrogen would be the same as depriving of O2, resulting in the same suffocation sensation. Excess CO2 is somewhat milder. Is it perfect, no, but when you can't use more potent chemicals like anaesthetics, it works quite well.
Personally, I'm a fan of cervical dislocation, but that takes practice and is a lot more time-intensive.
##### Share on other sites
Toasty, lack of oxygen =/= high CO2. The former is much, much more distressing than the latter.
Mr. Skeptic - nitrogen would be the same as depriving of O2, resulting in the same suffocation sensation. Excess CO2 is somewhat milder. Is it perfect, no, but when you can't use more potent chemicals like anaesthetics, it works quite well.
Personally, I'm a fan of cervical dislocation, but that takes practice and is a lot more time-intensive.
Actually since I was in an air-tight cooler I was breathing the oxygen and exhaling CO2. That leads to high CO2 inside the confined area, does it not?
##### Share on other sites
Hm, it seems that I overestimated the effects of CO2 acidosis.
##### Share on other sites
Actually since I was in an air-tight cooler I was breathing the oxygen and exhaling CO2. That leads to high CO2 inside the confined area, does it not?
Yes, but because we're mammals, low oxygen is much more of an immediate problem for us, and much more distressing. We run out of O2 long before we encounter excess CO2.
##### Share on other sites
I understand now.
So Mokele, correct me if I am wrong: I would die from the lack of O2 before I died from the CO2 poisoning? And that, if it was CO2 poisoning, I would have just passed out. I can understand that. Thank you for clearing that up. In that case, excess CO2 does not seem all that bad for animal testing.
Back to the original post, I think to ask if animal testing is wrong or right is the wrong question. One has to ask, Is animal testing necessary? I believe it is to preserve human life, although it is at the expense of an animal. But we ourselves are humans, so we have somewhat of a bias toward human life you might say.
##### Share on other sites
I thought that CO2 was the indicator that tells us when we're out of breath, and also that excess CO2 can acidify the blood. Why not use something like nitrogen?
As Mokele pointed out, nitrogen triggers lack of oxygen and suffocation. CO2 instead simply shuts down the breathing. When suffocating, an animal is very agitated, often trying to push against the lid of the container, or bashing against the walls. In CO2 inhalation the animal checks out the container in a normal non-agitated manner, then quietly lays down, appears to lose consciousness, and then stops breathing.
Because of the size of container needed, and thus the amount of CO2 needed, it isn't used on any mammals other than mice and rats. Larger animals typically are injected with pentobarbital.
Mokele, I have never seen cervical dislocation used on any animal other than mice. The technique on a larger animal is difficult, resulting in many failures to dislocate. It is not approved on any animals larger than mice. The technique I have seen most often on mice is to gently hold their tails, place the blunt edge of a scissors behind the neck, press down, and then sharply yank the tail.
Also, ectotherms do not fall under Animal Welfare Act or any of the regulations -- either by FDA or USDA. Nor have I seen any "animal rights" people argue for better treatment of zebrafish, for example.
Merged post follows:
Consecutive posts merged
So Mokele, correct me if I am wrong: I would die from the lack of O2 before I died from the CO2 poisoning?
Absolutely. In this technique we actually flood the chamber with CO2 from a compressed gas cylinder; we replace the air with CO2. You can't achieve that just by breathing in a closed space.
And it is one approved method for euthanasia.
And that, if it was CO2 poisoning, I would have just passed out. I can understand that. Thank you for clearing that up. In that case, excess CO2 does not seem all that bad for animal testing.
Back to the original post, I think to ask if animal testing is wrong or right is the wrong question. One has to ask, Is animal testing necessary? I believe it is to preserve human life, although it is at the expense of an animal. But we ourselves are humans, so we have somewhat of a bias toward human life you might say.
I would say it is the "right" question. Something can be necessary but not ethical. Being necessary does not make something "right". The point I have always made is that all animal life preserves itself at the expense of other life. There is no "right" or "wrong" in exploiting and using another species. Ethics that apply within your species do not automatically apply between species.
Notice however that we have chosen to apply some human ethics -- "humane treatment" -- to other species. There are laws on the books against torturing animals or holding animal fights, as Michael Vick found out. We have chosen to treat the experimental animals as humanely as possible while using them for our benefit.
Edited by lucaspa
Consecutive posts merged.
##### Share on other sites
Mokele, I have never seen cervical dislocation used on any animal other than mice. The technique on a larger animal is difficult, resulting in many failures to dislocate. It is not approved on any animals larger than mice. The technique I have seen most often on mice is to gently hold their tails, place the blunt edge of a scissors behind the neck, press down, and then sharply yank the tail.
I've used it on rats (up to 1 lb in weight) and I've seen it used in rabbits with 100% success, but not in a laboratory (for snake food). The method is pretty much the same for rats (but with a larger blunt object), though rabbits require a much more difficult method (pulling back on the rear legs while holding the neck tightly).
Also, ectotherms do not fall under Animal Welfare Act or any of the regulations -- either by FDA or USDA. Nor have I seen any "animal rights" people argue for better treatment of zebrafish, for example.
IIRC, they fall under the scope of one of the dozen or so oversight agencies that go through IACUC, but I forget which one. At the end of the day, I know you have to fill out IACUC paperwork for every procedure and species.
##### Share on other sites
I've used it on rats (up to 1 lb in weight) and I've seen it used in rabbits with 100% success, but not in a laboratory (for snake food). The method is pretty much the same for rats (but with a larger blunt object), though rabbits require a much more difficult method (pulling back on the rear legs while holding the neck tightly).
Cervical dislocation is only approved for rodents less than 200 gm. It is not a matter of "success" necessarily, but of "humane".
IIRC, they fall under the scope of one of the dozen or so oversight agencies that go through IACUC, but I forget which one. At the end of the day, I know you have to fill out IACUC paperwork for every procedure and species.
When I served on an IACUC, if you were collecting amphibians and reptiles from the wild then, yes, you had to have IACUC approval. But if you were working with non-endangered amphibian or reptile species purchased from a legal vendor, then no. The situation may have changed over the past 13 years. The USDA has a website on care of fish, amphibians, and reptiles in research -- http://awic.nal.usda.gov/nal_display/index.php?info_center=3&tax_level=3&tax_subject=169&topic_id=1078&level3_id=5346&level4_id=0&level5_id=0 -- but it is unclear whether there are any regulations involved.
Also remember that every institution gets to set many of its own IACUC policies. Therefore it may be your institution that requires that any research on any species must go thru the IACUC. That would be your local IACUC policy, but not something mandated by the Animal Welfare Act or specific agencies.
##### Share on other sites
Cervical dislocation is only approved for rodents less than 200 gm. It is not a matter of "success" necessarily, but of "humane".
If CD is humane for any rodent, why wouldn't it be for all of them unless it were an issue of being able to do it properly (and possibly without injury to the human handler)? I'd suspect difficulty simply from personal experience.
Anyhow, as I said, this wasn't in a laboratory setting, and CD is pretty humane compared to python constriction (and *very* humane compared to what large monitor lizards do to their prey).
When I served on an IACUC, if you were collecting amphibians and reptiles from the wild then, yes, you had to have IACUC approval. But if you were working with non-endangered amphibian or reptile species purchased from a legal vendor, then no. The situation may have changed over the past 13 years. The USDA has a website on care of fish, amphibians, and reptiles in research -- http://awic.nal.usda.gov/nal_display...=0&level5_id=0 -- but it is unclear whether there are any regulations involved.
I'm pretty sure it's the USDA that regulates them, though I don't know when it started - I've only had to deal with them for 6 years or so. I poked around the website and couldn't find much. I do know that they're now regulated - I've got to go to training, we've got IACUC numbers on all the cages (even captive-bred herps and fish). I even remember my MS adviser complaining because he had to fill out paperwork in order to feed goldfish to the lungfish.
##### Share on other sites
If CD is humane for any rodent,
It is not humane for "any rodent". Only for rodents under 200 gm.
why wouldn't it be for all of them unless it were an issue of being able to do it properly (and possibly without injury to the human handler)? I'd suspect difficulty simply from personal experience.
I suspect that, as size increases, it becomes difficult to break the neck "instantly".
Anyhow, as I said, this wasn't in a laboratory setting, and CD is pretty humane compared to python constriction (and *very* humane compared to what large monitor lizards do to their prey).
No argument there. It is even more humane than hypobaric chambers used to kill unclaimed dogs and cats.
I'm pretty sure it's the USDA that regulates them, though I don't know when it started - I've only had to deal with them for 6 years or so. I poked around the website and couldn't find much. I do know that they're now regulated - I've got to go to training, we've got IACUC numbers on all the cages (even captive-bred herps and fish).
I found sites where herpetologists were recommending such training and supervision. But I have yet to find where USDA actually regulates captive-bred herps and fish. They do regulate catching them in the wild.
I have no doubt that your IACUC requires this. However, it may well be that there is no "regulation" as such, but a general acceptance among IACUCs that it is a good idea to supervise ectotherms and bring them under the IACUC umbrella. As I say, scientific societies that deal with ectotherms have websites that recommend this.
##### Share on other sites
Personally, I do not believe that animal testing is so awful. Alright, it sucks to be the animal, but it's far better that a rat dies than a human. After all, wouldn't you rather use something knowing that it's been tested, and that it's been proven, through trial and error, to be safer?
I say that animal testing is the lesser of two evils.
##### Share on other sites
Animals provide important in vivo models. You cannot just test drugs with cancer cell lines... well, I guess you can, but usually that is not too compelling on its own.
Not that animal models don't have their own problems. LD50 in mice does not always translate to drug cytotoxicity in humans, and genetic differences among individuals in how a drug is metabolized compound this problem.
##### Share on other sites
Safer and more accurate alternatives to animal testing have been developed, but are not being used. It is not scientifically sound to assume that animals will have the same reactions to drugs and cosmetics as humans. In addition, animal testing is unethical. Reading descriptions of what takes place during animal testing (for example, reading about how chemicals are dripped into rabbits' eyes) is horrifying. That is why it is necessary to start putting these alternatives into action. Having "organs on chips," a method developed at Harvard University, is one of the primary alternatives to animal testing. These chips contain human cells and show the development of reactions to drugs and cosmetics as they pertain to humans. This method will give much more accurate results and is more ethical than animal testing. Another method that has been developed is using human blood cells. This method is specifically for testing whether or not drugs will cause humans to develop a fever. Although this method does not give as wide a range of results as the "organs on a chip" method, it is extremely accurate in testing for a major side effect. Finally, computer simulations are also a safe alternative. This method not only shows the side effects of drugs, but also breaks down how the human develops the side effect, allowing scientists to see the cause of the reaction and easily make an adjustment. These are three of the many alternatives to animal testing that need to be implemented in drug and cosmetic testing.
##### Share on other sites
Could you perhaps elaborate on that first sentence? We certainly have some methods of in vitro testing that have replaced assays that may have otherwise been performed in vivo, but AFAIK we currently have nothing that can replace animal models entirely. Computer modeling can only go so far and it is not a viable replacement on the whole. Organs on chips is a technology still in development and while that is still the case, it cannot replace animal testing either.
Edit: actually, this is not the thread I thought it was. I know for sure the ethics argument has been talked about in this thread: http://www.scienceforums.net/topic/82211-experiments-on-animals-for-medical-research/
##### Share on other sites
Safer and more accurate alternatives to animal testing have been developed, but are not being used. It is not scientifically sound to assume that animals will have the same reactions to drugs and cosmetics as humans. In addition, animal testing is unethical. Reading descriptions of what takes place during animal testing (for example, reading about how chemicals are dripped into rabbits' eyes) is horrifying. That is why it is necessary to start putting these alternatives into action. Having "organs on chips," a method developed at Harvard University, is one of the primary alternatives to animal testing. These chips contain human cells and show the development of reactions to drugs and cosmetics as they pertain to humans. This method will give much more accurate results and is more ethical than animal testing. Another method that has been developed is using human blood cells. This method is specifically for testing whether or not drugs will cause humans to develop a fever. Although this method does not give as wide a range of results as the "organs on a chip" method, it is extremely accurate in testing for a major side effect. Finally, computer simulations are also a safe alternative. This method not only shows the side effects of drugs, but also breaks down how the human develops the side effect, allowing scientists to see the cause of the reaction and easily make an adjustment. These are three of the many alternatives to animal testing that need to be implemented in drug and cosmetic testing.
I agree. And why bother using animals to test COSMETICS? If it were a drug that could save lives, then it might be accepted. And the "organs on chips" are most likely better than testing on an animal, because the chips replicate a human's organs and the effects of the drug better than what might happen with a rabbit. The rabbit may die even though the drug could have saved human lives.
##### Share on other sites
I agree. And why bother using animals to test COSMETICS? If it were a drug that could save lives, then it might be accepted. And the "organs on chips" are most likely better than testing on an animal, because the chips replicate a human's organs and the effects of the drug better than what might happen with a rabbit. The rabbit may die even though the drug could have saved human lives.
Cosmetics - I kinda agree but I know people will wear them and thus they need to be tested properly. But your assertion that organs on chips are better than in vivo testing is just plain wrong. And medical researchers don't just give the local bunnies a prescription and tell them to pop into the pharmacist on the way home - they test in very specific models that are known to reproduce (to a known extent) effects in animals that are desirable in humans.
The argument against animal testing would advance at a much greater rate if those proposing it actually knew the science. But if they know the science then they tend to realise that animal testing is essential and don't make the argument in the first place
##### Share on other sites
But if they know the science then they tend to realise that animal testing is essential and don't make the argument in the first place
Animal testing is done because humans see themselves as more important than other species. Everything we use non-human animal species for we only do because we see ourselves as more important than them. It's nothing but egotistical garbage. And yes, I'm a hypocrite, I realize that all too well
http://www.who-els.co.za/2008/12/domain-hosting.html
The main players in this arena are Google, Microsoft, and Yahoo.
Seeing that we have to pay for .co.za in South Africa, I'm not worried about that part; plus, I've always believed in keeping the DNS and hosting separate. I run my own DNS servers from work, so not even that is a problem.
If you cannot use your own DNS servers, or do not have access to DNS servers, look at DynDNS ( http://www.dyndns.com ) first. I've had a couple of co.za domains on their servers before; just get the UniForum submission correct.
Now leaving my choice to last, here is what I tried:
Microsoft Office Live:
http://home.officelive.com
I ended up using the UK server, because it seemed to offer free accounts (only 5). But then, after setting everything up, I realised that the custom domain option was limited to a preset list - you cannot use just any domain. That was enough to make me give up.
"US$8.96 for the first 3 months (US$11.95/month after)" sounds exactly like a cellphone contract from some of our networks. I did not even read further.
http://ndflowers.com/blog/35e2b0-nuclear-particles-examples
Both alpha and beta particles are charged, but the nuclear reactions in Equations $$\ref{alpha1}$$ and $$\ref{beta2}$$, and most of the other nuclear reactions above, are not balanced with respect to charge, as discussed when balancing redox reactions. Fallout definition: fallout is defined as the consequence or result of something, or radioactive particles from a nuclear explosion. These interactions are able to hold the particles together at extremely small distances, of the order of a few nanometers (10⁻⁹ m). Watching Nuclear Particles: See Background Radiation Zoom Through A Cloud Chamber. Examples: strontium-90, cesium-137. - G - Gamma ray: a highly penetrating type of nuclear radiation, similar to x-radiation, except that it comes from within the nucleus of an atom and, in general, has a shorter wavelength. Each lepton has a lepton number of 1 and each antilepton has a lepton number of -1; other non-leptonic particles have a lepton number of 0. Nuclei formed by the fission of heavy elements. The weak nuclear force appears only in certain nuclear processes such as the β decay of a nucleus. The Cell Nucleus, Volume IX: Nuclear Particles, Part B discusses "splicing", "processing", and the controls of transcriptional and transport events which must be essential to cells that are either growing or are phenotypically differentiated. Atomic physics (or atom physics) is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Watch the video to learn more intriguing facts about electromagnetic radiation, insights that will help you understand this science project.
These are 10²⁵ times stronger than the gravitational force and weaker than the electromagnetic and nuclear forces. The mass of the parent nucleus is greater than the sum of the masses of the daughter nucleus and the alpha particle; this difference is called the disintegration energy. Nuclear fusion is a nuclear process in which energy is generated by smashing together light atoms. In particle physics, the lepton number is used to denote which particles are leptons and which particles are not. Nuclear physics is the field of physics that studies the building blocks and interactions of atomic nuclei. Its effect is experienced inside these particles.
Radiation Basics: Radiation is energy given off by matter in the form of rays or high-speed particles.
Nuclear physics is the study of the constituent particles of atomic nuclei, such as protons and neutrons, and the interactions between them. Atomic physics, by contrast, is primarily concerned with the arrangement of electrons around ... Alpha decay occurs when the strong nuclear force cannot hold a large nucleus together. ... All these are examples of electromagnetic radiation. Visit vedantu.com to read more about the process of nuclear fusion. These are very short-range forces, much smaller than the size of protons or neutrons. Nuclei consist of protons (elementary particles having a charge of 1.6 × 10⁻¹⁹ C) and neutrons (elementary particles having no charge). Alpha particles themselves are very stable. The lepton number is a conserved quantum number in all particle reactions. Fusion is the contrary reaction to fission, in which heavy isotopes are split apart.
These particles are massive with respect to other kinds of radiation; the beta particle, for example, is some 7,000 times smaller. They are of medium atomic weight and almost all are radioactive. Fusion is the means by which the sun and other stars generate light and heat.
http://kjmm.org/journal/view.php?number=802
Ahn and Kim: Charge Transport and Thermoelectric Properties of Sn-Doped Tetrahedrites Cu12Sb4-ySnyS13
### Abstract
In this study, tetrahedrite compounds doped with Sn were prepared by mechanical alloying and hot pressing, and their charge transport and thermoelectric properties were analyzed. X-ray diffraction analysis revealed that both the synthetic powders and sintered bodies were synthesized as a single tetrahedrite phase without secondary phases. Densely sintered specimens were obtained with relatively high densities of 99.5%-100.0% of the theoretical density, and the component elements were distributed uniformly. Sn was successfully substituted at the Sb site, and the lattice constant increased from 1.0348 to 1.0364 nm. Positive signs of the Hall and Seebeck coefficients confirmed that the Sn-doped tetrahedrites were p-type semiconductors. The carrier concentration decreased from 1.28 × 10¹⁹ to 1.57 × 10¹⁸ cm⁻³ as the Sn content increased, because excess electrons were supplied by doping with Sn4+ at the Sb3+ site of the tetrahedrite. The Seebeck coefficient increased with increasing Sn content, and Cu12Sb3.6Sn0.4S13 exhibited maximum values of 238-270 μV K⁻¹ at temperatures of 323-723 K. However, the electrical conductivity decreased as the amount of Sn doping increased. Thus, Cu12Sb3.9Sn0.1S13 exhibited the highest electrical conductivity of (2.24-2.40) × 10⁴ S m⁻¹ at temperatures of 323-723 K. A maximum power factor of 0.73 mW m⁻¹ K⁻² was achieved at 723 K for Cu12Sb3.9Sn0.1S13. Sn substitution reduced both the electronic and lattice thermal conductivities. The lowest thermal conductivity of 0.49-0.60 W m⁻¹ K⁻¹ was obtained at temperatures of 323-723 K for Cu12Sb3.6Sn0.4S13, where the lattice thermal conductivity was dominant at 0.49-0.57 W m⁻¹ K⁻¹. As a result, a maximum dimensionless figure of merit of 0.66 was achieved at 723 K for Cu12Sb3.9Sn0.1S13.
### 1. Introduction
Tetrahedrite (Cu12Sb4S13) is a promising thermoelectric material composed of abundant elements, and it has a low thermal conductivity because of its complex crystal structure (space group $I\bar{4}3m$) [1-3]. Lone-pair electrons of the Sb atoms in the tetrahedrite lead to the anharmonic vibration of Cu atoms located inside the triangular plane of the S atoms, which results in a low lattice thermal conductivity [4,5]. Considering the valence and coordination of each element, the charge balance of tetrahedrite can be expressed as $\mathrm{Cu}_{2}^{2+}\mathrm{Cu}_{10}^{+}\mathrm{Sb}_{4}^{3+}\mathrm{S}_{12}^{2-}\mathrm{S}^{2-}$ [6,7]. To improve the dimensionless figure of merit (ZT), partial substitutions at the Cu, Sb, or S sites have been made with doping elements, including transition metals such as Ni, Co, Fe, and Mn at the Cu site [8-11], Te or As at the Sb site [12,13], and Se at the S site [14].
ZT is influenced by the Seebeck coefficient (α), electrical conductivity (σ), thermal conductivity (κ), and absolute temperature (T) in Kelvin, represented as ZT = α²σκ⁻¹T. The Seebeck coefficient and electrical conductivity have a trade-off relationship within the power factor (PF = α²σ) because the carrier concentration has opposite effects on the two parameters [15]. In general, substitutional doping optimizes the carrier concentration, enhancing the PF, or decreases the thermal conductivity, improving the thermoelectric properties.
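To make the relationship concrete, the short Python sketch below combines a Seebeck coefficient, electrical conductivity, and thermal conductivity into PF and ZT. The input values are illustrative round numbers in the range reported below for Cu12Sb3.9Sn0.1S13, not measured data.

```python
# Sketch: power factor and figure of merit from transport quantities (SI units).
# The input values are illustrative, not measurements.
alpha = 180e-6   # Seebeck coefficient [V/K]
sigma = 2.3e4    # electrical conductivity [S/m]
kappa = 0.8      # total thermal conductivity [W/(m K)]
T = 723.0        # absolute temperature [K]

PF = alpha**2 * sigma    # power factor, PF = alpha^2 * sigma [W/(m K^2)]
ZT = PF * T / kappa      # dimensionless figure of merit, ZT = PF * T / kappa

print(f"PF = {PF * 1e3:.2f} mW m^-1 K^-2")
print(f"ZT = {ZT:.2f}")
```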
Although many doping studies on tetrahedrites have been conducted on Cu sites, few studies have been performed on Sb sites. Bouyrie et al. [12] reported a ZT of 0.80 at 623 K for Cu12Sb3.39Te0.61S13, and Lu et al. [16] obtained a ZT of 0.92 at 723 K for Cu12Sb3TeS13. Kwak et al. [17] reported a ZT of 0.88 at 723 K for Cu12Sb3.9Bi0.1S13, and Kumar et al. [18] achieved a ZT of 0.84 at 673 K for Cu12Sb3.8Bi0.2S13. In this study, tetrahedrites Cu12Sb4-ySnyS13 (y = 0.1-0.4) partially substituted with Sn at the Sb site were synthesized by mechanical alloying, and sintered by hot pressing, and their charge transport and thermoelectric properties were assessed.
### 2. Experimental Procedure
Sn-doped tetrahedrites Cu12Sb4-ySnyS13 (y = 0.1, 0.2, 0.3, and 0.4) were synthesized by mechanical alloying. Cu (< 45 μm, purity 99.9%, Kojundo Chemical Lab.), Sb (< 150 μm, purity 99.999%, Kojundo Chemical Lab.), Sn (< 35 μm, purity 99.999%, Kojundo Chemical Lab.), and S (< 75 μm, purity 99.99%, Kojundo Chemical Lab.) were weighed to obtain the corresponding stoichiometric compositions. Mechanical alloying was performed at a rotation speed of 350 rpm for 24 h using a planetary ball mill (Fritsch Pulverisette5) in an Ar atmosphere with stainless-steel jars and balls. The synthesized powder was loaded into a graphite mold and subjected to consolidation using hot pressing at 723 K for 2 h under 70 MPa in vacuum. Details of the process conditions have been reported in a previous study [19].
The phases of the synthetic powder and sintered specimens were analyzed using X-ray diffraction (XRD; Bruker D8-Advance) with Cu-Kα radiation. The diffraction patterns were measured in the θ–2θ mode (2θ = 10–90°) with a step size of 0.02°. The lattice constants were analyzed using the Rietveld refinement (TOPAS program). Scanning electron microscopy (SEM; FEI Quanta400) was used to observe the fractured surfaces of the sintered specimens. The composition was analyzed using an energy-dispersive spectrometer (EDS; Bruker Quantax200), where the energy levels of the elements were adopted as Cu-Lα (0.928 eV), Sb-Lα (3.604 eV), Sn-Lα (3.444 eV), and S-Kα (2.309 eV). The Hall coefficient, carrier concentration, and mobility were measured by the van der Pauw (Keithley 7065) method by applying a constant magnetic field of 1 T and a current of 100 mA. The Seebeck coefficient and electrical conductivity were measured using a ZEM-3 (Ulvac-Riko) system in a He atmosphere. The thermal diffusivity and specific heat were measured using the laser flash method with TC-9000H (Ulvac-Riko) equipment, and then the thermal conductivity was estimated. Finally, the PF and ZT were evaluated at temperatures in the range of 323-723 K.
### 3. Results and Discussion
Figure 1 shows the XRD analytical results for the Cu12Sb4-ySnyS13 prepared by mechanical alloying and hot pressing. A single tetrahedrite phase (ICDD PDF#024-1318) was identified without secondary phases, and no phase transitions were observed in the doping content range of this study. As Table 1 shows, the lattice constant of Cu12Sb4-ySnyS13 increased from 1.0348 to 1.0364 nm as the Sn content increased. In our previous study [7], the lattice constant of undoped tetrahedrite Cu12Sb4S13 was 1.0327 nm. Thus, the lattice constant was significantly increased by Sn substitution at the Sb site. Tippireddy et al. [20] reported the ionic radii of Cu+ (60 pm), Cu2+ (57 pm), Sb3+ (76 pm), Sn4+ (69 pm), and Sn2+ (118 pm). Hansen et al. [21] suggested that Sn4+ can be doped into the Cu sites when no other divalent transition elements are present in the tetrahedrite system, suggesting that Cu2+ can be substituted at the Sb3+ site, and Sn4+ can be substituted for Cu+ and/or Cu2+. Therefore, in this study, as the Sn content increased, the lattice constant increased.
Figure 2 shows SEM images of the fractured surfaces of Cu12Sb4-ySnyS13. No remarkable difference in microstructure was observed based on Sn content, and each element was homogeneously distributed, as confirmed by EDS elemental mapping. As Table 1 shows, all the specimens reached relatively high densities of 99.5%-100.0%, and the actual composition was similar to its nominal composition within the error ranges of the EDS analysis. Therefore, in this study, mechanical alloying and hot pressing resulted in the successful preparation of homogeneous and dense Sn-doped tetrahedrite compounds.
Figure 3 shows the carrier concentration and mobility of the Cu12Sb4-ySnyS13. As the Sn content increased, the carrier concentration decreased from 1.28 × 10¹⁹ to 1.57 × 10¹⁸ cm⁻³. Sn4+ substituted at the Sb3+ site provided excess electrons, which reduced the carrier (hole) concentration through charge compensation. Considering the ionic radii reported by Tippireddy et al. [20], when Sn is doped into the tetrahedrite, either Sn4+ should be substituted at the Cu+ or Cu2+ sites, or a larger ion at the Sb3+ site, to increase the lattice constant, as shown in this study. However, if the entirety of the doped Sn were substituted at the Sb3+ site in the Sn4+ state, we would expect only a reduction in the carrier (hole) concentration. Therefore, this study interpreted that Sn4+ was partially substituted at the Cu+/Cu2+ sites and/or partially substituted at the Sb3+ site; this contributed to an increase in the lattice constant and a decrease in the carrier concentration. When y ≤ 0.3, the mobility slightly increased with increasing Sn content, but it decreased when y = 0.4. The reason the mobility of Cu12Sb3.6Sn0.4S13 decreased is uncertain, but it was assumed to be a consequence of the change in the conduction mechanism.
The electrical conductivities of the Cu12Sb4-ySnyS13 samples are shown in Fig 4. The electrical conductivities (σ) of semiconductors are typically affected by the carrier concentration (n) and mobility (μ), which is represented as σ = neμ (e: electronic charge) [22]. For y = 0.1, the specimen in our study exhibited minimal temperature dependence up to 623 K but showed a slight decrease at temperatures above 623 K, indicating degenerate semiconductor behavior. For y = 0.2-0.3, the electrical conductivity slightly increased with increasing temperature. However, in the case of y = 0.4, the electrical conductivity showed a strong positive temperature dependence, indicating non-degenerate semiconductor behavior. Thus, it was determined that the electronic conduction mechanism transitioned from a degenerate to a non-degenerate state when the tetrahedrite was doped with Sn. At a constant temperature, the electrical conductivity decreased as the Sn content increased. This is in good agreement with the decrease in the carrier concentration as the Sn doping level increased, as shown in Fig 3.
Cu12Sb3.9Sn0.1S13 exhibited the highest electrical conductivity of (2.24-2.40) × 10⁴ S m⁻¹ at temperatures between 323 and 723 K. For the electrical conductivity of undoped Cu12Sb4S13, Kim et al. [19] reported (2.17-3.03) × 10⁴ S m⁻¹ at 323-723 K, and Pi et al. [23] obtained (1.10-1.90) × 10⁴ S m⁻¹ at 323-723 K. Kwak et al. [17] reported that the electrical conductivity increased with increasing Bi content, with a positive temperature dependence in the cases of y ≤ 0.3 for Cu12Sb4-yBiyS13 (y = 0.1-0.4): as a result, from (2.20-3.12) × 10⁴ S m⁻¹ at 323 K to (2.95-3.25) × 10⁴ S m⁻¹ at 723 K. However, when y = 0.4, the electrical conductivity decreased because of the formation of a secondary phase (skinnerite Cu3SbS3). Conversely, Bouyrie et al. [12] reported a negative temperature dependence of the electrical conductivity for Cu12Sb4-xTexS13 (x = 0.5-2.0): from (9.09-0.36) × 10⁴ S m⁻¹ at 300 K to (4.54-0.42) × 10⁴ S m⁻¹ at 700 K. Tippireddy et al. [20] reported that the electrical conductivity decreased with increasing Sn content, with a negative temperature dependence, for Cu12Sb4-xSnxS13 (x = 0.25-1): from (5.46-3.95) × 10⁴ S m⁻¹ at 373 K to (4.80-3.09) × 10⁴ S m⁻¹ at 673 K.
Figure 5 shows the Seebeck coefficients of the Cu12Sb4-ySnyS13 samples. The Seebeck coefficient values of all specimens were positive, indicating that the major carriers were holes, i.e., p-type semiconductor behavior. The Seebeck coefficient of a p-type semiconductor is expressed as $\alpha = \frac{8\pi^2 k_B^2}{3eh^2} m^* T \left(\frac{\pi}{3n}\right)^{2/3}$, where $k_B$ is the Boltzmann constant, $m^*$ is the effective carrier mass, and $h$ is the Planck constant [24]. In our study, as the temperature increased, the Seebeck coefficient increased when y ≤ 0.3, but when y = 0.4, the Seebeck coefficient decreased. In general, when the temperature is higher than a certain temperature (i.e., the intrinsic transition temperature), the carrier concentration increases rapidly and the Seebeck coefficient decreases. This was related to the temperature dependence of the electrical conductivity, as shown in Fig 4.
Accordingly, the intrinsic transition seemed to occur at temperatures below 323 K for Cu12Sb3.6Sn0.4S13. At a constant temperature, the Seebeck coefficient increased as the Sn content increased. Thus, Cu12Sb3.6Sn0.4S13 exhibited a maximum Seebeck coefficient of 270-238 μV K⁻¹ at temperatures of 323-723 K. In the case of undoped Cu12Sb4S13, Kim et al. [19] reported a Seebeck coefficient of 134-183 μV K⁻¹ at 323-723 K, and Pi et al. [23] obtained 155-195 μV K⁻¹ at 323-723 K. Kwak et al. [17] reported that the Seebeck coefficient decreased as the Bi content increased for Cu12Sb4-yBiyS13 (y = 0.1-0.4), and obtained the highest Seebeck coefficient of 153-186 μV K⁻¹ at 323-723 K for Cu12Sb3.9Bi0.1S13. However, Bouyrie et al. [12] found that the Seebeck coefficient increased with increasing Te content for Cu12Sb4-xTexS13 (x = 0.50-1.75). Thus, the highest Seebeck coefficient of 192-264 μV K⁻¹ was obtained at 300-700 K for Cu12Sb2.25Te1.75S13. Tippireddy et al. [20] reported that the Seebeck coefficient decreased with increasing Sn content for Cu12Sb4-xSnxS13 (x = 0.25-1), reaching a maximum value of 183 μV K⁻¹ at 673 K for Cu12Sb3.65Sn0.35S13.
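As a numerical illustration of the degenerate-limit expression quoted above, the sketch below evaluates α for the two carrier concentrations measured in this study. The effective mass m* = m_e is an assumption for illustration only, so the absolute values are not meaningful; the point is the (π/3n)^(2/3) trend, which is why lightly doped specimens show larger Seebeck coefficients.

```python
import math

# Degenerate-limit Seebeck coefficient:
#   alpha = (8 pi^2 kB^2 / (3 e h^2)) * m* * T * (pi / (3 n))^(2/3)
kB = 1.380649e-23      # Boltzmann constant [J/K]
e = 1.602176634e-19    # elementary charge [C]
h = 6.62607015e-34     # Planck constant [J s]
me = 9.1093837015e-31  # electron rest mass [kg]; m* = me is an assumption here

def seebeck(n_cm3, T, m_eff=me):
    """alpha [V/K] for carrier concentration n [cm^-3] at temperature T [K]."""
    n = n_cm3 * 1e6  # cm^-3 -> m^-3
    return (8 * math.pi**2 * kB**2) / (3 * e * h**2) * m_eff * T \
        * (math.pi / (3 * n)) ** (2 / 3)

# The formula overestimates alpha at low n (non-degenerate regime); it is shown
# only to illustrate the n^(-2/3) dependence.
for n in (1.28e19, 1.57e18):
    print(f"n = {n:.2e} cm^-3 -> alpha = {seebeck(n, 323) * 1e6:.0f} uV/K")
```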
Figure 6 presents the PF of Cu12Sb4-ySnyS13. Based on the temperature dependences of the electrical conductivity and Seebeck coefficient shown in Figs 4 and 5, respectively, the PF increased with increasing temperature. Because both the Seebeck coefficient and electrical conductivity are affected by the carrier concentration, as the Sn content increased, the carrier concentration (i.e., electrical conductivity) decreased, which decreased the PF. In this study, a maximum PF of 0.38-0.73 mW m⁻¹ K⁻² was obtained at 323-723 K for Cu12Sb3.9Sn0.1S13. For undoped Cu12Sb4S13, Kim et al. [19] and Pi et al. [23] reported 0.40-0.95 mW m⁻¹ K⁻² and 0.26-0.72 mW m⁻¹ K⁻² at 323-723 K, respectively. Kwak et al. [17] obtained a high PF of 1.02 mW m⁻¹ K⁻² at 723 K for Cu12Sb3.9Bi0.1S13, and Bouyrie et al. [12] reported 0.95 mW m⁻¹ K⁻² at 700 K for Cu12Sb3.5Te0.5S13. Tippireddy et al. [20] achieved a very high PF of 1.30 mW m⁻¹ K⁻² at 673 K for Cu12Sb3.65Sn0.35S13, which was anomalously higher than in other studies on tetrahedrites at all temperatures examined.
Figure 7 shows the thermal conductivities of the Cu12Sb4-ySnyS13 samples. From the relationship κ = κE + κL, the lattice thermal conductivity (κL) was obtained by subtracting the electronic thermal conductivity (κE) from the total thermal conductivity (κ). The electronic thermal conductivity was calculated using the Wiedemann–Franz law (κE = LσT, L: Lorenz number) [25], where the Lorenz number was estimated using the formula L [10⁻⁸ V² K⁻²] = 1.5 + exp(−|α|/116), with α in μV K⁻¹ [26,27]. Figure 7(a) shows the total thermal conductivity of Cu12Sb4-ySnyS13. As the temperature increased, the total thermal conductivity increased, and then decreased at temperatures above 623 K due to the influence of the lattice thermal conductivity. At a constant temperature, the thermal conductivity decreased as the Sn content increased. Cu12Sb3.6Sn0.4S13 exhibited the lowest thermal conductivity of 0.49-0.60 W m⁻¹ K⁻¹ at 323-723 K, which was lower than the 0.72-0.82 W m⁻¹ K⁻¹ and 0.65-0.78 W m⁻¹ K⁻¹ of undoped Cu12Sb4S13 reported by Kim et al. [19] and Pi et al. [23], respectively. Kwak et al. [17] obtained a minimum thermal conductivity of 0.77-0.75 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb3.6Bi0.4S13. Tippireddy et al. [20] reported a minimum thermal conductivity of 0.61-0.79 W m⁻¹ K⁻¹ at 373-673 K for Cu12Sb3.75Sn0.25S13 and suggested that localized energy caused by lone-pair electrons on Sb scattered the high-velocity acoustic phonons and hybridized with the acoustic dispersions, leading to a reduction in thermal conductivity. Therefore, this study confirmed that the thermal conductivity decreased when Sn was substituted for Sb.
As Fig 7(b) shows, the electronic thermal conductivity increased with increasing temperature but decreased with increasing Sn content, resulting in a minimum electronic thermal conductivity of 0.002-0.057 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb3.6Sn0.4S13. This was much lower than in other studies: 0.19-0.34 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb4S13 [19], 0.19-0.40 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb3.6Bi0.4S13 [17], and 0.27-0.34 W m⁻¹ K⁻¹ at 373-673 K for Cu12Sb3SnS13 [20]. In this study, the lattice thermal conductivity decreased as the Sn content increased, resulting in a minimum lattice thermal conductivity of 0.49-0.57 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb3.6Sn0.4S13. Kim et al. [19] reported a lattice thermal conductivity of 0.43-0.59 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb4S13. Kwak et al. [17] obtained a minimum lattice thermal conductivity of 0.58-0.35 W m⁻¹ K⁻¹ at 323-723 K for Cu12Sb3.6Bi0.4S13, and Tippireddy et al. [20] achieved the lowest lattice thermal conductivity of 0.23-0.32 W m⁻¹ K⁻¹ at 373-673 K for Cu12Sb3.75Sn0.25S13.
Figure 8 shows the Lorenz numbers of Cu12Sb4-ySnyS13. Generally, the Lorenz number falls in the range of (1.45-2.44) × 10⁻⁸ V² K⁻² [28]. Higher Lorenz numbers indicate degenerate-semiconductor or metallic behavior, and lower ones indicate non-degenerate semiconductor behavior. In this study, the Lorenz number decreased with increasing Sn content, which was considered to reflect a transition from the degenerate to the non-degenerate state. Cu12Sb3.9Sn0.1S13 showed the highest Lorenz number of (1.82-1.71) × 10⁻⁸ V² K⁻², while Cu12Sb3.6Sn0.4S13 exhibited the lowest Lorenz number of (1.59-1.62) × 10⁻⁸ V² K⁻² at 323-723 K. Similar Lorenz numbers were reported by Kim et al. [19] and Pi et al. [23], namely (1.71-1.81) × 10⁻⁸ V² K⁻² and (1.76-1.69) × 10⁻⁸ V² K⁻² at 323-723 K for Cu12Sb4S13, respectively, and by Kwak et al. [17], namely 1.88 × 10⁻⁸ V² K⁻² at 323 K for Cu12Sb3.7Bi0.3S13.
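The decomposition used here can be reproduced with a few lines; the sketch below implements the Lorenz-number approximation and the Wiedemann–Franz split, with illustrative inputs in the range reported for Cu12Sb3.9Sn0.1S13 (not measured data).

```python
import math

def lorenz(alpha_uV):
    """Approximation quoted above: L = 1.5 + exp(-|alpha|/116),
    with alpha in uV/K and L in units of 10^-8 V^2 K^-2; returned in V^2/K^2."""
    return (1.5 + math.exp(-abs(alpha_uV) / 116.0)) * 1e-8

def split_kappa(alpha_uV, sigma, kappa_total, T):
    """Return (kappa_E, kappa_L) in W m^-1 K^-1 via Wiedemann-Franz: kE = L*sigma*T."""
    kE = lorenz(alpha_uV) * sigma * T
    return kE, kappa_total - kE

# Illustrative inputs (not measured data).
kE, kL = split_kappa(alpha_uV=180, sigma=2.3e4, kappa_total=0.8, T=723)
print(f"L = {lorenz(180):.2e} V^2/K^2, kappa_E = {kE:.2f}, kappa_L = {kL:.2f}")
```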
The ZT for Cu12Sb4-ySnyS13 is shown in Fig 9. As the temperature increased, the ZT increased as the PF increased. Although the thermal conductivity decreased as the Sn content increased, the decrease in the PF was dominant. Therefore, the increase in Sn doping level could not further improve the ZT.
A maximum ZT of 0.66 was obtained at 723 K for the Cu12Sb3.9Sn0.1S13. The ZT values of the Sb-site-doped tetrahedrites were compared. Bouyrie et al. [12] reported that a ZT of 0.65 at 623 K for Cu12Sb3.5Te0.5S13 was generated by the multi-step annealing and spark-plasma sintering (SPS) process. Kwak et al. [17] obtained a ZT of 0.88 at 723 K for Cu12Sb3.9Bi0.1S13 prepared by mechanical alloying and hot pressing. Tippireddy et al. [20] achieved a ZT of 0.96 at 673 K for Cu12Sb3.65Sn0.35S13 fabricated by the annealing-milling-SPS process.
### 4. Conclusions
In this study, Sn-doped tetrahedrites Cu12Sb4-ySnyS13 (0.1 ≤ y ≤ 0.4) were synthesized by mechanical alloying and sintered by hot pressing. A single tetrahedrite phase was obtained, without secondary phases or residual elements, even without a post-annealing treatment. The lattice constant increased as the Sn doping content increased, and Sn was substituted at the Sb site. As the Sn content increased, the carrier concentration decreased due to the charge compensation resulting from the excess supply of electrons. As the Sn content increased, the mobility tended to increase when y ≤ 0.3, but it decreased rapidly when y = 0.4, which was presumed to be caused by a change in the conduction mechanism from a degenerate to a non-degenerate state. The electrical conductivity increased as the Sn content decreased, and its temperature dependence was minimal when y = 0.1-0.3; however, the electrical conductivity of the specimen with y = 0.4 increased rapidly as the temperature increased, similar to the behavior of a non-degenerate semiconductor. The Seebeck coefficient increased as the Sn content increased. For y = 0.1-0.3, the Seebeck coefficient increased as the temperature increased, but when y = 0.4, it decreased, because the carrier concentration increased rapidly at temperatures above the intrinsic transition temperature. The PF increased as the temperature increased, but it decreased with increasing Sn doping content; the PF was dominated by the influence of the electrical conductivity. When the Sn content increased at a constant temperature, both the lattice and electronic thermal conductivities decreased. The Lorenz number decreased as the Sn content increased, indicating a change from the degenerate to the non-degenerate state. The maximum ZT of 0.66 was obtained at 723 K for Cu12Sb3.9Sn0.1S13.
### Acknowledgments
This study was supported by the Basic Science Research Capacity Enhancement Project (National Research Facilities and Equipment Center) through the Korea Basic Science Institute funded by the Ministry of Education (Grant No. 2019R1A6C1010047).
##### Fig. 1.
XRD patterns of the Cu12Sb4-ySnyS13 tetrahedrites synthesized by mechanical alloying and hot pressing.
##### Fig. 2.
SEM images of the fractured surfaces of Cu12Sb4-ySnyS13, and EDS elemental maps of Cu12Sb3.9Sn0.1S13.
##### Fig. 3.
Charge transport properties of Cu12Sb4-ySnyS13.
##### Fig. 4.
Temperature dependences of the electrical conductivities of Cu12Sb4-ySnyS13.
##### Fig. 5.
Temperature dependences of the Seebeck coefficients of Cu12Sb4-ySnyS13.
##### Fig. 6.
Temperature dependences of the PFs of Cu12Sb4-ySnyS13.
##### Fig. 7.
Temperature dependences of the thermal conductivities of Cu12Sb4-ySnyS13: (a) total thermal conductivity and (b) electronic and lattice thermal conductivities.
##### Fig. 8.
Temperature dependences of the Lorenz numbers of Cu12Sb4-ySnyS13.
##### Fig. 9.
Temperature dependences of the ZTs of Cu12Sb4-ySnyS13.
##### Table 1.
Chemical compositions and physical properties of Cu12Sb4-ySnyS13.
| Nominal composition | Actual composition | Relative density [%] | Lattice constant [nm] | Lorenz number [10⁻⁸ V² K⁻²] |
| --- | --- | --- | --- | --- |
| Cu12Sb3.9Sn0.1S13 | Cu13.59Sb3.07Sn0.10S12.39 | 99.9 | 1.0348 | 1.82 |
| Cu12Sb3.8Sn0.2S13 | Cu12.97Sb2.81Sn0.20S13.01 | 100.0 | 1.0359 | 1.79 |
| Cu12Sb3.7Sn0.3S13 | Cu13.48Sb2.68Sn0.31S12.53 | 99.5 | 1.0357 | 1.73 |
| Cu12Sb3.6Sn0.4S13 | Cu13.00Sb2.57Sn0.39S13.04 | 99.7 | 1.0364 | 1.59 |
### REFERENCES
1. X. Lu and D. T. Morelli, Phys. Chem. Chem. Phys. 15, 5762 (2013).
2. X. Lu, D. T. Morelli, Y. Xia, F. Zhou, V. Ozolins, H. Chi, X. Zhou, and C. Uher, Adv. Energy Mater. 3, 342 (2013).
3. A. Pfitzner, M. Evain, and V. Petricek, Acta Crystallogr. 53, 337 (1997).
4. Y. Bouyrie, C. Candolfi, S. Pailhès, M. M. Koza, B. Malaman, A. Dauscher, J. Tobola, O. Boisron, L. Saviot, and B. Lenoir, Phys. Chem. Chem. Phys. 17, 19751 (2015).
5. W. Lai, Y. Wang, D. T. Morelli, and X. Lu, Adv. Funct. Mater. 25, 3648 (2015).
6. E. Lara-Curzio, A. F. May, O. Delaire, M. A. McGuire, X. Lu, C. Y. Liu, E. D. Case, and D. T. Morelli, J. Appl. Phys. 115, 193515 (2014).
7. S. Y. Kim, G. E. Lee, and I. H. Kim, J. Korean Phys. Soc. 74, 967 (2019).
8. C. P. L. Guelou, A. V. Powell, R. Smith, and P. Vaqueiro, J. Appl. Phys. 126, 045107 (2019).
9. E. Makovicky, K. Forcher, W. Lottermoser, and G. Amthauer, Mineral. Petrol. 43, 73 (1990).
10. R. Chetty, A. Bali, M. H. Naik, G. Rogl, P. Rogl, M. Jain, S. Suwas, and R. C. Mallik, Acta Mater. 100, 266 (2015).
11. K. Suekuni, K. Tsuruta, M. Kunii, H. Nishiate, E. Nishibori, S. Maki, M. Ohta, A. Yamamoto, and M. Koyano, J. Appl. Phys. 113, 043712 (2013).
12. Y. Bouyrie, C. Candolfi, V. Ohorodniichuk, B. Malaman, A. Dauscher, J. Tobola, and B. Lenoir, J. Mater. Chem. C 3, 10476 (2015).
13. P. Levinsky, C. Candolfi, A. Dauscher, J. Tobola, J. Hejtmanek, and B. Lenoir, Phys. Chem. Chem. Phys. 21, 4547 (2019).
14. T. K. C. Alves, G. Domingues, E. B. Lopes, and A. P. Goncalves, J. Electron. Mater. 48, 2028 (2019).
15. X. Yan, B. Poudel, Y. Ma, W. Liu, G. Joshi, H. Wang, Y. Lan, D. Wang, G. Chen, and Z. Ren, Nano Lett. 10, 3373 (2010).
16. X. Lu and D. Morelli, J. Electron. Mater. 43, 1983 (2014).
17. S. G. Kwak, J. H. Pi, G. E. Lee, and I. H. Kim, Korean J. Met. Mater. 58, 272 (2020).
18. D. S. P. Kumar, R. Chetty, P. Rogl, G. Rogl, E. Bauer, P. Malar, and R. C. Mallik, Intermetallics 78, 21 (2016).
19. S. Y. Kim, S. G. Kwak, J. H. Pi, G. E. Lee, and I. H. Kim, J. Electron. Mater. 48, 1857 (2019).
20. S. Tippireddy, D. S. P. Kumar, A. Karati, A. Ramakrishnan, S. Sarkar, S. C. Peter, P. Malar, K. H. Chen, B. S. Murty, and R. C. Mallik, ACS Appl. Mater. Interf. 11, 21686 (2019).
21. M. K. Hansen, E. Makovicky, and S. Karup-Møller, J. Miner. Geochem. 179, 43 (2003).
22. X. Shi, H. Chen, F. Hao, R. Liu, T. Wang, P. Qiu, U. Burkhardt, Y. Grin, and L. Chen, Nat. Mater. 17, 421 (2018).
23. J. H. Pi, G. E. Lee, and I. H. Kim, J. Electron. Mater. 49, 2710 (2020).
24. H. S. Kim, Z. M. Gibbs, Y. Tang, H. Wang, and G. J. Snyder, APL Mater. 3, 041506 (2015).
25. T. Caillat, A. Borshchevsky, and J. P. Fleurial, J. Appl. Phys. 80, 4442 (1996).
26. B. Madaval and S. J. Hong, J. Electron. Mater. 45, 6059 (2016).
27. S. Y. Kim, J. H. Pi, G. E. Lee, and I. H. Kim, Korean J. Met. Mater. 58, 340 (2020).
28. G. S. Kumar, G. Prasad, and R. O. Pohl, J. Mater. Sci. 28, 4261 (1993).
https://academic.oup.com/icesjms/article/2690095/Evaluating-the-impacts-of-fishing-on-sex-changing
Sex change has been widely documented in many commercially and recreationally important fish species, yet the implications of this life history trait are not considered in most stock assessments. This omission can lead to poor estimates of parameters vital to understanding the health of sequentially hermaphroditic stocks. Here, we present a game theoretic approach to model the sex changing behaviour of a stock of protogynous (female first) hermaphroditic fish and produce estimates of maximum sustainable yield (MSY), equilibrium biomass at MSY (BMSY) and sex ratio, then compare these reference points to those from an otherwise identical gonochoristic (non-sex changing) stock. We tested each stock at varying levels of exploitation and with a range of assumptions about how sex ratio impacts fertilization rate. We show that a protogynous hermaphroditic stock with flexible timing of sex change produces similar MSY and slightly higher BMSY than a gonochoristic stock with otherwise identical vital rates. Sex changing stocks were also able to maintain a higher proportion of males in the population than did non-sex changing stocks as exploitation increased. Although sex changing stocks were able to maintain their sex ratio, the age at which females changed sex decreased with increased exploitation, suggesting smaller body size, and presumably lower fecundity, for females in heavily exploited sex changing stocks. Our game theoretic approach to evaluating hermaphroditic stocks can accommodate a wide variety of sex changing cues and behaviours and allows a flexible model for understanding the effects of exploitation on hermaphroditic stocks.
## Introduction
The ability to change sex over an individual’s lifetime (sequential hermaphroditism) has been widely documented in marine teleost fish, having been confirmed in 48 families from thirteen orders including many recreationally and commercially important species (Sadovy de Mitcheson and Liu, 2008; Erisman and Hastings, 2011). Although hermaphroditism is generally well known by fisheries managers, and some management agencies collect data relevant to management of such species (e.g. sex ratio of the catch), most assessments of hermaphroditic stocks are conducted using the same methods as those which are generally applied to non-sex changing fish (Provost and Jensen, 2015). The failure to tailor assessments to the biology of hermaphroditic stocks may lead to poor estimates of biological reference points, or of the effects of exploitation on the stock, perhaps leading to collapse (e.g. Heppell et al., 2006; Alonzo et al., 2008; Brooks et al., 2008). Long-term failure in monitoring exploitation of such populations may also induce evolutionary changes in age at maturity and energy allocation among many other traits (Sattar et al., 2008).
A common theme emerging from research on sequential hermaphrodites is that the specifics of when and why an individual changes sex are critical to understanding the response of the population to harvest (Alonzo and Mangel, 2005). In many studies on the effects of fishing on protogynous (female to male sex change) populations, it is assumed that a female will change sex at a fixed age or size (Alonzo and Mangel, 2004) or that a female will change sex at an age proportional to the average age/size of the population (Armsworth, 2001). Given these assumptions, in a fishery where larger fish (predominately male) are preferentially targeted or retained, the sex ratio can become highly female biased and the population may suffer from sperm limitation (Sato and Goshima, 2006). This effect may lead to rapid population decline in the face of fishery-induced mortality, and eventual collapse (Armsworth, 2001; Alonzo and Mangel, 2004). However, species that show plasticity in the timing of their sex change are in some cases as resilient to fishing as non-sex changing stocks (Alonzo and Mangel, 2005; Molloy et al., 2007; Ben Miled et al., 2010). This resiliency is driven by the ability of the stock to compensate for selective loss of one sex by the other transitioning at an earlier age or size than in the absence of fishing. Even when sex ratios become highly female biased due to size-selective harvest of males, the stock can still maintain resiliency if the fertilization rate remains high (Brooks et al., 2008). Together these results strongly suggest that managers cannot derive simple sustainable harvest rules for all sequentially hermaphroditic fish; in short, the details matter.
We gain some insight into these details from reviewing the evolutionary theory behind sex changing populations. A primary prediction is that a hermaphroditic species will change sex at a size where the reproductive rate of the first sex equals that of the second sex (Warner, 1975). This outcome, however, is not common for many wild populations of hermaphrodites, as many will change sex earlier than predicted under the size-advantage theoretical model, suggesting factors other than size play a role in triggering sex change (Kazancioğlu and Alonzo, 2010; Rogers and Koch, 2011). From empirical tests of sex-change theory, we know that sex change in hermaphroditic fish will depend on a combination of social as well as endogenous cues (Warner and Swearer, 1991; Alonzo and Mangel, 2005; Benton and Berlinsky, 2006). For example, Sakai et al. (2002) showed that large females of the protogynous wrasse, Halichoeres melanurus, exhibited male sexual behaviour immediately after male removal and became functionally male within 2–3 weeks of removal. However, when the largest female was relatively small, she was less likely to perform the male role. Reluctance to perform the male role at smaller body size is presumed to be due to strong female mate choice for larger males and male/male competition (Sakai et al., 2002). The existing body of research on this topic thus suggests that sex change is driven by a combination of exogenous (e.g. social) and endogenous (e.g. body size) cues. The implication for stock assessment and management is that population models must be flexible enough to capture the complexities of each species' sex change “rules” (Alonzo and Mangel, 2005).
Evolutionary game theory provides a modelling framework in which complex cues of sex change can be represented. Game theory has been used to study hermaphroditic life histories in fishes and many have shown the conditions under which sex change is expected to evolve, and the timing of sex change in such species (Charnov, 1982; Kazancioğlu and Alonzo, 2009; Ben Miled et al., 2010). Game theory is a logical way to represent the life histories of sequentially hermaphroditic species in that it explicitly organizes how costs and benefits to changing sex trade-off against one another to produce net payoffs to the individual. Although other authors have generated model structures that capture these dynamics to a greater or lesser extent (e.g. Alonzo and Mangel, 2005, Molloy et al., 2007), here we formalize its use within a broad and flexible framework to allow its use for many species, rather than a species-specific model. We use our model to determine the maximum sustainable yield (MSY), Biomass at UMSY (BMSY), number of males in the population, number of females changing sex, and average age at sex change. We use as an example black sea bass (Centropristis striata), a protogynous hermaphrodite that is the target of important recreational and commercial fisheries throughout much of the Atlantic coast of the United States. We then evaluate how these factors influence estimates of fisheries yield, and compare these estimates to those of an otherwise identical gonochoristic population.
## Methods
We chose black sea bass due to its economic and cultural importance throughout the mid-Atlantic coast of the United States. The high exploitation and past overfishing of black sea bass, along with its protogynous mating system make it an ideal species for a modelling approach such as the one presented here (NEFSC, 2012; Provost, 2013). In addition, the most recent stock assessment for black sea bass in the mid-Atlantic region was rejected by the reviewers, in part because it failed to incorporate the protogynous life history of the species within the assessment model. We first built an age and sex-based population projection model and applied it to a non-sex-changing stock. The model included age-specific natural survival (S), age and sex-specific vulnerability to fishing (v), and sex-ratio dependent recruitment (R). Individuals in age-class 2 and above were considered mature. The population model followed the form:
(1)
$X_{t+1} = X_t S - X_t U_x + R_{x,t} - C_t$
$Y_{t+1} = Y_t S - Y_t U_y + R_{y,t} + C_t$
where $X_t$ is the total female population vector and $Y_t$ is the male population vector for age classes 1–10 at a given time ($t$), and $S$ is the survival vector consisting of individual annual survival probability estimates for age classes 1–10, calculated as $1 - M$, where $M$ is the natural mortality probability vector for each age class. $U$ is the harvest vector of $v$ per age class and is biased toward larger fish and males (Provost, 2013). $R$ is the number of young recruited into the population as females ($x$) or males ($y$), and $C$ is the number of female fish that become male (set to zero for the non-protogynous population). Age-specific annual natural mortality was estimated using the Lorenzen (1996) model fitted to a power curve (NEFSC, 2012):
(2)
$M = M_u\,\mathrm{age}^{\,b}$
where $M_u$ is the mortality at unit age and $b$ is a scaling factor. Each age-specific mortality value was used to create the mortality vector ($M$). Vulnerability of each sex and age class were estimated from mark-recapture data (Provost, 2013) and used to create the vulnerability vector ($v$).
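A minimal sketch of equation (2) and the resulting survival vector, using the Lorenzen parameters listed in Table 1 ($M_u$ = 0.694, $b$ = −0.417):

```python
import numpy as np

# Equation (2): age-specific natural mortality M = Mu * age^b (Lorenzen model),
# and the survival vector S = 1 - M used in equation (1).
Mu, b = 0.694, -0.417          # Lorenzen parameters (Table 1; NEFSC, 2012)
ages = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)

M = Mu * ages**b               # natural mortality probability by age
S = 1.0 - M                    # annual natural survival probability by age

for a, m, s in zip(ages, M, S):
    print(f"age {a:4.1f}: M = {m:.3f}, S = {s:.3f}")
```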
Recruitment (R) is calculated as:
(3)
$R = E f S_e$
where $E$ is recruitment at maximum fertilization, $f$ is the probability of an egg being fertilized, and $S_e$ is the probability of an egg surviving to hatch. Alternatively, this could be written in terms of a marriage function (e.g. Iannelli et al., 2005). The recruits are added to the first age class of the female population vector ($X$) with probability $p_x$ or to the male population vector ($Y$) with probability $1 - p_x$. $E$ follows the Beverton-Holt model (Hilborn and Walters, 1992):
(4)
$E = \dfrac{\alpha F_m}{\beta + F_m}$
where $\alpha$ is the maximum number of eggs produced by the population, $F_m$ is the number of mature females in the population, and $\beta$ is the value of $F_m$ at which $E = \alpha/2$.
Fertilization ($f$) was modified from Brooks et al. (2008) so that
(5)
$f = \dfrac{4km}{1 - k + (5k - 1)m}$
where $m$ is the proportion of mature males in the population and $k$ is a stock-specific parameter controlling the ability of the population to compensate for the loss of males, which can range from 0.2 to 1.0 (Brooks et al., 2008). At high values of $k$, a population can maintain its fertilization rate even when there are few males in the population; one would expect species that spawn in groups to have high values of $k$. At low values of $k$, losing even one male can have detrimental effects on the reproductive output of the population (Figure 1); species that spawn in pairs are expected to have low values of $k$.
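The recruitment chain of equations (3)–(5) is easy to sketch in code; the version below uses the Table 1 parameter values and writes equation (4) in the standard Beverton–Holt form implied by the definitions of α and β. It also shows why k matters: a male-poor population loses little recruitment when k is high, but most of it when k is low.

```python
def eggs(Fm, alpha=2_800_000, beta=50_000):
    """Equation (4), Beverton-Holt egg production: E = alpha*Fm / (beta + Fm)."""
    return alpha * Fm / (beta + Fm)

def fertilization(m, k):
    """Equation (5): fertilization rate from the proportion of mature males m."""
    return 4 * k * m / (1 - k + (5 * k - 1) * m)

def recruits(Fm, m, k, Se=0.05):
    """Equation (3): R = E * f * Se."""
    return eggs(Fm) * fertilization(m, k) * Se

# A heavily male-depleted population (m = 0.05), with Table 1 value Fm = 48 040:
for k in (0.3, 0.5, 0.9):
    print(f"k = {k}: f = {fertilization(0.05, k):.2f}, "
          f"R = {recruits(48_040, 0.05, k):,.0f}")
```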
Figure 1
Fertilization rate as a function of the proportion of mature males in the population at varying levels of k, a stock-specific parameter controlling the ability of the population to compensate for loss of males (equation 5). k = (0.3, 0.5, 0.9).
We started with 100 000 individuals in the population and ran the model under no fishing mortality until the population stabilized. The resulting stable age and sex distributions were used as the initial age and sex distributions for the model under fishing pressure. The model was run with natural mortality rates of 0.75 × $M$, $M$, and 1.25 × $M$; however, our results did not vary across these levels of natural mortality, so all results presented are for models run with $M$ as the natural mortality rate. We then estimated MSY, BMSY, and male/female ratios for the population across varying values of $k$. These values were all estimated numerically by varying the exploitation rate ($U$) proportionally in 5% increments and summing total catch over 20 years. The value of $U$ that maximized the catch over this time period was taken to be UMSY, and the associated annual catch and abundance-at-age vector were MSY and NMSY, respectively. Total equilibrium biomass summed across all age classes of NMSY (BMSY) was calculated by multiplying NMSY by the average weight at age ($w_a$) from the following allometric relationship:
(6)
$w_a = v_1 l_a^{v_2}$
where $l_a$ is length at age $a$, and $v_1$ and $v_2$ are species-specific parameters. Length at age was modelled via the von Bertalanffy (1938) growth equation:
(7)
$l_a = L_\infty\left(1 - e^{-K(a - t_0)}\right)$
Parameters were estimated from lengths, weights, and ages of black sea bass collected off of New Jersey (Provost, 2013). We caught 1762 males and 1931 females over two study seasons (2011–2012), and we used these to estimate sex-specific length at age. We fit the parameters $L_\infty$, $K$, and $t_0$ by non-linear regression. All parameter values used to initialize the model, and their sources, can be found in Table 1.
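A sketch of equations (6) and (7) follows. The length–weight parameters $v_1$ and $v_2$ are the Table 1 values, but the growth parameters below ($L_\infty$, $K$, $t_0$) are assumed placeholders, since the fitted estimates from Provost (2013) are not reproduced in the text; lengths in cm are assumed.

```python
import math

# Assumed placeholder growth parameters (NOT the fitted values from Provost, 2013).
Linf, K, t0 = 50.0, 0.30, -0.5   # asymptotic length [cm], growth rate, age at length 0
v1, v2 = 0.0649, 2.468           # length-weight parameters (Table 1)

def length_at_age(a):
    """Equation (7), von Bertalanffy: l_a = Linf * (1 - exp(-K * (a - t0)))."""
    return Linf * (1.0 - math.exp(-K * (a - t0)))

def weight_at_age(a):
    """Equation (6), allometric length-weight: w_a = v1 * l_a^v2."""
    return v1 * length_at_age(a) ** v2

for a in (1, 3, 5, 10):
    print(f"age {a:2d}: l = {length_at_age(a):5.1f} cm, w = {weight_at_age(a):7.1f}")
```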
Table 1
Parameter values used in the model and for equations (1)–(6).
| Parameter | Value | Source and definition |
| --- | --- | --- |
| **Mortality** | | |
| $M_u$ | 0.694 | Parameter for the Lorenzen model (NEFSC, 2012) |
| age | [0.5, 1, 2, …, 10] | Age of individual fish |
| $b$ | −0.417 | Parameter for the Lorenzen model (NEFSC, 2012) |
| **Exploitation** | | |
| $U_x$ | [0, 0.42, 0.54, 0.63, 0.63, 0.63, 0.63, 0.63, 0.63, 0.63] | Maximum exploitation of females; estimated from mark-recapture data on black sea bass (Provost, 2013) |
| $U_y$ | [0, 0.42, 0.11, 0.28, 1, 1, 1, 1, 1, 1] | Maximum exploitation of males; estimated from mark-recapture data on black sea bass (Provost, 2013) |
| **Recruitment** | | |
| $\alpha$ | 2 800 000 | Maximum number of eggs that can be produced by the population |
| $\beta$ | 50 000 | Number of mature females required to produce half of the maximum number of eggs |
| $F_m$ | 48 040 | Number of mature females |
| $k$ | [0.2, 0.3, …, 1.0] | Steepness parameter for the fertilization rate (Brooks et al., 2008) |
| $m$ | 0.34 | Proportion of mature males in the population |
| $S_e$ | 0.05 | Survival of eggs |
| **Biomass** | | |
| $v_1$ | 0.0649 | Species-specific parameter for estimating weight at age (Bohnsack and Harper, 1988) |
| $v_2$ | 2.468 | Species-specific parameter for estimating weight at age (Bohnsack and Harper, 1988) |
We then calculated MSY, BMSY, and male/female ratios for a protogynous stock using the same population model described above, but also allowing females to change sex. Sex change occurred according to a game-theoretic payoff defined by the fitness and energy loss associated with the decision to change sex or not, following a size-dependent dynamical game (Kebir et al., 2015). We are focused on protogynous species; therefore, size is used as a proxy for reproductive value, as size is strongly related to male reproductive value in protogynous species (e.g. Zabala et al., 1997; Munday et al., 2006). As such, the decision to become male will occur when the reproductive value of becoming male is greater than the reproductive value of remaining female.
The potential “Gain” for a female fish (GainX) is simply her potential contribution to recruitment ($R$) given her size in the next year ($l_{a_x}(t+1)$). The shape of this relationship is given by
(8)
$\mathrm{Gain}_X = \dfrac{10}{1 + l_{a_x}(t+1)/\mu_{l_{a_x}}(t+1)}$
where $\mu_{l_{a_x}}$ is a weighted average of the sizes of all other females in the current population and $t$ is time. Consistent with the unidirectional nature of sex change in the protogynous black sea bass, male fish cannot change sex in this system. A female fish may change sex or not depending on the relative “Gain” for remaining female or becoming male. Therefore, we computed the potential “Gain” for a male fish (GainY) similarly to that of the female, in order to calculate its potential contribution to $R$ given the size of the fish.
(9)
$\mathrm{Gain}_Y = \dfrac{10}{1 + l_{a_y}(t+1)/\mu_{l_{a_y}}(t+1)}$
The “Gain” for a fish that changes from female to male (GainXY) is equivalent to GainY. The impact of size on the “Gain” is based on observations in the literature; however, this value is normalized to 1 in order to calculate the payoff (see below), so only relative magnitudes matter. The only biologically relevant feature of the model outcome is the shape of the curve described. The “Loss” for a female fish in this system (LossX) is set to zero, as her contribution to recruitment cannot be any higher or lower than the expected value for a female of her size; i.e. she will contribute all that she is allowed based on her size, no more and no less. The “Loss” for a male fish (LossY) is the cost of competition for mates, assuming that larger fish in a protogynous species will outcompete smaller ones and have a higher probability of fertilization, particularly for species with a low value of k, given by
(10)
$\mathrm{Loss}_Y = \mathrm{Gain}_Y\left(1 - \dfrac{1}{1 + l_{a_y}(t+1)/\mu_{l_{a_y}}(t+1)}\right)$
The “Loss” for a fish that will change sex from female to male is calculated similarly; however, there is a growth penalty to changing sex.
(11)
$\mathrm{Loss}_{XY} = \mathrm{Gain}_Y\left(1 - \dfrac{1}{1 + l_{a_x}(t)/\mu_{l_{a_y}}(t+1)}\right)$
The female fish is not allowed to grow during the year in which she undergoes the sex change, to account for the energetic cost of sex change (e.g. Hamaguchi et al., 2002). Kebir et al. (2015) showed that size-dependent models of sex change alone may not be sufficient to accurately characterize the timing of sex change, and that the penalty of changing sex must be included. Therefore, sex change will occur, or not, based on the reproductive output at her current size rather than at her size in the next year.
The “Payoff” for each sex is simply the “Gain” minus the “Loss”:
(12)
$\mathrm{Payoff}_X = \mathrm{Gain}_X - \mathrm{Loss}_X$
$\mathrm{Payoff}_Y = \mathrm{Gain}_Y - \mathrm{Loss}_Y$
$\mathrm{Payoff}_{XY} = \mathrm{Gain}_{XY} - \mathrm{Loss}_{XY}$
If PayoffX is greater than or equal to PayoffXY, the fish will remain female. If not, she will change sex.
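A compact sketch of this decision rule, using the gain and loss forms as written in equations (8)–(12) above; because those functional forms are reconstructions, the code should be read as one plausible implementation rather than the authors' exact model. All sizes in the example call are hypothetical.

```python
def gain(l_next, mu_next):
    """Equations (8)-(9): gain given size next year relative to the population."""
    return 10.0 / (1.0 + l_next / mu_next)

def loss_male(l, mu_male):
    """Equations (10)-(11): loss from male/male competition for a fish of size l."""
    return gain(l, mu_male) * (1.0 - 1.0 / (1.0 + l / mu_male))

def changes_sex(l_now, l_next, mu_female, mu_male):
    """Equation (12): change sex iff Payoff_XY > Payoff_X (Loss_X = 0).
    The change payoff uses current size l_now: no growth in the transition year."""
    payoff_stay = gain(l_next, mu_female)
    payoff_change = gain(l_now, mu_male) - loss_male(l_now, mu_male)
    return payoff_change > payoff_stay

# Hypothetical example: a 30 cm female that would grow to 32 cm, in a population
# whose average female is 25 cm and average male 35 cm.
print(changes_sex(30.0, 32.0, mu_female=25.0, mu_male=35.0))
```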
Finally, we removed those individuals that changed sex from the female population vector and added them to the male population vector. This allowed us to calculate the average age at which the protogynous population was expected to change sex across varying fishing pressures and values of k. By considering many values of k, we tested the effects of fishing at various levels of male/male competition and across many fertilization rates. We then compared MSY, BMSY, and male/female ratios for the gonochoristic and sex changing stocks.
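The numerical search for UMSY described above amounts to a grid search. The sketch below shows the mechanics with a toy logistic surplus-production stand-in; it is not the age- and sex-structured model of equations (1)–(5), which would replace `total_catch` in a full implementation.

```python
import numpy as np

def total_catch(U, years=20, N0=100_000, r=0.5, carrying_cap=200_000):
    """Toy stand-in: cumulative catch over `years` at exploitation rate U,
    under logistic surplus production (replace with the full model)."""
    N, catch = float(N0), 0.0
    for _ in range(years):
        harvest = U * N
        catch += harvest
        N = max(N + r * N * (1.0 - N / carrying_cap) - harvest, 0.0)
    return catch

# Scan exploitation in 5% increments and keep the rate maximizing total catch.
U_grid = np.arange(0.05, 1.0001, 0.05)
catches = [total_catch(U) for U in U_grid]
i = int(np.argmax(catches))
print(f"UMSY ~ {U_grid[i]:.2f}, MSY ~ {catches[i] / 20:,.0f} fish per year")
```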
## Results
Gonochoristic stocks produced a slightly higher MSY for all values of k (Table 2; Figure 2). MSY for the gonochoristic stock [MSY(G)] ranged from 9404 to 75 970 individuals, while MSY for the sex changing stock [MSY(P)] ranged from 8786 to 75 388 (Table 2). Each stock produced the lowest MSY when k = 0.2 and the highest MSY when k = 1. The largest difference (as a percent) in MSY between stock types was at k = 0.2, where the MSY of the gonochoristic stock was 7% higher than that of the sex changing stock. The smallest difference (as a percent) was at k = 0.5, where the difference was 0.02% (Table 2).
Figure 2
Biomass yield (a–c) and total yield in numbers of fish (d–f) at varying exploitation rates (U) for k = 0.3 (a and d), k = 0.5 (b and e), and k = 0.9 (c and f). The solid line represents the values for the sex changing stock and the dashed line represents the values for the non-sex changing stock in all graphs.
Table 2
BMSY and MSY for gonochoristic (G) and protogynous (P) stocks at varying values of k (the parameter controlling the ability of the population to compensate for loss of males).
| k | BMSY (G) (× 1000 kg) | BMSY (P) (× 1000 kg) | MSY (G) | MSY (P) |
| --- | --- | --- | --- | --- |
| 0.2 | 3645 | 3672 | 9404 | 8786 |
| 0.3 | 8554 | 8988 | 22 666 | 22 317 |
| 0.4 | 12 573 | 13 287 | 33 780 | 33 664 |
| 0.5 | 15 957 | 16 818 | 43 092 | 43 082 |
| 0.6 | 18 906 | 19 806 | 51 414 | 51 212 |
| 0.7 | 21 542 | 22 378 | 58 685 | 58 433 |
| 0.8 | 23 873 | 24 619 | 65 109 | 64 768 |
| 0.9 | 25 951 | 26 540 | 70 833 | 70 379 |
| 1.0 | 27 817 | 28 401 | 75 970 | 75 388 |
Sex changing stocks produced a higher BMSY [BMSY(P)] across all values of k, ranging from 3672 to 28 401 (× 1000 kg), while the BMSY for the gonochoristic stock [BMSY(G)] ranged from 3645 to 27 817 (× 1000 kg) (Table 2; Figure 2). The smallest BMSY for each stock was when k = 0.2 and the largest BMSY was when k = 1. The greatest difference in BMSY between stocks was when k = 0.4, where the difference (as a percent) was 5.7% (Figure 3). The smallest difference between the stocks was when k = 0.2, where the difference was 0.74% (Table 2). Although BMSY was still higher for the sex changing population, there was a steep decline in the difference in BMSY between k = 0.4 and 0.2 (Figure 3). This result is likely due to the inability of sex changing stocks to effectively compensate for the loss of males when k is very low (i.e. where males matter the most).
Figure 3
Difference in BMSY, as a percent, between BMSY for a sex-changing stock and BMSY for the non-sex changing stock at varying values of k.
The higher BMSY, despite the lower MSY, for the sex changing stock reflects the higher proportion of males in that population than in the gonochoristic population; males are larger, so the mass per fish is higher (Figure 4). The sex changing stock had a higher proportion of males in the population for all values of k and U (Figure 4). The difference in the proportion of males in the population increased with an increase in U and a decrease in k. The number of females that changed sex in the sex changing stock decreased as k increased, reflecting the relative importance of males to the fertilization rate as k decreases (Figure 5). For low k values, more males are required to maintain the fertilization rate; thus, females changed sex at twice the rate at the lowest value of k as at the highest (Figure 5). The age at sex change decreased as the exploitation rate increased, reflecting the population's flexibility in the timing of sex change. As exploitation increased, the relative benefit of changing sex increased; thus, more females changed. As a result, the loss of large males was compensated for by the number of females changing sex. At low values of k, the average age at sex change decreased more rapidly as exploitation increased than it did for higher values of k (Figure 6).
Figure 4
Difference in the proportion of males in the population of the sex changing stock and the non-sex changing stock at varying exploitation rates (U). The solid line is for values when k = 0.3, the dashed line represents values when k = 0.5, and the dotted line represents values when k = 0.9.
Figure 5
The percentage of females that changed sex in the sex changing population at varying values of k, calculated as the number of females that change sex divided by the number of mature females.
Figure 6
The average age at sex change for the sex changing stock at varying rates of exploitation (U). The solid line is for values when k = 0.3, the dashed line represents values when k = 0.5, and the dotted line represents values when k = 0.9.
## Discussion
Exploitation of sequentially hermaphroditic fish stocks is common, and these species make up a large proportion of the fisheries serving poorer nation-states (e.g. the Caribbean; Chiappone et al., 2000; Caballero-Arango et al., 2013). Such fisheries are prone to over-fishing, in part due to data scarcity and the lack of species-specific catch and effort regulations (Chiappone et al., 2000). Methods exist to evaluate the population status of data-poor species (Bednarek et al., 2011); however, standard fisheries models cannot account for the effects of sex change in their results. Existing models designed to evaluate the sustainable harvest of sex-changing species (Alonzo and Mangel, 2005; Heppell et al., 2006; Molloy et al., 2007) suggest that the impacts of fishing depend heavily on poorly known and likely species-specific parameters like k. We developed game theoretic methods to model sustainable yield as a way to flexibly incorporate species' life histories, and illustrated the insights such a model can provide using black sea bass. Our approach considers the size of an individual female relative to the sizes of other fish in the population, the number of individuals at each size in the population, and male/male competition, although other elements of sex change can be incorporated. For example, many species may have endogenous cues that trigger sex change (e.g. hormones regulated by age/size, regardless of the status of others in the population; Alonzo and Mangel, 2004; Heppell et al., 2006), and while our example considers only exogenous (size compared with others in the population) cues, endogenous ones could easily be incorporated into the game theoretic framework. Our framework also allows each parameter to be modified or tested over a range of values if the data for a specific species are not available, or if one wishes to weight parameters differently. The flexibility inherent in our model may prove beneficial for species in regions where fishing is especially important, yet where managers lack the resources to perform formal stock assessments.
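To make the game theoretic framing concrete, the toy sketch below is our illustration only (not the authors' implementation); every payoff form in it is an assumption. It compares the expected payoff of remaining female, taken here as proportional to body mass, with the expected payoff of becoming male, taken as a size-weighted share of fertilizations in male/male competition.

```julia
# Hedged sketch of the stay-female vs. become-male decision; all payoff
# forms are illustrative assumptions, not the model from the paper.
#   sizes   : body mass of every fish in the population
#   is_male : current sex of each fish (true = male)
function should_change_sex(i, sizes, is_male::AbstractVector{Bool})
    payoff_female = sizes[i]                           # fecundity ~ mass (assumed)
    male_mass     = sum(sizes[is_male]; init = 0.0)    # size-weighted competition
    share         = sizes[i] / (male_mass + sizes[i])  # share of matings if i changes
    total_fecund  = sum(sizes[.!is_male]; init = 0.0) - sizes[i]  # exclude i itself
    return share * total_fecund > payoff_female        # change when the male payoff wins
end
```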
Our results suggest that a hypothetical black sea bass stock has a slightly lower MSY but a higher BMSY than an otherwise identical gonochoristic stock. This higher BMSY for the sex changing stock is due to the higher numbers of male fish in the population, as male black sea bass are larger than females of the same age (Provost, 2013). As males were fished out of the population, females in the sex changing stock were able to replace them. Females in the sex changing stock replaced males more readily when maintaining the fertilization rate required more males in the population (i.e. low values for k, the parameter controlling the ability of the population to compensate for loss of males). Females in the sex changing populations in our model did not replace males lost to harvest nearly as quickly when k > 0.5. This finding, along with those of Brooks et al. (2008), illustrates the importance of understanding how much males contribute to reproductive success for a population of concern. We showed that individuals in a protogynous population with low values for k would decrease their age-at-sex-change more rapidly than those in stocks with higher values for k. This suggests exploitation will affect stocks with a lower value for k more drastically than those with a high value.
By adding the fertilization rate to the male/male competition component of the model, we were able to determine how the decision to remain female or become male changes with varying values of k in the fertilization rate, where k is a proxy for the importance of males to the reproductive output of a population. The impact of sex ratio on the fertilization rate has not been estimated empirically for a real fish population. However, the impact of changing sex ratio on the fertilization rate is essential to understanding population dynamics and the effects of fishing on protogynous species. Even if it cannot be directly measured, this parameter may be inferred from the reproductive behaviour of a species. One would assume species that spawn in large groups would have relatively high values of k, because losing one male will not be very detrimental to the population (Brooks et al., 2008). One would also assume that species that spawn in pairs would have a very low value for k, because losing a male could have detrimental effects on the population's fertilization rate. This has been shown in the protogynous reef fish Thalassoma bifasciatum, which has two distinct mating systems: group mating and paired mating (Marconato et al., 1997). For high values of k we observed the lowest amount of sex change, and for the lowest values of k we observed the highest amount of sex change. Understanding the breeding behaviour of a protogynous species will allow researchers to estimate k and better understand the impacts of fishing on the fertilization rate and on the stock as a whole.
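As an illustration of how k acts as a proxy for male importance, the sketch below assumes one simple functional form for the fertilization rate, F(p) = p^(1−k), where p is the proportion of males. This specific form is our assumption for illustration only; the paper does not specify it in this passage. At k = 1 fertilization is insensitive to the loss of males (full compensation), while as k → 0 fertilization falls in direct proportion to the male fraction.

```julia
# Illustrative only: an assumed fertilization-rate curve F(p) = p^(1 - k),
# where p is the proportion of males and k controls compensation for lost males.
fert_rate(p, k) = p^(1 - k)

for k in (0.2, 0.5, 0.9), p in (0.5, 0.1)
    println("k = $k, p = $p  =>  F = ", round(fert_rate(p, k); digits = 3))
end
```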
We found that increased exploitation increased the difference in male proportion of the population between gonochoristic and sex changing stocks, and decreased the average age at sex change for the sex changing stock. This result is due to the compensatory mechanism of flexible timing at sex change. For the species with a relatively fixed age-at-change, the proportion of males drastically declines with an increase in exploitation (Alonzo and Mangel, 2004; Heppell et al., 2006). For species with plasticity in their age-at-change, populations may remain relatively stable under higher rates of exploitation (Alonzo and Mangel, 2005). This stability is due to the flexibility in this life history parameter. In order for gonochoristic species or species with fixed age-at-change to maintain a stable sex ratio, their age-at-maturity and/or age-at-change would have to decrease. These traits could require thousands of generations to respond to exploitation, but a trait such as age-at-change in a species where that trait is flexible may be changed in a single generation (e.g. Sattar et al., 2008). Although adjusting the age at which females can change sex can compensate for lost males, it does create a situation where the average size of individuals in the population decreases with an increase in exploitation. It has been shown for protogynous stocks that populations under heavy fishing pressure may have similar sex ratios to those stocks of the same species under lighter fishing pressure (Götz et al., 2008). However, these studies have also shown that stocks under heavier fishing pressure have a smaller average body size and a smaller size-at-change than those under lighter fishing pressure (Hamilton et al., 2007; Götz et al., 2008). The findings of these studies are reflected in our results as well. Although the sex ratio for our sex changing stock remained fairly constant across varying rates of exploitation, the age-at-change decreased drastically. If fishery managers were to collect data on the age or size-at-change for sex changing stocks, they could alert policy makers to fishing rates that may be dangerously high. Many assessments collect data on sex ratio, however few collect data on age-at-change (Provost and Jensen, 2015). Although sex ratio can allow managers to detect problems in gonochoristic populations or sex changing populations with a fixed age-at-change, it is not as helpful for sex changing stocks with flexible age-at-change.
Our game theoretic approach accommodates stocks with fixed age-at-change and/or stocks that cue on exogenous factors before changing sex. Although there may be other fisheries induced changes to life history traits of hermaphroditic stocks that may affect sex change such as age at first maturity, or energy allocation (e.g. Sattar et al., 2008), we focused on the rapid effects of exploitation on sex change that may be monitored by fisheries managers and included in stock assessments. Our implementation here included competition as a function of the size relative to the population and weighted by the number of individuals in each size class. Some models considering exogenous cues for sex changing fish have simply assumed that a fish will change sex when it reaches a given size relative to the largest in the population (e.g. Molloy et al., 2007). Other models have included expected reproductive output as a cue for sex change (e.g. Alonzo and Mangel, 2005). Although these assumptions may hold true for some species, such a case-by-case approach to examining the effects of exploitation on sex changing fish would not be of broad use. If the assumptions above were true for a species of concern, our game theoretic framework would accommodate that assumption by simply adjusting the “Loss matrix” to fit the assumptions. This framework also provides an opportunity to determine the effects of exploitation on the assumptions themselves. The results of our approach are similar to those produced by more species-specific models in that they both predict that sex changing stocks that use exogenous cues are as robust to exploitation as gonochoristic stocks. Our results also predicted that stocks of sex changing fish, while maintaining similar sex ratios, tend to have smaller individuals and smaller size-at-change. This matches the empirical results observed for sex changing stocks (Hamilton et al., 2007; Götz et al., 2008). Having such a flexible, non-species specific model to examine the effects of fishing may be beneficial to managing sex changing species.
## Acknowledgements
We thank Øyvind Fiksen and one anonymous reviewer for helpful comments on the manuscript. This work was partially supported by a grant to OPJ from the National Oceanic and Atmospheric Administration's Research Set Aside Program.
## References
Alonzo, S. H., and Mangel, M. 2004. The effects of size-selective fisheries on the stock dynamics of and sperm limitation in sex-changing fish. Fishery Bulletin, 102: 1–13.
Alonzo, S. H., and Mangel, M. 2005. Sex-change rules, stock dynamics and the performance of spawning-per-recruit measures in protogynous stocks. Fishery Bulletin, 103: 229–245.
Alonzo, S. H., Ish, T., Key, M., MacCall, A. D., and Mangel, M. 2008. The importance of incorporating protogynous sex change into stock assessments. Bulletin of Marine Science, 83: 163–179.
Armsworth, P. R. 2001. Effects of fishing on a protogynous hermaphrodite. Canadian Journal of Fisheries and Aquatic Sciences, 58: 568–578.
Brooks, E. N., Shertzer, K. W., and Gedamke, T. 2008. Stock assessment of protogynous fish: evaluating measures of spawning biomass used to estimate biological reference points. Fishery Bulletin, 106: 12–23.
Bednarek, A. T., Cooper, A. B., Cresswell, K. A., Mangel, M., Satterthwaite, W. H., Simpfendorfer, C. A., and Wiedenmann, J. R. 2011. The certainty of uncertainty in marine conservation and what to do about it. Bulletin of Marine Science, 87: 177–195.
Ben Miled, S., Kebir, A., and Hbid, M. L. 2010. Individual based model for grouper populations. Acta Biotheoretica, 58: 247–264.
Benton, C. B., and Berlinsky, D. L. 2006. Induced sex change in black sea bass. Journal of Fish Biology, 69: 1491–1503.
Bohnsack, J. A., and Harper, D. E. 1988. Length-weight relationship of selected marine reef fishes from the southeastern United States and the Caribbean. NOAA Tech. Mem. NMFS-SEFC-215. 31 pp.
Caballero-Arango, D., Brulé, T., Nóh-Quiñones, V., Colás-Marrufo, T., and Pérez-Díaz, E. 2013. Reproductive biology of the tiger grouper in the southern Gulf of Mexico. Transactions of the American Fisheries Society, 142: 282–299.
Charnov, E. L. 1982. Alternative life-histories in protogynous fishes: a general evolutionary theory. Marine Ecology Progress Series, 9: 305–307.
Chiappone, M., Sluka, R., and Sealey, K. S. 2000. Groupers (Pisces: Serranidae) in fished and protected areas of the Florida Keys, Bahamas, and northern Caribbean. Marine Ecology Progress Series, 198: 261–272.
Erisman, B. E., and Hastings, P. A. 2011. Evolutionary transitions in the sexual patterns of fishes: insights from a phylogenetic analysis of the seabasses (Teleostei: Serranidae). Copeia, 2011: 357–364.
Götz, A., Kerwath, S. E., Attwood, C. G., and Sauer, W. H. H. 2008. Effects of fishing on population structure and life history of roman Chrysoblephus laticeps (Sparidae). Marine Ecology Progress Series, 362: 245–259.
Hamaguchi, Y., Sakai, Y., Takasu, F., and Shigesada, N. 2002. Modeling spawning strategy for sex change under social control in haremic angelfishes. Behavioral Ecology, 13: 75–82.
Hamilton, S. L., Caselle, J. E., Standish, J. D., Schroeder, D. M., Love, M. S., Rosales-Casian, J. A., and Sosa-Nishizaki, O. 2007. Size-selective harvesting alters life histories of a temperate sex-changing fish. Ecological Applications, 17: 2268–2280.
Heppell, S. S., Heppell, S. A., Coleman, F. C., and Koenig, C. C. 2006. Models to compare management options for a protogynous fish. Ecological Applications, 16: 238–249.
Hilborn, R., and Walters, C. J. 1992. Quantitative Fisheries Stock Assessment: Choice, Dynamics and Uncertainty. Kluwer Academic Publishers, Boston/Dordrecht/London.
Iannelli, M., Martcheva, M., and Milner, F. A. 2005. Gender-Structured Population Modeling: Mathematical Methods, Numerics, and Simulations. Frontiers in Applied Mathematics 31. SIAM, Philadelphia.
Kazancioğlu, E., and Alonzo, S. H. 2009. Costs of changing sex do not explain why sequential hermaphroditism is rare. American Naturalist, 173: 327–336.
Kazancioğlu, E., and Alonzo, S. H. 2010. Classic predictions about sex change do not hold under all types of size advantage. Journal of Evolutionary Biology, 23: 2432–2441.
Kebir, A., Fefferman, N. H., and Ben Miled, S. 2015. Understanding hermaphrodite species through game theory. Journal of Mathematical Biology, 71: 1505–1524.
Lorenzen, K. 1996. The relationship between body weight and natural mortality in juvenile and adult fish: a comparison of natural ecosystems and aquaculture. Journal of Fish Biology, 49: 627–647.
Marconato, A., Shapiro, D. Y., Petersen, C. W., Warner, R. R., and Yoshikawa, T. 1997. Methodological analysis of fertilization rate in the bluehead wrasse Thalassoma bifasciatum: pair versus group spawns. Marine Ecology Progress Series, 161: 61–70.
Molloy, P. P., Goodwin, N. B., Côté, I. M., and Reynolds, J. D. 2007. Predicting the effects of exploitation on male-first sex-changing fish. Animal Conservation, 10: 30–38.
Munday, P. L., Buston, P. M., and Warner, R. R. 2006. Diversity and flexibility of sex-change strategies in animals. Trends in Ecology and Evolution, 21: 89–95.
Northeast Fisheries Science Center. 2012. 53rd Northeast Regional Stock Assessment Workshop (53rd SAW) Assessment Report. US Dept Commer, Northeast Fish Sci Cent Ref Doc. 12-05; 559 pp. Available from: National Marine Fisheries Service, 166 Water Street, Woods Hole, MA 02543-1026, or online at http://www.nefsc.noaa.gov/nefsc/publications.
Rogers, L., and Koch, A. 2011. The evolution of sex-change timing under environmental uncertainty: a test by simulation. Evolutionary Ecology Research, 13: 387–399.
Provost, M. M. 2013. Understanding sex change in exploited fish populations: a review of east coast fish stocks and assessment of selectivity and sex change in black sea bass (Centropristis striata) in New Jersey. Master's thesis, Rutgers University, New Brunswick, NJ.
Provost, M. M., and Jensen, O. P. 2015. The impacts of fishing on hermaphroditic species and treatment of sex change in stock assessments. Fisheries, 40: 536–545.
Sadovy de Mitcheson, Y., and Liu, M. 2008. Functional hermaphroditism in teleosts. Fish and Fisheries, 9: 1–43.
Sakai, Y., Karino, K., Nakashima, Y., and Kuwamura, T. 2002. Status-dependent behavioural sex change in a polygynous coral-reef fish, Halichoeres melanurus. Journal of Ethology, 20: 101–105.
Sato, T., and Goshima, S. 2006. Impacts of male-only fishing and sperm limitation in manipulated populations of an unfished crab, Hapalogaster dentata. Marine Ecology Progress Series, 313: 193–204.
Sattar, S. A., Jørgensen, C., and Fiksen, Ø. 2008. Fisheries-induced evolution of energy and sex allocation. Bulletin of Marine Science, 83: 235–250.
von Bertalanffy, L. 1938. A quantitative theory of organic growth (inquiries on growth laws. II). Human Biology, 10: 181–213.
Warner, R. R. 1975. The adaptive significance of sequential hermaphroditism in animals. American Naturalist, 109: 61–82.
Warner, R. R., and Swearer, S. E. 1991. Social control of sex change in the bluehead wrasse, Thalassoma bifasciatum (Pisces: Labridae). Biological Bulletin, 181: 199–204.
Zabala, M., Louisy, P., Garcia-Rubies, A., and Gracia, V. 1997. Socio-behavioural context of reproduction in the Mediterranean dusky grouper Epinephelus marginatus (Lowe, 1834) (Pisces, Serranidae) in the Medes Islands Marine Reserve (NW Mediterranean, Spain). Scientia Marina, 61: 65–77.
https://wjrider.wordpress.com/2016/08/08/the-benefits-of-using-primitive-variables/
Simplicity is the ultimate sophistication.
― Clare Boothe Luce
When one is solving problems involving a flow of some sort, conservation principles are quite attractive since these principles follow nature's "true" laws (true to the extent we know things are conserved!). With flows involving shocks and discontinuities, conservation brings even greater benefits, as the Lax-Wendroff theorem demonstrates (https://wjrider.wordpress.com/2013/09/19/classic-papers-lax-wendroff-1960/). In a nutshell, you have guarantees about the solution through the use of conservation form that are far weaker without it. One particular set of variables is the obvious choice because these variables arise naturally in conservation form. For fluid flow they are density, momentum and total energy. The most seemingly straightforward thing to do is use these same variables to discretize the equations. This is generally a bad choice and should be avoided unless one does not care about the quality of results.
While straightforward and obvious, the choice of using conserved variables is almost always a poor one, and far better results can be achieved through the use of primitive variables for most of the discretization and approximation work. This is even true if one is using characteristic variables (which usually imply some sort of entirely one-dimensional character). The primitive variables have simple and intuitive physical meaning, and often equate directly to what can be observed in nature (conserved variables don't). The beauty of primitive variables is that they trivially generalize to multiple dimensions in ways that characteristic variables do not. The other advantages are equally clear, specifically the ability to extend the physics of the problem in a natural and simple manner. This sort of extension usually causes the characteristic approach to either collapse or at least become increasingly unwieldy. A key aspect to keep in mind at all times is that one returns to the conserved variables for the final approximation and update of the equations. Keeping the conservation form for the accounting of the complete solution is essential.
To keep the bulk of the discussion simple, I will focus on the Euler equations of fluid dynamics. These equations describe the conservation of mass, $\rho_t + m_x = 0$, momentum, $m_t + (m^2/\rho + p)_x = 0$, and total energy, $E_t + \left[\frac{m}{\rho}(E + p)\right]_x = 0$, in one dimension. Even in this very simple setting the primitive variables are immensely useful, as demonstrated by H. T. Huynh in another of his massively under-appreciated papers. In this paper he masterfully covers the whole of the techniques and utility of primitive variables. Arguably, the use of primitive variables went mainstream with the papers of Colella and Woodward. In spite of the broad appreciation of that paper, the use of primitive variables in work is still more a niche than common practice. The benefits become manifestly obvious whether one is analyzing the equations (the analysis is equivalent to that with the more complex variable set!), or discretizing the solutions.
Study the past if you would define the future.
― Confucius
The use of the "primitive variables" came from a number of different directions. Perhaps the earliest use of the term "primitive" came from meteorology in the work of Bjerknes (1921), whose primitive equations formed the basis of early work in computing weather in an effort led by Jule Charney (1955). Another field to use this concept is the solution of incompressible flows. There the primitive variables are the velocities and pressure, which is distinguished from the vorticity-streamfunction approach (Roache 1972). In two dimensions the vorticity-streamfunction solution is more efficient, but lacks a simple connection to measurable quantities. The same sort of notion separates the conserved variables from the primitive variables in compressible flow. The use of primitive variables as an effective computational approach may have begun in the computational physics work at Livermore in the 1970s (see, e.g., DeBar). The connection of the primitive variables to classical analysis of compressible flows and simple physical interpretation also plays a role.
What are the primitive variables? The basic conserved variables for compressible fluid flow are density, $\rho$, momentum, $m=\rho u$, and total energy, $E = \rho e + \frac{1}{2} \rho u^2$. Here the velocity is $u$ and the internal energy is $e$. One also has the equation of state $p=P(\rho,e)$ as the constitutive relation. Let's take the Euler equations and rewrite them using the primitive variables: the conservation of mass, $\rho_t + (\rho u)_x = 0$, momentum, $(\rho u)_t + (\rho u^2 + p)_x = 0$, and total energy, $\left[\rho \left(e + \frac{1}{2}u^2\right)\right]_t + \left[u\left(\rho \left(e + \frac{1}{2}u^2\right) + p\right)\right]_x = 0$. Except for the energy equation, the expressions are simpler to work with, but this is the veritable tip of the proverbial iceberg.
What are the equations for the primitive variables? The primitive variables can be expressed and evolved using simpler equations, which are primarily evolution equations dependent on differentiability; differentiability must be present for any sort of accuracy to be in play anyway. The mass equation is the same, although one might expand the derivative, $\rho_t + u \rho_x + \rho u_x = 0$. The momentum equation is replaced by an equation of motion, $u_t + u u_x + \frac{1}{\rho} p_x = 0$. The energy equation could be replaced with a pressure equation, $p_t + u p_x + \gamma p u_x = 0$ (here $\gamma$ is the generalized adiabatic exponent defined through the isentropic derivative, $\gamma = \frac{\rho}{p}\,\partial_\rho p |_S$), or an internal energy equation, $\rho e_t + \rho u e_x + p u_x = 0$. One can use either energy representation to good measure, or better yet, use both and avoid having to evaluate the equation of state. Moreover, if one wants, one can evaluate the difference between the pressure from the evolution equation and the state relation as an error measure.
How does one convert to the primitive variables, and convert back to the conserved variables? If one is interested in analysis of the conservative equations, then one linearizes the equations about a point, $U_t + \left(F(U)\right)_x = 0 \rightarrow U_t + \partial_U F(U) U_x = 0$, where $U$ is the vector of conserved variables and $F(U)$ is the flux function. The matrix $A_c = \partial_U F(U)$ is the flux Jacobian. One performs an eigenvalue decomposition of $A_c$ to analyze the equations. From this decomposition, $A_c = R_c \Lambda L_c$, one can get the eigenvalues, $\Lambda$, and the characteristic variables, $L_c \Delta U$. The analysis is difficult and non-intuitive with the conserved variables.
Here we get to the cool part of this whole thing: there is a much easier and more intuitive path through the primitive variables. One can get a matrix representation of the primitive variables, which I'll call $V$ in vector form, $V_t + A_p V_x = 0$. One can get the terms in $A_p$ easily from the differential forms, and recognizing that $\gamma p = \rho c^2$, with $c$ being the speed of sound, the eigen-analysis is so simple that it can be done by hand (and it's a total piece of cake for Mathematica). Using similar notation as the conserved form, $A_p = R_p \Lambda L_p$. The first thing to note is that $\Lambda$ is exactly the same, i.e., the eigenvalues are identical. One then gets a result for the characteristics, $L_p \Delta V$, that matches the textbooks, and one finds $L_p \Delta V = L_c \Delta U$. All the differences in the transformation are bound up in the right eigenvectors $R_c$ and $R_p$, and the ease of physical insight provided by the analysis.
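For the 1-D Euler equations with $V = (\rho, u, p)^T$, the quasi-linear primitive system takes the standard textbook form, written out here for concreteness:

```latex
V_t + A_p V_x = 0, \qquad
A_p = \begin{pmatrix} u & \rho & 0 \\ 0 & u & 1/\rho \\ 0 & \rho c^2 & u \end{pmatrix}, \qquad
\Lambda = \mathrm{diag}(u - c,\; u,\; u + c),
```

with $c^2 = \gamma p/\rho$ the square of the sound speed.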
Now we can elucidate how to move between these two forms, and even use the primitive variables for the analysis of the conserved form directly. Using Huynh’s paper as a guide and repeating the main results one defines a matrix of partial derivatives of the conserved variables, $U$ with respect to the primitive variables, $V$, $M= \partial_V U$. This matrix then can be inverted into $M^{-1}$ and we then may define an identity, $A_c = M A_p M^{-1}$, which might allow the conserved eigen-analysis to be executed in terms of the more convenient primitive variables. The eigenvalues of $A_c$ and $A_p$ are the same. We can get the left and right eigenvectors through $L_c = L_p M^{-1}$ and $R_c = M R_p$. All of this follows the simple application of the chain rule to the linearized versions of the governing equations.
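As a quick sanity check of the chain-rule identity $A_c = M A_p M^{-1}$, here is a small numerical sketch of my own (not from the post or from Huynh's paper) for a perfect gas; the state values are arbitrary:

```julia
using LinearAlgebra

rho, u, p, gam = 1.2, 0.3, 1.0, 1.4
c = sqrt(gam * p / rho)                # speed of sound

# Primitive form V = (rho, u, p):  V_t + A_p * V_x = 0
A_p = [ u    rho       0.0;
        0.0  u         1.0/rho;
        0.0  rho*c^2   u ]

# M = dU/dV for U = (rho, rho*u, E), with E = p/(gam - 1) + rho*u^2/2
M = [ 1.0     0.0     0.0;
      u       rho     0.0;
      u^2/2   rho*u   1.0/(gam - 1) ]

A_c = M * A_p * inv(M)                 # conservative flux Jacobian via the chain rule

@show eigvals(A_p)                                            # u - c, u, u + c
@show sort(real.(eigvals(A_c))) ≈ sort(real.(eigvals(A_p)))   # true
```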
The primitive variable idea can be extended in a variety of nifty and useful ways. One can augment the variable set in ways that can yield some extra efficiency to the solution by avoiding extra evaluations of the constitutive (or state) relations. This would most classically involve using both a pressure and energy equation in the system. Miller and Puckett provide a nice example of this technique in practice, building upon the work of Colella, Glaz and Ferguson where expensive equation of state evaluations are avoided. One must note that the system of equations being used to discretize the system is carrying redundant information that may have utility beyond efficiency.
One can go beyond this to add variables to the system of equations that are redundant, but carry information implicit in their approximation that may be useful in solving equations. One might add an equation for the specific volume of the fluid to compare with density. Similar things could be done with kinetic energy, vorticity, or entropy. In each case the redundancy might be used to discover or estimate error or smoothness of the underlying solution, and perhaps adapt the solution method on the basis of this information.
Using the primitive variables for discretization is almost as good as using characteristic variables in terms of solution fidelity. Generally, if you can get away with 1-D ideas, the characteristic variables are unambiguously the best. The primitive variables are almost as good. The key is to use a local transformation to the primitive variables for the work of discretization even when your bookkeeping is all in conserved variables. Even if you are doing characteristic variables, the construction and use of them is enabled by primitive variables. The resulting expressions for the characteristics are simpler in primitive variables. Perhaps almost as important, the expressions for the variables themselves are far more intuitively expressed in primitive variables.
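In practice the local transformation is just a pair of algebraic maps. A minimal sketch, assuming a perfect gas with $p = (\gamma - 1)\rho e$ (my illustration, not code from the post):

```julia
# Keep the bookkeeping in conserved U = (rho, m, E), but do reconstruction
# and limiting in primitive V = (rho, u, p); perfect gas assumed.
function cons_to_prim(U, gam)
    rho, m, E = U
    u = m / rho
    p = (gam - 1) * (E - 0.5 * rho * u^2)
    return (rho, u, p)
end

function prim_to_cons(V, gam)
    rho, u, p = V
    return (rho, rho * u, p / (gam - 1) + 0.5 * rho * u^2)
end
```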
A real source of power of the primitive variables comes when you extend past the simpler case of the Euler equations to things like magnetohydrodynamics (MHD, i.e., compressible magnetic fluids). Discretizing MHD with conserved variables is a severe challenge, and analysis of the equations' mathematical characteristic structure can be a descent into utter madness. Doing the work in these more complex systems using the primitive variables is extremely advantageous. It is an approach that is far too often left out, and the quality and fidelity of numerical methods suffers as a result.
Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius — and a lot of courage to move in the opposite direction.
― Ernst F. Schumacher
Lax, Peter, and Burton Wendroff. "Systems of conservation laws." Communications on Pure and Applied Mathematics 13, no. 2 (1960): 217-237.
Huynh, Hung T. "Accurate upwind methods for the Euler equations." SIAM Journal on Numerical Analysis 32, no. 5 (1995): 1565-1619.
Colella, Phillip, and Paul R. Woodward. "The piecewise parabolic method (PPM) for gas-dynamical simulations." Journal of Computational Physics 54, no. 1 (1984): 174-201.
Woodward, Paul, and Phillip Colella. "The numerical simulation of two-dimensional fluid flow with strong shocks." Journal of Computational Physics 54, no. 1 (1984): 115-173.
Van Leer, Bram. "Upwind and high-resolution methods for compressible flow: From donor cell to residual-distribution schemes." Communications in Computational Physics 1 (2006): 192-206.
Bjerknes, V. "The Meteorology of the Temperate Zone and the General Atmospheric Circulation. 1." Monthly Weather Review 49, no. 1 (1921): 1-3.
Charney, J. "The use of the primitive equations of motion in numerical prediction." Tellus 7, no. 1 (1955): 22-26.
Roache, Patrick J. Computational Fluid Dynamics. Hermosa Publishers, 1972.
DeBar, R. B. Method in two-D Eulerian hydrodynamics. No. UCID-19683. Lawrence Livermore National Lab., CA (USA), 1974.
Miller, Gregory Hale, and Elbridge Gerry Puckett. "A high-order Godunov method for multiple condensed phases." Journal of Computational Physics 128, no. 1 (1996): 134-164.
Colella, P., H. M. Glaz, and R. E. Ferguson. "Multifluid algorithms for Eulerian finite difference methods." Preprint (1996).
https://codegolf.meta.stackexchange.com/questions/18344/popularity-of-posts-based-on-capitalization-no-graphs-this-time
# Popularity of posts, based on capitalization (no graphs this time)
After this question, I became interested in looking for various relationships in post popularity. While reading through the HNQ list, I noticed that some questions are capitalized in title case, and others written like sentences. So, I used a SEDE query to get a table of question data, and started analyzing.
From a list of 11066 questions, 10937 started with a letter (once '"([{ are filtered out). These were divided into six groups, based on the case of the first letters of the words.
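A rough reconstruction of that grouping logic (hypothetical; my sketch, not the actual post-processing code, and it treats only spaces as word boundaries, as noted below):

```julia
# Hypothetical reconstruction of the six-way grouping; words starting with
# non-letters are skipped when judging case, a simplification on my part.
function title_group(title::AbstractString)
    words = split(title, ' '; keepempty = false)
    caps  = [isuppercase(first(w)) for w in words if isletter(first(w))]
    isempty(caps) && return "no letters"
    if length(words) == 1
        return caps[1] ? "one word: uppercase" : "one word: lowercase"
    elseif all(caps)
        return "title case"
    elseif !any(caps)
        return "all lowercase"
    elseif caps[1]
        return "first word uppercase"
    else
        return "first word lowercase"
    end
end
```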
Average Post:
• Mean Views: 2444.953
One Word: Lowercase:
There are only 8 of these. Statistically, they tend to do worse than average. Note that my program only counted spaces as word boundaries.
• Posts: 0.073%
• Mean Views: 1767.375
One Word: Uppercase:
These tend to do better, although there are only 34 of them.
• Posts: 0.311%
• Mean Views: 2509.735
All Lowercase:
All-lowercase titles tend to score lower, potentially because they tend to be lower-effort questions.
• Posts: 0.91%
• Mean Views: 1465.273
Title Case:
Title case is slightly uglier to me, although the votes seem to indicate that title case questions do better than average.
• Posts: 19.32%
• Mean Views: 2056.176
First Word Lowercase:
There are but 30 questions which don't begin with a capitalized word, but have one later on. They tend to do worse than average.
• Posts: 0.274%
• Mean Views: 1974.667
First Word Uppercase:
This category is all non-title-case posts which begin with a capitalized word (and are more than one word in length). This category gets more average views than title case, but less average votes. Odd.
• Posts: 79.025%
• Mean Views: 2553.225
These results are probably 99% caused by outliers, but still interesting. We can take away some important points from them:
• If you want a post to be popular, capitalizing the first word increases the average number of views by 64%
• Over 98% of question titles are either title case or sentence case
• Only 42 questions are one word in length
• This was not a productive use of my time
• I think you have correlation backwards - questions becoming popular magically changes their titles' capitalization. – Esolanging Fruit Dec 9 '19 at 8:29
• @EsolangingFruit That's also a possibility, although I'd say a question is more likely to be taken seriously if the title is formatted well (and all-lowercase titles tend to be lower quality with more misspellings) – Redwolf Programs Dec 9 '19 at 13:23
• Now to think of a good challenge where it's appropriate to use all caps in the title so I can be the first! – Sam Dean Dec 9 '19 at 16:17
• @SamDean Nope! – pppery Dec 11 '19 at 4:01
• @pppery DAMMIT! But that looks like a really interesting challenge, thanks! – Sam Dean Dec 11 '19 at 9:31
• Probably another interesting one is about ending punctuation, e.g. `!` or `?`. – Bubbler Dec 12 '19 at 10:04
• @Bubbler I think I'll do question punctuation and formatting next, then one on answer formatting. Good idea! – Redwolf Programs Dec 12 '19 at 13:22
• I'm not knowledgeable about SEDE, but as a suggestion it might be interesting to plot challenge popularity (votes or views) as a function of challenge length (number of characters) – Luis Mendo Dec 18 '19 at 10:06
• – Redwolf Programs Dec 18 '19 at 14:24
https://jump.dev/JuMP.jl/v0.21.6/extensions/
# Extending JuMP
## Extending MOI
See the bridge section in the MOI manual.
JuMP.add_bridge — Function
add_bridge(model::Model,
BridgeType::Type{<:MOI.Bridges.AbstractBridge})
Add BridgeType to the list of bridges that can be used to transform unsupported constraints into an equivalent formulation using only constraints supported by the optimizer.
source
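For example, a hedged usage sketch; the bridge named here is one that ships with MathOptInterface, and MOI is the MathOptInterface module made available by JuMP:

```julia
using JuMP

model = Model()
# Split interval constraints into two one-sided constraints when the
# optimizer does not support MOI.Interval directly.
add_bridge(model, MOI.Bridges.Constraint.SplitIntervalBridge)
```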
JuMP.BridgeableConstraint — Type
struct BridgeableConstraint{C, B} <: AbstractConstraint
constraint::C
bridge_type::B
end
A constraint constraint that can be bridged by a bridge of type bridge_type. Adding this constraint to a model is equivalent to
add_bridge(model, bridge_type)
add_constraint(model, constraint)
Examples
Given a new scalar set type CustomSet with a bridge CustomBridge that can bridge F-in-CustomSet constraints, when the user does
model = Model()
@variable(model, x)
@constraint(model, x + 1 in CustomSet())
optimize!(model)
with an optimizer that does not support F-in-CustomSet constraints, the constraint will not be bridged unless the user manually calls add_bridge(model, CustomBridge). In order to automatically add the CustomBridge to any model to which an F-in-CustomSet constraint is added, simply add the following method:
function JuMP.build_constraint(_error::Function, func::AbstractJuMPScalar,
set::CustomSet)
constraint = ScalarConstraint(func, set)
return JuMP.BridgeableConstraint(constraint, CustomBridge)
end
Note
JuMP extensions should extend JuMP.build_constraint only if they also define CustomSet, for three reasons:
1. It is problematic if multiple extensions overload the same JuMP method.
2. A missing method will not inform the users that they forgot to load the extension module defining the build_constraint method.
3. Defining a method where neither the function nor any of the argument types are defined in the package is called type piracy and is discouraged in the Julia style guide.
source
## Extending JuMP macros
In order to provide a convenient syntax for the user to create variables, constraints or set the objective of a JuMP extension, it might be required to use macros similar to @variable, @constraint and @objective. It is recommended to first check whether it is possible to extend one of these three macros before creating a new one so as to leverage all their features and provide a more consistent interface to the user.
### Extending the @constraint macro
The @constraint macro always calls the same three functions:
Adding methods to these functions is the recommended way to extend the @constraint macro.
#### Adding parse_constraint methods
JuMP.sense_to_set — Function
sense_to_set(_error::Function, ::Val{sense_symbol})
Converts a sense symbol to a set set such that @constraint(model, func sense_symbol 0) is equivalent to @constraint(model, func in set) for any func::AbstractJuMPScalar.
Example
Once a custom set is defined you can directly create a JuMP constraint with it:
julia> struct CustomSet{T} <: MOI.AbstractScalarSet
value::T
end
julia> Base.copy(x::CustomSet) = CustomSet(x.value)
julia> model = Model();
julia> @variable(model, x)
x
julia> cref = @constraint(model, x in CustomSet(1.0))
x ∈ CustomSet{Float64}(1.0)
However, there might be an appropriate sign that could be used in order to provide a more convenient syntax:
julia> JuMP.sense_to_set(::Function, ::Val{:⊰}) = CustomSet(0.0)
julia> MOIU.shift_constant(set::CustomSet, value) = CustomSet(set.value + value)
julia> cref = @constraint(model, x ⊰ 1)
x ∈ CustomSet{Float64}(1.0)
Note that the whole function is first moved to the right-hand side, then the sign is transformed into a set with zero constant and finally the constant is moved to the set with MOIU.shift_constant.
source
#### Adding build_constraint methods
There are typically two choices when creating a build_constraint method, either return an AbstractConstraint already supported by the model, i.e. ScalarConstraint or VectorConstraint, or a custom AbstractConstraint with a corresponding add_constraint method (see Adding add_constraint methods).
JuMP.build_constraint — Function
build_constraint(_error::Function, Q::Symmetric{V, M},
::PSDCone) where {V <: AbstractJuMPScalar,
M <: AbstractMatrix{V}}
Return a VectorConstraint of shape SymmetricMatrixShape constraining the matrix Q to be positive semidefinite.
This function is used by the @constraint macros as follows:
@constraint(model, Symmetric(Q) in PSDCone())
The form above is usually used when the entries of Q are affine or quadratic expressions, but it can also be used when the entries are variables to get the reference of the semidefinite constraint, e.g.,
@variable model Q[1:2,1:2] Symmetric
# The type of Q is Symmetric{VariableRef, Matrix{VariableRef}}
var_psd = @constraint model Q in PSDCone()
# The var_psd variable contains a reference to the constraint
source
build_constraint(_error::Function,
Q::AbstractMatrix{<:AbstractJuMPScalar},
::PSDCone)
Return a VectorConstraint of shape SquareMatrixShape constraining the matrix Q to be symmetric and positive semidefinite.
This function is used by the @constraint and @SDconstraint macros as follows:
@constraint(model, Q in PSDCone())
@SDconstraint(model, P ⪰ Q)
The @constraint call above is usually used when the entries of Q are affine or quadratic expressions, but it can also be used when the entries are variables to get the reference of the semidefinite constraint, e.g.,
@variable model Q[1:2,1:2]
# The type of Q is Matrix{VariableRef}
var_psd = @constraint model Q in PSDCone()
# The var_psd variable contains a reference to the constraint
source
##### Shapes
Shapes allow vector constraints, which are represented as flat vectors in MOI, to retain a matrix shape at the JuMP level. There is a shape field in VectorConstraint that can be set in build_constraint and that is used to reshape the result computed in value and dual.
JuMP.shape — Function
shape(c::AbstractConstraint)::AbstractShape
Return the shape of the constraint c.
source
JuMP.reshape_vector — Function
reshape_vector(vectorized_form::Vector, shape::AbstractShape)
Return an object in its original shape shape given its vectorized form vectorized_form.
Examples
Given a SymmetricMatrixShape of vectorized form [1, 2, 3], the following code returns the matrix Symmetric(Matrix[1 2; 2 3]):
julia> reshape_vector([1, 2, 3], SymmetricMatrixShape(2))
2×2 LinearAlgebra.Symmetric{Int64,Array{Int64,2}}:
1 2
2 3
source
JuMP.reshape_set — Function
reshape_set(vectorized_set::MOI.AbstractSet, shape::AbstractShape)
Return a set in its original shape shape given its vectorized form vectorized_form.
Examples
Given a SymmetricMatrixShape of vectorized form [1, 2, 3] in MOI.PositiveSemidefiniteConeTriangle(2), the following code returns the set of the original constraint Symmetric(Matrix[1 2; 2 3]) in PSDCone():
julia> reshape_set(MOI.PositiveSemidefiniteConeTriangle(2), SymmetricMatrixShape(2))
PSDCone()
source
JuMP.dual_shape — Function
dual_shape(shape::AbstractShape)::AbstractShape
Returns the shape of the dual space of the space of objects of shape shape. By default, the dual_shape of a shape is itself. See the examples section below for an example for which this is not the case.
Examples
Consider polynomial constraints for which the dual is moment constraints and moment constraints for which the dual is polynomial constraints. Shapes for polynomials can be defined as follows:
struct Polynomial
coefficients::Vector{Float64}
monomials::Vector{Monomial}
end
struct PolynomialShape <: AbstractShape
monomials::Vector{Monomial}
end
JuMP.reshape_vector(x::Vector, shape::PolynomialShape) = Polynomial(x, shape.monomials)
and a shape for moments can be defined as follows:
struct Moments
coefficients::Vector{Float64}
monomials::Vector{Monomial}
end
struct MomentsShape <: AbstractShape
monomials::Vector{Monomial}
end
JuMP.reshape_vector(x::Vector, shape::MomentsShape) = Moments(x, shape.monomials)
Then dual_shape allows the definition of the shape of the dual of polynomial and moment constraints:
dual_shape(shape::PolynomialShape) = MomentsShape(shape.monomials)
dual_shape(shape::MomentsShape) = PolynomialShape(shape.monomials)
source
JuMP.SquareMatrixShape — Type
SquareMatrixShape
Shape object for a square matrix of side_dimension rows and columns. The vectorized form contains the entries of the matrix given column by column (or equivalently, the entries of the lower-left triangular part given row by row).
source
JuMP.SymmetricMatrixShape — Type
SymmetricMatrixShape
Shape object for a symmetric square matrix of side_dimension rows and columns. The vectorized form contains the entries of the upper-right triangular part of the matrix given column by column (or equivalently, the entries of the lower-left triangular part given row by row).
source
#### Adding add_constraint methods
JuMP.add_constraint — Function
add_constraint(model::Model, con::AbstractConstraint, name::String="")
Add a constraint con to Model model and sets its name.
source
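As a hedged sketch (ours, not from the JuMP documentation) of the second choice described under build_constraint, a custom AbstractConstraint can carry extra behaviour and then fall back to a standard ScalarConstraint inside its own add_constraint method:

```julia
# Hypothetical custom constraint: log a message, then defer to the standard
# ScalarConstraint machinery. All names here are illustrative only.
struct LoggedConstraint{S} <: JuMP.AbstractConstraint
    func::JuMP.AffExpr
    set::S
end

function JuMP.add_constraint(model::JuMP.Model, con::LoggedConstraint,
                             name::String = "")
    @info "adding logged constraint" name
    return JuMP.add_constraint(model,
                               JuMP.ScalarConstraint(con.func, con.set), name)
end
```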
https://answers.ros.org/answers/48286/revisions/
The variable is set by cmake.mk which can be found in /opt/ros/fuerte/share/ros/core/mk. The file /opt/ros/fuerte/share/ros/core/rosbuild/rostoolchain.cmake contains more information about the purpose of the file. I think it has something to do with cross compilation.
https://www.jobilize.com/physics-k12/course/equilibrium-by-openstax-equilibrium
# Equilibrium
Equilibrium is the state of constant motion; rest being a special case.
We have already used this term in reference to balanced force system. We used the concept of equilibrium with an implicit understanding that the body has no rotational tendency. In this module, we shall expand the meaning by explicitly considering both translational and rotational aspects of equilibrium.
A body is said to be in equilibrium when net external force and net external torque about any point, acting on the body, are individually equal to zero. Mathematically,
$\Sigma F=0$
$\Sigma \tau =0$
These two vector equations together are the requirement for a body to be in equilibrium. We must clearly understand that the equilibrium conditions presented here only ensure absence of acceleration (translational or rotational), not rest. Absence of acceleration means that velocities are constant, not necessarily zero.
When we study the translational motion of a rigid body with respect to its center of mass, the linear and angular velocities under equilibrium are constant:
$v_C = \text{constant}$
$\omega = \text{constant}$
We need to analyze the equilibrium of a body simultaneously for both its translational and rotational aspects, in terms of the conditions laid down here.
## Equilibrium types
We are surrounded by great engineering architectures and mechanical devices, which are at rest in the frame of reference of the Earth. A large part of engineering creations are static objects. On the other hand, we also seek equilibrium of moving objects, like a floating ship, an airplane cruising at high speed, and other moving mechanical devices. In both cases, static or dynamic, the net external forces and torques are zero.
An equilibrium in motion is said to be "dynamic equilibrium". Similarly, an equilibrium at rest is said to be "static equilibrium". From this, it is clear that static equilibrium requires additional conditions to be fulfilled:
$\Rightarrow v_C = 0$
$\Rightarrow \omega = 0$
## Equilibrium equations
In general, a body is subjected to a sufficiently good number of forces. Consider, for example, a book placed on a table. This simple arrangement is actually subjected to four normal forces operating at the four corners of the table top in the vertically upward direction and two weights, that of the book and the table, acting vertically downward.
If we want to solve for the four unknown normal forces acting on the corners of the table top, we would need to have a minimum of four equations. Clearly, two vector relations available for equilibrium are insufficient to deal with the situation.
We actually need to write the two vector equations in component form along each of three mutually perpendicular directions of a rectangular coordinate system. This gives us a set of six equations, enabling us to solve for the unknowns. We shall, however, see that this improvisation helps us a great deal in analyzing equilibrium, but is not good enough for this particular case of the book and table arrangement. We shall explain this aspect in a separate section at the end of this module. Nevertheless, the component force and torque equations are:
$\sum F_x = 0; \quad \sum F_y = 0; \quad \sum F_z = 0$
$\sum \tau_x = 0; \quad \sum \tau_y = 0; \quad \sum \tau_z = 0$
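For contrast with the indeterminate table, the following sketch (an added illustration with assumed numbers) solves a determinate case, a uniform plank of weight W resting on two supports, using only $\sum F_y = 0$ and the torque balance about the first support:

```julia
# Added illustration: a uniform plank (weight W, length L) on two supports at
# x = d1 and x = d2; unknown reactions N1, N2 from force and torque balance.
W, L, d1, d2 = 40.0, 2.0, 0.2, 1.7        # assumed values (N, m)

# N1 + N2 = W                       (sum of vertical forces)
# N2*(d2 - d1) = W*(L/2 - d1)       (torque about the first support)
A = [1.0  1.0;
     0.0  (d2 - d1)]
b = [W, W * (L / 2 - d1)]
N1, N2 = A \ b
@show N1 N2                         # both positive: the plank is in static equilibrium
```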
https://zbmath.org/?q=an:0262.55012
A survey of homotopy theory. (English) Zbl 0262.55012
##### MSC:
- 55-02 Research exposition (monographs, survey articles) pertaining to algebraic topology
- 55P20 Eilenberg-Mac Lane spaces
- 55R50 Stable classes of vector space bundles in algebraic topology and relations to $K$-theory
- 55Q05 Homotopy groups, general; sets of homotopy classes
- 55S20 Secondary and higher cohomology operations in algebraic topology
- 55Q55 Cohomotopy groups