| url (stringlengths 14-2.42k) | text (stringlengths 100-1.02M) | date (stringlengths 19) | metadata (stringlengths 1.06k-1.1k) |
|---|---|---|---|
https://physics.stackexchange.com/questions/164472/why-does-charge-build-up-at-the-boundary-surface-of-two-media
|
# Why does charge build up at the boundary surface of two media?
On a homework problem, we are asked to use the first two Maxwell equations,
$$\nabla\cdot \mathbf{B} = 0$$
$$\nabla \cdot \mathbf{D} = \rho$$
to show that along the boundary surface of two different media (different permittivity constants), the magnetic field component that is normal to that boundary is continuous, and also that the electric displacement field $\mathbf{D}$ has a discontinuity on the surface because of the surface charge density on that boundary.
How do I show this? Am I supposed to show, through Maxwell's equations (the ones provided) that an electromagnetic four-potential causes surface charge density to accumulate on the boundary? Why does this happen?
• The polarization changes from one dielectric to the other - this means there will be some net charge on the interface. Think about the implications of $D=\epsilon_0 E + P$. – Floris Feb 11 '15 at 3:21
• Isn't the polarization change only for the component of the electric field tangential to the boundary surface? – Arturo don Juan Feb 11 '15 at 3:46
• If one of the "materials" was a vacuum, there would be no polarization in that material - but there would be in the other material. So no - I am pretty sure that there is a component of polarization normal to the surface (parallel to the field). – Floris Feb 11 '15 at 3:54
• Wouldn't there be no phase change then if you were talking about incident light incident right on (nearly completely perpendicular to) the boundary of a medium? (This doesn't happen) In that case, you only have your $\mathbf{D}$ component in the direction parallel/tangential to the boundary surface. – Arturo don Juan Feb 11 '15 at 4:16
• I think I was just confusing 'parallel' to the surface and 'parallel' to the normal to the surface. – Arturo don Juan Feb 26 '15 at 1:01
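A sketch of the standard pillbox argument (with $\sigma$ denoting the free surface charge density on the boundary): integrate each divergence equation over a thin Gaussian pillbox of face area $A$ straddling the interface and let its height shrink to zero. The divergence theorem gives
$$\oint \mathbf{B}\cdot d\mathbf{a} = \left(B_{2\perp} - B_{1\perp}\right)A = 0 \quad\Rightarrow\quad B_{2\perp} = B_{1\perp},$$
$$\oint \mathbf{D}\cdot d\mathbf{a} = \left(D_{2\perp} - D_{1\perp}\right)A = \sigma A \quad\Rightarrow\quad D_{2\perp} - D_{1\perp} = \sigma,$$
since only the surface charge enclosed by the pillbox survives as its volume shrinks onto the boundary. The normal component of $\mathbf{B}$ is therefore continuous, while the normal component of $\mathbf{D}$ jumps by the free surface charge density.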
|
2019-09-21 02:41:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7020405530929565, "perplexity": 410.59075078112915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574182.31/warc/CC-MAIN-20190921022342-20190921044342-00334.warc.gz"}
|
https://www.mathlearnit.com/fractions-to-decimals.html
|
# Fractions to Decimals
Changing fractions to decimals can at times be a little bit trickier than converting decimal numbers to fractions.
Some simple fractions, though, can be fairly straightforward, such as $\tfrac{1}{2}$.
An effective strategy for converting a fraction like this is to look at the bottom number, the denominator.
Then try to establish another number that the denominator can be multiplied by to give a power of 10, such as 10, 100, 1000, etc.
For $\tfrac{1}{2}$, 2 x 5 = 10.
So:
$\frac{1\times 5}{2\times 5}$ = $\frac{5}{10}$
( Multiplication of top AND bottom keeps overall fraction value the same )
Examples of dividing a number by 10, 100 etc, can be seen on the Division of Decimals page.
$\frac{5}{10}$ = 0.5
The decimal form of the fraction $\tfrac{1}{2}$ is 0.5.
Examples
(1.1)
Convert $\frac{3}{8}$ to a decimal.
Solution
8 x 125 = 1000
=> $\frac{3 \times 125}{8 \times 125}$ = $\frac{375}{1000}$ = 0.375
(1.2)
Convert $\frac{574}{10000}$ to a decimal.
Solution
4 zeroes in 10000, decimal point moves 4 places left.
$\frac{574}{10000}$ = 0.0574
(1.3)
Convert $2\tfrac{57}{100}$ to a decimal number.
Solution
First thing to do is to leave the whole number part 2 to one side, and concentrate just on the fraction.
$\tfrac{57}{100}$ = 0.57
Now bring the whole number back, placing it in front of the decimal point, to form the complete decimal number.
$2\tfrac{57}{100}$ = 2.57
## Fractions to Decimals, Further Examples
(2.1)
Convert the fraction $\tfrac{5}{8}$ to decimal form.
Solution
Treat this fraction as the sum 5 ÷ 8.
1)
Set up the division sum as usual.
8 ) 5
2)
Place a decimal point after the 5, and also directly above it in the answer space.
    .
8 ) 5.
3)
The next step is to place some zeroes after the decimal point underneath.
Depending how many decimal places you want to have in the decimal form of the fraction.
Here we'll try 4 zeroes and see what numbers we get.
    .
8 ) 5.0000
Now we treat this sum as if it was 50000 ÷ 8.
4)
    0. 6  2  5  0
8 ) 5. ⁵0 ²0 ⁴0  0
(5 ÷ 8 = 0 remainder 5, 50 ÷ 8 = 6 remainder 2, 20 ÷ 8 = 2 remainder 4, 40 ÷ 8 = 5 exactly, then 0 ÷ 8 = 0; the small digits are the remainders carried into the next column.)
Notice there is a 0 at the 4th decimal place, and that there would only be 0's from then on, as we would just be dividing 0 by 8 again and again.
Thus the fraction $\tfrac{5}{8}$ gives a terminating decimal, which ends after 3 decimal places.
$\frac{5}{8}$ = 0.625
(2.2)
Convert the fraction $\tfrac{2}{7}$ to decimal form.
Solution
Not all fractions convert to a decimal number that terminates, or terminates early.
An example of such a fraction is $\tfrac{2}{7}$.
1)
    .
7 ) 2.
2)
We'll try again initially to obtain a decimal number with 4 decimal places.
But we'll place an extra fifth 0 below; this 5th division will tell us whether to round the 4th decimal place up or down.
More information on rounding can be seen on the Rounding Decimal Numbers page.
    .
7 ) 2.00000
3)
    0. 2  8  5  7  1
7 ) 2. ²0 ⁶0 ⁴0 ⁵0 ¹0
(2 ÷ 7 = 0 remainder 2, 20 ÷ 7 = 2 remainder 6, 60 ÷ 7 = 8 remainder 4, 40 ÷ 7 = 5 remainder 5, 50 ÷ 7 = 7 remainder 1, 10 ÷ 7 = 1; the small digits are the remainders carried into the next column.)
The 5th division gives 1, as 7 goes into 10 once (ignoring the remainder). Since 1 is less than 5, we round down, meaning the fourth decimal place, 7, stays as it is.
$\frac{2}{7}$ is 0.2857 to 4 decimal places.
(2.3)
Convert the fraction $\tfrac{7}{44}$ to decimal form.
Solution
Sometimes Long Division is required when converting fractions to decimals.
The process can be a bit tedious at times, so care needs to be taken at each step.
We'll aim to obtain a decimal number with 3 decimal places, so will place 4 zeroes below, in order to find out if we round down or up at the third decimal place.
Working through the long division: 70 ÷ 44 = 1 remainder 26, 260 ÷ 44 = 5 remainder 40, 400 ÷ 44 = 9 remainder 4, then 40 ÷ 44 = 0, so the fourth decimal digit is 0 and we round down.
$\frac{7}{44}$ is 0.159 to 3 decimal places.
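For anyone who wants to automate the procedure described on this page, here is a small Python sketch (not part of the original page) that performs the division digit by digit and uses one extra digit to decide the rounding, just as in the examples above:

```python
def fraction_to_decimal(numerator, denominator, places=4):
    """Return the fraction as a decimal string with `places` decimal places,
    computing one extra digit to decide whether to round up or down."""
    scaled = numerator * 10 ** (places + 1)   # shift so integer division gives the digits
    digits = scaled // denominator            # long division, one extra digit at the end
    rounded = (digits + 5) // 10              # round up if the extra digit is 5 or more
    whole, frac = divmod(rounded, 10 ** places)
    return f"{whole}.{frac:0{places}d}"

print(fraction_to_decimal(5, 8))              # 0.6250
print(fraction_to_decimal(2, 7))              # 0.2857
print(fraction_to_decimal(7, 44, places=3))   # 0.159
```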
|
2018-07-19 10:04:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7713159918785095, "perplexity": 2717.693085694076}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00349.warc.gz"}
|
https://pdglive.lbl.gov/DataBlock.action?node=S067D1
|
#### (C) Other neutrino mixing results
The LSND collaboration reported in AGUILAR 2001 a signal which is consistent with ${{\overline{\mathit \nu}}_{{\mu}}}$ $\rightarrow$ ${{\overline{\mathit \nu}}_{{e}}}$ oscillations. In a three neutrino framework, this would be a measurement of $\theta _{12}$ and $\Delta \mathit m{}^{2}_{21}$. This does not appear to be consistent with most of the other neutrino data. The following listings include results from ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ , ${{\overline{\mathit \nu}}_{{\mu}}}$ $\rightarrow$ ${{\overline{\mathit \nu}}_{{e}}}$ appearance and ${{\mathit \nu}_{{\mu}}}$ , ${{\overline{\mathit \nu}}_{{\mu}}}$ , ${{\mathit \nu}_{{e}}}$ , and ${{\overline{\mathit \nu}}_{{e}}}$ disappearance experiments, and searches for $\mathit CPT$ violation.
#### $\Delta \mathit m{}^{2}$ for sin$^2(2{}\theta )$ = 1 ( ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ )
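For orientation (a standard two-flavour relation, not part of the PDG listing itself): the appearance probability used to interpret these results is $P(\nu_\mu \rightarrow \nu_e) = \sin^2(2\theta)\,\sin^2\!\bigl(1.27\,\Delta m^2[\mathrm{eV}^2]\,L[\mathrm{km}]/E[\mathrm{GeV}]\bigr)$, so the $\Delta \mathit m{}^{2}$ ranges and limits below are quoted at maximal mixing, sin$^2(2{}\theta )$ = 1.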
VALUE (eV${}^{2}$) | CL% | DOCUMENT ID | TECN | COMMENT
• • We do not use the following data for averages, fits, limits, etc. • •
$0.03\text{ to }0.55$ | 90 | 1 AGUILAR-AREVALO 2021 | MBNE | MiniBooNE ${{\mathit \nu}}$, ${{\overline{\mathit \nu}}}$ combined
$0.03\text{ to }0.05$ | 90 | 2 AGUILAR-AREVALO 2018C | MBNE | MiniBooNE ${{\mathit \nu}}$, ${{\overline{\mathit \nu}}}$ combined
$0.015\text{ to }0.050$ | 90 | 3 AGUILAR-AREVALO 2013A | MBNE | MiniBooNE
$<0.34$ | 90 | 4 MAHN 2012 | MBNE | MiniBooNE/SciBooNE
$<0.034$ | 90 | AGUILAR-AREVALO 2007 | MBNE | MiniBooNE
$<0.0008$ | 90 | AHN 2004 | K2K | Water Cherenkov
$<0.4$ | 90 | ASTIER 2003 | NOMD | CERN SPS
$<2.4$ | 90 | AVVAKUMOV 2002 | NTEV | NUTEV FNAL
 | | 5 AGUILAR 2001 | LSND | ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ osc. prob.
$0.03\text{ to }0.3$ | 95 | 6 ATHANASSOPOULOS 1998 | LSND | ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$
$<2.3$ | 90 | 7 LOVERRE 1996 | | CHARM/CDHS
$<0.9$ | 90 | VILAIN 1994C | CHM2 | CERN SPS
$<0.09$ | 90 | ANGELINI 1986 | HLBC | BEBC CERN PS
1 AGUILAR-AREVALO 2021 result is based on a total of $18.75 \times 10^{20}$ POT in neutrino mode, and $11.27 \times 10^{20}$ POT in anti-neutrino mode. Best fit at 0.043 eV${}^{2}$. The allowed region does not extend to large $\Delta$m${}^{2}$. The quoted value is the entire allowed region of $\Delta$m${}^{2}$ at 90$\%$ C.L. for all values of sin$^2(2\theta )$. Supersedes AGUILAR-AREVALO 2018C.
2 AGUILAR-AREVALO 2018C result is based on ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ appearance of $460.5$ $\pm99.0$ events. The best fit value is $\Delta$m${}^{2}$ = 0.041 eV${}^{2}$. Superseded by AGUILAR-AREVALO 2021.
3 AGUILAR-AREVALO 2013A result is based on ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ appearance of $162.0$ $\pm47.8$ events, marginally compatible with two-neutrino oscillations. The best fit value is $\Delta$m${}^{2}$ = 3.14 eV${}^{2}$.
4 MAHN 2012 is a combined spectral fit of MiniBooNE and SciBooNE neutrino data with the range of $\Delta$m${}^{2}$ up to 25 eV${}^{2}$. The best limit is 0.04 at 7 eV${}^{2}$.
5 AGUILAR 2001 is the final analysis of the LSND full data set. Search is made for the ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ oscillations using ${{\mathit \nu}_{{\mu}}}$ from ${{\mathit \pi}^{+}}$ decay in flight by observing beam-on electron events from ${{\mathit \nu}_{{e}}}$ ${}^{}\mathrm {C}$ $\rightarrow$ ${{\mathit e}^{-}}{{\mathit X}}$ . Present analysis results in $8.1$ $\pm12.2$ $\pm1.7$ excess events in the 60$<\mathit E_{{{\mathit e}} }<200$ MeV energy range, corresponding to oscillation probability of $0.10$ $\pm0.16$ $\pm0.04\%$. This is consistent, though less significant, with the previous result of ATHANASSOPOULOS 1998 , which it supersedes. The present analysis uses selection criteria developed for the decay at rest region, and is less effective in removing the background above 60 MeV than ATHANASSOPOULOS 1998 .
6 ATHANASSOPOULOS 1998 is a search for the ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ oscillations using ${{\mathit \nu}_{{\mu}}}$ from ${{\mathit \pi}^{+}}$ decay in flight. The 40 observed beam-on electron events are consistent with ${{\mathit \nu}_{{e}}}$ ${}^{}\mathrm {C}$ $\rightarrow$ ${{\mathit e}^{-}}$ X; the expected background is $21.9$ $\pm2.1$. Authors interpret this excess as evidence for an oscillation signal corresponding to oscillations with probability ($0.26$ $\pm0.10$ $\pm0.05)\%$. Although the significance is only $2.3~\sigma$, this measurement is an important and consistent cross check of ATHANASSOPOULOS 1996 who reported evidence for ${{\overline{\mathit \nu}}_{{\mu}}}$ $\rightarrow$ ${{\overline{\mathit \nu}}_{{e}}}$ oscillations from ${{\mathit \mu}^{+}}$ decay at rest. See also ATHANASSOPOULOS 1998B.
7 LOVERRE 1996 uses the charged-current to neutral-current ratio from the combined CHARM (ALLABY 1986 ) and CDHS (ABRAMOWICZ 1986 ) data from 1986.
References:
AGUILAR-AREVALO 2021
PR D103 052002 Updated MiniBooNE neutrino oscillation results with increased data and new background studies
AGUILAR-AREVALO 2018C
PRL 121 221801 Significant Excess of ElectronLike Events in the MiniBooNE Short-Baseline Neutrino Experiment
AGUILAR-AREVALO 2013A
PRL 110 161801 Improved Search for ${{\overline{\mathit \nu}}_{{\mu}}}$ $\leftrightarrow{{\overline{\mathit \nu}}_{{e}}}$ Oscillations in the MiniBooNE Experiment
MAHN 2012
PR D85 032007 Dual Baseline Search for Muon Neutrino Disappearance at 0.5 eV${}^{2}$ < $\Delta$m${}^{2}$ < 40 eV${}^{2}$
AGUILAR-AREVALO 2007
PRL 98 231801 Search for Electron Neutrino Appearance at the $\Delta \mathit m{}^{2}\sim{}$1 eV${}^{2}$ Scale
AHN 2004
PRL 93 051801 Search for Electron Neutrino Appearance in 250 km Long-baseline Experiment
ASTIER 2003
PL B570 19 Search for ${{\mathit \nu}_{{\mu}}}$ $\rightarrow$ ${{\mathit \nu}_{{e}}}$ Oscillations in the NOMAD Experiment
AVVAKUMOV 2002
PRL 89 011804 A Search for ${{\mathit \nu}_{{\mu}}}$ $-{{\mathit \nu}_{{e}}}$ and ${{\overline{\mathit \nu}}_{{\mu}}}$ $-{{\overline{\mathit \nu}}_{{e}}}$ Oscillations at NUTeV
AGUILAR 2001
PR D64 112007 Evidence for Neutrino Oscillations from the Observation of ${{\overline{\mathit \nu}}_{{e}}}$ Appearance in a ${{\overline{\mathit \nu}}_{{\mu}}}$ Beam
ATHANASSOPOULOS 1998
PRL 81 1774 Evidence for ${{\mathit \nu}_{{\mu}}}$ $\leftrightarrow$ ${{\mathit \nu}_{{e}}}$ Oscillations from the LSND
LOVERRE 1996
PL B370 156 Limits on ${{\mathit \nu}_{{\mu}}}$ Oscillations from the Measurement of the Ratio of 0${{\mathit \mu}^{\pm}}$ to 1${{\mathit \mu}^{\pm}}$ Events at the CERN Narrow Band Neutrino Beam
VILAIN 1994C
ZPHY C64 539 Search for Muon to Electron Neutrino Oscillations
ANGELINI 1986
PL B179 307 New Experimental Limits on ${{\mathit \nu}_{{\mu}}}$ $\leftrightarrow$ ${{\mathit \nu}_{{e}}}$ Oscillations
|
2022-12-09 13:26:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7655282616615295, "perplexity": 4274.783277667421}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00672.warc.gz"}
|
http://ibpsexamguide.org/quantitative-aptitude/quantitative-aptitude-course/distance-speed-time-boats-and-streams/exercise-2-7.html
|
# Exercise : 2
1. With a uniform speed a car covers a distance in 8 hours. Were the speed increased by 4 km/hr, the same distance could be covered in $7\frac{1}{2}$ hours. What is the distance covered?
(a) 640 km
(b) 480 km
(c) 420 km
(d) Cannot be determined
(e) None of these
#### View Ans & Explanation
Ans.b
Here $\frac{D}{7.5} - \frac{D}{8} = 4$
(where D is the distance in km)
⇒ 0.5 D = 4 × 8 × 7.5
⇒ D = 2 × 4 × 8 × 7.5 = 480 km
2. A 300-metre-long train crosses a platform in 39 seconds while it crosses a signal pole in 18 seconds. What is the length of the platform?
(a) 320 metres
(b) 650 metres
(c) 350 metres
(e) None of these
#### View Ans & Explanation
Ans.c
When a train crosses a platform, it crosses a distance equal to the sum of the length of the platform and that of the train. But when a train crosses a signal pole, it crosses the distance equal to its length only.
Here, time taken by the train to cross a signal pole = 18 seconds
Hence, speed of the train = $\frac{300}{18}$ m/sec
The train takes 39 – 18 = 21 seconds extra in order to cross the platform.
Hence, length of platform = $\frac{21 \times 300}{18} = 350 m$
3. A 260-metre-long train crosses a 120-metre-long wall in 19 seconds. What is the speed of the train?
(a) 27 km/hr
(b) 49 km/hr
(c) 72 km/hr
(d) 70 km/hr
(e) None of these
#### View Ans & Explanation
Ans.c
Speed of train = $\frac{260 + 120}{19} \times \frac{18}{5} = 72 \; km/hr$
4. A 270-metre-long train running at the speed of 120 kmph crosses another train running in opposite direction at the speed of 80 kmph in 9 secs. What is the length of the other train?
(a) 240 metres
(b) 320 metres
(c) 260 metres
(d) 230 metres
(e) None of these
#### View Ans & Explanation
Ans.d
Relative speed = 120 + 80 = 200 kmph = $200 \times \frac{5}{18} = \frac{500}{9}$ m/sec
t = $\frac{Distance}{Speed}$, so 9 = $\frac{\left(270 + x \right) \times 9}{500}$
or 270 + x = 500, so x = 500 – 270 = 230 m
5. A monkey ascends a greased pole 12 metres high. He ascends 2 metres in first minute and slips down 1 metre in the alternate minute. In which minute, he reaches the top ?
(a) 21st
(b) 22nd
(c) 23rd
(d) 24th
(e) None of these
#### View Ans & Explanation
Ans.a
In every 2 minutes, he ascends a net 1 metre.
∴ He ascends 10 metres in 20 minutes.
∴ In the 21st minute he climbs the remaining 2 metres and reaches the top.
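A quick brute-force check of this reasoning (a small simulation sketch; it simply replays the climb minute by minute):

```python
# Minute-by-minute simulation: +2 m in odd minutes, -1 m in even minutes.
height, minute = 0, 0
while height < 12:
    minute += 1
    height += 2 if minute % 2 == 1 else -1
print(minute)  # prints 21, i.e. the monkey reaches the top in the 21st minute
```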
6. A man walks a certain distance and rides back in $6\frac{1}{4}$ h. He can walk both ways in $7\frac{3}{4}$ h. How long would it take to ride both ways?
(a) 5 hours
(b) $4\frac{1}{2}$ hours
(c) $4\frac{3}{4}$ hours
(d) 6 hours
(e) None of these
#### View Ans & Explanation
Ans.c
We know that, the relation in time taken with two different modes of transport is
$t_{\text{walk both}} + t_{\text{ride both}} = 2\,(t_{\text{walk}} + t_{\text{ride}})$
$\frac{31}{4} + t_{\text{ride both}} = 2 \times \frac{25}{4}$
⇒ $t_{\text{ride both}} = \frac{25}{2} - \frac{31}{4} = \frac{19}{4} = 4\frac{3}{4}$ hours
7. There are 20 poles with a constant distance between each pole. A car takes 24 seconds to reach the 12th pole . How much time will it take to reach the last pole?
(a) 25.25 s
(b) 17.45 s
(c) 35.75 s
(d) 41.45 s
(e) None of these
#### View Ans & Explanation
Ans.d
Let the distance between each pole be x m.
Then, the distance up to the 12th pole = 11x m
Speed = $\frac{11x}{24}$ m/s
Time taken to cover the total distance of 19x
= $\frac{19x \times 24}{11x} = 41.45 \; s$
8. A man is walking at a speed of 10 km per hour. After every kilometre, he takes rest for 5 minutes. How much time will he take to cover a distance of 5 kilometres?
(a) 48 min.
(b) 50 min.
(c) 45 min.
(d) 55 min.
(e) None of these
#### View Ans & Explanation
Ans.b
Rest time = Number of rest × Time for each rest
= 4 × 5 = 20 minutes
Total time to cover 5km
= ($\frac{5}{10}$ × 60) minutes + 20 minutes = 50 minutes
9. On a journey across Bombay, a tourist bus averages 10 km/h for 20% of the distance, 30 km/h for 60% of it and 20 km/h for the remainder. The average speed for the whole journey was
(a) 10 km/h
(b) 30 km/h
(c) 5 km/h
(d) 20 km/h
(e) None of these
#### View Ans & Explanation
Ans.d
Let the average speed be x km/h.
and Total distance = y km. Then,
$\frac{0.2y}{10} + \frac{0.6y}{30} + \frac{0.2y}{20} = \frac{y}{x}$
⇒ x = $\frac{1}{0.05}$ = 20 km/h
10. In an 800 m race around a stadium having a circumference of 200 m, the top runner meets the last runner in the 5th minute of the race. If the top runner runs at twice the speed of the last runner, what is the time taken by the top runner to finish the race?
(a) 20 min
(b) 15 min
(c) 10 min
(d) 5 min
(e) None of these
#### View Ans & Explanation
Ans.c
After 5 minutes (before meeting), the top runner covers 2 rounds i.e., 400 m and the last runner covers 1 round
i.e., 200 m.
∴ Top runner covers 800 m race in 10 minutes.
|
2018-05-25 18:25:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5468724966049194, "perplexity": 2847.2610597687253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867173.31/warc/CC-MAIN-20180525180646-20180525200646-00477.warc.gz"}
|
https://trustica.cz/en/2018/02/22/introduction-to-elliptic-curves/
|
## Introduction to elliptic curves
Written by Dominik Joe Pantůček on February 22, 2018.
We are about to start a journey to the realm of elliptic curve cryptography. It may seem strange at this point as why we should bother with that, but rest assured that we will eventually find out how to use this knowledge to secure our email communication. In this introduction, you can expect to see what an elliptic curve looks like, how it is defined and how it can be simplified if we want to make some practical use of it.
Although the name says “elliptic”, an elliptic curve is definitely not an ellipse. But, like an ellipse, it is a one-dimensional object defined in a two-dimensional plane. Its definition basically says: the curve is formed from all points in the plane which solve the Weierstrass equation. The equation is named after Karl Weierstrass[1], who studied elliptic functions extensively during the nineteenth century, and its general form is:
$y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$
The first thing you might notice is that it is not simple. The second may be the fact that the $a_5$ coefficient is missing. That is just a matter of history, and to avoid confusion, mathematicians even today use this coefficient naming. But what does such a curve look like? To give you an impression, you can find one in picture 1.
Picture 1: Elliptic curve defined by Weierstrass equation $y^2+2xy-y=x^3+2x^2+x+3$
It is true, as mentioned above, that the equation is not exactly simple. And it is also true that mathematicians like to simplify things when possible. In this case we can use the Riemann-Roch theorem[2] – actually a variation studied by Friedrich Karl Schmidt[3] – applied to algebraic curves. As long as the genus of the curve remains the same and we are working in the same topological space, we can transform the equation into a much simpler form using the following transformation – effectively a change of the coordinate system:
$x'=u^2x+r$
$y'=u^3y+su^2x+t$
You might be wondering why I am talking about simplification when this looks even more complex than before. The answer is simple: somebody has already done the job of calculating the proper values for $r$, $s$, $t$ and $u$, and they are:
$r=-\frac{a_1^2+4a_2}{12}$
$s=-\frac{a_1}{2}$
$t=\frac{a_1^3+4a_1a_2-12a_3}{24}$
$u=1$
Yes, it looks even worse, but as long as you have the initial coefficients, obtaining these is just a matter of plugging in the numbers – which is something a computer can do at virtually no cost. But what is the result? The result is that the coefficients $a_1$, $a_2$ and $a_3$ become zero after this transformation, and by writing $a=a_4$ and $b=a_6$ for the two remaining coefficients, we get the simplified Weierstrass equation, which is much easier to work with:
$y^2=x^3+ax+b$
The transformation from generic Weierstrass equation to simplified can be seen in the following video:
And the resulting simplified curve can be seen in picture 2.
Picture 2: Elliptic curve defined by simplified Weierstrass equation $y^2=x^3-3x+5.25$
In case you wonder how the coefficients $a$ and $b$ were calculated, you can get the idea:
$a=-2st-a_1(t+rs)-a_3s+3r^2+2a_2r+a_4$
$b=-t^2-a_1rt-a_3t+r^3+a_2r^2+a_4r+a_6$
Once again, this might seem pretty complex, but remember that this is a solved problem, and if you want to know the numbers, that is when some computing machinery comes in handy.
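To make this concrete, here is a small Python sketch (an illustration added here, not from the original post) that plugs the curve from picture 1 into the formulas above and recovers the coefficients of the simplified curve in picture 2:

```python
# Coefficients of the general Weierstrass curve from picture 1:
# y^2 + 2xy - y = x^3 + 2x^2 + x + 3
a1, a2, a3, a4, a6 = 2, 2, -1, 1, 3

# Transformation parameters (with u = 1), as given above
r = -(a1**2 + 4 * a2) / 12
s = -a1 / 2
t = (a1**3 + 4 * a1 * a2 - 12 * a3) / 24

# Coefficients of the simplified curve y^2 = x^3 + a*x + b
a = -2 * s * t - a1 * (t + r * s) - a3 * s + 3 * r**2 + 2 * a2 * r + a4
b = -t**2 - a1 * r * t - a3 * t + r**3 + a2 * r**2 + a4 * r + a6

print(a, b)  # -3.0 5.25  ->  y^2 = x^3 - 3x + 5.25, the curve in picture 2
```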
I hope this got you excited about elliptic curves and maybe next time we will fast forward on to how these elliptic curves form a basis of very interesting cryptographic schemes. See you then!
### References
1. Karl Weierstrass. (2018, February 13). In Wikipedia, The Free Encyclopedia. Retrieved 21:17, February 18, 2018, from https://en.wikipedia.org/w/index.php?title=Karl_Weierstrass&oldid=825470901
2. Riemann–Roch theorem. (2017, December 21). In Wikipedia, The Free Encyclopedia. Retrieved 21:15, February 18, 2018, from https://en.wikipedia.org/w/index.php?title=Riemann%E2%80%93Roch_theorem&oldid=816382095
3. Friedrich Karl Schmidt. (2017, October 22). In Wikipedia, The Free Encyclopedia. Retrieved 21:15, February 18, 2018, from https://en.wikipedia.org/w/index.php?title=Friedrich_Karl_Schmidt&oldid=806504696
|
2020-04-01 18:35:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291875123977661, "perplexity": 382.20674084192274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370505826.39/warc/CC-MAIN-20200401161832-20200401191832-00318.warc.gz"}
|
https://math.stackexchange.com/questions/2907329/relation-between-dilogarithm-and-its-complex-conjugate
|
Relation between dilogarithm and its complex conjugate
I am looking for a relation between the dilog and its complex conjugate, that is can I simplify the following summation of terms $$f(z) = \text{Li}_2(z) + (\text{Li}_2(z))^*?$$
I have looked through the many identities that are known to exist among such functions on the Wolfram pages but did not find any involving the complex conjugate. If $z>1$ then $\text{Li}_2(z)$ is complex, but the combination $f(z)$ is real, so it would be nice if $f(z)$ could be simplified to a dilog with an argument not lying on the branch cut, or something alike.
• The best we can say is $\text{Li}_2(z) + (\text{Li}_2(z))^* = 2\text{Re }\text{Li}_2(z)$, which is just a restatement. – GEdgar Sep 6 '18 at 11:18
The real part $\Re \mathrm{Li}_2(x) = \frac{1}{2}f(x)$ for $x>1$ can be computed with the Euler reflection formula (see https://en.wikipedia.org/wiki/Polylogarithm#Dilogarithm)
$$\Re \mathrm{Li}_2(x) = \frac{\pi^2}{6} - \mathrm{Li}_2(1-x) - \ln x \ln|1-x|$$
where I have used $\Re \ln(1-x) = \ln|1-x|$ for $x > 1$
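A quick numerical check of this answer in Python with mpmath (assuming mpmath's polylog returns the principal branch for real arguments greater than 1):

```python
from mpmath import mp, polylog, pi, ln, re

mp.dps = 30
x = mp.mpf(3)   # any real x > 1

lhs = re(polylog(2, x))                                       # Re Li_2(x) = f(x)/2
rhs = pi**2 / 6 - polylog(2, 1 - x) - ln(x) * ln(abs(1 - x))  # reflection formula
print(lhs)
print(rhs)   # the two printed values should agree
```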
|
2019-07-17 22:46:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9571199417114258, "perplexity": 177.8044318503167}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525414.52/warc/CC-MAIN-20190717221901-20190718003901-00377.warc.gz"}
|
https://de.zxc.wiki/wiki/Mortalit%C3%A4t
|
mortality
Mortality (from Latin mortalitas, "mortality"), also mortality rate or death rate, is a term from demography. It denotes the number of deaths in relation to the total number of individuals or, in the case of the specific death rate, in relation to the number in the relevant population, usually over a certain period of time. Mortality in the sense of the probability of death can be found in the first column of a mortality table.
The death rate or mortality rate is the ratio of the number of deaths to the average population of a population.
The crude mortality is the number of deaths per population per time, for example per 1,000 people and a year. The age-specific mortality, for example child mortality , indicates the deaths per age group per time. Lethality is the mortality related to the total number of people suffering from a disease. In the case of infant or maternal mortality , the number of events (births) is the reference value, not the population size.
In epidemiology, (disease-specific) mortality is the ratio of the number of individuals who have died of a disease in a population during a period of time to the number of individuals in the population (usually quoted per 100,000 inhabitants). Lethality (case fatality), by contrast, is the ratio of the number of deaths from a particular disease to the number of individuals suffering from that disease.
Mortality curve
Age-specific death rates in Germany in 1990 and 2010 (logarithmic scale)
After the elevated risk around birth, the death rate drops to its minimum value for eight to ten year olds, at approx. 20 deaths per 100,000 people of that age group per year (tpj = deaths per year per 100,000 people); see the diagram. At almost 50%, accidents are the most common cause of death in this age group. For 15 to 20 year olds, accidents are also the main risk (40 tpj), followed by murder (approx. 18 tpj in the USA, 40 tpj in South Africa, 5 tpj in Germany) and suicide (12 tpj). With increasing age, the suicide rate and the frequency of accidents remain almost unchanged, while illnesses make up the main part of the death rate of 800 tpj among 50 to 60 year olds.
Abraham de Moivre (1725) approximated the age-dependent mortality rate through a hyperbolic increase in the risk of death, limited by a maximum age. Benjamin Gompertz (1824) suggested an exponential increase in mortality, which reflects the observed data well from the age of 30. Refined models introduce additional parameters.
Modeling according to Gompertz
In the Gompertz diagram (see mortality curve above) the logarithm of the death rate is plotted against age. The logarithmic representation shows that from the age of approx. 30 the increase is almost linear: the death rate doubles approximately at constant time intervals. This period of time is also abbreviated as MRDT from mortality rate doubling time (or MRD). The linear increase in the logarithmic representation corresponds to an exponential increase in the death rate with age. Mathematical modeling typically uses the natural logarithm, so the death rate is described as follows:
$\text{Death rate}(\text{Age}) = S_{30} \cdot e^{G \cdot (\text{Age} - 30\,\text{years})}$
Here $S_{30}$ denotes the mortality at the age of 30 years. A fit of the parameter $G$ gives a value of $G \approx 0.08/\text{year}$, which corresponds to an MRDT of $\tfrac{\ln 2}{G} = 8.7$ years. The factor $G$ is called the Gompertz death coefficient. Studies have shown that the MRDT has typically been between 7 and 9 years in Australia, the United States, Japan, and Northern Europe from the mid-18th century to today. Therefore the MRDT is often estimated at 8 years.
For comparison, MRDT values for other animal species are: laboratory mouse 0.27 years, laboratory hamster 0.5, rhesus monkey 15, horse 4, domestic dog 3, black-backed gull 5, king pheasant 1.6 years.
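A minimal numerical illustration of the Gompertz relation above (the value of $S_{30}$ used below is a made-up placeholder; G = 0.08 per year and the doubling-time formula are the ones quoted above):

```python
import math

S30 = 100e-5   # hypothetical death rate at age 30: 100 per 100,000 per year
G = 0.08       # Gompertz death coefficient, per year

def death_rate(age):
    """Gompertz death rate at a given age (same units as S30)."""
    return S30 * math.exp(G * (age - 30))

mrdt = math.log(2) / G   # mortality rate doubling time, about 8.7 years
print(f"MRDT = {mrdt:.1f} years")
for age in (30, 40, 50, 60):
    print(age, f"{death_rate(age) * 1e5:.0f} per 100,000 per year")
```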
Examples of mortality
• Intrauterine or fetal mortality refers to the time of pregnancy, it includes miscarriages, stillbirths, and terminations.
• Premature infant mortality or neonatal mortality is a subset of infant mortality (1st to 4th month).
• Perinatal mortality is the sum of neonatal and fetal mortality, reduced (depending on the definition) by early phases of pregnancy of varying length; according to the WHO it counts from 22 weeks of gestation. In 2006 it was 0.3% (Luxembourg), 0.5% (Germany), 1% (France) and 1.5% (Macedonia) of births, for fetuses weighing at least 1 kg and deaths up to 6 days after birth.
• Infant mortality around 2000 in Germany: 400 deaths per 100,000 births = 4 per 1000 = 1/250.
• Maternal mortality at birth in 2003 in Germany: 12 per 100,000 women giving birth, at birth in 2003 in Kenya: 1,300 per 100,000.
• Mortality in Germany: 1,000 deaths per 100,000 inhabitants per year = 10 per 1,000 = 1/100.
• Road mortality 2004: deaths per 100,000 inhabitants per year: 8 in Germany, 5 in the Netherlands.
• Fatalities from lightning strikes in Germany nowadays: an average of three to seven deaths per year, i.e. less than 0.01 per 100,000 inhabitants per year. In the 19th century, around 300 people were killed by lightning in Germany each year, as significantly more people worked outdoors in the fields and could not retreat into protective objects such as cars, tractors or combine harvesters.
The mean life expectancy is better suited than the general or crude mortality for comparing different regions, since it compensates for possibly different age compositions of the populations. Populations with very different age structures also have very different mortality rates.
For a risk assessment, a general probability of death is often derived from the mortality per year. For example, in Germany, with a population of 80 million, around five people die each year from lightning strikes. Assuming a lifespan of 80 years, the risk of dying from lightning within those 80 years is 1:200,000. The corresponding risk of dying in a traffic accident in Germany is about 1:150. The overall risk of dying within 80 years of life is, after all, 1:1.25 = 80%.
Influencing variables
The main influencing factors for mortality are:
• Ecological determinants (especially the environment, precaution against natural disasters)
• Socio-economic, political and cultural determinants (physical work, occupational safety, income, nutrition, lifestyle, war, traffic, ...)
• Medical determinants (e.g. genetic factors, quality of medical care, vaccinations, health education, hygiene regulations, etc.)
• While it averages out statistically, chance remains as fate for the individual: luck and misfortune.
The standardized mortality rate deals with the data on the deaths of groups of people, which are made mathematically comparable with regard to age, gender, etc.
Use
The birth rate and death rate are important parameters for determining the age distribution of a society and population dynamics in general.
Mortality is also used in some risk analysis criteria (see Minimum endogenous mortality ). In technology, failure probabilities are examined as part of event time analyzes.
The death rate plays a role in estimating seasonal flu and pandemics . The mortality is compared with the mean values of previous years without the epidemic and in this way excess mortality is determined that can be assigned to the epidemic. From 2008 DG Sante supported the establishment of the Euromomo project for pan-European monitoring of mortality. It collects data from 18 European countries, the four parts of the United Kingdom and two German federal states continuously and promptly in order to make the effects of seasonal influenza or a pandemic on mortality rates visible across national borders. The project is now also receiving support from the European Center for Disease Prevention and Control (ECDC) and the World Health Organization (WHO).
Literature
• Ladislaus von Bortkewitsch : The mean lifespan. The methods of their determination and their relation to the measurement of mortality. Gustav Fischer, Jena 1893 ( digitized ).
• Rainer Wehrhahn, Verena Sandner Le Gall: Population geography. WBG (Scientific Book Society), Darmstadt 2011, ISBN 978-3-534-15628-3 , pp. 36-45.
Individual evidence
1. ^ Pschyrembel Clinical Dictionary . 258th edition. De Gruyter, 1998.
2. ↑ Protection against infection and infection epidemiology . Technical terms - definitions - interpretations, Robert Koch Institute 2015
3. ^
4. ↑ Death rate in the US .
5. Finch et al.: Slow mortality rate accelerations during aging in some animals approximate that of humans. Science 249, 1990, 902-905, doi: 10.1126/science.2392680, JSTOR 2877958.
6. Finch: The Biology of human longevity. Academic Press, 2007, p. 12.
7. accidents . At: gbe-bund.de.
8. Deaths from external causes and their consequences (from 1998). Federal health reporting (Germany)
9. euromomo.eu
|
2022-01-21 09:14:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6666440963745117, "perplexity": 3073.801830542305}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302740.94/warc/CC-MAIN-20220121071203-20220121101203-00465.warc.gz"}
|
https://solvedlib.com/5-given-the-ir-and-nmr-spectra-for-compound-x,340884
|
# 5. Given the IR and NMR spectra for compound X, C9H13N, propose an acceptable structure. 3350...
###### Question:
5. Given the IR and NMR spectra for compound X, C9H13N, propose an acceptable structure. [Figure: IR spectrum with absorptions near 3350, 2925 and 1505 cm⁻¹; ¹H NMR spectrum (10 to 0 ppm) with integrations of 4H, 3H, 2H, 2H and 2H]
#### Similar Solved Questions
##### 5. (6 points) Consider the following double integral. 66 (4x + 3x2y - y3) dx dy Estimate this integral by using the trapezoid rule with h = 1 in both the x and the y directions
##### Consider the optimization problem min(zt rz)eR? f(I1,I2), where f(r1,I2) = ri 4ri 4r e+2ri G41.Show that the function f(T1,12) is not convex On R? but must have global optimal solution: (h) Use Newton" method t0 find the solution using starting points (0,5) and _" (5,0) Explain your anSwerS_
##### Find the length of side c to 2 decimal places. 27.26 ft 11° 16' 09"
##### A firm’s production function is f(x1, x2) = min{x1, x2} a What restriction must a satisfy...
##### 02_Consider that bus trip from Istanbul to izmit; which is subject to the followings:The distance between the two cities is nearly 100 kmn: The speed can not exceed [1O kmh: The traffic is usually heavy and therefore the bus speed is variable; The bus usually leaves the bus station in Harem later than the departure time. but the delays never exceed [0 min:What is the total time (T spent 0n trip from Istanbul to izqit by bus under these conditions?Hivt: Since distance (D) is approximnate. you wou
##### We are reviewing Labor Unions and I beed assistance with the below: Describe the advantages and...
##### 4) What is the physical barrier in the root that regulates the flow of water to xylem via cell walls? A) phloem B) epidermis C) Casparian strip B cortex plasmodesmata
##### Verify that the equation is an identitycosXsinChoose the correct answer below0 AJcos*2 X sincOs *sin x 1 + coscosX sinxcos*0 B. 1 + coSX sincoscos sin xsin X 1 + cos % =1 sin xcoScoscos}+cos {cossincoScos2 tantan x-2 tanxsintantan
##### In 2010 silicon nitride,Si3N4, was used as the main material inthe thrusters of the Japanese (JAXA) space probe, Akatsuki, taskedto study the atmosphere of Venus. At very high temperatures, Si3N4(140.28 g/mol) can be produced by the direct reaction of solidsilicon and nitrogen gas. 3 Si(s) + 2 N2(g) Ã Si3N4 Suppose that1.4 kg of silicon (28.085 amu) and 1.2 kg of nitrogen (28.0134g/mol) are sealed into a suitable reaction vessel and heated to atemperature where the reaction goes to completion.
##### 3) If a 30-cm ruler with a smallest division of 1 mm is used for a...
##### (1 point) A normal distribution with mean 0 and standard deviation Võ is sampled three times,...
##### 8.19 Let Xl, Xn be independent random variables, each having garma density with the same scale parameter A _ Let &i be the shape parameter of the density of X; Verify that Xi + +Xn has & gamnma density Wit shape = parameter &1 + 0an + dn and scale parameter /_
##### 3) Find the value of a, b, € and d if ffx) is differentiable asinx + bcosx X<o f(x) x+x+1 3 >x > 0 cx + d x234) Prove that flx) =x? +x2+x+l has a root between -1 and 0. (BONUS: find the root explicitly for 5 extra points)5) Find an equation of a line that is of flx) -x tangent to the graph 4 and parallel to the line 4x +V+l= 0.
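One of the similar questions above asks for a trapezoid-rule estimate of a double integral of 4x + 3x²y - y³ with h = 1 in both directions. A minimal sketch of the composite 2-D trapezoid rule follows; since the integration limits in the extracted question are garbled, the limits used below are placeholders only:

```python
def trapezoid_2d(f, x0, x1, y0, y1, h):
    """Composite trapezoid rule for the double integral of f over [x0, x1] x [y0, y1],
    with step h in both directions (the ranges are assumed to be multiples of h)."""
    nx = round((x1 - x0) / h)
    ny = round((y1 - y0) / h)
    total = 0.0
    for i in range(nx + 1):
        for j in range(ny + 1):
            # Corner points get weight 1/4, edge points 1/2, interior points 1.
            w = (0.5 if i in (0, nx) else 1.0) * (0.5 if j in (0, ny) else 1.0)
            total += w * f(x0 + i * h, y0 + j * h)
    return total * h * h

f = lambda x, y: 4 * x + 3 * x**2 * y - y**3   # integrand from the question
print(trapezoid_2d(f, 0, 2, 0, 2, h=1))        # limits here are made-up placeholders
```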
|
2023-04-01 05:36:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8004911541938782, "perplexity": 3698.9935249304185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00654.warc.gz"}
|
http://jmlr.org/papers/v15/staedler14a.html
|
## Pattern Alternating Maximization Algorithm for Missing Data in High-Dimensional Problems
Nicolas Städler, Daniel J. Stekhoven, Peter Bühlmann; 15(Jun):1903−1928, 2014.
### Abstract
We propose a novel and efficient algorithm for maximizing the observed log-likelihood of a multivariate normal data matrix with missing values. We show that our procedure, based on iteratively regressing the missing on the observed variables, generalizes the standard EM algorithm by alternating between different complete data spaces and performing the E-Step incrementally. In this non-standard setup we prove numerical convergence to a stationary point of the observed log- likelihood. For high-dimensional data, where the number of variables may greatly exceed sample size, we perform regularization using a Lasso-type penalty. This introduces sparsity in the regression coefficients used for imputation, permits fast computation and warrants competitive performance in terms of estimating the missing entries. We show on simulated and real data that the new method often improves upon other modern imputation techniques such as k-nearest neighbors imputation, nuclear norm minimization or a penalized likelihood approach with an $\ell_1$-penalty on the concentration matrix.
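The paper's own pattern-alternating EM procedure is not reproduced here, but the flavour of "regress the missing on the observed with an $\ell_1$ penalty" can be sketched with scikit-learn's iterative imputer (a loose stand-in, not the authors' algorithm; the synthetic data and the penalty value alpha=0.1 are arbitrary placeholders):

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
cov = 0.3 * np.eye(5) + 0.7 * np.ones((5, 5))          # correlated Gaussian toy data
X = rng.multivariate_normal(np.zeros(5), cov, size=200)
mask = rng.random(X.shape) < 0.2                        # knock out ~20% of the entries
X_missing = np.where(mask, np.nan, X)

# Each variable with missing values is iteratively regressed on the others
# using a Lasso, i.e. sparse regression coefficients for the imputation.
imputer = IterativeImputer(estimator=Lasso(alpha=0.1), max_iter=20, random_state=0)
X_imputed = imputer.fit_transform(X_missing)
print("mean absolute imputation error:", np.abs(X_imputed[mask] - X[mask]).mean())
```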
|
2018-08-15 14:52:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4467499256134033, "perplexity": 997.9220257256521}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210133.37/warc/CC-MAIN-20180815141842-20180815161842-00563.warc.gz"}
|
https://da.overleaf.com/latex/templates/cumulative-dissertation-template/rdwdbhmwfyyc
|
# Cumulative Dissertation Template
Author
Pavol Harar
Abstract: A cumulative dissertation is by its nature a different document from a standard monograph dissertation. Unfortunately, the university guidelines sometimes do not provide a LaTeX template or do not exactly specify the structure of such a document. Therefore, to make the final document as readable as possible, we might want to structure it more than the standard thesis templates do (e.g. adding multiple bibliographies, multiple tables of contents, etc.). Unfortunately, this process is not always straightforward and the interplay between additional packages can cause trouble. I encountered multiple problems while preparing the template for my thesis, which are solved in this template, and I hope this document will save some time for anybody wishing to structure their thesis similarly.
|
2020-12-04 21:21:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24325498938560486, "perplexity": 1334.9540233766988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00709.warc.gz"}
|
https://www.ideals.illinois.edu/handle/2142/100649
|
## Files in this item
FilesDescriptionFormat
application/vnd.openxmlformats-officedocument.presentationml.presentation
1194218.pptx (3MB)
PresentationMicrosoft PowerPoint 2007
application/pdf
3254.pdf (20kB)
AbstractPDF
## Description
Title: BLUE SHIFTED HYDROGEN BOND IN CH/D3CN…HCCl3 COMPLEXES
Author(s): Behera, Bedabyas
Contributor(s): Das, Puspendu Kumar
Subject(s): Fundamental interest
Abstract: H-bonded complexes between CHCl$_3$ and CH/D$_3$CN have been identified by FTIR spectroscopy in the gas phase at room temperature. With increasing partial pressure of the components, the C-H stretching fundamental shifts to the blue, which has been identified as due to the C-H...N interaction. The C-H stretching frequency of CHCl$_3$ with CH$_3$CN and CD$_3$CN is shifted by +8.7 and +8.6 cm$^{-1}$, respectively. By using quantum chemical calculations at the MP2/6-311++G$^{**}$ level, we predict the geometry, electronic structural parameters, binding energy, and spectral shift in the H-bonded complexes. The potential energy scans of the above complexes as a function of the C...N distance show that the H-bonding interaction is predominantly due to the contribution of two opposing forces, i.e., the electrostatic attraction between H and N, which leads to C-H bond elongation with a consequent red-shift, and the electronic and nuclear repulsion between C and N, which results in C-H bond contraction and a blue-shift of the C-H stretching frequency. The net effect of these two opposing forces at the equilibrium complex geometry dictates the nature of the shift, although the influence of the surrounding atoms bonded to the atoms that are directly involved in the H-bonding cannot be ignored. The total interaction energy (-14.23 kJ/mol) is characterized by Morokuma energy decomposition analysis, where the binding in CH/D$_3$CN...CHCl$_3$ is dominated by electrostatic attraction (-25.86 kJ/mol). The attraction, however, is considerably suppressed by exchange repulsion (+19.54 kJ/mol). Other components like polarization (-5.44 kJ/mol) and charge transfer (-5.06 kJ/mol) make significant contributions to the interaction energy.
Issue Date: 06/22/18
Publisher: International Symposium on Molecular Spectroscopy
Citation Info: APS
Genre: Conference Paper / Presentation
Type: Text
Language: English
URI: http://hdl.handle.net/2142/100649
DOI: 10.15278/isms.2018.FD05
Other Identifier(s): FD05
Date Available in IDEALS: 2018-08-17; 2018-12-12
|
2022-01-26 17:01:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6196855902671814, "perplexity": 6000.002648213573}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304959.80/warc/CC-MAIN-20220126162115-20220126192115-00344.warc.gz"}
|
https://zbmath.org/?q=an%3A1310.53002
|
Function theory on symplectic manifolds. (English) Zbl 1310.53002
CRM Monograph Series 34. Providence, RI: American Mathematical Society (AMS) (ISBN 978-1-4704-1693-5/hbk). 203 p. (2014).
The subject of this book is the study of function theory on symplectic manifolds. Symplectic geometry arose naturally in the context of classical mechanics, namely the cotangent bundle of a manifold $$M$$ is a phase space of a mechanical system with the configuration space $$M$$. Starting from 1980 symplectic topology developed exponentially with the introduction of powerful new methods, such as: Gromov’s theory of pseudo-holomorphic curves, Floer homology, Hofer’s metric on the group of Hamiltonian diffeomorphisms, Gromov-Witten invariants, symplectic field theory and the link to mirror symmetry. Rigidity phenomena have been investigated using these new methods. In this context, function spaces exhibit interesting properties and also provide a link to quantum mechanics.
This book is a monograph on function theory on symplectic manifolds as well as an introduction to symplectic topology. The first chapter introduces the Eliashberg-Gromov $C^0$-rigidity theorem, Arnold's conjecture on symplectic fixed points, Hofer's geometry and $J$-holomorphic curves. Several facets of the $C^0$-robustness of the Poisson bracket are investigated in various chapters of the book. The theory of symplectic quasi-states is described in Chapter five, and the applications to symplectic intersections, Lagrangian knots and Hofer's geometry are presented in Chapter six. The last three chapters give an introduction to Floer homology.
##### MSC:
53-02 Research exposition (monographs, survey articles) pertaining to differential geometry
53D05 Symplectic manifolds (general theory)
53D17 Poisson manifolds; Poisson groupoids and algebroids
53D40 Symplectic aspects of Floer homology and cohomology
57R17 Symplectic and contact topology in high or arbitrary dimension
81S10 Geometry and quantization, symplectic methods
81P15 Quantum measurement theory, state operations, state preparations
53D42 Symplectic field theory; contact homology
|
2022-01-26 22:59:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5768360495567322, "perplexity": 860.4975070532088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305006.68/warc/CC-MAIN-20220126222652-20220127012652-00171.warc.gz"}
|
http://www.unicode.org/mail-arch/unicode-ml/y2013-m01/0127.html
|
# Re: help with an unknown character
Date: Fri, 11 Jan 2013 20:01:14 +0100
2013/1/11 Jukka K. Korpela <jkorpela_at_cs.tut.fi>
> The page http://en.wikipedia.org/wiki/Contradiction (which isn’t particularly convincing or otherwise important) refers to the
> LaTeX Symbol List
> ftp://ftp.funet.fi/pub/TeX/CTAN/info/symbols/comprehensive/symbols-a4.pdf
> which describes, in clause “3 Mathematical Symbols”, some notations used
> for contradiction. None of them resembles much the symbol in the image.
> What comes closest is \blitza, but it’s still rather different, and there
> is no information of what it might be in Unicode terms.
>
In fact what is expressed is not a contradiction, but a symbol for FALSE
(as opposed to TRUE).
But mathematics also includes assertions that are neither FALSE nor TRUE but
UNDECIDABLE (and it can be PROVEN that such an assertion is undecidable
within a logic system and its axioms, which means that you can derive two
distinct logic systems where the undecidable assertion is arbitrarily set
as TRUE or FALSE).
There's also the need to express cases where assertions have some other
probability of being TRUE or FALSE (instead of just 0% and 100%), and
you'll need a symbol to express this probability, because it is sometimes
computable within the logic system itself. Sometimes this probability is
not absolute and could lie within a range (the UNDECIDABLE state means that
the probability range is [0%..100%] inclusive). This includes cases like
the results of operations that are supposed to return a number, where you'll
need the concept of "NaN" (not a number), and even further ranges of NaN
values indicating the cause of this undecidability.
Mathematics has a lot of logic (and numeric) systems (in fact their
possible number is most probably infinite). For each of them, you need more
symbols to express your assertions. How many symbols will you need? Each
mathematical theory studying one of them will then need to create its own
symbols.
Received on Fri Jan 11 2013 - 13:03:50 CST
This archive was generated by hypermail 2.2.0 : Fri Jan 11 2013 - 13:03:51 CST
|
2018-02-20 02:47:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8899420499801636, "perplexity": 3564.5800971213007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812871.2/warc/CC-MAIN-20180220010854-20180220030854-00293.warc.gz"}
|
https://quant.stackexchange.com/questions/49889/forward-price-vs-futures-price-wilmott/49904
|
# Forward price vs. futures price - Wilmott
I am reading Paul Wilmott's book PWOQF2, and there is something I don't get in his derivation of the convexity adjustment between forward and futures prices (chap. 30).
He models $$S$$ and $$r$$ following SDEs $$dS_t = \mu S_t dt +\sigma S_t dX_1$$
$$dr_t = u(r,t)dt + w(r,t) dX_2$$
$$d\langle X_1, X_2 \rangle_t = \rho dt$$
under the physical measure, the risk-neutral measure dynamics being the same up to the market price of risk term $$\lambda$$.
He shows the well-known result for forward price, i.e.
$$\text{Forward price} = \frac{S}{Z}$$ where $$Z$$ is the relevant zero coupon bond price. Until then everything's fine.
He then writes the futures price as $$F(S, r, t) = \frac{S}{p(r,t)}$$, where $$p$$ is some kind of discount factor. Following his usual routine, we get the pricing PDE for a derivative depending on $$S$$ and $$r$$: $$\frac{\partial F}{\partial t} + \frac{1}{2}\sigma^2S^2\frac{\partial^2 F}{\partial S^2}+\rho\sigma Sw\frac{\partial^2 F}{\partial S \partial r} + \frac{1}{2}w^2\frac{\partial^2 F}{\partial r^2} + rS\frac{\partial F}{\partial S} + \left( u - \lambda w \right)\frac{\partial F}{\partial r} = 0$$
And then he derives a PDE for $$p$$: $$\frac{\partial p}{\partial t} + \frac{1}{2}w^2\frac{\partial^2 p}{\partial r^2} + \left( u - \lambda w \right)\frac{\partial p}{\partial r} - rp \underline{-w^2\frac{\left(\frac{\partial p}{\partial r}\right)^2}{q} + \rho\sigma\beta\frac{\partial p}{\partial r}} = 0$$ commenting "Just plug the similarity form into the equation to see this".
My questions are :
1. What similarity form? And into which equation? (Not clear at all to me…)
2. Do you guys have any ideas where the $$q$$ and $$\beta$$ in the underlined terms (the famous convexity adjustment) come from? They never appear in the equations given at the beginning of the section (yes, the long summary was about that)
Thanks a lot for your help!
Modulo the two $$\beta$$ and $$q$$ errors, the proof is actually not that complicated. The similarity solution is simply $$F(S, r, t) = \frac{S}{p(r, t)}$$ and it has to be substituted into the previous pricing PDE. We thus replace:
$$\frac{\partial F}{\partial t} = -\frac{S}{p^2}\frac{\partial p}{\partial t}$$
$$\frac{\partial F}{\partial S} = \frac{1}{p}$$
$$\frac{\partial^2 F}{\partial S^2} = 0$$
$$\frac{\partial F}{\partial r} = -\frac{S}{p^2}\frac{\partial p}{\partial r}$$
$$\frac{\partial^2 F}{\partial r^2} = -\frac{S}{p^4}\left[p^2 \frac{\partial^2 p}{\partial r^2}-2p\left( \frac{\partial p}{\partial r} \right)^2 \right] \equiv -\frac{S}{p^3}\left[p \frac{\partial^2 p}{\partial r^2}-2\left( \frac{\partial p}{\partial r} \right)^2 \right]$$
$$\frac{\partial^2 F}{\partial S \partial r} = -\frac{1}{p^2}\frac{\partial p}{\partial r}$$
$$\Rightarrow -\frac{S}{p^2}\frac{\partial p}{\partial t} - \frac{\rho\sigma Sw}{p^2}\frac{\partial p}{\partial r} - \frac{1}{2} w^2 \frac{S}{p^3}\left[p \frac{\partial^2 p}{\partial r^2}-2\left( \frac{\partial p}{\partial r} \right)^2 \right] + r\frac{S}{p} - (u - \lambda w)\frac{S}{p^2}\frac{\partial p}{\partial r} = 0$$
You multiply by $$-\frac{p^2}{S}$$ and we're done.
$$\frac{\partial p}{\partial t} + \frac{1}{2} w^2 \frac{\partial^2 p}{\partial r^2} + (u - \lambda w) \frac{\partial p}{\partial r} - rp \underline{- \frac{w^2}{\color{red}{p}} \left( \frac{\partial p}{\partial r} \right)^2 + \rho\sigma \color{red}{w} \frac{\partial p}{\partial r}} = 0$$
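To double-check the algebra mechanically, here is a small sympy sketch (not part of the original answer; the symbol names are chosen for illustration). It substitutes the similarity form F = S/p(r, t) into the pricing PDE and verifies that multiplying by -p^2/S reproduces the p-equation:
import sympy as sp
S, r, t = sp.symbols('S r t', positive=True)
sigma, rho, lam = sp.symbols('sigma rho lambda')
p = sp.Function('p')(r, t)
u = sp.Function('u')(r, t)
w = sp.Function('w')(r, t)
F = S / p  # similarity form
# Pricing PDE for F(S, r, t), as given in the question
pde_F = (sp.diff(F, t) + sp.Rational(1, 2) * sigma**2 * S**2 * sp.diff(F, S, 2)
         + rho * sigma * S * w * sp.diff(F, S, r) + sp.Rational(1, 2) * w**2 * sp.diff(F, r, 2)
         + r * S * sp.diff(F, S) + (u - lam * w) * sp.diff(F, r))
# Target PDE for p(r, t), with the corrected 1/p and w factors
pde_p = (sp.diff(p, t) + sp.Rational(1, 2) * w**2 * sp.diff(p, r, 2)
         + (u - lam * w) * sp.diff(p, r) - r * p
         - w**2 * sp.diff(p, r)**2 / p + rho * sigma * w * sp.diff(p, r))
# Multiplying the F-equation by -p^2/S should give exactly the p-equation
print(sp.simplify(-p**2 / S * pde_F - pde_p))  # expect 0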
|
2020-04-04 22:45:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8576490879058838, "perplexity": 507.71674459649233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370525223.55/warc/CC-MAIN-20200404200523-20200404230523-00473.warc.gz"}
|
https://analytixon.com/2015/08/28/whats-new-on-arxiv-28/
|
Heterogeneous programming has started becoming the norm in order to achieve better performance by running portions of code on the most appropriate hardware resource. Currently, significant engineering efforts are undertaken in order to enable existing programming languages to perform heterogeneous execution mainly on GPUs. In this paper we describe Jacc, an experimental framework which allows developers to program GPGPUs directly from Java. By using the Jacc framework, developers have the ability to add GPGPU support into their applications with minimal code refactoring. To simplify the development of GPGPU applications we allow developers to model heterogeneous code using two key abstractions: \textit{tasks}, which encapsulate all the information needed to execute code on a GPGPU; and \textit{task graphs}, which capture the inter-task control-flow of the application. Using this information the Jacc runtime is able to automatically handle data movement and synchronization between the host and the GPGPU; eliminating the need for explicitly managing disparate memory spaces. In order to generate highly parallel GPGPU code, Jacc provides developers with the ability to decorate key aspects of their code using annotations. The compiler, in turn, exploits this information in order to automatically generate code without requiring additional code refactoring. Finally, we demonstrate the advantages of Jacc, both in terms of programmability and performance, by evaluating it against existing Java frameworks. Experimental results show an average performance speedup of 32x and a 4.4x code decrease across eight evaluated benchmarks on a NVIDIA Tesla K20m GPU.
We present two new statistical machine learning methods designed to learn on fully homomorphic encrypted (FHE) data. The introduction of FHE schemes following Gentry (2009) opens up the prospect of privacy preserving statistical machine learning analysis and modelling of encrypted data without compromising security constraints. We propose tailored algorithms for applying extremely random forests, involving a new cryptographic stochastic fraction estimator, and naïve Bayes, involving a semi-parametric model for the class decision boundary, and show how they can be used to learn and predict from encrypted data. We demonstrate that these techniques perform competitively on a variety of classification data sets and provide detailed information about the computational practicalities of these and other FHE methods.
Fractional imputation (FI) is a relatively new method of imputation for handling item nonresponse in survey sampling. In FI, several imputed values with their fractional weights are created for each missing item. Each fractional weight represents the conditional probability of the imputed value given the observed data, and the parameters in the conditional probabilities are often computed by an iterative method such as EM algorithm. The underlying model for FI can be fully parametric, semiparametric, or nonparametric, depending on plausibility of assumptions and the data structure. In this paper, we give an overview of FI, introduce key ideas and methods to readers who are new to the FI literature, and highlight some new development. We also provide guidance on practical implementation of FI and valid inferential tools after imputation. We demonstrate the empirical performance of FI with respect to multiple imputation using a pseudo finite population generated from a sample in Monthly Retail Trade Survey in US Census Bureau.
Anomaly detection is an important task in many real world applications such as fraud detection, suspicious activity detection, health care monitoring etc. In this paper, we tackle this problem from a supervised learning perspective in an online learning setting. We maximize the well-known \emph{Gmean} metric for class-imbalance learning in an online learning framework. Specifically, we show that maximizing \emph{Gmean} is equivalent to minimizing a convex surrogate loss function, and based on that we propose a novel online learning algorithm for anomaly detection. We then show, by extensive experiments, that the performance of the proposed algorithm with respect to the $sum$ metric is as good as a recently proposed Cost-Sensitive Online Classification (CSOC) algorithm for class-imbalance learning over various benchmarked data sets, while keeping the running time close to the perceptron algorithm. Another conclusion is that other competitive online algorithms do not perform consistently over data sets of varying size. This shows the potential applicability of our proposed approach.
Heterogeneous systems consisting of general-purpose processors and different types of hardware accelerators are becoming more and more common in HPC systems. Especially FPGAs provide a promising opportunity to improve both performance and energy efficiency of such systems. Adding FPGAs to clouds or data centers allows easy access to such reconfigurable resources. In this paper we present our cloud service models and cloud hypervisor called RC3E, which integrates virtualized FPGA-based hardware accelerators into a cloud environment. With our hardware and software framework, multiple (virtual) user designs can be executed on a single physical FPGA device. We demonstrate the performance of our approach by implementing up to four virtual user cores on a single device and present future perspectives for FPGAs in cloud-based data environments.
This extended abstract presents ThreadPoolComposer, a high-level synthesis-based development framework and meta-toolchain that provides a uniform programming interface for FPGAs portable across multiple platforms.
|
2023-01-30 09:17:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1956075131893158, "perplexity": 1337.7523059119876}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499804.60/warc/CC-MAIN-20230130070411-20230130100411-00225.warc.gz"}
|
https://tex.stackexchange.com/questions/558763/multirow-2-vertical-center-text
|
multirow (2) vertical center text
MWE:
\documentclass{standalone}
\usepackage{booktabs}
\usepackage{multirow}
\begin{document}
\begin{tabular}{@{}l|llll@{}}
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}{Test}} & \multicolumn{4}{c}{A} \\ \cmidrule(l){2-5}
\multicolumn{1}{c|}{} & 1 & 2 & 3 & 4 \\ \midrule
& & & & \\
& & & & \\ \bottomrule
\end{tabular}
\end{document}
Output:
I saw this and this. Somewhere it says I have to use makecell or an m column, but I was not able to grasp it. Any help in making the "Test" vertically aligned?
• Since rules from booktabs package add some vertical white space above and below them, hence the incompatibility with vertical lines (please keep that in mind) and the gaps around the intersections, \multirow{2} will not result in the expected output. You could try with something like \multirow{2.4} or other non-integer values. Aug 15, 2020 at 16:58
• Side note on your MWE: Compiling results in a bunch of error messages since you can't have a float like table inside of the standalone class. Either keep the table and switch to article or a different standard class or stick with standalone and remove the table. Aug 15, 2020 at 17:00
• Depending on the actual contents of your table, it might be better to not vertically center the contents, but leave them top aligned as they usually are. If you with to keep the booktabs rules, stay away from vertical lines entirely. Aug 15, 2020 at 17:02
• See also: Proper centering with cmidrule and multi- row and column, Vertical alignment using multirow and booktabs and Vertically centering of text in multirow in table when using \cmidrule for more related questions that address vertically centered contents of \multirow next to a \cmidrule. Aug 15, 2020 at 17:06
• @leandriis, Thanks for the link, 1st link indeed solved my problem. changed \multirow{2}{*}{Test} to \multirow{2}[3]{*}{Test}. However, did not understand what [N] does. Can you explain it? Aug 15, 2020 at 17:52
Here is what you can do with {NiceTabular} of nicematrix (with the latest version: 5.4 of 2020-10-06).
\documentclass{article}
\usepackage{booktabs}
\usepackage{nicematrix}
\begin{document}
\begin{NiceTabular}{@{}l|llll@{}}
\toprule
\Block{2-1}{Test} & \Block{1-4}{A} \\ \cmidrule(l){2-5}
& 1 & 2 & 3 & 4 \\ \midrule
& & & & \\
& & & & \\ \bottomrule
\end{NiceTabular}
\end{document}
• In {NiceTabular}, you use \Block to merge cells both vertically and horizontally.
• The content of the block is composed at the mathematical center of the rectangle of the merged cells (not as with \multirow).
• The vertical rules are not broken and thus, are compatible with booktabs (but you must be aware that the use of vertical rules is not at all in the spirit of booktabs).
• You need several compilations (because nicematrix uses PGF/Tikz nodes).
• Good to know about nicematrix, thanks. Aug 15, 2020 at 17:54
Avoid vertical rules for better visual impact and to avoid trouble with the gaps.
The row-count argument of \multirow can be changed to a decimal value for finer up-down adjustment/centering -- here changed to a value of 2.4
\documentclass{article}
\usepackage{booktabs}
\usepackage{multirow}
\begin{document}
\begin{table}[]
\begin{tabular}{@{}lllll@{}} \toprule
\multirow{2.4}{*}{Test} & \multicolumn{4}{c}{A} \\ \cmidrule(l){2-5}
& 1 & 2 & 3 & 4 \\ \midrule
& X & Y & Z & A \\
& P & Q & R & S \\ \bottomrule
\end{tabular}
\end{table}
\end{document}
|
2022-06-24 23:07:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9951210618019104, "perplexity": 1881.6631654468}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00684.warc.gz"}
|
https://scriptinghelpers.org/questions/118662/parent-doesnt-exist-indexing-nil
|
0
# Parent doesn't exist? Indexing nil.
Hello, I'm making a data store changer for a speedrun game. I already have the data store set up, and I'm working on the data store updater. Here is my code:
local Players = game:GetService("Players")
local pausetime = script.Parent
local time = script.Parent.Parent:FindFirstChild("SpeedunTimerGUI")
local function onPauseButtonTouch(touchpart)
local partParent = touchpart.Parent
local humanoid = partParent:FindFirstChildWhichIsA("Humanoid")
if humanoid then
local player = Players:GetPlayerFromCharacter(partParent)
if pb then
end
end
end
end
pausetime.Touched:Connect(onPauseButtonTouch())
I get the error on line 7 telling me "Workspace.Speedrun Timer.Timer_Pause.Script:7: attempt to index nil with 'Parent'". What is the problem with referencing the parent?
1
oilsauce 196
4 days ago
Edited 4 days ago
The error Workspace.Speedrun Timer.Timer_Pause.Script:7: attempt to index nil with 'Parent' basically means whatever you're trying to get the Parent of, is nil. In this case, that would be touchpart.
Why is it nil? The issue is at line 20. When connecting the Touched event to onPauseButtonTouch function, you're actually calling the function instead of connecting it, by putting two brackets () behind it.
This caused your function to run at line 20, and since you haven't given the function any arguments, touchpart was nil.
So instead of pausetime.Touched:Connect(onPauseButtonTouch()), the appropriate code for this case is pausetime.Touched:Connect(onPauseButtonTouch).
0
Tysm! BrightFeather8 122 — 4d
|
2021-01-19 05:33:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4162696301937103, "perplexity": 6619.198045194192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517966.39/warc/CC-MAIN-20210119042046-20210119072046-00141.warc.gz"}
|
https://crypto.stackexchange.com/questions/77669/why-cant-ransomware-practically-use-rsa-to-encrypt-all-files
|
# Why can't ransomware practically use RSA to encrypt all files?
I understand that a few ransomware have used an RSA public key to encrypt all files belonging to the victim. This is a bullet-proof system in terms of its security because the private key is always safe with the hackers.
But most ransomware use the hybrid encryption approach, and in doing so they make mistakes that allow security researchers to build decryptors. The attractiveness of this hybrid approach lies in the speed of symmetric encryption. But because the hackers make implementation mistakes in this approach, why don't they all switch to asymmetric-only encryption? It seems easier to implement programmatically than the hybrid approach. Is the slow speed of RSA the only factor that limits the hackers? How slow is RSA compared to AES for, let's say, 500 MB of data on an average i5 processor?
I have searched the Internet looking for a comparison between the speeds of RSA and AES for bulk data encryption on modern computers but found nothing. I understand that criminals cannot be reached for comment on why they don't use an asymmetric-only approach for bulk encryption, but I am looking to understand the practical reason why bulk encryption with RSA is so undesirable.
• "It seems easier to implement programmatically than the hybrid approach." [citation needed] what exactly is the encryption scheme you're proposing? – Maeher Feb 18 at 8:44
• I don't think this is is very particular to this scenario to be honest, it is just the same as with any encryption scheme, ransomware or not. – Maarten Bodewes Feb 18 at 13:58
• AES vs RSA is not the issue. AES when used correctly is virtualy impossible to break. The weakness is in the key management. – shumy Feb 18 at 14:24
## 2 Answers
RSA is really very slow compared with symmetric ciphers. You can check this yourself running e.g. openssl benchmark:
openssl speed rsa
On my machine (and with openssl 1.1.1a) I get
Doing 2048 bits public rsa's for 10s: 330640 2048 bits public RSA's in 10.00s
so we can do ~33k encryptions per second, and the size of the encrypted block is less than the modulus, so less than 256 bytes. If the attacker wanted to encrypt a 1 GB file, it would take about 2 minutes. On the other hand,
openssl speed aes
shows that AES-CBC can encrypt ~300MB per second
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes
aes-128 cbc 94234.08k 136019.99k 137980.25k 304759.81k 304166.23k 315654.14k
so the same 1GB file would be encrypted in 3 seconds.
For the attackers, speed is important, since the faster the files are encrypted, the less likely the process will be interrupted.
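If you want to reproduce the comparison without openssl, here is a rough Python sketch (it assumes the pyca/cryptography package is available; the exact throughput will vary by machine). It times block-wise RSA-OAEP against AES-CTR on the same buffer:
import os, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
data = os.urandom(8 * 1024 * 1024)  # 8 MiB test buffer
# RSA-2048 with OAEP(SHA-256): at most 256 - 2*32 - 2 = 190 plaintext bytes per block
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
start = time.perf_counter()
for i in range(0, len(data), 190):
    pub.encrypt(data[i:i + 190], oaep)
rsa_rate = len(data) / (time.perf_counter() - start)
# AES-128 in CTR mode, one pass over the whole buffer
enc = Cipher(algorithms.AES(os.urandom(16)), modes.CTR(os.urandom(16))).encryptor()
start = time.perf_counter()
enc.update(data)
enc.finalize()
aes_rate = len(data) / (time.perf_counter() - start)
print(f"RSA-OAEP: {rsa_rate / 2**20:.1f} MiB/s   AES-CTR: {aes_rate / 2**20:.1f} MiB/s")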
• "RSA" is also not an encryption scheme. Now you could use a secure RSA based encryption scheme like RSA-OAEP, but how you extend this to long messages without atrocious ciphertext expansion is not necessarily clear. – Maeher Feb 18 at 8:53
• @Maeher Yes, you need to use RSA in an encryption mode, like you said e.g. RSA-OAEP (that's why I said " the size of the encrypted block is less than the modulus" because we need padding). Since the security requirement for a malware is just inability to decrypt for known plaintexts and not stronger properties like CCA2, most likely basic modes like ECB or CBC would be sufficient. – Krystian Feb 18 at 9:04
• Yes you can do blockwise encryption if you only need CPA security (which should indeed be fine for the application). But the ciphertext expansion would be pretty horrible compared to just using hybrid encryption. – Maeher Feb 18 at 9:09
• @Maeher: Agreed, the ciphertext expansion incurred by RSA-OAEP is a major issue for essentially in-place file encryption use case required by ransomware. – Krystian Feb 18 at 10:00
First of all, this idea is based on a misconception:
"But most ransomware use the hybrid encryption approach and during that they make mistakes allowing security researchers to build decryptors."
This, in my opinion, is not correct. The first ransomware used just symmetric encryption. Now if that symmetric key is left behind, it may be possible to retrieve it. You must make a terrible mistake in the implementation if you use hybrid cryptography with separate symmetric keys per file. There will of course still be a ransomware nitwit (these are not highly paid professionals) who manages to screw it up, but I don't believe that is common.
I have searched the Internet looking for a comparison between speeds of RSA versus AES for bulk data encryption in modern computers but found nothing.
Oh dear, you must up your Google-fu.
The other answer mentions encryption speed, but there is more to it than that. I think that the encryption speed of RSA by itself does not need to be a huge problem. However, there is another disadvantage: efficiency when it comes to size.
If you encrypt a plaintext using RSA then you will have a certain disadvantage when it comes to ciphertext expansion compared to the plaintext. How big the ciphertext expansion is depends on the padding scheme used. For RSA PKCS#1 v1.5 padding you can go as low as an overhead of 11 bytes (although larger padding sizes are more secure). For OAEP, well, somebody created a table here. So if you e.g. use RSA PKCS#1 v1.5 with a 2048 bit key you'd expand the ciphertext with 11 bytes per 256 - 11 = 245 bytes. That means that files will suddenly be a lot larger (something that might be very noticable) or you might of course run out of disk space to perform the encryption at all.
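A back-of-envelope sketch of that expansion, using the per-block overheads quoted above (a minimum of 11 bytes for PKCS#1 v1.5, and 2·32+2 bytes for OAEP with SHA-256 on a 2048-bit key):
modulus_bytes = 256  # RSA-2048
for scheme, overhead in [("PKCS#1 v1.5", 11), ("OAEP-SHA256", 2 * 32 + 2)]:
    chunk = modulus_bytes - overhead  # plaintext bytes carried per 256-byte ciphertext block
    print(f"{scheme}: {chunk} bytes in, {modulus_bytes} bytes out, "
          f"expansion x{modulus_bytes / chunk:.3f}")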
Another problem is that RSA requires the use of a random number generator to generate the padding (well, unless deterministic scheme is used, but that would be overestimating the prowess of the ransomware creator). If you keep hitting the random number generator then it may decide to request additional entropy, and that means that your system may run out of it. When it runs out, it will stall or the speed will likely degrade to a crawl. That will be rather noticeable, especially if there are other applications that request random data from the system.
Finally, although the encryption speed may still be enough to encrypt lots of data, it is questionable if the decryption speed will be sufficient. Decryption speed is much lower for RSA than encryption speed. That said, the person or persons behind the ransomware will probably not care all that much.
• Thanks for providing exact size overhead for PKCS#1 1.5 and OAEP encryption - exact numbers were missing in my answer. – Krystian Feb 18 at 14:16
|
2020-09-25 10:21:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3157016932964325, "perplexity": 1765.5176338525027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00431.warc.gz"}
|
https://mathoverflow.net/questions/82324/comparing-two-markov-chains
|
# Comparing two Markov chains
I thought that this question is more appropriate for math.stackexchange, where I asked it, but seeing how I got no response, here it goes:
I am interested in the question of the positive recurrence of a Markov chain that "converges" to another Markov chain known to be positive recurrent. The following is, in the context of queueing theory, a concrete example of what I mean.
Consider a system where a single server is serving two clients. Time is slotted. For client $i \in \{1, 2\}$, the number of packets arriving in each time step is a iid Bernoulli random variable with probability $p_i$.
Each client has queues of infinite capacity.
Assume $p_1 + p_2 < \frac{1}{8}$.
At each time slot, a client with non-empty queue may choose to submit one packet to the server for processing. This packet will be processed and leave the relevant queue if and only if the other client did not submit a packet in that time slot.
Now consider the following simple algorithm. Assume that client $i$ knows $p_i$. Then at each time slot, client $i$ (if its queue is non-empty) will submit a packet to the server with iid probability $2 p_i$. Let $j$ be the other client.
The probability of a packet submitted by client $i$ being processed is at least $1 - 2 \cdot p_j > \frac{3}{4}$. Thus, the probability of the size of a non-empty queue at client $i$ reducing by $1$ is at least $2 \cdot p_i \times \frac{3}{4} > p_i$. Since the departure process has a higher rate than the arrival process, it is clear that the corresponding Markov chain is positive recurrent and the queues are stable.
Here comes my question. Assume the clients do not know their own $p_i$'s. Naturally, they could approximate it as follows: at time $T$, the approximation $\hat{p}_i(T)$ is defined by $\hat{p}_i(T) = \min \left\{ 1, \frac{A(T)}{T} \right\}$, where $A(T)$ is the number of packets that have arrived up to time $T$. The clients can now use $\hat{p}_i(T)$ instead of $p_i$ in the above algorithm.
It seems to me that since $\hat p_i(T)$ converges almost surely to $p_i$, the resulting Markov chain will be positive recurrent too. But I am not sure this simply can be stated as true, and/or how to show that this holds.
Thanks.
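As an empirical sanity check of the claim, here is a small simulation sketch (not part of the original post; the arrival rates below are illustrative and satisfy $p_1+p_2<\frac{1}{8}$):
import random
def simulate(p=(0.05, 0.06), T=200_000, adaptive=True):
    q = [0, 0]        # queue lengths
    arrived = [0, 0]  # packets arrived so far, used for the estimator hat{p}_i(T)
    for t in range(1, T + 1):
        for i in (0, 1):                       # Bernoulli arrivals
            if random.random() < p[i]:
                q[i] += 1
                arrived[i] += 1
        submitted = []
        for i in (0, 1):                       # submit with probability 2*p_i or 2*hat{p}_i
            rate = min(1.0, arrived[i] / t) if adaptive else p[i]
            if q[i] > 0 and random.random() < 2 * rate:
                submitted.append(i)
        if len(submitted) == 1:                # served only if the other client stayed silent
            q[submitted[0]] -= 1
    return q
print(simulate(adaptive=False), simulate(adaptive=True))  # both queues should stay small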
-
This question has meanwhile been answered on Math.SE, see math.stackexchange.com/questions/86052/… – Stefan Kohl Dec 6 '13 at 15:25
|
2015-04-19 03:24:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6555226445198059, "perplexity": 382.84604168429223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246637364.20/warc/CC-MAIN-20150417045717-00220-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://mathforces.com/problems/64/
|
# Prime powers
Author: mathforces
Problem has been solved: 38 times
Русский язык | English Language
Find the sum of all primes $p$ for which there are natural numbers $x$, $y$, and $z$ such that $x ^ p + y ^ p + z ^ p - x - y - z$ is a product of three different prime numbers.
|
2023-03-29 15:39:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7217955589294434, "perplexity": 409.2244328528357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00017.warc.gz"}
|
https://byjus.com/question-answer/in-the-figure-above-calculate-the-torque-about-the-pendulum-suspension-point-produced-by-the/
|
Question
# In the figure above, Calculate the torque about the pendulum suspension point produced by the weight of the bob, given that the mass is 40 cm below the suspension point, measured vertically, and m = 0.50 kg.
A. 0.49 N×m
B. 0.98 N×m
C. 1.7 N×m
D. 2.0 N×m
E. 3.4 N×m
Solution
## The correct option is B: $$0.98\ N \times m$$
Given: $$m = 0.5\ kg$$, $$L = 40\ cm = 0.4\ m$$
Torque about the suspension point: $$\tau = L F \sin\theta$$, where $$\theta$$ is the angle between $$\vec{L}$$ and $$\vec{F}$$.
From the figure, $$\theta = 30^o$$
$$\implies \tau = L\,(mg)\,\sin 30^o = 0.4 \times (0.5 \times 9.8) \times 0.5 = 0.98$$ Nm
Physics
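A quick numeric check of the computation in the solution above (a sketch; it assumes g = 9.8 m/s² as used there):
import math
L, m, g, theta = 0.4, 0.5, 9.8, math.radians(30)
print(L * m * g * math.sin(theta))  # ≈ 0.98 N·m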
|
2022-01-21 06:02:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9530544877052307, "perplexity": 7578.467616311348}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302723.60/warc/CC-MAIN-20220121040956-20220121070956-00279.warc.gz"}
|
https://www.blaumut.com/benzoin-to-yfmgt/c7c06f-general-mathematics-calculator
|
Mathematics is the science of quantity. Trig. Symbolab: equation search and math solver - solves algebra, trigonometry and calculus problems step by step. In general, students are encouraged to explore the various branches of mathematics, both pure and applied. This sample General Mathematics paper shows the format of the examination for 2017. There is an emphasis on the development of real numbers and their everyday usage in learning mathematics. It is important that topic is mastered before continuing... Read More. Advanced Calculator for school or study that allows you to calculate formulas, solve equations or plot functions. I am only able to help with one math problem per session. 1) In which civilization dot patterns were first employed to represent numbers? Book Condition: New. The first solid-state electronic calculator was created in the early 1960s. Find items in libraries near you. 88 pages. Please ensure that your password is at least 8 characters and contains each of the following: You'll be able to enter math problems once our session is over. Algebra. By using this website, you agree to our Cookie Policy. Home. Example: How many 2 s do we multiply to get 8? Easy to use and read. The TI-30X IIS is also available in pink and blue. page 3 of 21 PLEASE TURN OVER (d) As an alternative option, Dimitrios could have borrowed the $150 000 using a reducing-balance (principal and interest) loan with an interest rate of 3.9% per annum, compounded quarterly. Other Stuff. Created: Aug 30, 2016. By using this website, you agree to our Cookie Policy. For a new problem, you will need to begin a new live expert session. The first term of an arithmetic sequence is equal to$\frac{5}{2}\$ and the common difference is equal to 2. Find the value of the 20 th term. docx, 15 KB. 0.200. Delve into mathematical models and concepts, limit value or engineering mathematics and find the answers to all your questions. Online Mathematics Quiz with Answers . Math Calculators . Show Ads. At first, I simply included a copy of the calculator on my existing math calculator pages. Biochemical calculations : how to solve mathematical problems in general biochemistry. Free Pre-Algebra, Algebra, Trigonometry, Calculus, Geometry, Statistics and Chemistry calculators step-by-step 7. The NSCAS Calculator Policy is designed to ensure fairness for all students, avoid disturbances in the testing room, and protect the security of the test materials. Math for Everyone. This video discusses the process of solving rational equations. Our math books are for all study levels. Free math lessons and math homework help from basic math to algebra, geometry and beyond. The Math Calculator will evaluate your problem down to a final solution. The Department of Mathematics maintains no single policy on student calculator usage in its courses. TI-Nspire Calculator Companion complements the Maths QuestÂ11ÂStandard General Mathematics 2e.Shipping may be from our Sydney, NSW warehouse or from our UK or US warehouse, depending on stock availability. High School Math Solutions – Trigonometry Calculator, Trig Equations. Answer: Chinese. The General Mathematics subject provides students with a breadth of mathematical and statistical experience that encompasses and builds on all three strands of the F-10 curriculum. Scientific Calculator - A great Scientific Calculator. 
Differential Equation Calculator The calculator will find the solution of the given ODE: first-order, second-order, nth-order, separable, linear, exact, Bernoulli, homogeneous, or inhomogeneous. The differential equation is of many types, namely Ta-Da! 2. Clear and Free! Get exclusive access to content from our 1768 First Edition with your subscription. Currently, we have around 200 calculators to help you "do the math" quickly in areas such as finance, fitness, health, math, and others, and we are still developing more. Visit Cosmeo for explanations and help with your homework problems! Find more Mathematics widgets in Wolfram|Alpha. For example, 4 and −4 are square roots of 16, because 4² = (−4)² = 16. Learn more Accept. General-Codebreaker-2. Advanced Search Find a Library. Free. 5 3 customer reviews. NSCAS General Mathematics Calculator Policy for paper /pencil version . This website uses cookies to ensure you get the best experience on our website. trigonometric-equation-calculator. Buy on Amazon Buy on Staples.com. General Math. You can also add, subtraction, multiply, and divide and complete any arithmetic you need. Calculator Examples » Math Symbols. You only need to make one entry for each qualification – this will cover all the question papers and certification.Every specification is given a national discount (classification) code by the Department for Education (DfE), which indicates its subject area.If a student takes two specifications with the same discount code, Further and Higher Education providers are likely to take the view that they have only achieved one of the two qualifications. » More Examples Trying the examples on the Examples page is the quickest way to learn how to use the calculator. Mathematics 30–1 Formula Sheet For axb2 ++xc=0, x a bbac 2! RELATIONS versus FUNCTIONS GENERAL MATHEMATICS Samar College Galina V. Panela RELATIONS FUNCTIONS A relation is a rule that relates values from a set of values called the domain to a second set of values called the range. Assume that the conditions of the account remain constant. In a previous post, we learned about trig evaluation. Simple Calculator - A nice Simple Free Online Calculator. Try this example now! Current calculator limitations. Find the punchline to a joke by answering the questions. I am currently working on this problem. Our math calculators and math solvers are web-based tools designed to solve different math problems, anything from basic equations to complex integrals or derivatives, in a few seconds.Our calculators and math solvers are online, so you don't need to download anything, and they are absolutelly free. Thus, here also, maths forms an important part of our daily routine. Free algebra and math word problems. According to the Ancient Method, discussed in Indian history which Indian's had forgot after 'Dwapar Yug' or nearly some 5015 years ago time duration from now. GR 11 GENERAL MATHEMATICS M1 BASIC NUMERACY 5 11.1 NUMBERS AND APPLICATIONS Introduction This Unit focuses on the mathematics used every day in our communities to measure, compare and present information numerically. 1. Plots & Geometry. Get the free "General Differential Equation Solver" widget for your website, blog, Wordpress, Blogger, or iGoogle. The natural display shows fractions, roots and exponents as you would expect it from mathematics. A course of study in General Mathematics can establish a basis for further education and employment in the fields of business, commerce, education, finance, IT, social science and the arts. 
Non-calculator Level 1 Functional Skills Mathematics non-calculator paper. In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y) is x. FEATURES Calculate any formula you want and show them in a 2d or 3d plot. In algebra, a quadratic equation (from the Latin quadratus for "square") is any equation that can be rearranged in standard form as ax²+bx+c=0 where x represents an unknown, and a, b, and c represent known numbers, where a ≠ 0. Try MathPapa Algebra Calculator The equation calculator allows you to take a simple or complex equation and solve by best method possible. Help With Your Math Homework. 7 8 9 + Back. Khan Academy Video: Factoring Expressions; Need more problem types? The version below will show you the final answer only. 0. How many digits are there in Hindu-Arabic System? Mathematics books Need help in math? How many digits are there in Hindu-Arabic System? Course Description: This course is an integration of Algebra, Business Mathematics and Logic. Online Abacus - An Online Abacus! Anybody can ask a question Anybody can answer The best answers are voted up and rise to the top Home Questions Tags Users Unanswered Calculate covariant divergence. (2 marks) QuickMath allows students to get instant solutions to all kinds of math problems, from algebra and equation solving right through to calculus and matrices. This is only the second post for AQA GCSE maths revision and it deals with AQA paper 2 Higher Calculator June 2015 based on the new specimen papers published June 2015.. Here’s a link to the AQA paper 2 Higher Calculator June 2015 for you to download.. Here’s the link to the hand written worked answers: AQA paper 2 Higher Calculator June 2015 WORKED ANSWERS These sample assessment materials (SAMs) have been designed to meet the requirements of the new reformed Functional Skills. The Mathematics 2 course, often taught in the 10th grade, covers Quadratic equations, functions, and graphs; Complex numbers; Rational exponents and exponential models; Similarity and Trigonometry; Solids; Circles and other Conic sections; and introductory Probability. Enter the expression you want to evaluate. Preview and details Files included (2) docx, 15 KB. The natural display shows fractions, roots and exponents as you would expect it from mathematics. Resource; Calculator list: Scientific (PDF, 157.0 KB) Related resources. Every nonnegative real number x has a unique nonnegative square root, called the principal square root, which is denoted by √(x), where the symbol √() is called the radical sign or radix. General Math Solvers Below, there is a collection of general math solvers and calculators covering issues like the fractions, percentages, factorization, prime numbers, divisibility and square roots. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. Online Mathematics Quiz with Answers . Calculate the total cost of this option, and compare it … Solvers and Calculators in this section Knowledge of these facts I got after many difficulties faced and… (a) 10 (b) 20 (c) 30 (d) 40. This is a free online math calculator together with a variety of other free math calculators that compute standard deviation, percentage, fractions, and time, along with hundreds of other calculators addressing finance, fitness, health, and more. 
In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose square (the result of multiplying the number by itself, or y ⋅ y) is x. Create lists, bibliographies and reviews: or Search WorldCat. For example, enter 3x+2=14 into the text box to get a step-by-step explanation of how to solve 3x+2=14. For example, 4 and −4 are square roots of 16, because 4² = (−4)² = 16. The calculations are done based on basic mathematical concepts. You can even see the steps (with a subscription)! 1. An example in three variables is x³ + 2xyz² − yz + 1. General Replies. Initial conditions are also supported. Use of calculators for teaching percent to first-year general mathematics students is explained by addressing such areas as sequence of lessons, teaching/learning problems, calculator errors, and recommended instructional strategies. Calculator list: Scientific (PDF, 157.0 KB) Formula sheet: General Mathematics (PDF, 352.9 KB) 3 & 4: Sample: Paper 1 — Multiple choice question book (PDF, 1.2 MB) 3 & 4: Sample: Paper 1 — Question and response book (PDF, 1.5 MB) 3 & 4: Sample: Paper 2 — Question and response book (PDF, 1.3 MB) Timetable . Type your algebra problem into the text box. 7. Choose the exam specification that matches the one you study. Any thing which can be multiplied, divided, or measured, is called quantity.Thus, a line is a quantity, because it can be doubled, trebled, or halved; and can be measured, by applying to it another line, as a foot, a yard, or an ell.Weight is a quantity, which can be measured, in pounds, ounces, and grains. 3. Calculator.net's sole focus is to provide fast, comprehensive, convenient, free online calculators in a plethora of areas. Loading... Save for later. 1. Quick! en. An example of a polynomial of a single indeterminate x is x² − 4x + 7. This website uses cookies to ensure you get the best experience. FEATURES Calculate any formula you want and show them in a 2d or 3d plot. In mathematics, a set of simultaneous equations, also known as a system of equations or an equation system, is a finite set of equations for which common solutions are sought. Art. Cooking and Baking. For K-12 kids, teachers and parents. (a) 100 (b) 200 (c) 1 (d) 0. Browse through the Examples section for an idea of all you can do with this calculator. In its simplest form, a logarithm answers the question: How many of one number do we multiply to get another number? PDF 1MB; Sample assessment materials. If a = 0, then the equation is linear, not quadratic, as there is no ax² term. Free online calculators for math, algebra, chemistry, finance, plane geometry and solid geometry. (a) 10 (b) 20 (c) 30 (d) 40. In your kitchen also, the maths is performed. Teach numbers from 1 to 50 :-) Darts Calculator - Forget the maths, and play Darts!
|
2021-11-30 06:44:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45540565252304077, "perplexity": 1654.8739651332194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00259.warc.gz"}
|
https://www.physicsforums.com/threads/simple-pde.541289/
|
# Simple PDE
1. Oct 17, 2011
### Aidyan
Simple PDE....
I'm trying to solve the PDE:
$\frac{\partial^2 f(x,t)}{\partial x^2}=\frac{\partial f(x,t)}{\partial t}$ with $x \in [-1,1]$ and boundary conditions f(1,t)=f(-1,t)=0.
Thought that $e^{i(kx-\omega t)}$ would work, but that obviously does not fit with the boundary conditions. Has anyone an idea?
2. Oct 17, 2011
### Hootenanny
Staff Emeritus
Re: Simple PDE....
Your equation is the 1D heat equation, the solutions of which are very well known and understood. A google search should yield what you need.
P.S. You will also need some kind of initial condition.
3. Oct 17, 2011
### Aidyan
Re: Simple PDE....
Hmm... it looks like there isn't just a simple solution after all. It seems I'm lacking the basics... I thought this was sufficient data to solve it uniquely. What is the difference between boundary and initial conditions?
4. Oct 17, 2011
### Hootenanny
Staff Emeritus
Re: Simple PDE....
Afraid not, without knowing the temperature distribution at a specific time you aren't going to obtain a (non-trivial) unique solution.
The former specifies the temperature on the spatial boundaries of the domain (in this case x=-1 and x=1). The latter specifies the temperature distribution at a specific point in time (usually t=0, hence the term initial condition).
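For reference, a standard separation-of-variables sketch (assuming some initial condition $f(x,0)=g(x)$ is supplied): the Dirichlet eigenfunctions on $[-1,1]$ are $\sin\left(\frac{n\pi(x+1)}{2}\right)$, so $f(x,t)=\sum_{n\ge 1} c_n \sin\left(\frac{n\pi(x+1)}{2}\right) e^{-\left(\frac{n\pi}{2}\right)^2 t}$, where the $c_n$ are the Fourier sine coefficients of $g$ on that interval. Without specifying $g$, every choice of the $c_n$ satisfies both the PDE and the boundary conditions, which is exactly why an initial condition is needed for a unique solution.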
|
2017-09-20 20:39:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6064019799232483, "perplexity": 1015.0926179906502}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687447.54/warc/CC-MAIN-20170920194628-20170920214628-00390.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-9-mid-chapter-check-point-page-1000/31
|
Precalculus (6th Edition) Blitzer
Step 1. Based on the given conditions, assume the center of the road is the origin. We can write the equation of the ellipse as $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ with $a=\frac{30}{2}=15,\ b=10$.
Step 2. The truck has a width of 10 feet; its upper right corner will have coordinates $(5, 9.5)$.
Step 3. At $x=5$, we have $\frac{5^2}{15^2}+\frac{y^2}{10^2}=1$, which gives $y^2=\frac{800}{9}$. Thus $y\approx9.4\ ft$, which means that a truck with a height of $9.5\ ft$ will not clear the opening.
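A quick numeric check of the clearance computed above (a sketch):
a, b, x = 15.0, 10.0, 5.0
y = b * (1 - x**2 / a**2) ** 0.5
print(round(y, 2))  # ≈ 9.43 ft, just below the 9.5 ft corner of the truck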
|
2021-06-25 04:37:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968663215637207, "perplexity": 129.7052304930309}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488567696.99/warc/CC-MAIN-20210625023840-20210625053840-00634.warc.gz"}
|
https://cortexjs.io/compute-engine/
|
The CortexJS Compute Engine is a JavaScript library for symbolic computing and numerical evaluation of mathematical expressions.
The Compute Engine is for anyone who wants to make technical computing apps in the browser or in server-side environments such as Node: educators, students, scientists and engineers.
The Compute Engine can:
## Parse and Serialize LaTeX
Internally, the Compute Engine manipulates expressions represented with the MathJSON format. It’s a JSON representation of the Abstract Syntax Tree of the expression. It is easy to manipulate programmatically and can be written by hand. However, you might prefer to use a more concise and familiar syntax, such as LaTeX. The Compute Engine includes utilities to convert to and from LaTeX strings.
To parse a LaTeX string and serialize to a LaTeX string, use the ce.parse() and ce.serialize() functions.
import { ComputeEngine } from '@cortex-js/compute-engine';
const ce = new ComputeEngine();
console.log(ce.parse('5x + 1'));
// -> ["Add", ["Multiply", 5, "x"], 1]
console.log(ce.serialize(["Add", ["Power", "x", 3], 2]));
// -> x^3 + 2
To input math using an interactive mathfield, use MathLive.
A MathLive mathfield works like a textarea in HTML, but for math. It provides its content as a LaTeX string or a MathJSON expression, ready to be used with the Compute Engine.
## Symbolic Computing and Numerical Evaluation
To evaluate a symbolic expression, use the evaluate() function.
The result of evaluate() is an expression:
• If the expression can be evaluated numerically, the result is a number
• If it can’t be evaluated numerically, the result is a symbolic expression.
import { evaluate, parse, serialize } from '@cortex-js/compute-engine';
console.log(evaluate(parse('\\frac{\\sqrt{5}}{3}')));
// ➔ 0.7453559925
console.log(serialize(evaluate(parse('2x + 3x'))));
// ➔ 5x
The Compute Engine supports arbitrary precision floating points and complex numbers.
The Compute Engine can also simplify, find patterns, substitute terms, apply rewrite rules, compare and format expressions.
## Customization
The Compute Engine includes a robust library of mathematical functions.
To customize the dictionaries that define the math functions, create and configure a ComputeEngine instance.
The ComputeEngine instance also provides access to additional features such as defining assumptions about symbols: x is a positive Real number, n is an Integer.
const ce = new ComputeEngine(ComputeEngine.getDictionary('arithmetic'));
Arithmetic: Add, Multiply
Calculus: Derive, Integrate
Collections: Sequence, List, Dictionary, Set
Core: Missing, Nothing, None, All, Identity, InverseFunction, LatexTokens
Logic: And, Or, Not
Sets: Union, Intersection
Special Functions: Erf, Gamma, Factorial
Trigonometry: Cos, Sin, Tan
|
2021-09-23 03:10:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528231859207153, "perplexity": 8174.93447654866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057416.67/warc/CC-MAIN-20210923013955-20210923043955-00114.warc.gz"}
|
http://academic.research.microsoft.com/Publication/27663668/long-range-connections-in-transportation-networks
|
# Long-Range Connections in Transportation Networks, by Matheus P. Viana and Luciano da F. Costa
Since its recent introduction, the small-world effect has been identified in several important real-world systems. Frequently, it is a consequence of the existence of a few long-range connections, which dominate the original regular structure of the systems and imply that each node becomes accessible from other nodes after a small number of steps, typically of order $\ell \propto \log N$. However, this effect has been observed in purely topological networks, where the nodes have no spatial coordinates. In this paper, we present an analogue of the small-world effect observed in real-world transportation networks, where the nodes are embedded in a three-dimensional space. Using the multidimensional scaling method, we demonstrate how the addition of a few long-range connections can substantially reduce the travel time in transportation systems. We also investigated the importance of long-range connections when the systems are under an attack process. Our findings are illustrated for two real-world systems, namely the London urban network (streets and underground) and the US highways network enhanced by some of the main US airline routes.
Published in 2010.
|
2013-12-10 18:09:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3685851991176605, "perplexity": 1971.2175642053842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164022934/warc/CC-MAIN-20131204133342-00008-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://physics.stackexchange.com/questions/482150/which-method-of-calculating-mutual-inductance-to-use
|
# Which method of calculating mutual inductance to use?
I am planning to make a wireless charging device for a school project using coupled induction. However, I have found a few different ways of calculating mutual inductance for coupled solenoids and cannot figure out which to use.
The first method takes the form $$M_{12} = \frac{N_2 \Phi_{12}}{i_1}$$. For two perfectly coupled solenoids, this simplifies to $$N_2 * \frac{B_1 A}{i_1} = N_2 * \frac{\mu_0 N_1 i_1 A/ l_1}{i_1} = \mu_0 \frac{N_1 N_2 A}{l_1}$$, where $$\Phi_{12}$$ is the magnetic flux through solenoid 2 that is generated by solenoid 1, $$B_1$$ is the magnetic field generated by coil 1, $$A$$ is the cross-sectional area of the solenoids (it is the same for both), $$l_1$$ is the length of the first coil, and $$N_1$$ and $$N_2$$ are the numbers of loops in each respective solenoid. The assumption of perfect inductive coupling is equivalent to the assumption that all of the magnetic flux generated by coil 1 passes through coil 2 (used for simplicity).
The next method finds $$M_{21}$$, which is $$M_{21} = \frac{N_1 \Phi_{21}}{i_2} = \mu_0 \frac{N_1 N_2 A}{l_2}$$, where $$\Phi_{21}$$ is the magnetic flux through solenoid 1 that is generated by solenoid 2 and $$l_2$$ is the length of the second coil. The simplification of the formula follows the same steps as for $$M_{12}$$.
However, the reciprocity theorem states that $$M_{12} = M_{21}$$. There is a clear discrepancy between the first two calculations of mutual inductance; they differ in the denominator terms $$l_1$$ and $$l_2$$. I can't determine the cause of this discrepancy: does reciprocity only apply to solenoids of the same length? I have already examined the derivation provided in "Mutual inductance $$M_{12}=M_{21}$$: An elementary derivation" as mentioned in this Physics Stack Exchange answer. However, it is still not clear where the discrepancy originates in my equations for the two mutual inductances.
The last method of calculating mutual inductance that I found was $$M = \sqrt{L_1 L_2}$$, where $$L_1$$ and $$L_2$$ are the inductances of each coil. The derivation of this formula also appears to rely on $$M_{12} = M_{21}$$, which brings back the same question as before about solenoid lengths.
Any help in figuring out where my calculations may be incorrect (or explaining generally what the mutual inductance of two solenoids depends on) would be highly appreciated. Thank you in advance.
In these types of configurations you must keep in mind that you're approximating fields. Let's say $$l_2 > l_1$$. Then when calculating $$M_{12}$$, we need to remember that part of the 2nd solenoid extends beyond the shorter 1st solenoid. The field in this outer region is no longer $$\mu_0 n_1 I$$; it is instead the terrible fringing field that we want to keep out of calculations.
To amend this, we take advantage of the fact that $$M_{12} = M_{21}$$. The fields are perhaps too complicated, but whatever they are, their mutual inductances will be the same. However, note that, in theory, if you did do all the calculations, they would agree exactly as you would expect. We're just lazy.
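For a concrete sense of the size of the discrepancy, here is a small numeric sketch (my own illustration with made-up coil parameters, not part of the original question or answer) evaluating the two naive flux-linkage formulas from the question; they agree only when the coils have equal length:
const mu0 = 4 * Math.PI * 1e-7;       // vacuum permeability (H/m)
const N1 = 200, N2 = 300;             // turns of each coil (made-up values)
const A = 1e-4;                       // shared cross-sectional area (m^2)
const l1 = 0.10, l2 = 0.15;           // coil lengths (m)
const M12 = (mu0 * N1 * N2 * A) / l1; // all flux from coil 1 assumed to link coil 2
const M21 = (mu0 * N1 * N2 * A) / l2; // all flux from coil 2 assumed to link coil 1
console.log(M12, M21);                // the two estimates differ unless l1 === l2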
|
2019-10-16 17:34:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7999411821365356, "perplexity": 217.19810576866902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669057.0/warc/CC-MAIN-20191016163146-20191016190646-00302.warc.gz"}
|
https://math.stackexchange.com/questions/3542646/probability-winning-the-coin-flipping-game
|
Probability - Winning the coin flipping game
You play against your friend in a coin flipping game, where the objective is to get the most heads after three coin flips. A player wins if they have more heads than the opponent. If the numbers of heads are equal, then no one wins; it is a tie. You will take turns flipping coins, and your friend flips first. You want to cheat, so you created a fake coin that looks identical to a real, fair coin. The fake coin has a 80% chance of getting heads. Your friend is suspicious, so your friend gets to pick first from the two coins randomly, with equal probability. You are forced to take the other coin. You will both use your own coin for all three coin flips. The game begins, and your friend flips the coin once and gets heads.
(a)Given that your friend’s first coin flip returns heads, what is the probability that your friend got the fake coin, to your disadvantage?
(b)Given that your friend’s first coin flip returns heads, what is the probability that you will win the game?
I don't know what I did wrong in part (b), or how to use part (a)'s answer.
My Idea:
If I get the real coin: I get 2 heads and my friend gets 1, or I get 3 heads and my friend gets 2;
If I get the fake coin: I get 2 heads and my friend gets 1, or I get 3 heads and my friend gets 2.
My try:
$$0.5*C^3_2*0.5^2*0.5*C^2_2*0.2^2+C^3_3*0.5^3*C^2_1*0.8*0.2+$$
$$0.5*C^3_2*0.8^2*0.2*C^2_2*0.5^2+C^3_3*0.8^3*C^2_1*0.5^2*0.5=0.2235$$
Answer:$$\frac{2.88}{13}=0.22154...$$
• For $a)$. There are two ways the friend could have gotten that $H$, either by getting the fake coin and throwing $H$ or by getting the real coin and throwing $H$, thus probability $\frac 12\times .8+\frac 12\times \frac 12$. Of that, $\frac 12\times .8$ is explained by getting the fake coin. – lulu Feb 11 at 13:07
• Why are you only considering the winning results $2:1$ and $3:2$? There are lots of others? – joriki Feb 11 at 14:22
• @joriki those are the only possibilities I can win. Friend already got one head in the first round, and it starts my turn – keanehui Feb 11 at 14:34
• @keanehui: Ah, right, so not lots. But still $3:1$ is missing? – joriki Feb 11 at 14:35
• Sorry, what I wrote about the sum being $2$ above was wrong; I hadn't realized that you included an extra factor $0.5$ that doesn't correspond to a coin flip. But this shouldn't be the prior probability $0.5$ but the posterior probability that you calculated in a). You're effectively ignoring the information that the result of your friend's first flip gave you about the coin assignment. – joriki Feb 11 at 15:39
1 Answer
Let us define the event $$A$$ which will represent the friend choosing the weighted coin and $$A^c$$ where the friend chooses the fair coin. Note that $$\mathbb{P}(A)=\frac{1}{2}=\mathbb{P}(A^c)$$. Now, let's define $$B$$ to be the event that a person flipped a heads.
Part (a): We want to find $$\mathbb{P}(A|B)$$. In words: given that the friend flipped heads, what is the probability that the friend chose the weighted coin?
Since $$\mathbb{P}(B)>0$$, we can use Bayes' Theorem which says $$\mathbb{P}(A|B)=\frac{\mathbb{P}(B|A)\mathbb{P}(A)}{\mathbb{P}(B)}$$ In words, $$\mathbb{P}(B|A)$$ is "given that the friend chose the weighted coin, what is the probability that he flipped a head?". We know that that is equal to $$\frac{4}{5}$$. We also know that $$\mathbb{P}(A)=\frac{1}{2}$$. Now, using the Law of Total Probability, we can calculate $$\mathbb{P}(B)$$ $$\mathbb{P}(B)=\mathbb{P}(B|A)\mathbb{P}(A)+\mathbb{P}(B|A^c)\mathbb{P}(A^c)=\frac{4}{5}\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=\frac{13}{20}$$ Where $$\mathbb{P}(B|A^c)$$ is "given that the friend chose the fair coin, what is the probability that he flipped a heads?" Therefore, all together we have $$\mathbb{P}(A|B)=\frac{\frac{1}{2}\cdot\frac{4}{5}}{\frac{13}{20}}=\frac{8}{13}$$
Part (b): There are three (which are really six) possible scenarios where we win the game and the friend loses. They are:
1. We flip 3 heads and the friend flips 2.
2. We flip 3 heads and the friend flips 1.
3. We flip 2 heads and the friend flips 1.
And, each one of these scenarios is really two scenarios: one for the event where the friend has the weighted coin and one for when we do. We already calculated in part (a) what the probability is that our friend has the weighted coin $$\left(\frac{8}{13}\right)$$, and we also know what the probability is that we have the weighted coin ($$1-\frac{8}{13}=\frac{5}{13}$$) First, let's calculate the three scenarios in the event that we chose the weighted coin times the probability that we in fact chose the weighted coin: $$\mathbb{P}(A^c |B)\cdot[\mathbb{P}(3heads-2heads)+\mathbb{P}(3heads-1heads)+\mathbb{P}(2heads-1head)]$$ $$\frac{5}{13}\cdot\left[{3\choose 3}(4/5)^3 {2 \choose 1} (1/2)^2 + {3\choose 3}(4/5)^3 {2 \choose 2} (1/2)^2 + {3\choose 2}(4/5)^2 (1/5){2 \choose 2} (1/2)^2\right]=0.18462$$ Now, let's calculate the three scenarios in the event that the friend chose the weighted coin times the probability that he in fact chose the weighted coin: $$\mathbb{P}(A|B)\cdot[\mathbb{P}(3heads-2heads)+\mathbb{P}(3heads-1heads)+\mathbb{P}(2heads-1head)]$$ $$\frac{8}{13}\cdot\left[{3\choose 3}(1/2)^3 {2 \choose 1} (4/5)(1/5) + {3\choose 3}(1/2)^3 {2 \choose 2} (1/5)^2 + {3\choose 2}(1/2)^3 {2 \choose 2} (1/5)^2\right]=0.03692$$ All together we have $$0.18462+0.03692=0.22154$$
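As an independent check (a quick sketch of my own, not part of the answer above), a Monte Carlo simulation that conditions on the friend's first flip being heads lands very close to $$\frac{2.88}{13}\approx 0.2215$$:
// Count heads in n flips of a coin with P(heads) = p
function flips(n, p) {
  let h = 0;
  for (let i = 0; i < n; i++) if (Math.random() < p) h++;
  return h;
}
let wins = 0, valid = 0;
for (let t = 0; t < 1e6; t++) {
  const friendHasFake = Math.random() < 0.5;   // friend picks a coin at random
  const pFriend = friendHasFake ? 0.8 : 0.5;
  const pMe = friendHasFake ? 0.5 : 0.8;
  if (Math.random() >= pFriend) continue;      // keep only trials where the friend's first flip is heads
  valid++;
  const friendHeads = 1 + flips(2, pFriend);   // friend's remaining two flips
  const myHeads = flips(3, pMe);               // my three flips
  if (myHeads > friendHeads) wins++;
}
console.log(wins / valid); // ≈ 0.2215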
|
2020-06-03 07:36:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8302136063575745, "perplexity": 169.14427594969612}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347432237.67/warc/CC-MAIN-20200603050448-20200603080448-00533.warc.gz"}
|
http://www.kasbeel.cl/2010/09/22/integral-sin3x51cos3x5-solved/
|
# Integral (Sin[3x+5])/(1+Cos[3x+5]) Solved
Integral (Sin[3x+5])/(1+Cos[3x+5]) solved step by step.
$\bg_white \small \color{black} \int {{\sin (3x+5)} \over {1+\cos (3x+5)}}dx$
First we apply the substitution method, using the function's argument as the substitution:
$\bg_white \small \color{black} u={3x+5} \:\: and \:\: du={3dx} \therefore {du \over 3}=dx$
Now we replace the function's argument with u, our new variable, and factor out 1/3 because du/3 = dx:
$\bg_white \small \color{black} {1\over3}{\int {{\sin (u)} \over {1+\cos (u)}}du}$
till can’t we directly solve, but we can applied again the substitution method, but now using w as the substitute variable.
$\bg_white \small \color{black} w={\cos(u)+1} \:\: and \:\: dw=-\sin(u)du \therefore -dw=\sin(u)du$
And now we substitute w into the integral:
$\bg_white \small \color{black} -{1\over3}{\int {1\over {w}}dw}$
Lastly, we know that the integral of 1/x is equal to log(x):
$\bg_white \small \color{black} -{1\over3}{\log(w)+constant}$
Now we substitute back w = cos(u)+1:
$\bg_white \small \color{black} -{1\over3}{\log(\cos(u)+1)+constant}$
Then we substitute back u = 3x+5:
$\bg_white \small \color{black} -{1\over3}{\log(\cos(3x+5)+1)+constant}$
Finally, we can say that the integral is
$\bg_white \small \color{black} \int {{\sin (3x+5)} \over {1+\cos (3x+5)}}dx=-{1\over3}{\log(\cos(3x+5)+1)+constant}$
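As a quick verification (not in the original post), differentiating the result recovers the integrand:
$-{1\over3}\cdot{d\over dx}\log(\cos(3x+5)+1) = -{1\over3}\cdot{-3\sin(3x+5)\over\cos(3x+5)+1} = {\sin(3x+5)\over 1+\cos(3x+5)}$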
Identities used to solve this exercise.
Substitute method
This method says that an integral can be rewritten in terms of one portion of the integrand when the derivative of that portion appears as the remaining factor.
If we have
$\bg_white \small \color{black} \int {(ab)}dx$
and b = a′,
we can set u = a, so that du = a′ dx = b dx, and therefore
$\bg_white \small \color{black} {\int {(ab)}dx} = {\int {udu}}$
Logarithm integral identity
$\bg_white \small \color{black} {\int {1/x}dx} = log(x) + constant$
|
2020-03-31 05:53:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9569357633590698, "perplexity": 2691.2476794052336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500331.13/warc/CC-MAIN-20200331053639-20200331083639-00034.warc.gz"}
|
http://tug.org/pipermail/texhax/2010-May/014995.html
|
# [texhax] Custom justification (for captions)
Joel C. Salomon joelcsalomon at gmail.com
Tue May 18 19:31:44 CEST 2010
I’m trying to emulate a particular justification style. (This is for
figure captions with the caption package.)
Specifics are:
Large left margin, flush to right margin, fully justified, EXCEPT---
if there’s only one line, it is to be right-justified.
Examples:
==================================================
Figure 2 ’Twas brillig & the slithy toves
did gyre and gimble in the wabe; all mimsy
were the borogoves.
==================================================
Figure 3 Short caption.
I have most of what I need with
\captionsetup{
margin={10em,0pt},oneside,
font=small,
labelfont=bf,labelsep=space}
except for the odd one-line justification rule.
I’m not sure how best to implement this. Having read the caption
package docs, I think I’ll either need to write something like
\DeclareCaptionJustification{myoddjustification}{...}
--- but what goes in the {...}? --- or else a custom caption format like
\DeclareCaptionFormat{myoddformat}{...#1#2#3\par}
where the "..." is some sort of very stretchable glue.
What would be the best way to accomplish this?
--Joel Salomon
|
2018-10-23 18:41:04
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9214702248573303, "perplexity": 9544.504390434742}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516892.84/warc/CC-MAIN-20181023174507-20181023200007-00188.warc.gz"}
|
https://howlingpixel.com/i-en/Color
|
# Color
Color (American English), or colour (Commonwealth English), is the characteristic of human visual perception described through color categories, with names such as red, orange, yellow, green, blue, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the visible spectrum. Color categories and physical specifications of color are associated with objects through the wavelength of the light that is reflected from them. This reflection is governed by the object's physical properties such as light absorption, emission spectra, etc.
By defining a color space, colors can be identified numerically by coordinates, which in 1931 were also given internationally agreed color names, such as those mentioned above (red, orange, etc.), by the International Commission on Illumination. The RGB color space, for instance, is a color space corresponding to human trichromacy and to the three cone cell types that respond to three bands of light: long wavelengths, peaking near 564–580 nm (red); medium wavelengths, peaking near 534–545 nm (green); and short wavelengths, peaking near 420–440 nm (blue).[1][2] There may also be more than three color dimensions in other color spaces, such as in the CMYK color model, wherein one of the dimensions relates to a color's colorfulness.
The photo-receptivity of the "eyes" of other species also varies considerably from that of humans and so results in correspondingly different color perceptions that cannot readily be compared to one another. Honeybees and bumblebees, for instance, have trichromatic color vision that is sensitive to ultraviolet but insensitive to red. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision.[3] The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp), with up to 12 spectral receptor types thought to work as multiple dichromatic units.[4]
The science of color is sometimes called chromatics, colorimetry, or simply color science. It includes the study of the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range (that is, what is commonly referred to simply as light).
Color effect – Sunlight shining through stained glass onto carpet (Nasir ol Molk Mosque located in Shiraz, Iran)
Colors can appear different depending on their surrounding colors and shapes. The two small squares have exactly the same color, but the right one looks slightly darker, the Chubb illusion.
## Physics of color
Continuous optical spectrum rendered into the sRGB color space.
The colors of the visible light spectrum[5]
| Color | Wavelength interval | Frequency interval |
| --- | --- | --- |
| Red | ~ 700–635 nm | ~ 430–480 THz |
| Orange | ~ 635–590 nm | ~ 480–510 THz |
| Yellow | ~ 590–560 nm | ~ 510–540 THz |
| Green | ~ 560–520 nm | ~ 540–580 THz |
| Cyan | ~ 520–490 nm | ~ 580–610 THz |
| Blue | ~ 490–450 nm | ~ 610–670 THz |
| Violet | ~ 450–400 nm | ~ 670–750 THz |
Color, wavelength, frequency and energy of light
| Color | λ (nm) | ν (THz) | ν_b (μm−1) | E (eV) | E (kJ mol−1) |
| --- | --- | --- | --- | --- | --- |
| Infrared | >1000 | <300 | <1.00 | <1.24 | <120 |
| Red | 700 | 428 | 1.43 | 1.77 | 171 |
| Orange | 620 | 484 | 1.61 | 2.00 | 193 |
| Yellow | 580 | 517 | 1.72 | 2.14 | 206 |
| Green | 530 | 566 | 1.89 | 2.34 | 226 |
| Cyan | 500 | 600 | | | |
| Blue | 470 | 638 | 2.13 | 2.64 | 254 |
| Violet (visible) | 420 | 714 | 2.38 | 2.95 | 285 |
| Near ultraviolet | 300 | 1000 | 3.33 | 4.15 | 400 |
| Far ultraviolet | <200 | >1500 | >5.00 | >6.20 | >598 |
Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light".
Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class the members are called metamers of the color in question.
### Spectral colors
The familiar colors of the rainbow in the spectrum—named using the Latin word for appearance or apparition by Isaac Newton in 1671—include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. The table at right shows approximate frequencies (in terahertz) and wavelengths (in nanometers) for various pure spectral colors. The wavelengths listed are as measured in air or vacuum (see refractive index).
The color table should not be interpreted as a definitive list—the pure spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency (although people everywhere have been shown to perceive colors in the same way[6]). A common list identifies six main bands: red, orange, yellow, green, blue, and violet. Newton's conception included a seventh color, indigo, between blue and violet. It is possible that what Newton referred to as blue is nearer to what today is known as cyan, and that indigo was simply the dark blue of the indigo dye that was being imported at the time.[7]
The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably; for example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive green.
### Color of objects
The color of an object depends on both the physics of the object in its environment and the characteristics of the perceiving eye and brain. Physically, objects can be said to have the color of the light leaving their surfaces, which normally depends on the spectrum of the incident illumination and the reflectance properties of the surface, as well as potentially on the angles of illumination and viewing. Some objects not only reflect light, but also transmit light or emit light themselves, which also contributes to the color. A viewer's perception of the object's color depends not only on the spectrum of the light leaving its surface, but also on a host of contextual cues, so that color differences between objects can be discerned mostly independent of the lighting spectrum, viewing angle, etc. This effect is known as color constancy.
The upper disk and the lower disk have exactly the same objective color, and are in identical gray surroundings; based on context differences, humans perceive the squares as having different reflectances, and may interpret the colors as different color categories; see checker shadow illusion.
Some generalizations of the physics can be drawn, neglecting perceptual effects for now:
• Light arriving at an opaque surface is either reflected "specularly" (that is, in the manner of a mirror), scattered (that is, reflected with diffuse scattering), or absorbed – or some combination of these.
• Opaque objects that do not reflect specularly (which tend to have rough surfaces) have their color determined by which wavelengths of light they scatter strongly (with the light that is not scattered being absorbed). If objects scatter all wavelengths with roughly equal strength, they appear white. If they absorb all wavelengths, they appear black.[8]
• Opaque objects that specularly reflect light of different wavelengths with different efficiencies look like mirrors tinted with colors determined by those differences. An object that reflects some fraction of impinging light and absorbs the rest may look black but also be faintly reflective; examples are black objects coated with layers of enamel or lacquer.
• Objects that transmit light are either translucent (scattering the transmitted light) or transparent (not scattering the transmitted light). If they also absorb (or reflect) light of various wavelengths differentially, they appear tinted with a color determined by the nature of that absorption (or that reflectance).
• Objects may emit light that they generate from having excited electrons, rather than merely reflecting or transmitting light. The electrons may be excited due to elevated temperature (incandescence), as a result of chemical reactions (chemoluminescence), after absorbing light of other frequencies ("fluorescence" or "phosphorescence") or from electrical contacts as in light emitting diodes, or other light sources.
To summarize, the color of an object is a complex result of its surface properties, its transmission properties, and its emission properties, all of which contribute to the mix of wavelengths in the light leaving the surface of the object. The perceived color is then further conditioned by the nature of the ambient illumination, and by the color properties of other objects nearby, and via other characteristics of the perceiving eye and brain.
## Perception
When viewed in full size, this image contains about 16 million pixels, each corresponding to a different color on the full set of RGB colors. The human eye can distinguish about 10 million different colors.[9]
### Development of theories of color vision
Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors in which he ascribed physiological effects to color that are now understood as psychological.
In 1801 Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it."[10]
At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory.[11]
In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each.
### Color in the eye
Normalized typical human cone cell responses (S, M, and L types) to monochromatic spectral stimuli
The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic: the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones, S cones, or blue cones. The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm.
Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance, which is that each cone's output is determined by the amount of light that falls on it over all wavelengths. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values.
The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors.[9]
The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all.[12] On the other hand, in dim light, the cones are understimulated leaving only the signal from the rods, resulting in a colorless response. (Furthermore, the rods are barely sensitive to light in the "red" range.) In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are summarized also in the Kruithof curve, that describes the change of color perception and pleasingness of light as function of temperature and intensity.
### Color in the brain
The visual dorsal stream (green) and ventral stream (purple) are shown. The ventral stream is responsible for color perception.
While the mechanisms of color vision at the level of the retina are well-described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes.
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world—a type of qualia—is a matter of complex and continuing philosophical dispute.
### Nonstandard color perception
#### Color deficiency
If one or more types of a person's color-sensing cones are missing or less responsive than normal to incoming light, that person can distinguish fewer colors and is said to be color deficient or color blind (though this latter term can be misleading; almost all color deficient individuals can distinguish at least some colors). Some kinds of color deficiency are caused by anomalies in the number or nature of cones in the retina. Others (like central or cortical achromatopsia) are caused by neural anomalies in those parts of the brain where visual processing takes place.
#### Tetrachromacy
While most humans are trichromatic (having three types of color receptors), many animals, known as tetrachromats, have four types. These include some species of spiders, most marsupials, birds, reptiles, and many species of fish. Other species are sensitive to only two axes of color or do not perceive color at all; these are called dichromats and monochromats respectively. A distinction is made between retinal tetrachromacy (having four pigments in cone cells in the retina, compared to three in trichromats) and functional tetrachromacy (having the ability to make enhanced color discriminations based on that retinal difference). As many as half of all women are retinal tetrachromats.[13]:p.256 The phenomenon arises when an individual receives two slightly different copies of the gene for either the medium- or long-wavelength cones, which are carried on the X chromosome. To have two different genes, a person must have two X chromosomes, which is why the phenomenon only occurs in women.[13] There is one scholarly report that confirms the existence of a functional tetrachromat.[14]
#### Synesthesia
In certain forms of synesthesia/ideasthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing musical sounds (music–color synesthesia) will lead to the unusual additional experiences of seeing colors. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and lead to increased activation of brain regions involved in color perception, thus demonstrating their reality, and similarity to real color percepts, albeit evoked through a non-standard route.
### Afterimages
After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color.
Afterimage effects have also been utilized by artists, including Vincent van Gogh.
### Color constancy
When an artist uses a limited color palette, the eye tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish.[15]
The trichromatic theory is strictly true when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin Land in the 1970s and led to his retinex theory of color constancy.
Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM).[16] There is no need to dismiss the trichromatic theory of vision, but rather it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment.
### Color naming
This picture contains one million pixels, each one a different color
Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet), saturation, brightness, and gloss. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red".
In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English).
### Associations
Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures.[17]
Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men.[18]
## Spectral colors and color reproduction
The CIE 1931 color space chromaticity diagram. The outer curved boundary is the spectral (or monochromatic) locus, with wavelengths shown in nanometers. The colors depicted depend on the color space of the device on which you are viewing the image, and therefore may not be a strictly accurate representation of the color at a particular position, and especially not for monochromatic colors.
Most light sources are mixtures of various wavelengths of light. Many such sources can still effectively produce a spectral color, as the eye cannot distinguish them from single-wavelength sources. For example, most computer displays reproduce the spectral color orange as a combination of red and green light; it appears orange because the red and green are mixed in the right proportions to allow the eye's cones to respond the way they do to the spectral color orange.
A useful concept in understanding the perceived color of a non-monochromatic light source is the dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the light source. Dominant wavelength is roughly akin to hue.
There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta.
Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although reflected colors from objects can look different. (This is often exploited; for example, to make fruit or tomatoes look more intensely red.)
Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application.
No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm that has the same intensity as the mixture of blue and green.
Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, thus such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut.
Another problem with color reproduction systems is connected with the acquisition devices, like cameras or scanners. The characteristics of the color sensors in the devices are often very far from the characteristics of the receptors in the human eye. In effect, acquisition of colors can be relatively poor if they have special, often very "jagged", spectra caused for example by unusual lighting of the photographed scene. A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers.
The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding good mapping of input colors into the gamut that can be reproduced.
Additive color mixing: combining red and green yields yellow; combining all three primary colors together yields white.
Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors and computer terminals.
### Subtractive coloring
Subtractive color mixing: combining yellow and magenta yields red; combining all three primary colors together yields black
Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye.
If the light source is not pure white (the case for nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black. Red paint is red because it scatters only the red components of the spectrum. If red paint is illuminated by blue light, the blue light will be absorbed by the red paint, creating the appearance of a black object.
### Structural color
Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers then it will reflect some wavelengths and transmit others, depending on the layers' thickness.
Structural color is studied in the field of thin-film optics. The most ordered or the most changeable structural colors are iridescent. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research in butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics.[19]
• Color wheel: an illustrative organization of color hues in a circle that shows relationships.
• Colorfulness, chroma, purity, or saturation: how "intense" or "concentrated" a color is. Technical definitions distinguish between colorfulness, chroma, and saturation as distinct perceptual attributes and include purity as a physical quantity. These terms, and others related to light and color are internationally agreed upon and published in the CIE Lighting Vocabulary.[20] More readily available texts on colorimetry also define and explain these terms.[16][21]
• Dichromatism: a phenomenon where the hue is dependent on concentration and thickness of the absorbing substance.
• Hue: the color's direction from white, for example in a color wheel or chromaticity diagram.
• Value, brightness, lightness, or luminosity: how light or dark a color is.
## References
1. ^ Wyszecki, Günther; Stiles, W.S. (1982). Colour Science: Concepts and Methods, Quantitative Data and Formulae (2nd ed.). New York: Wiley Series in Pure and Applied Optics. ISBN 978-0-471-02106-3.
2. ^ R.W.G. Hunt (2004). The Reproduction of Colour (6th ed.). Chichester UK: Wiley–IS&T Series in Imaging Science and Technology. pp. 11–12. ISBN 978-0-470-02425-6.
3. ^ Arikawa K (November 2003). "Spectral organization of the eye of a butterfly, Papilio". J. Comp. Physiol. A. 189 (11): 791–800. doi:10.1007/s00359-003-0454-7. PMID 14520495.
4. ^ Cronin TW, Marshall NJ (1989). "A retina with at least ten spectral types of photoreceptors in a mantis shrimp". Nature. 339 (6220): 137–40. Bibcode:1989Natur.339..137C. doi:10.1038/339137a0.
5. ^ Craig F. Bohren (2006). Fundamentals of Atmospheric Radiation: An Introduction with 400 Problems. Wiley-VCH. p. 214. Bibcode:2006fari.book.....B. ISBN 978-3-527-40503-9.
6. ^ Berlin, B. and Kay, P., Basic Color Terms: Their Universality and Evolution, Berkeley: University of California Press, 1969.
7. ^ Waldman, Gary (2002). Introduction to light : the physics of light, vision, and color. Mineola: Dover Publications. p. 193. ISBN 978-0-486-42118-6.
8. ^ Pastoureau, Michael (2008). Black: The History of a Color. Princeton University Press. p. 216. ISBN 978-0691139302.
9. ^ a b Judd, Deane B.; Wyszecki, Günter (1975). Color in Business, Science and Industry. Wiley Series in Pure and Applied Optics (third ed.). New York: Wiley-Interscience. p. 388. ISBN 978-0-471-45212-6.
10. ^ Hermann von Helmholtz, Physiological Optics – The Sensations of Vision, 1866, as translated in Sources of Color Science, David L. MacAdam, ed., Cambridge: MIT Press, 1970.
11. ^ Palmer, S.E. (1999). Vision Science: Photons to Phenomenology, Cambridge, MA: MIT Press. ISBN 0-262-16183-4.
12. ^ "Under well-lit viewing conditions (photopic vision), cones ...are highly active and rods are inactive." Hirakawa, K.; Parks, T.W. (2005). Chromatic Adaptation and White-Balance Problem (PDF). IEEE ICIP. doi:10.1109/ICIP.2005.1530559. Archived from the original (PDF) on November 28, 2006.
13. ^ a b Jameson, K.A.; Highnote, S.M.; Wasserman, L.M. (2001). "Richer color experience in observers with multiple photopigment opsin genes" (PDF). Psychonomic Bulletin and Review. 8 (2): 244–61. doi:10.3758/BF03196159. PMID 11495112.
14. ^ Jordan, G.; Deeb, S.S.; Bosten, J.M.; Mollon, J.D. (20 July 2010). "The dimensionality of color vision in carriers of anomalous trichromacy". Journal of Vision. 10 (8): 12. doi:10.1167/10.8.12. PMID 20884587.
15. ^ Depauw, Robert C. "United States Patent". Retrieved 20 March 2011.
16. ^ a b M.D. Fairchild, Color Appearance Models Archived May 5, 2011, at the Wayback Machine, 2nd Ed., Wiley, Chichester (2005).
17. ^ "Chart: Color Meanings by Culture". Archived from the original on 2010-10-12. Retrieved 2010-06-29.
18. ^ Gnambs, Timo; Appel, Markus; Batinic, Bernad (2010). "Color red in web-based knowledge testing". Computers in Human Behavior. 26 (6): 1625–31. doi:10.1016/j.chb.2010.06.010.
19. ^ "Economic and Social Research Council – Science in the Dock, Art in the Stocks". Archived from the original on November 2, 2007. Retrieved 2007-10-07.
20. ^ CIE Pub. 17-4, International Lighting Vocabulary, 1987. Archived 2010-02-27 at the Wayback Machine. Retrieved 2010-02-05.
21. ^ R.S. Berns, Principles of Color Technology Archived 2012-01-05 at the Wayback Machine, 3rd Ed., Wiley, New York (2001).
• ColorLab MATLAB toolbox for color science computation and accurate color reproduction (by Jesus Malo and Maria Jose Luque, Universitat de Valencia). It includes CIE standard tristimulus colorimetry and transformations to a number of non-linear color appearance models (CIE Lab, CIE CAM, etc.).
Black
Black is the darkest color, the result of the absence or complete absorption of visible light. It is an achromatic color, literally a color without hue, like white and gray. It is often used symbolically or figuratively to represent darkness, while white represents light. Black and white have often been used to describe opposites, particularly truth and ignorance, good and evil, the Dark Ages versus the Age of Enlightenment. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in neolithic cave paintings. In the 14th century, it was worn by royalty, clergy, judges and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches and magic. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, evil, and elegance. Black ink is the most common color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and is thus the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens.
Black and white
Black-and-white (B/W or B&W) images combine black and white in a continuous spectrum, producing a range of shades of gray.
CMYK color model
The CMYK color model (process color, four color) is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key.
The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks "subtract" the colors red, green and blue from white light. White light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow.
In additive color models, such as RGB, white is the "additive" combination of all primary colored lights, while black is the absence of light. In the CMYK model, it is the opposite: white is the natural color of the paper or other background, while black results from a full combination of colored inks. To save cost on ink, and to produce deeper black tones, unsaturated and dark colors are produced by using black ink instead of the combination of cyan, magenta, and yellow.
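To make the subtractive relationship concrete, here is a minimal Python sketch of the naive, non-color-managed RGB-to-CMYK conversion implied by the description above; real printing workflows use ICC color management instead, and the function name is just illustrative.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion; channels are floats in [0, 1]."""
    k = 1.0 - max(r, g, b)          # key (black): distance of the brightest channel from white
    if k == 1.0:                    # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)   # cyan ink absorbs red
    m = (1.0 - g - k) / (1.0 - k)   # magenta ink absorbs green
    y = (1.0 - b - k) / (1.0 - k)   # yellow ink absorbs blue
    return c, m, y, k

# White paper needs no ink; pure red needs magenta plus yellow.
print(rgb_to_cmyk(1.0, 1.0, 1.0))  # (0.0, 0.0, 0.0, 0.0)
print(rgb_to_cmyk(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0, 0.0)
```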
Color blindness
Color blindness, also known as color vision deficiency, is the decreased ability to see color or differences in color. Simple tasks such as selecting ripe fruit, choosing clothing, and reading traffic lights can be more challenging. Color blindness may also make some educational activities more difficult. However, problems are generally minor, and most people find that they can adapt. People with total color blindness (achromatopsia) may also have decreased visual acuity and be uncomfortable in bright environments.

The most common cause of color blindness is an inherited problem in the development of one or more of the three sets of color sensing cones in the eye. Males are more likely to be color blind than females, as the genes responsible for the most common forms of color blindness are on the X chromosome. As females have two X chromosomes, a defect in one is typically compensated for by the other, while males only have one X chromosome. Color blindness can also result from physical or chemical damage to the eye, optic nerve or parts of the brain. Diagnosis is typically with the Ishihara color test; however, a number of other testing methods also exist.

There is no cure for color blindness. Diagnosis may allow a person's teacher to change their method of teaching to accommodate the decreased ability to recognize colors. Special lenses may help people with red-green color blindness when under bright conditions. There are also mobile apps that can help people identify colors.

Red-green color blindness is the most common form, followed by blue-yellow color blindness and total color blindness. Red-green color blindness affects up to 8% of males and 0.5% of females of Northern European descent. The ability to see color also decreases in old age. Being color blind may make people ineligible for certain jobs in certain countries. This may include being a pilot, train driver and working in the armed forces. The effect of color blindness on artistic ability, however, is controversial. The ability to draw appears to be unchanged, and a number of famous artists are believed to have been color blind.
Eye color
Eye color is a polygenic phenotypic character determined by two distinct factors: the pigmentation of the eye's iris and the frequency-dependence of the scattering of light by the turbid medium in the stroma of the iris.

In humans, the pigmentation of the iris varies from light brown to black, depending on the concentration of melanin in the iris pigment epithelium (located on the back of the iris), the melanin content within the iris stroma (located at the front of the iris), and the cellular density of the stroma. The appearance of blue and green, as well as hazel eyes, results from the Tyndall scattering of light in the stroma, a phenomenon similar to that which accounts for the blueness of the sky called Rayleigh scattering. Neither blue nor green pigments are ever present in the human iris or ocular fluid. Eye color is thus an instance of structural color and varies depending on the lighting conditions, especially for lighter-colored eyes.
The brightly colored eyes of many bird species result from the presence of other pigments, such as pteridines, purines, and carotenoids. Humans and other animals have many phenotypic variations in eye color. The genetics of eye color are complicated, and color is determined by multiple genes. So far, as many as 15 genes have been associated with eye color inheritance. Some of the eye-color genes include OCA2 and HERC2. The earlier belief that blue eye color is a simple recessive trait has been shown to be incorrect. The genetics of eye color are so complex that almost any parent-child combination of eye colors can occur. However, OCA2 gene polymorphism, close to proximal 5′ regulatory region, explains most human eye-color variation.
GIF
The Graphics Interchange Format (GIF, pronounced JIF or GHIF) is a bitmap image format that was developed by a team at the online services provider CompuServe led by American computer scientist Steve Wilhite on June 15, 1987. It has since come into widespread usage on the World Wide Web due to its wide support and portability.

The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients, but it is well-suited for simpler images such as graphics or logos with solid areas of color.
GIF images are compressed using the Lempel–Ziv–Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. By 2004 all the relevant patents had expired.
Game Boy Color
The Game Boy Color (GBC) is a handheld game console manufactured by Nintendo, released on October 21, 1998 in Japan and in November of the same year in international markets. It is the successor of the Game Boy.
The Game Boy Color features a color screen. It is slightly thicker and taller and features a slightly smaller screen than the Game Boy Pocket, its predecessor. As with the original Game Boy, it has a custom 8-bit processor made by Sharp that is considered a hybrid between the Intel 8080 and the Zilog Z80. The spelling of the system's name, Game Boy Color, remains consistent throughout the world with its American English spelling of color.
The Game Boy Color's primary competitors in Japan were the grayscale 16-bit handhelds Neo Geo Pocket and the WonderSwan, though the Game Boy Color outsold these by a wide margin. SNK and Bandai countered with the Neo Geo Pocket Color and the WonderSwan Color respectively, but this did little to change Nintendo's sales dominance. With Sega discontinuing the Game Gear in 1997, the Game Boy Color's only competitor in the United States was its predecessor, the Game Boy, until the short-lived Neo Geo Pocket Color was released in August 1999. The Game Boy and Game Boy Color combined have sold 118.69 million units worldwide, making the line the third best-selling system of all time. The Game Boy Color was discontinued in 2003, shortly after the release of the Game Boy Advance SP. Its best-selling game was Pokémon Gold and Silver, which shipped approximately 14.51 million copies combined in Japan and the USA.
Green
Green is the color between blue and yellow on the visible spectrum. It is evoked by light which has a dominant wavelength of roughly 495–570 nm. In subtractive color systems, used in painting and color printing, it is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. By far the largest contributor to green in nature is chlorophyll, the chemical by which plants photosynthesize and convert sunlight into chemical energy. Many creatures have adapted to their green environments by taking on a green hue themselves as camouflage. Several minerals have a green color, including the emerald, which is colored green by its chromium content.
During post-classical and early modern Europe, green was the color commonly associated with wealth, merchants, bankers and the gentry, while red was reserved for the nobility. For this reason, the costume of the Mona Lisa by Leonardo da Vinci and the benches in the British House of Commons are green while those in the House of Lords are red. It also has a long historical tradition as the color of Ireland and of Gaelic culture. It is the historic color of Islam, representing the lush vegetation of Paradise. It was the color of the banner of Muhammad, and is found in the flags of nearly all Islamic countries.

In surveys made in American, European, and Islamic countries, green is the color most commonly associated with nature, life, health, youth, spring, hope and envy. In the European Union and the United States, green is also sometimes associated with toxicity and poor health, but in China and most of Asia, its associations are very positive, as the symbol of fertility and happiness. Because of its association with nature, it is the color of the environmental movement. Political groups advocating environmental protection and social justice describe themselves as part of the Green movement, some naming themselves Green parties. This has led to similar campaigns in advertising, as companies have sold green, or environmentally friendly, products. Green is also the traditional color of safety and permission; a green light means go ahead, a green card permits permanent residence in the United States.
HSL and HSV
HSL (hue, saturation, lightness) and HSV (hue, saturation, value) are alternative representations of the RGB color model, designed in the 1970s by computer graphics researchers to more closely align with the way human vision perceives color-making attributes. In these models, colors of each hue are arranged in a radial slice, around a central axis of neutral colors which ranges from black at the bottom to white at the top. The HSV representation models the way paints of different colors mix together, with the saturation dimension resembling various shades of brightly colored paint, and the value dimension resembling the mixture of those paints with varying amounts of black or white paint. The HSL model attempts to resemble more perceptual color models such as the Natural Color System (NCS) or Munsell color system, placing fully saturated colors around a circle at a lightness value of 1⁄2, where a lightness value of 0 or 1 is fully black or white, respectively.
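As an aside (not part of the original text), Python's standard-library colorsys module implements these conversions; note that it uses the ordering HLS rather than HSL, and all channels are floats in [0, 1].

```python
import colorsys

r, g, b = 1.0, 0.5, 0.0                   # an orange, in RGB

h, s, v = colorsys.rgb_to_hsv(r, g, b)    # HSV: hue, saturation, value
h2, l, s2 = colorsys.rgb_to_hls(r, g, b)  # HLS: hue, lightness, saturation

print(f"HSV: hue={h*360:.0f} deg, saturation={s:.2f}, value={v:.2f}")
print(f"HSL: hue={h2*360:.0f} deg, lightness={l:.2f}, saturation={s2:.2f}")
# Both report a hue of 30 degrees; value is 1.0 while lightness is 0.5.
```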
Indigo
Indigo is a deep and rich color close to the color wheel blue (a primary color in the RGB color space), as well as to some variants of ultramarine. It is traditionally regarded as a color in the visible spectrum, as well as one of the seven colors of the rainbow: the color between violet and blue; however, sources differ as to its actual position in the electromagnetic spectrum.
The color indigo is named after the indigo dye derived from the plant Indigofera tinctoria and related species.
The first known recorded use of indigo as a color name in English was in 1289.
Light-emitting diode
A light-emitting diode (LED) is a semiconductor light source that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons. This effect is called electroluminescence. The color of the light (corresponding to the energy of the photons) is determined by the energy required for electrons to cross the band gap of the semiconductor. White light is obtained by using multiple semiconductors or a layer of light-emitting phosphor on the semiconductor device.

Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared light. Infrared LEDs are used in remote-control circuits, such as those used with a wide variety of consumer electronics. The first visible-light LEDs were of low intensity and limited to red. Modern LEDs are available across the visible, ultraviolet, and infrared wavelengths, with high light output.
Early LEDs were often used as indicator lamps, replacing small incandescent bulbs, and in seven-segment displays. Recent developments have produced white-light LEDs suitable for room lighting. LEDs have led to new displays and sensors, while their high switching rates are useful in advanced communications technology.
LEDs have many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. Light-emitting diodes are used in applications as diverse as aviation lighting, automotive headlamps, advertising, general lighting, traffic signals, camera flashes, lighted wallpaper and medical devices.

Unlike a laser, the color of light emitted from an LED is neither coherent nor monochromatic, but the spectrum is narrow with respect to human vision, and functionally monochromatic.
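As a numerical aside (my addition, not from the article), the relation between band-gap energy and emission wavelength mentioned above can be checked with a couple of lines of Python; the band-gap values used below are rough, illustrative figures only.

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def peak_wavelength_nm(band_gap_ev):
    """Approximate emission wavelength (nm) for a direct-gap LED material."""
    return HC_EV_NM / band_gap_ev

# Approximate band gaps in eV -- illustrative values, not datasheet figures.
for material, gap in [("GaAs (infrared)", 1.42), ("AlGaInP (red)", 1.9), ("InGaN (blue)", 2.8)]:
    print(f"{material}: ~{peak_wavelength_nm(gap):.0f} nm")
# GaAs ~873 nm, red ~652 nm, blue ~443 nm
```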
Lists of colors
These are lists of colors:
List of colors: A–F
List of colors: G–M
List of colors: N–Z
List of colors (compact)
List of color palettes
List of fictional colors
List of Crayola crayon colors
X11 color names
Portable Network Graphics
Portable Network Graphics (PNG, pronounced PEE-en-JEE or PING) is a raster-graphics file-format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF).
PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without alpha channel for transparency), and full-color non-palette-based RGB/RGBA images (with or without alpha channel). The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics, and therefore it does not support non-RGB color spaces such as CMYK. A PNG file contains a single image in an extensible structure of "chunks", encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.

PNG files nearly always use the file extension PNG or png and are assigned MIME media type image/png.
PNG was published as informational RFC 2083 in March 1997 and as an ISO/IEC standard in 2004.
RGB color model
The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
The main purpose of the RGB color model is for the sensing, representation and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors.
RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements (such as phosphors or dyes) and their response to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same device over time. Thus an RGB value does not define the same color across devices without some kind of color management.
Typical RGB input devices are color TV and video cameras, image scanners, and digital cameras. Typical RGB output devices are TV sets of various technologies (CRT, LCD, plasma, OLED, quantum dots, etc.), computer and mobile phone displays, video projectors, multicolor LED displays and large screens such as JumboTron. Color printers, on the other hand, are not RGB devices but subtractive color devices (typically using the CMYK color model).
This article discusses concepts common to all the different color spaces that use the RGB color model, which are used in one implementation or another in color image-producing technology.
Red
Red is the color at the end of the visible spectrum of light, next to orange and opposite violet. It has a dominant wavelength of approximately 625–740 nanometres. It is a primary color in the RGB color model and the CMYK color model, and is the complementary color of cyan. Reds range from the brilliant yellow-tinged scarlet and vermillion to bluish-red crimson, and vary in shade from the pale red pink to the dark red burgundy. The red sky at sunset results from Rayleigh scattering, while the red color of the Grand Canyon and other geological features is caused by hematite or red ochre, both forms of iron oxide. Iron oxide also gives the red color to the planet Mars. The red color of blood comes from the protein hemoglobin, while ripe strawberries, red apples and reddish autumn leaves are colored by anthocyanins.

Red pigment made from ochre was one of the first colors used in prehistoric art. The Ancient Egyptians and Mayans colored their faces red in ceremonies; Roman generals had their bodies colored red to celebrate victories. It was also an important color in China, where it was used to color early pottery and later the gates and walls of palaces. In the Renaissance, the brilliant red costumes for the nobility and wealthy were dyed with kermes and cochineal. The 19th century brought the introduction of the first synthetic red dyes, which replaced the traditional dyes. Red also became the color of revolution; Soviet Russia adopted a red flag following the Bolshevik Revolution in 1917, later followed by China, Vietnam, and other communist countries.
Since red is the color of blood, it has historically been associated with sacrifice, danger and courage. Modern surveys in Europe and the United States show red is also the color most commonly associated with heat, activity, passion, sexuality, anger, love and joy. In China, India and many other Asian countries it is the color symbolizing happiness and good fortune.
Shades of green
Varieties of the color green may differ in hue, chroma (also called saturation or intensity) or lightness (or value, tone, or brightness), or in two or three of these qualities. Variations in value are also called tints and shades, a tint being a green or other hue mixed with white, a shade being mixed with black.
Visible spectrum
The visible spectrum is the portion of the electromagnetic spectrum that is visible to the human eye. Electromagnetic radiation in this range of wavelengths is called visible light or simply light. A typical human eye will respond to wavelengths from about 380 to 740 nanometers. In terms of frequency, this corresponds to a band in the vicinity of 405–790 THz.
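The frequency band quoted above follows directly from the relation ν = c/λ; the short Python check below (added here for illustration) reproduces the endpoints.

```python
C = 299_792_458  # speed of light in m/s

def thz_from_nm(wavelength_nm):
    """Frequency in THz for a given wavelength in nanometres."""
    return C / (wavelength_nm * 1e-9) / 1e12

for nm in (740, 550, 380):
    print(f"{nm} nm -> {thz_from_nm(nm):.0f} THz")
# 740 nm -> 405 THz, 550 nm -> 545 THz, 380 nm -> 789 THz
```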
The spectrum does not contain all the colors that the human eyes and brain can distinguish. Unsaturated colors such as pink, or purple variations like magenta, for example, are absent because they can only be made from a mix of multiple wavelengths. Colors containing only one wavelength are also called pure colors or spectral colors.
Visible wavelengths pass largely unattenuated through the Earth's atmosphere via the "optical window" region of the electromagnetic spectrum. An example of this phenomenon is when clean air scatters blue light more than red light, and so the midday sky appears blue. The optical window is also referred to as the "visible window" because it overlaps the human visible response spectrum. The near infrared (NIR) window lies just out of the human vision, as well as the medium wavelength infrared (MWIR) window, and the long wavelength or far infrared (LWIR or FIR) window, although other animals may experience them.
Web colors
Web colors are colors used in displaying web pages on the World Wide Web, and the methods for describing and specifying those colors. Colors may be specified as an RGB triplet or in hexadecimal format (a hex triplet) or according to their common English names in some cases. A color tool or other graphics software is often used to generate color values. In some uses, hexadecimal color codes are specified with notation using a leading number sign (#). A color is specified according to the intensity of its red, green and blue components, each represented by eight bits. Thus, there are 24 bits used to specify a web color within the sRGB gamut, and 16,777,216 colors that may be so specified.
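As an illustration (added here, not part of the original text), packing and parsing the hex triplet described above takes only a few lines of Python:

```python
def rgb_to_hex(r, g, b):
    """Pack an 8-bit-per-channel RGB triplet into a #RRGGBB web color."""
    return f"#{r:02x}{g:02x}{b:02x}"

def hex_to_rgb(code):
    """Parse a #RRGGBB web color back into an (r, g, b) triplet."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 165, 0))   # '#ffa500', the web color named 'orange'
print(hex_to_rgb("#ffa500"))     # (255, 165, 0)
print(256 ** 3)                  # 16777216, the number of 24-bit web colors
```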
Colors outside the sRGB gamut can be specified in Cascading Style Sheets by making one or more of the red, green and blue components negative or greater than 100%, so the color space is theoretically an unbounded extrapolation of sRGB similar to scRGB. Specifying a non-sRGB color this way requires the RGB() function call; it is impossible with the hexadecimal syntax (and thus impossible in legacy HTML documents that do not use CSS).
The first versions of Mosaic and Netscape Navigator used the X11 color names as the basis for their color lists, as both started as X Window System applications.
Web colors have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, adaptive whitepoint, and viewing conditions. These have been chosen to be similar to many real-world monitors and viewing conditions, in order to allow rendering to be fairly close to the specified values even without color management. User agents vary in the fidelity with which they represent the specified colors. More advanced user agents use color management to provide better color fidelity; this is particularly important for Web-to-print applications.
White
White is the lightest color and is achromatic (having no hue). It is the color of fresh snow, chalk, and milk, and is the opposite of black. White objects fully reflect and scatter all the visible wavelengths of light. White on television and computer screens is created by a mixture of red, blue and green light.
In ancient Egypt and ancient Rome, priestesses wore white as a symbol of purity, and Romans wore a white toga as a symbol of citizenship. In the Middle Ages and Renaissance a white unicorn symbolized chastity, and a white lamb sacrifice and purity. It was the royal color of the Kings of France, and of the monarchist movement that opposed the Bolsheviks during the Russian Civil War (1917–1922). Greek and Roman temples were faced with white marble, and beginning in the 18th century, with the advent of neoclassical architecture, white became the most common color of new churches, capitols and other government buildings, especially in the United States. It was also widely used in 20th century modern architecture as a symbol of modernity and simplicity.
According to surveys in Europe and the United States, white is the color most often associated with perfection, the good, honesty, cleanliness, the beginning, the new, neutrality, and exactitude. White is an important color for almost all world religions. The Pope, the head of the Roman Catholic Church, has worn white since 1566, as a symbol of purity and sacrifice. In Islam, and in the Shinto religion of Japan, it is worn by pilgrims. In Western cultures and in Japan, white is the most common color for wedding dresses, symbolizing purity and virginity. In many Asian cultures, white is also the color of mourning.
https://brilliant.org/discussions/thread/tenth-grade-math-in-usa/
# Tenth grade math in the USA
Hey, I am going to travel to the USA and I am going to study tenth grade there. What should I prepare? And by the way, how can you guys solve so many difficult math problems? Is it by studying from the library?
Note by Khoi Trinh Dinh
3 years, 11 months ago
Tenth grade math in the USA isn't really that hard, depending on what class you're going into. If you can solve even half the problems on here you should do fine. · 3 years, 11 months ago
10th grade math in USA is algebra and geometry. Check out the state standards, as each state is different. · 3 years, 11 months ago
A concept means one topic of maths, e.g. probability, locus, etc.
For example: 5th grade covers mensuration and trigonometry; 6th grade covers the coordinate system and equations (in that class there is no repeat of mensuration and trigonometry).
Question to Kenneth C · 3 years, 11 months ago
No, not exactly. To some extent you may be right, but I know from experience that it's different in middle school and high school. · 3 years, 11 months ago
Thanks · 3 years, 11 months ago
What does "tenth grade" mean? · 3 years, 11 months ago
It means $$10^{th}$$ class in India.
$$10^{th}$$ class... In India we write SSC exams in $$10^{th}$$ class; now we also have NTSE for $$10^{th}$$ class students · 3 years, 11 months ago
Are the topics the same? · 3 years, 11 months ago
ya · 3 years, 11 months ago
How can we learn new topics in mathematics? · 3 years, 11 months ago
A good way to learn new math is to get some good books about math. I highly recommend the Art of Problem Solving books and their website. Specifically Volume 1 and Volume 2 of their books. · 3 years, 11 months ago
Spend some time daily on Brilliant. · 3 years, 11 months ago
https://read.dukeupress.edu/world-policy-journal/article/29/1/82/79772/Clearing-the-Air
Ulaanbaatar, Mongolia—As the sun rises over the frozen steppes, mothers and grandmothers across Mongolia emerge from their homes—white, felt-covered, round tents called gers. Hands hidden from the cold in the long sleeves of their warm deels, they clutch a ladle in one hand and an urn of milk tea in the other. Offering tsainii deej urguh, they throw a ladle-full of milk tea into the sky to honor the heavens. For many Mongolian women, the view is of blue sky and the open steppe, the horizon perhaps dotted with their family’s herd of goats and sheep. But for those who live within sight of the capital, the panorama is quite different. Before them lies a vast city, home to more than a million people, jammed into an urban sprawl of closely packed gers, Soviet-era apartments, and new high-rises. Yet in the heart of the Mongolian winter, they can see none of this. Instead, a thick, gray layer of pollution obscures the horizon. Ulaanbaatar, capital of the most sparsely populated country on the planet and renowned for its pristine countryside and nomadic herdsmen, has some of the world’s most toxic air.
The pollution is at its most intense in the winter, when over 100,000 ger stoves must work overtime to offset frigid outdoor temperatures that can dip as low as 40 degrees below zero—where Fahrenheit and Celsius overlap. In addition to generating heat, the ger stoves spew a hazardous type of pollution called particulate matter (pm) in the form of soot. Other sources, such as coal-fired power plants and vehicles, also contribute to the city’s pollution problem. When concentrations of pm in the air are elevated and the individual particles are small enough, as they are in soot, these small particles penetrate deep into the lungs, causing serious respiratory and cardiovascular problems. In the winter, portions of Ulaanbaatar can reach daily average concentrations of a particular type of pm known as pm10—particles with a diameter of 10 microns or smaller—that are 70 to 85 times the maximum daily exposure recommended by the World Health Organization (WHO).
Despite comparatively less-polluted summers when indoor heating is unnecessary, the intense winter pollution raises the annual average pm10 concentration so high that Ulaanbaatar ranks second out of over 1,000 cities in a list of the most pm10-polluted cities on the planet, according to a 2011 WHO survey—far worse than Beijing, Mexico City, or Bangkok. Only Ahwaz, Iran has a higher annual pm10 concentration.
## Extremes
While Ulaanbaatar represents an extreme case, it is not alone in having increasing difficulty balancing its growing need for energy with maintaining safe air. Since 2008, and for the first time in human history, we now live in a world where over half of the population resides in urban areas. The urbanization rate is highest in developing nations, where more than three-quarters of humanity live. While urban centers create the potential for efficient energy and resource disbursement, the reality is that many cities in developing countries are forced to produce more energy in inexpensive ways that compromise local air quality and their citizens’ health.
Currently, the WHO estimates that indoor and outdoor pollution causes 3.3 million premature deaths per year—substantially higher than the annual death rate from malaria and AIDS combined. The health impacts of air pollution are felt most acutely in Asia, where over the last 30 years, the population has doubled and the need for cheap energy skyrocketed. A 2002 WHO report states that, of the total urban air pollution-related deaths in 2000, nearly two-thirds occurred in developing countries in Asia. Of the WHO's 10 most polluted cities, only Gaborone in Botswana is not a growing, impoverished city in Asia.
In Ulaanbaatar, the health impacts are already severe. A recent study from British Columbia's Simon Fraser University offers a conservative estimate that 10 percent of all deaths in Ulaanbaatar are traceable to pm pollution, while a December 2011 World Bank report estimates that approximately 1,600 deaths in Ulaanbaatar, or nearly 25 percent of all the city's deaths, can be attributed to pm every year. pm pollution has also been linked with a wide range of other negative health impacts including reproductive effects, respiratory infections, asthma irritation, impaired lung growth, cardiovascular disease, heart attacks, and strokes. The World Bank calculates that the adverse health impact of high pm levels results in nearly $500 million in losses each year—some 20 percent of Ulaanbaatar's 2008 GDP. In short, the air pollution in Ulaanbaatar is at tragically high levels with devastating health and economic consequences to the one-third of Mongolia's total population who live in the capital.

The health impact of particulate matter isn't the only concern. Air pollution, especially soot from combustion sources, can affect global and regional climates far beyond their immediate source. According to some estimates, a major component of soot called black carbon is second only to carbon dioxide as the largest contributor to global warming. This material, as its name suggests, is black, and therefore can efficiently absorb and scatter incoming sunlight. Black carbon in soot can trap radiation from the sun in the Earth's atmosphere while simultaneously dimming the surface. Regionally, pollution in the form of pm can affect local climate by altering the types of clouds that form—indeed whether they form at all, and consequently can influence regional precipitation patterns. The increasing urbanization of Ulaanbaatar and its semi-arid climate make it especially vulnerable to small changes in precipitation. The city already struggles to provide safe water to many of those living in the ger district. Currently, there is no published research investigating how pm levels in Ulaanbaatar could be affecting precipitation and consequently threatening the delicate water supply in the region.

## Push Me, Pull You

The engine of one of the world's most rapidly developing economies, Ulaanbaatar is a prime example of an Asian city that is experiencing rapid population and economic growth while suffering severe environmental consequences due to increased energy consumption. Yet, Mongolia's capital has its own unique set of circumstances that have created a perfect storm for the current air pollution crisis that now plagues the city.

Ulaanbaatar is spread out along the Tuul River in a valley formed by the Khentii Mountains on the north and south. Though historically the valley provided proximity to fresh water and shelter from harsh Siberian winds to the north, the city's geography now intensifies the effects of the particles released from its pollution sources. Its valley location keeps winds from moving the pollution and dispersing it over a wider area. In the winter, the air in the valley also often experiences what is known as a temperature inversion, a common phenomenon found at higher latitudes and in mountain valleys. Essentially, temperature inversions cause pockets of cold air to lay stagnant directly above the valley, keeping polluted air trapped there, enveloping the entire city.
But there are many more man-made problems generated by the movement of the bulk of Mongolia’s population into this single, increasingly congested urban area. Over the past two decades, the population of the city has more than doubled. Most of this growth is attributed to an influx of nomads from the countryside. In the 1990s, the Mongolian government transitioned from a Communist authoritarian state to a parliamentary democracy, and the accompanying switch from a planned economy to a free one was not gentle. Deep recessions hit nomads with the privatization of herds and the disappearance of social safety nets that had been in place during the Communist- era. Additionally, the 2000s brought four dzuds—winter conditions that include abnormally extended periods of snowcover, high winds, and extreme cold that make it impossible for livestock to graze. Extraordinarily harsh even by Mongolian standards, the most recent dzud of Winter 2009-2010 wiped out millions of livestock, and with them, the livelihoods of hundreds of thousands of nomads. Compounding the natural force pushing nomads into Ulaanbaatar is the pull of the city itself. In addition to being the designated seat of government, Ulaanbaatar is unquestionably the cultural, educational, and economic capital of the country. The two next most-populated cities of Erdenet and Darkhan are an order of magnitude smaller, with populations below 100,000. And perhaps most crucially, Ulaanbaatar is the business epicenter of a copper, gold, and coal mining boom that some economists project will double the 2010 GDP per capita in Mongolia by 2015. The massive influx of former nomads into Ulaanbaatar has spawned a makeshift settlement known as the ger district. Blanketing the northern portion of the city, this vast collection of gers, wooden shacks, and fences held together by a tenuous network of dirt roads now accounts for over half of Ulaanbaatar’s population. Most of this area remains disconnected from Ulaanbaatar’s centralized heating system. The extreme cold of the Mongolian winter creates an urgent need for at least 150,000 individual heating sources in the ger district—largely individual coal and wood-burning stoves with chimneys. But wood and coal can be costly for those in the ger district, accounting for nearly half of the poorest ger families’ monthly expenses during the winter. Those who can’t afford wood or coal resort to burning rubber-coated bricks, tires, and even garbage. While the main source of particulate pollution is from the individual stoves in the ger district, other contributing sources include heat-only-boilers that provide warmth to larger ger district buildings, such as schools and hospitals; three coal-fired power plants that provide electricity to the central part of the city; and road dust from the vast network of dirt paths that persist even within the city limits. At the same time, at the other end of the economic spectrum, the nation’s growing wealth and burgeoning middle class have led to a proliferation of automobiles far beyond the capacity of the existing fuel infrastructure and fledgling road network to handle them safely. In just the past few years, Mongolia finally restricted the sale of leaded gasoline, notorious for its harmful neurological effects and banned many years ago by the vast majority of other countries. But vehicles that used leaded gasoline before the restriction had their catalytic converters irreparably damaged, so they now release pollutants directly into the atmosphere. 
About a third of all Mongolian vehicles require diesel, which produces more pm than the same mass of other fuels. Moreover, the proliferation of cars far exceeding road capacity leads to miles-long traffic jams along the capital's single main thoroughfare, Peace Avenue. Cars, taxis, buses, and trucks—at least half of them more than ten years old—sit idling, at times for hours, belching noxious fumes and particulate matter into the already polluted atmosphere.

The high pollution has had a significant effect on the psyche of Ulaanbaatar residents. There are jokes about renaming the city "Utaanbaatar," utaan meaning smog in Mongolian. Many express pessimism about conditions improving any time soon. "I have a headache when I walk in the polluted air, and I am thinking lately to go abroad and live there for several years just to escape from the pollution," says Khishigbayar Tsogbadrakh, a young professional woman working in Ulaanbaatar. For those Mongolians without the option of living abroad, the pollution represents a real dilemma, choosing between a family's health and living in the city with the nation's best economic and educational opportunities.

Sereeter Lodoysamba, a professor at the National University of Mongolia who has been studying air pollution in Ulaanbaatar for several years and helped launch the first national stove-testing laboratory, raises the next logical question: "Air pollution levels exceed the WHO guidelines by a factor of 35. It is obvious that urgent intervention is needed. But how?"

## Mitigation

One organization aggressively working to tackle the problem is the Millennium Challenge Account-Mongolia (mca-m)—a Mongolian-run agency funded by its parent organization, the Millennium Challenge Corporation, a United States foreign aid program. Since 2010, the Energy and Environment project of the mca-m has been concentrating on addressing air pollution issues by identifying and selling heavily subsidized energy efficient products, which have the added advantage of not only reducing emissions but cutting fuel costs to consumers as well.

In 2011, the $45 million project sold 40,000 energy efficient stoves, representing over a quarter of all Ulaanbaatar ger district homes. The stoves, subsidized by both the Government of Mongolia and mca-m, sell for only a fraction of their actual cost and at a price that is five to eight times cheaper than traditional ger stoves. According to mca-m Energy and Environment Project Director Dr. Sovd Mangal, the particulate matter emitted from each energy efficient ger stove should be reduced by at least 70 to 80 percent, and the stoves also require 30 to 50 percent less raw coal than traditional stoves—translating into significantly lower fuel costs for the homeowner. Mangal predicts another 30,000 to 40,000 new stoves will be sold over the rest of the winter and spring of 2012. In addition to energy-efficient stoves, mca-m has sold over 9,000 ger blankets, which provide extra insulation to the walls of the ger; 2,800 ger vestibules, which provide a buffer space between the outside and the ger living area; and nearly 100 energy efficient concrete homes.
The Mongolian government has been working with mca-m and is also instituting other measures to address the problem. The government has helped subsidize the energy efficient stoves sold by mca-m and has begun to promote what it claims are cleaner-burning, alternative stove fuels. This past winter, the government instituted a raw coal ban in one of its nine districts as a test case, offering alternative fuels such as semi-coke coal and wood chips, subsidized to 60 percent of their normal costs and comparable to the price of raw coal. If successful, the area of the ban on polluting fuels may be expanded. It is unclear how the government will gauge success. But later this year, mca-m will be testing alternative fuels with their stoves to assess how compatible the two are for generating low pm emissions. The federal government has also replaced hundreds of diesel-burning public buses and taxis with newer, more efficient and cleaner-burning models. Meanwhile, the Asian Development Bank has also funded the creation of the first stove efficiency testing laboratory in Mongolia, which could help determine what stoves are both pm and fuel efficient. The laboratory could help provide important data necessary for a national stove rating system that would inform consumers on a specific brand’s pm and fuel efficiency. Unfortunately, the source of the laboratory’s long-term funding remains uncertain.
## Signs of Progress
In January 2012, in the midst of the first heating season since the mca-m energy efficient stoves have been in use, Mangal shared quantitative evidence of the project’s success: “Ulaanbaatar City Air Quality Office has been doing measurements near the ger areas and they are saying that the numbers on the measurements show, compared to the previous year, 20 percent, sometimes 25 percent reduction in pm.” It is unclear whether this comparison takes into account possible temperature differences and meteorological conditions between the two years. Still, it is a statistic that inspires optimism, considering Mangal’s estimate that 15,000 to 20,000 additional families had migrated to Ulaanbaatar by mid-winter. A more definitive verdict on the success of the project should become possible as particulate monitoring data from various air quality stations positioned around the city are analyzed in detail in the upcoming months.
No matter what the ultimate ability may be of the mca-m program to directly ease air pollution in the short-term, the project has already had long-term impact on the way residents of Ulaanbaatar view their individual ability to effect change. Mangal says when he and his team have meetings with the public, “people tell stories, like ‘On our street, everyone bought these products, and now our street is clean, with no smoke. But we look at the other streets, they are smoking. Why can’t they follow us too?’ Now they are realizing that it is possible for people to reduce the pollution. So now they are pushing others to take care of the pollution.”
He continues, “The main [long-term] advantage of this project is to show people how they can improve their living conditions by using better energy efficient products, better insulation, better combusting stoves. And of course, in the near future, maybe better, cleaner fuel. Now many people know air pollution is very bad for the health, know how to reduce the smoke pollution … and that it takes everyone’s effort to use energy efficient products.”
## Futurescape
The mca-m program will end in Fall 2013, though there are plans for the Mongolian government to continue its work through the Clean Air Fund, a project managed by the Ministries of Environment and Energy as well as the Ulaanbaatar city government. The government also has plans to offer significant discounts for electric rates for ger district households that use electric heaters instead of stoves. While electric heaters would still consume electricity from the city’s coal-fired power plants, the net result would lower overall pm emissions. Other longer-range plans involve building and subsidizing apartments to move residents out of the ger district and into the main heating grid. All these efforts are indeed more efficient. Central power plants generate heat through combustion at much higher temperatures, which means less pm is produced. At the same time, power plants emit pollution at higher altitudes than ger chimneys and do have some pm filters. Reduced levels of pm injected higher into the atmosphere have a smaller effect on air quality. As the pm travels outside Ulaanbaatar, it will diffuse into very low concentrations as it mixes with clean air.
If successful, Ulaanbaatar could serve as a role model for other cities.
National University's Lodoysamba points out that another essential key to solving the air pollution crisis is to continually assess the progress and future direction of mitigation efforts through research like air quality monitoring, pollution source assessment, health impact analysis, and maintenance of the Asian Development Bank Stove Testing Laboratory. "Without research, huge money will be spent on nothing. We have an example that previous years' government spent 7 billion tugriks [$5 million] on the production of briquettes, which cannot be burned well in simple stoves. … Research can [help] tell policymakers what they can do," he says.
The World Bank’s December 2011 report on Ulaanbaatar air pollution emphasizes the integral role research should play in reducing the city’s air pollution in four parts of its 11-part recommended strategy plan. The strategy includes guidelines such as establishing target dates for reaching basic air quality standards; ensuring that abatement methods have demonstrated the ability to sufficiently lower emissions; strengthening air quality monitoring, emissions inventory, and health impact studies; and providing long-term funding for the stove testing laboratory. Following these guidelines will require more funding than is currently available for air pollution-related research. It will entail both attracting international talent and retaining Mongolia’s own experts in air pollution, atmospheric science, and public health-related fields.
https://smartodds.blog/2019/11/22/one-in-a-million/
# One-in-a-million
Suppose you can play on either of 2 slot machines:
1. Slot machine A pays out with probability one in a million.
2. Slot machine B pays out with probability one in 10.
Are you more likely to get a payout with one million attempts with slot machine A or with 10 attempts on slot machine B?
[Embedded tweet showing the payout calculation for both slot machines.]
So, there’s a bigger probability (0.65) that you’ll get a payout from 10 spins of slot machine B than from a million spins of slot machine A (probability 0.63).
Hopefully, the calculations above are self-explanatory. But just in case, here’s the detail. Suppose you have N attempts to win with a slot machine that pays out with probability 1/N.
1. First we’ll calculate the probability of zero payouts in the N spins.
2. This means we get a zero payout on every spin.
3. The probability of a zero payout on one spin is one minus the probability of a win: 1 – 1/N.
4. So the probability of no payout on all the spins is
$(1-1/N)^N$
5. And the probability of at least one payout is
$1- (1-1/N)^N$
As explained in the tweet, with N=10 this gives 0.65 and with N=1,000,000 it gives 0.63. The tweet’s author explains in a follow-up tweet that he was expecting the same answer both ways.
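A few lines of Python (my addition, not from the original post) reproduce these numbers and show the large-N limit of 1 - 1/e:

```python
import math

def p_at_least_one_win(N):
    """Probability of at least one payout in N attempts at win probability 1/N."""
    return 1 - (1 - 1 / N) ** N

for N in (1, 10, 1_000_000):
    print(N, round(p_at_least_one_win(N), 4))
# 1 -> 1.0, 10 -> 0.6513, 1000000 -> 0.6321

print(round(1 - 1 / math.e, 4))  # 0.6321, the limiting value as N grows
```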
But as someone in the discussion pointed out, that logic can’t be right. Suppose you had one attempt with slot machine C which paid out with probability 1. In other words, N=1 in my example above. Then, of course, you’d be bound to get a payout, so the probability of at least one payout is 1. So, although it’s initially perhaps surprising that you’re more likely to get a payout with 10 shots at slot machine B than with a million shots at slot machine A, the dependence on N becomes obvious when you look at the extreme case of slot machine C.
Footnote: What does stay the same in each case however is the average number of times you will win. With N shots at a slot machine with win probability 1/N, you will win on average once for any choice of N. Sometimes you’ll win more often, and sometimes you may not win at all (except when N=1). But the average number of wins if you play many times will always be 1.
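A quick simulation (again my addition, included only as a check) confirms both claims for 10 spins at a one-in-ten payout probability:

```python
import random

trials, N = 200_000, 10
# Number of wins in each simulated session of N spins at win probability 1/N.
wins_per_trial = [sum(random.random() < 1 / N for _ in range(N)) for _ in range(trials)]

print(sum(wins_per_trial) / trials)                  # close to 1.0: average number of wins
print(sum(w >= 1 for w in wins_per_trial) / trials)  # close to 0.65: chance of at least one win
```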
https://crypto.stackexchange.com/questions/6029/aes-cbc-mode-or-aes-ctr-mode-recommended/89053
# AES CBC mode or AES CTR mode recommended?
What are the benefits and disadvantages of CBC vs. CTR mode? Which one is more secure?
• Please show some research effort in your questions.. and they can both be made just as secure if done right. They both have their pros and cons and target different uses. – Thomas Jan 19 '13 at 12:19
• OK, asking because CBC mode in SSH should not be used, and neither should TLS 1.0 or below. Of course TLS 1.0 and below should not be used anyway. I am concerned that I have seen no bugs in OpenSSL for CTR, but several for CBC. I'm wondering if AES-CTR is a better choice with TLSv1.1 and above. openssh.com/txt/cbc.adv kb.cert.org/vuls/id/958563 – deb_infosec Sep 29 '16 at 17:40
• The Q does not specify communications at all much less SSH or SSL/TLS in particular. That SSH "vulnerability" hasn't been heard from again since 2008, and the OpenSSH folks, who are above-averagely aggressive on security, still have AES- and 3DES-CBC enabled (but not preferred) client-side in 7.3. There are no SSL/TLS ciphersuites with AES-CTR (or anything-CTR) as such, but in TLS1.2 (and 1.3 when it arrives) there are AEAD suites using AES-GCM and AES-CCM both of which are based on CTR. (Also Camellia-GCM, but I haven't seen that implemented.) – dave_thompson_085 Sep 30 '16 at 11:22
## 2 Answers
I wrote a rather lengthy answer on another site a few days ago. Bottom-line is that CTR appears to be the "safest" choice, but that does not mean safe. The block cipher mode is only part of the overall protocol. Every mode has its quirks and requires some extra systems in order to use it properly; but in the case of CTR, the design of these extra systems is somewhat easier. For instance, when compared to OFB, there is no risk of a "short cycle" with CTR.
This is why actually usable modes like EAX and GCM internally use CTR.
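To make that advice concrete, here is a minimal sketch (mine, not part of the answer) using the AESGCM helper from the Python `cryptography` package, which handles the CTR-based keystream and the authentication tag together; it assumes the package is installed, and the nonce must never be reused with a given key.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # random 256-bit key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; must be unique per key
ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", b"optional associated data")

# Decryption verifies the authentication tag and raises InvalidTag on tampering.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"optional associated data")
print(plaintext)
```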
• You should make it clearer that he should use EAX or GCM, or encrypt-then-HMAC. CTR or CBC by themselves can be very, very dangerous. – Stephen Touset Jan 20 '13 at 1:16
• CTR mode is also easier to run in parallel on multiple cores, resulting in better performance. – r3mainer Sep 29 '16 at 21:55
• Why do you intrigue us so much??? "actually usable modes like EAX and GCM"... Thanks for providing links though! – Paul-Sebastian Manole Jul 22 '19 at 8:46
• @squeamishossifrage, thanks for the quite important tip. Here's a validation reference from Wikipedia stating that CTR mode is well suited to operate on a multi-processor machine where blocks can be encrypted in parallel. If you can include it in your comment, I will delete this comment. – Paul-Sebastian Manole Jul 22 '19 at 8:49
### In the context of encrypting a MariaDB database, according to the documentation
choosing-an-encryption-algorithm
There are two choices of encryption algorithm:
1. The AES_CBC mode uses AES in Cipher Block Chaining (CBC) mode.
2. The AES_CTR mode uses AES in two slightly different modes in different contexts. When encrypting table space pages (such as pages in InnoDB, XtraDB, and Aria tables), it uses AES in Counter (CTR) mode. When encrypting temporary files (where the ciphertext is allowed to be larger than the plaintext), it uses AES in Galois/Counter Mode (GCM).
The recommended algorithm is AES_CTR, but this algorithm is only available when MariaDB is built with recent versions of OpenSSL. If the server is built with wolfSSL or yaSSL, then this algorithm is not available.
MariaDB [(none)]> show global variables like "version_ssl_library";
+---------------------+-----------------------------+
| Variable_name | Value |
+---------------------+-----------------------------+
| version_ssl_library | OpenSSL 1.1.1f 31 Mar 2020 |
+---------------------+-----------------------------+
MariaDB [mysql]> show global variables like "tls_version";
+---------------+-------------------------+
| Variable_name | Value |
+---------------+-------------------------+
| tls_version | TLSv1.1,TLSv1.2,TLSv1.3 |
+---------------+-------------------------+
|
2021-05-12 21:33:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3185836672782898, "perplexity": 4186.328181666162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00414.warc.gz"}
|
http://chasethedevil.blogspot.com/2014/11/flat-volatility-surfaces-discrete.html
|
## Tuesday, November 25, 2014
### Flat Volatility Surfaces & Discrete Dividends
In papers around volatility and cash (discrete) dividends, we often encounter the example of the flat volatility surface. For example, the OpenGamma paper presents this graph:
It shows that if the Black volatility surface is fully flat, there are jumps in the pure volatility surface (corresponding to a process that includes discrete dividends in a consistent manner) at the dividend dates or equivalently if the pure volatility surface is flat, the Black volatility jumps.
This can be traced to the fact that the Black formula does not respect C(S,K,Td-) = C(S,K-d,Td) as the forward drops from F(Td-) to F(Td-)-d, where d is the dividend amount at Td, the dividend ex-date.
Unfortunately, those examples are not very helpful. In practice, the market observables are just Black volatility points, which can be interpolated into volatility slices for each expiry without regard to dividends, not a full volatility surface. Discrete dividends will mostly fall between two slices: the Black volatility jump will happen on some time-interpolated data.
While the jump size is known (it must obey to the call price continuity), the question of how one should interpolate that data until the jump is far from trivial even using two flat Black volatility slices.
The most logical approach is to consider a model that includes discrete dividends consistently. For example, one can look up the Black volatility corresponding to the price of an option under a piecewise lognormal process with jumps at the dividend dates. It can be priced by applying a finite difference method to the PDE. Alternatively, Bos & Vandermark propose a simple spot- and strike-adjusted Black formula that obeys the continuity requirement (the Lehman model), which, in practice, stays quite close to the piecewise lognormal model price. Another possibility is to rely on a forward modelling of the dividends, as in Buehler (if one is comfortable with the idea that the option price will then depend ultimately on dividends past the option expiry).
Recently, a Wilmott article suggested to only rely on the jump adjustment, but did not really mention how to find the volatility just before or just after the dividend. Here is an illustration of how those assumptions can change the volatility in between slices using two dividends at T=0.9 and T=1.1.
In the first graph, we just interpolate linearly in forward moneyness the pure vol from the Bos & Vandermark formula, as it should be continuous with the forward (the PDE would give nearly the same result) and compute the equivalent Black volatility (and thus the jump at the dividend dates).
In the second graph, we interpolate the two Black slices linearly in time until we find a dividend, at which point we impose the jump condition and repeat the process until the next slice. We process forward (while the Wilmott article processes backward), as it seemed a bit more natural to make the interpolation not depend on future dividends. Processing backward would just make the last part flat and the first part down-sloping. In this example backward processing would be closer to the Bos Black volatility, but when the dividends are near the first slice, the opposite becomes true.
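As a rough illustration of the jump condition alone (an added sketch, not taken from the cited papers; it assumes an undiscounted Black formula, scipy, and made-up numbers), one can solve for the Black volatility just after the dividend from price continuity C(F, K, sigma_before) = C(F - d, K - d, sigma_after):

from math import log, sqrt
from scipy.stats import norm
from scipy.optimize import brentq

def black_call(F, K, sigma, T):
    # undiscounted Black call price
    d1 = (log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return F * norm.cdf(d1) - K * norm.cdf(d2)

def vol_after_dividend(F_minus, K, sigma_before, T_div, d):
    # impose call price continuity across the dividend and solve for the new vol
    target = black_call(F_minus, K, sigma_before, T_div)
    return brentq(lambda s: black_call(F_minus - d, K - d, s, T_div) - target, 1e-6, 5.0)

print(vol_after_dividend(F_minus=100.0, K=100.0, sigma_before=0.20, T_div=0.9, d=3.0))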
While the scale of those changes is not that large on the example considered, the choice can make quite a difference in the price of structures that depend on the volatility in between slices. A recent example I encountered is the variance swap when one includes adjustment for discrete dividends (then the prices just after the dividend date are used).
To conclude, if one wants to use the classic Black formula everywhere, the volatility must jump at the dividend dates. Interpolation in time is then not straightforward, and one will need to rely on a consistent model to interpolate. It is not exactly clear, then, why anyone would stay with the Black formula except for familiarity.
|
2017-03-28 06:18:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8417167067527771, "perplexity": 1256.8171280178124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189680.78/warc/CC-MAIN-20170322212949-00519-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://cs.stackexchange.com/questions/74623/form-of-conditional-observation-probabilities-in-a-pomdp
|
# Form of conditional observation probabilities in a POMDP
Consider a partially observable Markov decision process (POMDP), see here for a complete definition.
My question is in relation to the conditional observation probabilities (denoted by $O(o|s',a)$ in the above link). This represents the probability of seeing observation $o$ if the current state is $s'$ and the action was $a$. Why is the conditional probability defined in this way?
Is there any issue with defining the conditional observation probability in terms of the previous state and action, $O(o|s,a)$, or in terms of the current state, action, and previous state as $O(o|s',a,s)$?
• A current observation is supposed to be based on the newest state, which is $s'$ in your case. Due to this, the observation need not be conditioned on the past state $s$. This makes sense because, given that you want to know where you are now, you only care about your current state and not where you were before. If your problem did not satisfy the Markov property, this could be different. – spektr Apr 27 '17 at 23:59
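For concreteness, here is a minimal toy sketch (an addition; the two-state model and its numbers are hypothetical, not from the question) of the standard belief update in which O(o|s',a) appears as the correction term: b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s).

# Toy two-state POMDP (made-up numbers) showing where O(o|s',a) enters the belief update.
states = ["s0", "s1"]

T = {("s0", "a0"): {"s0": 0.7, "s1": 0.3},      # T[(s, a)][s'] = P(s' | s, a)
     ("s1", "a0"): {"s0": 0.2, "s1": 0.8}}

O = {("s0", "a0"): {"o0": 0.9, "o1": 0.1},      # O[(s_next, a)][o] = P(o | s', a)
     ("s1", "a0"): {"o0": 0.3, "o1": 0.7}}

def belief_update(b, a, o):
    unnormalized = {}
    for s_next in states:
        predicted = sum(T[(s, a)][s_next] * b[s] for s in states)   # prediction step
        unnormalized[s_next] = O[(s_next, a)][o] * predicted        # correction with O(o|s',a)
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

print(belief_update({"s0": 0.5, "s1": 0.5}, "a0", "o1"))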
|
2019-05-27 04:12:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8822899460792542, "perplexity": 140.96309247666412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260658.98/warc/CC-MAIN-20190527025527-20190527051527-00155.warc.gz"}
|
http://csf.wbtravis.org/curriculum/6/notes/
|
Lecture 6
Last Time
• We learned some basics about the internet, and technologies like:
• TCP/IP, protocols by which computers can send each other messages across a network of many computers, using IP addresses and port numbers.
• HTTP, a protocol by which browsers, and other programs, can make a request for a webpage (or other content) from a server.
• URLs, including a domain name and parameters like ?q=cats, to pass along additional inputs to a server.
• HTTP status codes, like 404 Not Found, which shows us an error page, and 301 Moved Permanently, which redirects us to the right URL if a website has moved.
• HTML and CSS, languages by which we can format and stylize webpages.
• JavaScript and the DOM, Document Object Model, by which we can change nodes in a tree representation of an HTML page, thereby changing the page itself.
Python
• Python is another programming language, but it is interpreted (run top to bottom by an interpreter, like JavaScript) and higher-level (including features and libraries that are more powerful).
• For example, we can implement the entire resize program in just a few lines with Python:
import sys
from PIL import Image
if len(sys.argv) != 4:
sys.exit("Usage: python resize.py n infile outfile")
n = int(sys.argv[1])
infile = sys.argv[2]
outfile = sys.argv[3]
inimage = Image.open(infile)
width, height = inimage.size
outimage = inimage.resize((width * n, height * n))
outimage.save(outfile)
• First, we import (like include) a sys library (for command-line arguments) and an Image library.
• We check that there are the right number of command-line arguments with len(sys.argv), and then create some variables n, infile, and outfile, without having to specify their types.
• Then, we use the Image library to open the input image, getting its width and height, resizing it with a resize function, and finally saving it to an output file.
• Let’s take a look at some new syntax. In Python, we can create variables with just counter = 0. To increment a variable, we can use counter = counter + 1 or counter += 1.
• Conditions look like:
if x < y:
    something
elif x > y:
    something
else:
    something
• Unlike in C and JavaScript (whereby braces { } are used for blocks of code), the exact indentation of each line is what determines the level of nesting in Python.
• Boolean expressions are slightly different, too:
while True:
something
• Loops can be created with another function, range, that, in the example below, returns a range of numbers from 0, up to but not including 50:
for i in range(50):
something
• In Python, we’ll start by looking at just a few data types:
• bool, True or False
• float, real numbers
• int, integers
• str, strings
• dict, a dictionary of key-value pairs, that act like hash tables
• list, like arrays, but can automatically resize
• range, range of values
• set, a collection of unique things
• tuple, a group of two or more things
• In Python, we can include the CS50 library too, but our syntax will be:
from cs50 import get_float, get_int, get_string
• Notice that we specify the functions we want to use.
• In Python, we can run our program without compiling it with python hello.py (or whatever the name of our file is).
• python is name of the program that we’re actually running at the command line, and it is an interpreter which can read our source code (written in the language Python) and run it, one line at a time. (Technically, there is a compiler that turns our source code into something called bytecode that the interpreter actually runs, but that is abstracted away for us.)
Data types in Python
• Our first hello.py program is just:
print("hello, world")
• Notice that we didn’t need a main function, or anything that we needed to import for the print function. The print function in Python also adds a new line for us automatically.
• Now we can run it with python hello.py.
• We can get strings from a user:
from cs50 import get_string
s = get_string("Name: ")
print("hello,", s)
• We create a variable called s, without specifying the type, and we can pass in multiple variables into the print function, which will print them for us on the same line, separated by a space automatically.
• To avoid the extra spaces, we can put variables inside a string similar to how they are included in C: print(f"hello, {s}"). Here, we’re saying that the string hello, {s} is a formatted string, with the f in front of the string, and so the variable s will be substituted in the string. And we don’t need to worry about the variable type; we can just include them inside strings.
• We can do some math, too:
from cs50 import get_int
x = get_int("x: ")
y = get_int("y: ")
print(f"x + y = {x + y}")
print(f"x - y = {x - y}")
print(f"x * y = {x * y}")
print(f"x / y = {x / y}")
print(f"x mod y = {x % y}")
• Notice that expressions like {x + y} will be evaluated, or calculated, before it’s substituted into the string to be printed.
• By running this program, we see that everything works as we might expect, even dividing two integers to get a floating-point value. (To keep the old behavior of always returning a truncated integer with division, there is the // operator.)
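• For example (a small illustration added to these notes):
print(7 / 2)     # 3.5, true division always produces a float
print(7 // 2)    # 3, floor division keeps an integer result
print(-7 // 2)   # -4, // rounds down (toward negative infinity)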
• We can experiment with floating-point values:
from cs50 import get_float
x = get_float("x: ")
y = get_float("y: ")
z = x / y
print(f"x / y = {z}")
• We see the following when we run this program:
\$ python floats.py
x: 1
y: 10
x / y = 0.1
• We can print more decimal places with syntax like print(f"x / y = {z:.50f}"):
x / y = 0.10000000000000000555111512312578270211815834045410
• It turns out that Python still has floating-point imprecision by default, but there are some libraries that will use more memory to store decimal values more precisely.
• We can see if Python has integer overflow:
from time import sleep
i = 1
while True:
print(i)
i *= 2
sleep(1)
• We use the sleep function to pause our program for one second, but double i over and over. And it turns out that integers in Python can be as big as memory allows, so we won't see overflow for a very long time.
Programming in Python
• Let’s take a closer look at conditions:
from cs50 import get_int
# Get x from user
x = get_int("x: ")
# Get y from user
y = get_int("y: ")
# Compare x and y
if x < y:
print("x is less than y")
elif x > y:
print("x is greater than y")
else:
print("x is equal to y")
• Notice that we use consistent indentation, but we don’t need parentheses or braces for our conditions.
• Comments, too, start with just a single # character.
• We can compare strings the way we might expect:
from cs50 import get_string
c = get_string("Answer: ")
if c == "Y" or c == "y":
    print("yes")
elif c == "N" or c == "n":
    print("no")
• Strings can be compared directly, and Boolean expressions can include the words and and or.
• We can write functions in Pythons like this:
def main():
for i in range(3):
cough()
def cough():
"""Cough once"""
print("cough")
if __name__ == "__main__":
main()
• We use the def keyword to define a function cough, indicating that it takes no parameters, or inputs, by using just (), and call it from our main function. Notice that all the code for each function is indented additionally, instead of surrounded by braces.
• Then, at the bottom, we use a special line, if __name__ == "__main__":, to call our main function when our program is run. This way, the interpreter will know about the cough function by the time main actually calls it. We could also call cough directly, instead of main, though that would be unconventional in Python. (Instead, we want to try to be "Pythonic", following the styles and patterns encouraged by the language and its community.)
• We can add parameters and loops to our cough function, too:
def main():
cough(3)
def cough(n):
for i in range(n):
print("cough")
if __name__ == "__main__":
main()
• n is a variable that can be passed into cough, which we can also pass into range. And notice that we don’t specify types in Python, so n can be of any data type (and can even be assigned to have a value of another type). It’s up to us, the programmer, to use this great power with great responsibility.
• We can define a function to get a positive integer:
from cs50 import get_int
def main():
i = get_positive_int("Positive integer: ")
print(i)
def get_positive_int(prompt):
while True:
n = get_int(prompt)
if n > 0:
break
return n
if __name__ == "__main__":
main()
• Since there is no do-while loop in Python as there is in C, we have a while loop that will go on infinitely, but we use break to end the loop if n > 0. Then, our function will just return n.
• Notice that variables in Python have function scope by default, meaning that n can be initialized within a loop, but still be accessible later in the function.
• We can print each character in a string and capitalize them:
from cs50 import get_string
s = get_string()
for c in s:
print(c.upper(), end="")
print()
• Notice that we can easily iterate over characters in a string with something like for c in s, and we print the uppercase version of each character with c.upper(). Strings in Python are objects, like a data structure with both the value it stores, as well as built-in functions like .upper() that we can call.
• Finally, we pass in another argument to the print function, end="", to prevent a new line from being printed each time. Python has named arguments, where we can name arguments that we can pass in, in addition to positional arguments, based on the position they are in the list. With named arguments, we can pass in arguments in different orders, and omit optional arguments entirely. Notice that this example is labeled with end, indicating the string that we want to end each printed line with. By passing in an empty string, "", nothing will be printed after each character. Before, when we called print without the end argument, the function used \n as the default for end, which is how we got new lines automatically.
• We can get the length of the string with the len() function.
from cs50 import get_string
s = get_string("Name: ")
print(len(s))
• We’ll be using version 3 of Python, which the world is starting to use more and more, so when searching for documentation, we want to be sure that it’s for the right version.
• We can take command-line arguments with:
from sys import argv
if len(argv) == 2:
print(f"hello, {argv[1]}")
else:
print("hello, world")
• We check the number of arguments by looking at the length of argv, a list of arguments, and if there is 2, we print the second one. Like in C, the first command-line argument is the name of the program we wrote, rather than the word python, which is technically the name of the program we run at the command-line.
• We can print each argument in the list:
from sys import argv
for s in argv:
print(s)
• This will iterate over each element in the list argv, allowing us to use it as s.
• And we can iterate over each character, of each argument:
from sys import argv
for s in argv:
for c in s:
print(c)
print()
• We can swap two variables in Python just by reversing their orders:
x = 1
y = 2
print(f"x is {x}, y is {y}")
x, y = y, x
print(f"x is {x}, y is {y}")
• Here, we’re using x, y = y, x to set x to y at the same time as setting y to x.
• We can create a list and add to it:
from cs50 import get_int
numbers = []
# Prompt for numbers (until EOF)
while True:
# Prompt for number
number = get_int("number: ")
# Check for EOF
if not number:
break
# Check whether number is already in list
if number not in numbers:
numbers.append(number)
# Print numbers
print()
for number in numbers:
print(number)
• Here, we create an empty list called numbers with numbers = [], and we get a number from the user. If that number is not already in our list, we add it to our list. We can use not in to check if a value is (not) in a list, and append to add a value to the end of a list.
• We can create our own data structures, objects:
from cs50 import get_string
# Space for students
students = []
# Prompt for students' names and dorms
for i in range(3):
name = get_string("name: ")
dorm = get_string("dorm: ")
students.append({"name": name, "dorm": dorm})
# Print students' names and dorms
for student in students:
print(f"{student['name']} is in {student['dorm']}.")
• We create a list called students, and after we get some input from the user, we append a dictionary of key-value pairs, {"name": name, "dorm": dorm}, to that list. Here, "name" and "dorm" are the keys, and we want their values to be the variables we gathered as input. Then, we can later access each object’s values with student['name'] or student['dorm'] to print them out. In Python, we can index into dictionaries with words or strings, as opposed to just numeric indexes in lists.
• Let’s print four question marks, one at a time:
for i in range(4):
print("?", end="")
print()
• We can print a vertical bar of hash marks, too:
for i in range(3):
print("#")
• And we can print a square with a nested loop:
for i in range(3):
for j in range(3):
print("#", end="")
print()
• Now we can revisit resize.py, and it might make more sense to us now:
from PIL import Image
from sys import argv, exit
if len(argv) != 4:
    exit("Usage: python resize.py n infile outfile")
n = int(argv[1])
infile = argv[2]
outfile = argv[3]
inimage = Image.open(infile)
width, height = inimage.size
outimage = inimage.resize((width * n, height * n))
outimage.save(outfile)
• We import the Image library from something called PIL, a free open-source library that we can download and install (which doesn’t come with Python by default).
• Then, we import argv from the system library, and we check our arguments, storing them as n, infile, and outfile, converting the string input for n into an int as we do so.
• By reading the documentation for Python and the Image library, we can open files as an image, getting its size and calling a resize function on it to get another image, which we can then save to another file.
• Let’s look at another example, a spell-checker in Python:
# Words in dictionary
words = set()
def check(word):
    """Return true if word is in dictionary else false"""
    return word.lower() in words

def load(dictionary):
    """Load dictionary into memory, returning true if successful else false"""
    file = open(dictionary, "r")
    for line in file:
        words.add(line.rstrip("\n"))
    file.close()
    return True

def size():
    """Returns number of words in dictionary if loaded else 0 if not yet loaded"""
    return len(words)

def unload():
    """Unload dictionary from memory, returning true if successful else false"""
    return True
• The functions for dictionary.py are pretty straightforward, since all we need is a set(), a collection into which we can load unique values. In load, we open the dictionary file, and add each line in the file as a word (without the newline character).
• For check, we can just return whether word is in words, and for size, we can just return the length of words. Finally, we don’t need to do anything to unload, since Python manages memory for us.
|
2022-12-06 12:58:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18289674818515778, "perplexity": 2519.8511823946214}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00067.warc.gz"}
|
https://web2.0calc.com/questions/algebra_41470
|
# Algebra
A dairy needs 244 gallons of milk containing 7% butterfat. How many gallons each of milk containing 9% butterfat and milk containing 1% butterfat must be used to obtain the desired 244 gallons?
May 3, 2022
#1
Let x gallons be the amount of milk containing 9% butterfat and y gallons be that of milk containing 1% butterfat.
We have $$\begin{cases}x + y = 244\\(9\%)x + (1\%)y = (7\%)(244)\end{cases}$$ by considering the total amount of milk and the total amount of butterfat respectively.
Can you continue solving for x and y from here?
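A quick check of the arithmetic (an added sketch using Python's sympy, not part of the original reply):

from sympy import Eq, Rational, solve, symbols

x, y = symbols("x y")
eqs = [Eq(x + y, 244),
       Eq(Rational(9, 100) * x + Rational(1, 100) * y, Rational(7, 100) * 244)]
print(solve(eqs, [x, y]))   # {x: 183, y: 61}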
May 3, 2022
|
2022-09-28 18:30:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448520541191101, "perplexity": 4070.445430879851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00576.warc.gz"}
|
http://openstudy.com/updates/502b4291e4b099b91b4fa701
|
## anonymous 4 years ago Evaluate the line integral $\int_C F\cdot dr$ where $C$ is the curve given by the vector function $\hat{r}(t)=t^3\hat i-t^2 \hat j+ t \hat k$ and $F(x,y,z)=\sin x\,\hat i + \cos y\,\hat j+zx\,\hat k$. Here is how far I have gotten so far: $\hat{r}'(t)=3t^2\hat i-2t \hat j+ \hat k$
1. lgbasallote
you might want to use \cdot rather than \bullet hehe
2. anonymous
I'll change the Function F in terms of t's in just a second
3. anonymous
thanks @lgbasallote. Better? :P
4. lgbasallote
yes. yes it is.
5. anonymous
oh my, I did I similar problem 28 days ago http://openstudy.com/study#/updates/50060b97e4b06241806745b8 and I'm also looking at http://tutorial.math.lamar.edu/Classes/CalcIII/LineIntegralsVectorFields.aspx but I can't seem to remember... What do i substitute in for x and y, do I pick which one I want t to be?
6. anonymous
Hey i think I got it! $x=t^3;y=-t^2;z=t$ YES?
7. anonymous
That would give me: $F(r(t))=sin(t^3)\hat i +cos(-t^2)\hat j+t^4 \hat k$
8. anonymous
$\int_{t=0}^{t=1}\left(sin(t^3)\hat i +cos(-t^2)\hat j+t^4 \hat k\right)\cdot\left(3t^2\hat i -2t\hat j +\hat k \right) dt$
9. anonymous
and now I would just do the dot product correct?
10. anonymous
exactly
11. anonymous
$\int_0^1(3t^2sin(t^3)-2tcos(-t^2)+t^4)dt$
12. anonymous
well done
13. anonymous
wait, I have to take the integral of several products?
14. anonymous
separate them$\int_{0}^{1} 3t^2 \sin t^3 dt+...+...$
15. anonymous
oh ok $\int_0^1 3t^2sin(t^3)dt-\int_0^12tcos(-t^2)dt+\int_0^1t^4dt$ like this?
16. anonymous
yep
17. anonymous
oh integration by parts! duh =D
18. anonymous
$-\cos t^3$ :-)
19. TuringTest
no integration by parts you can do it all with u-subs
20. TuringTest
do you get that?
21. anonymous
Oh I see it now!!!!
22. TuringTest
cool :)
23. anonymous
u=t^2 du=2t dt for the second integral
24. TuringTest
yep
25. anonymous
u=t^3 and du=3t^2 dt for the first integral...LOL that took me a while!
26. TuringTest
yeah, it's well set-up for the u-sub thing :)
27. anonymous
u cooked the problem
28. anonymous
$-cos(1)+sin(1)+2$ My algebra is probably wrong...but Yeah @mukushla that was one long recipe!
29. anonymous
$\large -\cos t^3]_{0}^{1}+\sin -t^2]_{0}^{1}+t^5/5]_{0}^{1}$
30. anonymous
$(-cos(1)+1)+(-sin(1)+1)+\frac 1 5$ $-cos(1)-sin(1)+\frac{11}{5}$
31. anonymous
oops sine of 0 is zero
32. TuringTest
you have an extra +1 in there - -yep
33. anonymous
-cos(1)-sin(1)+ 6/5
34. TuringTest
looks good to me :)
35. anonymous
sigh...finally! Thanks guys!
36. TuringTest
welcome !
37. anonymous
:)
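For reference (an addition, not part of the original thread), the final value can be checked symbolically with sympy:

from sympy import Rational, cos, integrate, sin, simplify, symbols

t = symbols("t")
result = integrate(3 * t**2 * sin(t**3) - 2 * t * cos(-t**2) + t**4, (t, 0, 1))
print(simplify(result - (Rational(6, 5) - sin(1) - cos(1))))   # 0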
|
2017-01-22 08:43:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8712512850761414, "perplexity": 9502.200963061041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281419.3/warc/CC-MAIN-20170116095121-00418-ip-10-171-10-70.ec2.internal.warc.gz"}
|
https://www.scienceforums.net/topic/122753-internal-resistance-of-a-cell/
|
# Internal resistance of a cell
## Recommended Posts
Here's a question, and my doubt is at the end of the question. I have been struggling with this doubt for quite long and have been receiving mixed opinions.
A cell of emf 12 v supplies a current of 400 mA to an appliance. After some time the current reduces to 320 mA and the appliance stops working. Find the resistance of the appliance, the terminal voltage of the battery when the appliance stops working, and the internal resistance of the cell.
In my book, the answer to this ques is given as follows:
1. Given, emf = 12 volt, I = 0.4 A
Therefore Resistance of the appliance R = emf/ I = 12/0.4 = 30 ohm
2. Given , I' = 0.32 A
Terminal voltage of battery V = I'R = 0.32*30 = 9.6 volt
3. From emf = V + v,
v (voltage drop) = emf - V = 12 - 9.6 = 2.4 volt
From v= I'r,
r = v/I'= 2.4/0.32= 7.5 ohm ( internal resistance)
My doubt is, in the 1st part, why isn't R= resistance of the appliance + internal resistance? Why is the internal resistance of the cell ignored in part 1?
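(For reference, a quick numeric restatement of the book's three steps, added here as a Python sketch; it assumes, as the book's part 1 implicitly does, that the internal resistance is negligible while the cell is fresh.)

emf = 12.0        # V
I_fresh = 0.400   # A, current with a fresh cell
I_later = 0.320   # A, current when the appliance stops working

R_load = emf / I_fresh                       # part 1: 30 ohm, internal resistance taken as ~0
V_terminal = I_later * R_load                # part 2: 9.6 V
r_internal = (emf - V_terminal) / I_later    # part 3: 2.4 V / 0.32 A = 7.5 ohm
print(R_load, V_terminal, r_internal)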
##### Share on other sites
3 hours ago, Arnav said:
A cell of emf 12 v supplies a current of 400 mA to an appliance. After some time the current reduces to 320 mA and the appliance stops working. Find the resistance of the appliance, the terminal voltage of the battery when the appliance stops working, and the internal resistance of the cell.
By the way this should be in Physics, not homework help since your book gives working and you are asking for an explanation, not for us to do homework.
I am not surprised you have received conflicting information. Most books are poorly written on this subject, as the example you quote shows.
You are right to seek explanations.
Firstly let us look at the often quoted formula
$I = \frac{E}{R_{\text{load}} + r_{\text{internal}}}$
Now if we assume that Rload is constant, then I hope you can see that
E and Rinternal cannot both be the same for the two different currents.
In fact as the cell becomes more and more exhausted the internal resistance rises and the EMF falls.
So the currents of 400mA and 320 mA describe different situations.
Most books will not tell you this.
When I is 400mA the battery is fresh and the internal resistance is taken as zero (or negligible).
So the full 12V EMF is applied to the load determining Rload to be 30 ohm.
As the battery loses charge and the current falls, Rinternal rises from zero and the EMF is shared between the load and the internal resistance.
Since Rload is constant it is still 30 ohms but now passes 320mA so experiences a voltage drop of 9.6V.
Your book must have assumed (again it did not say) that since the battery was still supplying substantial current its EMF has not yet dropped appreciably.
It is often the case that the internal resistance change runs ahead of the EMF fall, as that fall is set by the chemical reactions of the battery, and it is only in the later stages of exhaustion, when the current has dropped dramatically, that other chemical reactions become important.
However it should be noted that these days there are many more types of battery and the relationships between aging, E and Rinternal are more varied. Most books have not caught up with this either.
Does this help ?
##### Share on other sites
12 hours ago, Arnav said:
A cell of emf 12 v supplies a current of 400 mA to an appliance. After some time the current reduces to 320 mA and the appliance stops working. Find the resistance of the appliance, the terminal voltage of the battery when the appliance stops working, and the internal resistance of the cell.
There is not enough information to answer that question.
##### Share on other sites
12 hours ago, studiot said:
By the way this should be in Physics, not homework help since your book gives working and you are asking for an explanation, not for us to do homework.
I am not suprised you have received conflicting information. Most books are poorly written on this subject, as the example you quote shows.
You are right to seek explanations.
Firstly let us look at the often quoted formula
Now if we assume that Rload is constant then for two I hope you can see that for two different currents,
Both E and Rinternal cannot be the same for both currents.
In fact as the cell becomes more and more exhausted the internal resistance rises and the EMF falls.
So the currents of 400mA and 320 mA describe different situations.
Most books will not tell you this.
When I is 400mA the battery is fresh and the internal resistance is taken as zero (or negligable).
So the full 12V EMF is applied to the load determining Rload to be 30 ohm.
As the battery looses charge and the current falls Rinternal rises from zero and the EMF is shared between the load and the internal resistance.
Since Rload is constant it is still 30 ohms but now passes 320mA so experiences a voltage drop of 9.6V.
Your book must have assumed (again it did not say) that since the battery was still supplying substantial current its EMF has not yet dropped appreciably.
This is often the case that the internal resistance change runs ahead of the EMF fall as that fall it set by the chemical reactions of the battery and it is only in the later stages of exhaustion when the current has dopped dramatically that other chemical reactions become important.
However it should be noted that these days there are many more types of battery and the relationships between aging, E and Rinternal are more varied. Most books have not caught up with this either.
Does this help ?
You see guys, even here I am getting mixed answers, John says the data is insufficient and studiot says the question's fine.
##### Share on other sites
15 hours ago, studiot said:
By the way this should be in Physics, not homework help since your book gives working and you are asking for an explanation, not for us to do homework.
!
Moderator Note
Moved to Physics.
##### Share on other sites
5 hours ago, Arnav said:
17 hours ago, studiot said:
You see guys, even here I am getting mixed answers, John says the data is insufficient and studiot says the question's fine.
I see you are here again so I will post half an answer.
Actually John is quite right and I also said the same thing.
Did you understand that part?
17 hours ago, studiot said:
Now if we assume that Rload is constant then for two I hope you can see that for two different currents,
Both E and Rinternal cannot be the same for both currents.
It would be helpful to say whether you are studying Engineering or Physics, as their view is slightly different.
Here are a few pages from the UK standard introductory text for at least 50 years.
As you can see it only goes so far in explanation.
I expect it say similar things to your book.
Please compare them and let us know.
## Create an account
Register a new account
|
2020-10-01 23:57:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.50514817237854, "perplexity": 1197.4781282829044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402132335.99/warc/CC-MAIN-20201001210429-20201002000429-00039.warc.gz"}
|
http://mathematica.stackexchange.com/questions/40616/plotting-cumulative-distribution-function
|
# plotting cumulative distribution function
c = 2;
P = N[n*Log[n]]
K = N[c*Log[n]]
Needs["PlotLegends"];
PrDEqK[P_, K_, n_, c_, k_] := Module[{p, q, r, ps, qs, m, tmp},
r = Sqrt[(c*Log[n])/(n*\[Pi])];
p = \[Pi]*r^2;
q = 1 - p;
ps = 1 - Binomial[P - K, K]/Binomial[P, K];
qs = 1 - ps;
N[Sum[Binomial[n - 1, m]*p^m*q^(n - 1 - m)*Binomial[m, k]*ps^k*
qs^(m - k), {m, k, n - 1}]]
];
Plot[CDF[{PrDEqK[P, K, 20, c, k], {k, 1, 8},Filling -> Axis
]
i'm not getting the CDF plot...what is the problem?
-
CDF, for one thing, is for distributions, you are trying to plot a function. There are other issues with the code, please attempt to fix these and add some context to the question. – ciao Jan 17 '14 at 8:39
As has been said, CDF acts on distribution objects. You have defined a discrete PMF. With all due respect, the plot by Boson is not a CDF but a continuous plot of the PMF.
Reproducing the user-defined function:
c = 2;
n = 20;
P = N[n*Log[n]];
K = N[c*Log[n]];
PrDEqK[P_, K_, n_, c_, k_] :=
Module[{p, q, r, ps, qs, m, tmp}, r = Sqrt[(c*Log[n])/(n*\[Pi])];
p = \[Pi]*r^2;
q = 1 - p;
ps = 1 - Binomial[P - K, K]/Binomial[P, K];
qs = 1 - ps;
N[Sum[Binomial[n - 1, m]*p^m*q^(n - 1 - m)*Binomial[m, k]*ps^k*
qs^(m - k), {m, k, n - 1}]]];
To generate CDF:
cdf[k_] := Sum[PrDEqK[P, K, 20, c, j], {j, 0, k}];
Visualizing:
DiscretePlot[cdf[k], {k, 0, 8}]
-
thank you all... – user11609 Jan 17 '14 at 9:39
can we plot this CDF combinedly for different values of 'n'that i fixed to 20.. – user11609 Jan 17 '14 at 11:13
@user11609 yes. Just redefine function to:cdfgeneral[n_,k_]:=Sum[PrDEqK[P, K, n, c, j], {j, 0, k}]; – ubpdqn Jan 17 '14 at 11:16
okk...thank u so much.. – user11609 Jan 17 '14 at 11:20
I tried to clean your code a little and it worked right away:
c = 2;
n = 20;
P = N[n*Log[n]];
K = N[c*Log[n]];
PrDEqK[P_, K_, n_, c_, k_] :=
Module[{p, q, r, ps, qs, m, tmp}, r = Sqrt[(c*Log[n])/(n*\[Pi])];
p = \[Pi]*r^2;
q = 1 - p;
ps = 1 - Binomial[P - K, K]/Binomial[P, K];
qs = 1 - ps;
N[Sum[Binomial[n - 1, m]*p^m*q^(n - 1 - m)*Binomial[m, k]*ps^k*
qs^(m - k), {m, k, n - 1}]]];
Plot[PrDEqK[P, K, n, c, k], {k, 1, 8}, Filling -> Axis]
EDIT: I am sorry, I have not read the question thoroughly and did not notice you wanted a CDF. ubpdqn is absolutely right.
-
|
2015-10-04 12:53:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6145687699317932, "perplexity": 8856.537965489597}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673632.3/warc/CC-MAIN-20151001215753-00191-ip-10-137-6-227.ec2.internal.warc.gz"}
|
https://socratic.org/questions/how-do-i-find-a-vector-cross-product-on-a-ti-89
|
# How do I find a vector cross product on a TI-89?
$\left\langle3 , 2 , 0\right\rangle \times \left\langle1 , 4 , 0\right\rangle = \left\langle0 , 0 , 10\right\rangle$
On the TI-89 (and likewise on the TI-Nspire CX) we calculate the cross product using the $\text{crossP()}$ function; for the vectors above, $\text{crossP([3,2,0],[1,4,0])}$ returns $[0,0,10]$.
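For a quick numerical cross-check away from the calculator (an added sketch assuming numpy is available):

import numpy as np
print(np.cross([3, 2, 0], [1, 4, 0]))   # [ 0  0 10]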
|
2020-10-31 16:38:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7995567321777344, "perplexity": 966.3100990983846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00522.warc.gz"}
|
http://www.mzan.com/article/50380641-replace-image-in-an-img-with-css.shtml
|
# Replace image in an <img> with CSS
user32182 Published in May 27, 2018, 3:33 am
I'm trying to create a hover state on a logo following this article from CSS-Tricks, but I'm unable to make it work. I'm using a WordPress theme where I can only edit the CSS (and JS, but I don't know anything about that).
CSS-Tricks code (the HTML snippet, an element containing the text "Really Cool Page", did not survive extraction):
/* All in one selector */
.banner {
  display: block;
  -moz-box-sizing: border-box;
  box-sizing: border-box;
  background: url(http://notrealdomain2.com/newbanner.png) no-repeat;
  width: 180px; /* Width of new image */
  height: 236px; /* Height of new image */
  padding-left: 180px; /* Equal to width of new image */
}
Website I'm working on (HTML not preserved in the extraction). My extra CSS:
.logo:hover img {
  display: block;
  -moz-box-sizing: border-box;
  box-sizing: border-box;
  background: url(http://couill.art/wp-content/uploads/2018/05/logo-Couillart.gif) no-repeat;
  width: 50px; /* Width of new image */
  height: 50px; /* Height of new image */
  padding-left: 50px; /* Equal to width of new image */
}
I tried playing with the settings but I'm running out of ideas.
|
2018-05-27 03:33:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20411686599254608, "perplexity": 14325.126062220008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867995.55/warc/CC-MAIN-20180527024953-20180527044953-00016.warc.gz"}
|
http://www.zyymat.com/category/mathematics/math-nt/page/3
|
In July he gave a series of talks at the Beijing International Center for Mathematical Research (BICMR) at his alma mater, Peking University: Distribution of Prime Numbers and the Riemann Zeta Function I, II, III. The series consists of three talks, originally scheduled for July 8, 10, 15, 2014, 16:00-17:00, in Room 77201 of the No. 78 Jingchunyuan courtyard.
The abstract of the talks on the BICMR website reads:
The distribution of prime numbers is one of the most important subjects in number theory.
There are many interesting problems in this field. It may not be difficult to understand the problems themselves, but the solutions are extremely difficult.
In this series of talks we will describe the application of certain analytic tools to the distribution of prime numbers. In particular, the role played by the Riemann zeta function will be discussed. We will also describe some early and current researches on the Riemann Hypothesis.
These talks are open to everyone in the major of mathematics, including undergraduate students.
Yitang Zhang at BICMR: Distribution of Prime Numbers and the Riemann Zeta Function
At 4 pm on the 8th, Tian Gang showed up. Because of the large audience, the talk was moved to the central lecture hall of the Jia-Yi-Bing Building at No. 82 Jingchunyuan. The host, Liu Ruochuan, won an IMO gold medal in 1999 (he had also been a member of the 1998 Chinese national team).
$\zeta(2k)=\sum_{n=1}^\infty\frac1{n^{2k}}=(-1)^{k+1}\frac{(2\pi)^{2k}B_{2k}}{2(2k)!}$
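(A quick added sanity check of this identity for k = 1, i.e. zeta(2) = pi^2/6, using sympy:)

from sympy import bernoulli, factorial, pi, simplify, zeta

k = 1
rhs = (-1) ** (k + 1) * (2 * pi) ** (2 * k) * bernoulli(2 * k) / (2 * factorial(2 * k))
print(simplify(zeta(2 * k) - rhs))   # 0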
The second talk, at 4 pm on the 10th, was again in the central lecture hall of the Jia-Yi-Bing Building at No. 82 Jingchunyuan. The talk on the 15th, however, will be in Room 77201 of the No. 78 Jingchunyuan courtyard, starting at 16:30.
The last talk, at 4:30 pm on the 15th, went a bit deeper. Tian Gang sat in the back row of the classroom, while Liu Ruochuan and Xu Chenyang sat in the aisle on the left. Zhang mentioned the work of Goldston, Pintz and Yildirim, and said that his own biggest contribution was improving \(c\) to \(\dfrac14+\dfrac1{1168}\).
Yitang Zhang made a speech on number theory on June 23 at MCM
Hillary Clinton's new book "Hard Choices," which tells the story of her work at the State Department, landed in bookstores across the US on June 10. Hillary Clinton is touring the country, signing and selling the book.
Hillary Clinton: Hard Choices
Annals, Volume 179, Issue 3 – May 2014, has just been published online. Yitang Zhang’s paper “Bounded gaps between primes” is the seventh paper, Pages 1121-1174.
Yitang Zhang wins the 2014 Rolf Schock Prize in Mathematics, for his spectacular breakthrough concerning the possibility of an infinite number of twin primes. The Royal Swedish Academy of Sciences decided the laureate.
Acta Arithmetica(ISSN: 0065-1036(print) 1730-6264(online)) is a scientific journal of mathematics publishing papers on number theory. It was established in 1935 by Salomon Lubelski and Arnold Walfisz. The journal is published by the Institute of Mathematics of the Polish Academy of Sciences.
In 1935, Salomon Lubelski and Arnold Walfisz founded Acta Arithmetica.
Acta Arithmetica is a mathematics journal that publishes original research papers in number theory; it is published by the Institute of Mathematics of the Polish Academy of Sciences. Since 1995, Acta Arithmetica has published 5 volumes per year (6 volumes in 2012; 4.5 volumes per year during 1996-2000), carrying 80-100 papers.
Introduction to Modular Curves
## Contents
ISBN: 978-7-301-23438-9
The 2014 Wolf Prize in Mathematics is awarded to Peter Sarnak, for his deep contributions in analysis, number theory, geometry, and combinatorics.
Peter Sarnak is on the permanent faculty at the School of Mathematics of the Institute for Advanced Study, Princeton, NJ, USA.
Peter Clive Sarnak (born December 18, 1953) graduated University of the Witwatersrand (B.Sc. 1975) and Stanford University (Ph.D. 1980), under the direction of Paul Cohen.
Prof. Sarnak is a mathematician of an extremely broad spectrum with a far-reaching vision. He has impacted the development of several mathematical fields, often by uncovering deep and unsuspected connections. In analysis, he investigated eigenfunctions of quantum mechanical Hamiltonians which correspond to chaotic classical dynamical systems in a series of fundamental papers. He formulated and supported the “Quantum Unique Ergodicity Conjecture” asserting that all eigenfunctions of the Laplacian on negatively curved manifolds are uniformly distributed in phase space. Sarnak’s introduction of tools from number theory into this domain allowed him to obtain results which had seemed out of reach and paved the way for much further progress, in particular the recent works of E. Lindenstrauss and N. Anantharaman. In his work on L-functions (jointly with Z. Rudnick) the relationship of contemporary research on automorphic forms to random matrix theory and the Riemann hypothesis is brought to a new level by the computation of higher correlation functions of the Riemann zeros. This is a major step forward in the exploration of the link between random matrix theory and the statistical properties of zeros of the Riemann zeta function going back to H. Montgomery and A. Odlyzko. In 1999 it culminates in the fundamental work, jointly with N. Katz, on the statistical properties of low-lying zeros of families of L-functions. Sarnak’s work (with A. Lubotzky and R. Philips) on Ramanujan graphs had a huge impact on combinatorics and computer science. Here again he used deep results in number theory to make surprising and important advances in another discipline.
By his insights and his readiness to share ideas he has inspired the work of students and fellow researchers in many areas of mathematics.
|
2020-02-25 12:15:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41191616654396057, "perplexity": 2024.238876865939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146066.89/warc/CC-MAIN-20200225110721-20200225140721-00531.warc.gz"}
|
http://math.stackexchange.com/questions/47745/understanding-special-weightings-for-a-digraph-noncommutative-polynomial-ring
|
# Understanding special weightings for a digraph / noncommutative polynomial ring
A digraph is defined as $G=(V,E,\phi)$ with
• a set of nodes $V$
• a set of edges $E$
• a mapping $\phi : E \rightarrow V \times V$
A weighting $\mathcal{W}$ for a directed graph $G=(V,E,\phi)$ is a mapping $E \rightarrow R$ from $E$ to a ring $R$. This ring is considered to be the noncommutative polynomial ring $R=\mathbb{C}\left<\Sigma\right>$ over an alphabet $\Sigma$.
Let $\Sigma$ be an alphabet. The addition and multiplication on the set $$\mathbb{C}\langle \Sigma \rangle := \left\{ \sum_{w \in \Sigma^*} a_w w ~\middle|~\begin{array}{c} a_w \in \mathbb{C} \\ a_w = 0 \mbox{ for almost all } w \in \Sigma^* \end{array} \right\}$$ are defined by $$(\sum_{w \in \Sigma^*} a_w w) + (\sum_{w \in \Sigma^*} b_w w) := \sum_{w \in \Sigma^*} (a_w + b_w) w.$$
$$(\sum_{w \in \Sigma^*} a_w w) \cdot (\sum_{w \in \Sigma^*} b_w w) := \sum_{w \in \Sigma^*} (\sum_{uv = w} a_ub_v) w.$$
The set $\mathbb{C}\left<\Sigma\right>$, together with the given definitions of addition and multiplication, forms a ring with 1.
Hi!
I do understand what a directed graph is, and I generally understand what a weighted direct graph is as well. But I have only used integers for the weightings before.
Could you please explain to me what this ring is (could you please provide a simple example) and how I might understand the last sentence?
A simple example: take $\Sigma$ to be $\lbrace x,y\rbrace$. Then ${\bf C}\langle\Sigma\rangle$ is all the polynomials in $x$ and $y$ with complex coefficients, but with one twist: while you add these things as usual, when you come to multiply them, $xy$ is not the same as $yx$. So that means ${\bf C}\langle\Sigma\rangle$ contains things like $\pi ix^2y^3x^4y^5+\sqrt2yxyxyxy$, and you can't simplify, say, $xyx$ to $x^2y$.
Do you know what a ring is? If so, can you see that with this definition ${\bf C}\langle\Sigma\rangle$ is a ring?
The "1" element is simply 1, just as in the more familiar commutative polynomial rings; it's 1 considered as a polynomial, you could think of it as $1x^0y^0$. The formula you quote for multiplication looks quite forbidding, but as I said it's really the usual way only keeping $xy$ distinct from $yx$. E.g., $(2x+3y)(4x+5y)=6x^2+10xy+12yx+15y^2$, and you're not allowed to take the step of combining terms into $6x^2+22xy+15y^2$ because $xy$ isn't $yx$. Another example: $(x+y)^3=x^3+x^2y+xyx+yx^2+xy^2+yxy+y^2x+y^3$, and no further simplifications are permitted. – Gerry Myerson Jun 26 '11 at 23:41
|
2016-05-31 06:08:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8269488215446472, "perplexity": 183.74805319424453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464051177779.21/warc/CC-MAIN-20160524005257-00098-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://changesinc.org/wholesale-organic-tzjx/how-to-tell-if-a-function-is-an-inverse-92afa1
|
How to tell if a function has an inverse.

You have a function $f: \mathbb{R} \longrightarrow \mathbb{R}$ and you want to know whether it has an inverse, and if so, how to find it.

A function has an inverse function precisely when it is one-to-one (injective) and onto its range (surjective). One-to-one means that every element of the range corresponds to exactly one element of the domain; if a single y-value comes from more than one x-value, the inverse relation is not a function.

The quick graphical check is the horizontal line test: if you can draw a horizontal line that hits the graph in more than one point, the function is not one-to-one and has no inverse on that domain. Strictly increasing and strictly decreasing functions always pass the test, so they are always invertible.

If a function fails the test, you can often restrict its domain to make it invertible. For example, a parabola is not one-to-one on all of $\mathbb{R}$, but each half of it (split at the vertex) is, and each half gives one branch of the inverse: the two branches $f^{-1}(x) = \sqrt{x-4} - 2$ and $f^{-1}(x) = -\sqrt{x-4} - 2$ together undo the parabola $f(x) = (x+2)^2 + 4$.

The inverse undoes the original function: $f\left(f^{-1}(x)\right) = x$ and $f^{-1}(f(x)) = x$, so the composite of a function with its inverse is the identity (if $f$ takes 3 to 10, then $f^{-1}$ takes 10 back to 3). Graphically, the graph of $f^{-1}$ is the reflection of the graph of $f$ over the line $y = x$: every point $(x, y)$ on the graph of $f$ becomes the point $(y, x)$ on the graph of $f^{-1}$. To find an inverse algebraically, swap $x$ and $y$ in the equation and solve for $y$.

A few related facts:

• The slopes of inverse linear functions are multiplicative inverses of each other; a linear function with slope 4 has an inverse with slope 1/4.
• The inverse of an exponential function is a logarithm (the natural logarithm for base $e$).
• If $f$ is differentiable and invertible, the derivative of the inverse at $f(x)$ is the reciprocal of the derivative of $f$ at $x$: $\left(f^{-1}\right)'(f(x)) = 1/f'(x)$.
• A matrix has an inverse only if it is square (same number of rows and columns) and its determinant is nonzero; in that case a system $AX = B$ can be solved as $X = A^{-1}B$.
|
2021-06-13 02:37:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7494986653327942, "perplexity": 318.1606640185558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00484.warc.gz"}
|
https://mediatorchrzanow.pl/d57eubba/d5a3e5-what-has-changed-in-the-world-environment-due-to-covid-19
|
# what has changed in the world environment due to covid 19
For example, in the figure above, m ∠ … The two angles are supplementary … Vertical angles have the same vertex. True C. False; either both are right or one is obtuse. True or False? 165 Module 4 You have been measuring vertical angles and linear pairs of angles. If sum of two angles equals 180^@ then the two angles are said to be supplementary. d. False; the angles may be adjacent. Vertical angles are diagonal from each other and have the same measure. Vertical angles are opposite angles with the same vertex. acute angle. 300 seconds . In fact, it's true if and only if the vertical angles are right angles. Like. SURVEY . In this case, angles 1 and 4 are vertical. C. Condap. Angles 2 and 3 are also vertical angles. Supplementary angles are two angles with a sum of 180º. No, in fact, vertical angles can't be a linear pair. Degrees. True c. False; one angle may be in the interior of the other. Two acute angles are always complementary. are true. True or False:_____ b) Vertical angles can be drawn without having supplementary angles. See the answer. Name one pair of vertical angles. 4 years ago. False; either both are right or they are adjacent. True or False. Supplementary angles and complementary angles are defined with respect to the addition of two angles. True or False? Example A True or false. Supplementary Angles - Two angles such as ∠α and ∠β in figure 2, whose measures add up to 180°, or that make a straight angle (straight line), are said to be supplementary. True or False? degree. A. Given two parallel lines are cut by a transversal, their same side exterior angles are congruent. 300 seconds . True False See answer haileylynn6453 is waiting for your help. We can see from the diagram that angles 1 and 3 are adjacent. Question 1032785: If two angles are supplementary and congruent,then they are right angles. True or False? Adjacent supplementary angles where ∠AOC is a straight angle. math Solve for the variables and for exercises 6-10. Q. true or false 1.vertically opposite angles are always supplementary. True or False? Books; Test Prep; Winter Break Bootcamps; Class; Earn Money; Log in ; Join for Free . 4.if a transversal intersect two lines, alternate angles are equal. Which of the following correctly lists angle pairs that have the same sums of 180 degrees & 90 degrees? answer choices . 0 1. The following steps show why the Vertical Angles Theorem is true. False. Vertical angles are congruent and supplementary angles add to 180 degrees. Réponse préférée. -----True:: x + y = 180 (supplementary) x = y (congruent)---- … Symbols ma1 1 ma2 5 180 8 POSTULATE 7 You can use colored pencils to help you see pairs of vertical angles. 73 degrees. true. b. They are supplementary angles. The only way for 2 vertical angles to add to 180 is if they are 90 degrees, so sometimes they can be supplementary. supplementary angles always equal 90degrees ( T or F ) 100. Note: A vertical angle and its adjacent angle is supplementary to each other. Angles 2 and 4 are also adjacent. It's sometimes true. then they are supplementary. Misc. "Vertical" in this case means they share the same Vertex (corner point), not the usual meaning of up-down. Solve for x. 5.if a transversal intersects two lines and the corresponding angles are equal,then the two lines are parallel. Label your lines and tell which angles are congruent. 'Measure the angles to check that they are congruent. true. Vertical angles are supplementary. a. o True o False Alternate exterior angles are angles on alternate … vertical angles. 
Vertical Angles: Theorem and Proof. Let p be "two angles are supplementary" and let q be "the measures of the angles sum to $180^{\circ}$ . Degrees. What is a vertical angle? 30 seconds . 180. Already have an account? Each angle is the supplement of the other. 3 is true. Determine whether the conjecture is true or false. Adjacent angles: In the figure above, an angle from each pair of vertical angles are adjacent angles and are supplementary (add to 180°). SURVEY . 100. If the sum of two angles is 180 degrees then they are said to be supplementary angles, which forms a linear angle together.Whereas if the sum of two angles is 90 degrees, then they are said to be complementary angles, and they form a right angle together. 180. Math. Previous question Next question Transcribed Image Text from this Question. An angle that is opposite from another when two lines cross. False - 10 and 30 degrees. Show transcribed image text. Honey Miner. Knowledge of the relationships between angles can help in determining the value of a given angle. True or False: Consider the following statements and use a construction to determine if they are valid. B. If it invalidates the statement, then there is a counterexample. zaranadeem zaranadeem Answer: True. true or false: if two lines are perpendicular they do not intersect. My answer-- Vertical angles . If two angles are supplementary and congruent,then they are right angles. 5 years ago. Source(s): https://shrink.im/a0zC1. Justify your answer. A linear pair is also supplementary. 27. An angle … true or false: Vertical angles share a common side. Step-by-step explanation: Because angles on a straight line are supplementary and these anglea <1 and <2 are supplementary angles. false. True or False If A and B are supplementary angles, then \cos A+\cos B=0 . o True o False Consecutive interior angles have corresponding positions in the same side of the transversal. False; the angles may be supplementary. Supplementary angles can be adjacent or nonadjacent. Add your answer and earn points. Vertical angles are supplementary angles.? Sum of vertical angles: Both pairs of vertical angles (four angles altogether) always sum to a full angle (360°). 2x+3=11. 1 réponse. True or False:_____ b) Vertical angles can be drawn without having supplementary angles. True or False:_____ Q. Pertinence. Q. 100. Log in Mikyla S. Numerade Educator. Verticall Angles are the angles opposite each other when two lines cross. Yes. BUT, these angles are not supplementary, as they don't always complete eachother. 11. B. 100. how many degrees are in a supplementary angle? What is a vertical angle? Two vertical angles are also complementary. For a pair of opposite angles the following theorem, known as vertical angle theorem holds true. Found 2 solutions by stanbon, Edwin McCravy: Answer by stanbon(75887) (Show Source): You can put this solution on YOUR website! True or False. Be sure to provide written arguments for your conclusions. Conjecture: They are both acute angles. 14. True or False? Tags: Question 13 . The angles are complementary supplementary neither complementary or supplementary Given two parallel lines cut by a transversal, their corresponding angles are supplementary. true or false: An acute angle has measure less than 90° false, one acute and one obtuse are supplementary. Nonadjacent supplementary angles Example: Two adjacent oblique angles make up straight angle POM below. Theorem: In a pair of intersecting lines the vertically opposite angles are equal. d. 
Two vertical angles are also a linear pair. Anonymous. True. Conjecture: They Are Vertical Angles. Instead, they create CONGRUENT veritcal angles. Are Vertical Angles Supplementary. true or false: A linear pair of angles share a common side. Statement True or False? Given: Two angles are supplementary. Introduction: Some angles can be classified according to their positions or measurements in relation to other angles. Learn how to define angle relationships. 3.adjacent supplementary angles form a linear pair. 1 0. cinnante. D. False; they must be vertical angles. Supplementary angles are two angles whose measures have a sum of 180°. 73 degrees. SURVEY . Misc. For exercises 3-5, determine if the statement is true or false. Il y a 1 décennie. Vertical angles a1 and a4 a2 and a5 a3 and a6 1 3 6 4 2 5 Visualize It! 100. how many degrees are in a supplementary angle? A. 60 seconds . 2.the supplement of an obtuse angle is always acute. Angles that are adjacent to each other. 200. an angle that adds up to 180 degrees. Vertical angles are opposite from each other which also make them equal each other. 1.If two angles have equal measures, Then the angles are congruent. Then decide whether each statement is true or false. Problem True or False $\quad$ If $\cos A+\cos B=0,$ then … 01:27 View Full Video. A. adjacent angles, congruent angles B. complementary angles, vertical angles C. supplementary angles, complementary angles*** D. vertical angles, Math. x=4. Vertical. Two perpendicular lines form two pair of supplementary vertical angles. Explain your answer. Luca A. true or false. For the best answers, search on this site … Supplementary. Supplementary angles are those that add up to 180 degrees, which means 1 is correct while 2 is false. Random bank. Solution: False. True*** False Write the converse of the conditional and problem 1. False - 5 and 120 degrees. Angles that add up to 90° Tags: Question 3 . supplementary angles. An acute and an obtuse angle are always supplementary. A pair of vertical angles can be supplementary. Lv 4. 12. answer choices . c. Two supplementary angles are also a linear pair. Tags: Question 2 . 1 2 4 1 3 2 5 6 common side noncommon side noncommon side Page 1 of 7. Is it sometimes true, always true, or never true? Can someone help walk me through this please? obtuse. Step-by-step explanation: Yes, they do give vertical angles, which by definition means opposite angles created by intersecting lines. b. angle bisector. They are always equal. Definitions: Complementary angles are two angles with a sum of 90º. Angles that add up to 180° Angles that are opposite of each other when lines intersect. If the measure of one angles formed is 72 degrees, what are the measures of the other three angles. Angle Vocabulary. Vertical angles are two angles whose sides form two pairs of opposite rays. We examine three types: complementary, supplementary, and vertical angles. a) Supplementary angles can be drawn without having vertical angles. Adjacent angles are across from each other. The basic unit by which angles are measured. Explain why or why not. Lv 4. Report. Répondre Enregistrer. Tags: Question 2 . acute angle . ∠ 1 and ∠ 2 are supplementary angles. Open-Ended Write and solve an equation using an angle bisector to fi nd the measure of an angle. true. False; The Angles May Be Adjacent True False; The Angles May Be Supplementary False Angle May Be In The Interior Of The Other One. True or False? 200. an angle that adds up to 180 degrees. 
true or false: Two acute angles can be supplementary. An angle that is opposite from another when two lines cross. Vertical angles must have the same measure. Be sure to provide written arguments for your conclusions. A counterexample invalidates a statement. True or False: Consider the following statements and use a construction to determine if they are valid. a) Supplementary angles can be drawn without having vertical angles. show that the Vertical Angles Theorem is true? The sum of the measures of four of the angles of a heptagon is 520 . Expert Answer . What are vertical angles? User: If two lines intersect, then the vertical angles formed must be both acute angles both equal in measure complementary angles Weegy: If two lines intersect, then the vertical angles formed must be both equal in measure. is false. What is Mrs. 100. New questions in Mathematics . This problem has been solved! Q. SURVEY . Intersecting lines form two pairs of vertical angles. false, intersect at a right angle. Two vertical angles are same and therefore congruent. 100. supplementary angles always equal 90degrees ( T or F ) False. 100. Draw two intersecting lines to form vertical angles. It means they add up to 180 degrees. Write the statement in if-then form. 100. false. As below. 100. Miner's dog's name? If they were supplementary, they would not be vertical angles. In figure 2, the angles were adjacent to each other, but they don't have to be adjacent to be classified as supplementary angles. If , find . User: Two angles have measures of 63°15'47" and 116°44'13". Adjacent angles are next … If sum of two angles equals 90^@ then the two angles are said to be complementary. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. Acute angle has measure less than 90° False, one acute and an obtuse angle are supplementary... Many degrees are in a pair of intersecting lines the vertically opposite angles are.... Transversal intersect two lines cross according to their positions or measurements in vertical angles are supplementary true or false to other angles & degrees... Of 90º and congruent, then they are right angles above, m ∠ … 11 the relationships angles... Opposite angles are not supplementary, and vertical angles and linear pairs of vertical angles is... Intersecting lines the vertically opposite angles the following theorem, known as vertical theorem...: question 3 user: two adjacent oblique angles make up straight angle corresponding! Supplementary show that the vertical angles theorem is true or False: a linear pair this.! Have the same sums of 180 degrees intersect two lines cross converse of the measures of 63°15'47 '' and ''... They can be drawn without having supplementary angles in the figure above, m ∠ … 11 whose. Measures, then they are adjacent 90degrees ( T or F ) 100 step-by-step explanation: Yes they! Means 1 is correct while 2 is False angles where ∠AOC is a straight POM. Angles 1 and 3 are adjacent that the vertical angles theorem is true or False: vertical.! Other when lines intersect not intersect 100. how many degrees are in a supplementary?... Following steps show why the vertical angles ca n't be a linear pair verticall angles are also a pair! Alternate exterior angles are two angles equals 90^ @ then the angles opposite each other False ; either are. Angles have equal measures vertical angles are supplementary true or false then the two angles equals 180^ @ then the angles! 
Adjacent oblique angles make up straight angle POM below adjacent supplementary angles are opposite are. Their corresponding angles are diagonal from each other which also make them each... Measure less than 90° False, one acute and one obtuse are supplementary angles always equal 90degrees ( or! Are in a pair of supplementary vertical angles ca n't be a linear pair written arguments your! Best answers, search on this site … ∠ 1 and ∠ 2 supplementary. Y = 180 ( supplementary ) x = y ( congruent ) -- -- … Yes the... 1 ma2 5 180 8 POSTULATE 7 You can use colored pencils to help You see pairs of opposite.. The following theorem, known as vertical angle theorem holds true another two... Congruent ) -- -- -True:: x + y = 180 ( supplementary x... A given angle one acute and an obtuse angle are always supplementary linear pair a full angle 360°! Whether each statement is true or False: Consider the following statements use... Is if they are vertical angles share a common side two angles with a sum of two angles angles. Same vertex to 180° angles that add up to 180 degrees side Page 1 of.! Less than 90° False, one acute and one obtuse are supplementary and anglea. View full Video: if two lines and tell which angles are also a linear.! Problem 1 90 degrees, which by definition means opposite angles created by intersecting lines true o False alternate angles! True if and only if the statement is true or False: if two angles whose measures a! D. two vertical angles ( four angles altogether ) vertical angles are supplementary true or false sum to a angle. Above, m ∠ … 11 their corresponding angles are two angles with the same side angles! And vertical angles are supplementary true or false obtuse angle are always supplementary converse of the relationships between angles can be classified according to their or! 01:27 View full Video lines cross less than 90° False, one acute and one obtuse are supplementary angles equal... Visualize it 90 degrees 1.if two angles are angles on alternate …:! Lines, alternate angles are complementary supplementary neither complementary or supplementary show that the vertical angles three.. Can see from the diagram that angles 1 and 3 are adjacent which by definition opposite... 6 common side 1 and 4 are vertical angles: both pairs of vertical.. 2 4 1 3 6 4 2 5 Visualize it that are opposite from another two! Means 1 is correct while 2 is False anglea < 1 and ∠ 2 are supplementary congruent... To provide written arguments for your help one angle may be in the interior the...: Consider the following statements and use a construction to determine if they supplementary. 01:27 View full Video < 1 and 3 are adjacent, always true, or never?. 6 4 2 5 Visualize it '' and 116°44'13 '' written arguments for your help Join for Free are... ) supplementary angles always equal 90degrees ( T or F ) False 90° Tags: question.. Angles always equal 90degrees ( T or F ) 100 90° Tags: 3! Show why the vertical angles ( four angles altogether ) always sum a. 2 is False two adjacent oblique angles make up straight angle POM.. Of intersecting lines the vertically opposite angles created by intersecting lines $then … 01:27 full!, known as vertical angle theorem holds true$ then … 01:27 full. Are right or one is obtuse two pairs of vertical angles share a common side noncommon side 1. M ∠ … 11 always supplementary 90degrees ( T or F ) False to 180° angles that up... Tags: question 3 they can be drawn without having vertical angles can be.! 
( T or F ) 100 given two parallel lines cut by a transversal two... Of 7 both pairs of opposite rays solve for the best answers, search this. If two lines, alternate angles are opposite from another when two lines cross be vertical angles: both of. Adjacent angle is supplementary to each other when two lines cross 1 ma2 5 8! They do n't always complete eachother diagram that angles 1 and 4 are vertical, angles 1 3..., determine if they are adjacent and these anglea < 1 and 2. Yes, they do not intersect be vertical angles a1 and a4 a2 and a5 and! Parallel lines cut by a transversal intersects two lines are parallel alternate exterior angles are of! Been measuring vertical angles share a common side 180 8 POSTULATE 7 You can colored! ) supplementary angles, then there is a straight angle POM below they do n't always complete.... And have the same vertex ( corner point ), not the usual meaning of up-down of an angle. Interior of the transversal has measure less than 90° False, one acute and one obtuse are supplementary angles to... The converse of the other but, these angles are the measures of four of the relationships between can... 1032785: if two lines are perpendicular they do give vertical angles four...
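As a minimal sketch of why the vertical angle theorem holds, using only the linear-pair fact above (the labels 1, 2, 3 are arbitrary names for three of the four angles at the intersection, with angles 1 and 3 vertically opposite):

$\angle 1 + \angle 2 = 180^\circ$ (angles 1 and 2 form a linear pair along one line)
$\angle 3 + \angle 2 = 180^\circ$ (angles 3 and 2 form a linear pair along the other line)
$\therefore \angle 1 = 180^\circ - \angle 2 = \angle 3$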
|
2021-09-25 09:06:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4861258268356323, "perplexity": 1040.295443524694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057615.3/warc/CC-MAIN-20210925082018-20210925112018-00357.warc.gz"}
|
https://classes.cs.uchicago.edu/archive/2020/fall/12100-1/resources/vscode.html
|
# Working Remotely with Visual Studio Code and SSH¶
Visual Studio Code (VS Code) is a text editor that is particularly well suited for programming in a variety of languages, including Python. It also provides a way to remotely connect to the Linux computers on campus, via SSH (Secure Shell). You can use it to do your work for CS 121, and you should especially consider it if the Virtual Desktop is running slowly for you.
This document covers installing the software you need, and how to use Visual Studio Code and SSH for this class.
## Installation¶
### Step 1: Install Visual Studio Code¶
#### Windows¶
Go to https://code.visualstudio.com/. You should see a blue button labeled Download for Windows, Stable Build.
Click this button to download. Once it is downloaded, run the installer (VSCodeUserSetup-<version>.exe).
After you accept the licence agreement, click Next >. On the page titled Select Additional Tasks, we recommend you check all the boxes (but it is up to you).
Click Next >, then click Install. When the progress bar fills, click Finish.
#### macOS¶
Go to https://code.visualstudio.com/. You should see a blue button labeled Download for Mac, Stable Build.
Click on this button to download. When the download is complete, you will have a new application file called Visual Studio Code (You might instead have zip file, with a name like VSCode-darwin-stable.zip; in this case, open the file to unzip it, and the Visual Studio Code application file should appear). Open a Finder window and navigate to Downloads (it will likely be listed under “Favorites” in the left sidebar). Locate the file named Visual Studio Code, and drag it on top of Applications in the left side bar.
Now, you can find VS Code in your Applications folder, and can open it with a click.
### Step 2: Install an SSH client¶
#### Windows 10¶
In this step, you will install Windows OpenSSH Client.
For this step, you will open various applications and settings by searching for them. To do this, open the Start menu by pressing the Windows key on the keyboard, or clicking the Windows icon in the corner of your screen. Begin typing the name of the application or setting, like About your PC (even though there is no visible search bar, one will appear when you begin typing). When the About your PC option appears, click on it.
Checking your version of Windows 10
Scroll down to the heading Windows specifications. Next to Edition, you should see Windows 10 Home or Windows 10 Pro (or similar).
Below that you should see Version and a number like 2004. If this number is less than 1803, then you need to update Windows 10.
Updating Windows 10
To update Windows 10, open the Start menu, begin typing Check for updates, and click on the option when it appears.
The window that opens should have the heading Windows Update. It may tell you that you have updates available; otherwise, click the button that says Check for updates.
Follow the instructions to install the available updates. This may take a few minutes, and your computer may restart. When the update completes, check your version of Windows 10 again, and verify that it now reads as 1803 or greater.
Installing Windows OpenSSH Client
Open the Start menu, begin typing Manage Optional Features, and click the option when it appears.
You should see a window with the heading Optional features.
Scroll through the list of Installed features. If OpenSSH Client appears in the list, you are done with this step. Otherwise, click on + Add a feature at the top of the page. You will get a pop-up window with the heading Add an optional feature. Start typing OpenSSH Client. When the option appears, click on the checkbox next to it.
Then click on the button labeled Install (1). Wait for the progress bar to fill.
The installation is complete.
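As an alternative to the steps above, the OpenSSH Client feature can usually also be installed from an administrator Windows PowerShell window. The two commands below follow Microsoft's documented procedure for recent Windows 10 builds; treat the exact capability name as something to double-check on your machine.

Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH.Client*'    # shows whether the client is Installed or NotPresent
Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0                # installs it if the State is NotPresent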
Checking that the installation was successful
Open the Start menu, begin typing Windows PowerShell, and click on the option when it appears.
Note that Windows PowerShell looks similar to the Linux terminal, even though it is not the same as the Linux terminal. At the prompt, type
ssh username@linux.cs.uchicago.edu
where username should be replaced by your CNetID.
You should be prompted for your password. If you are not, check that you followed the SSH installation steps correctly, and try again. If you are still not prompted for your password, ask us about it on Piazza.
Type the password associated with your CNetID and press enter (nothing will appear on the screen as you type your password, but this is normal; your keypresses are still being registered).
You should see a message about when you last logged on, followed by a prompt that looks like
username@linuxX:~$

where username is replaced by your CNetID, and X is replaced by a number from 1 to 5. You are now connected to the Linux computers on campus. Try running a few terminal commands, like pwd, ls and cd. If you already did the Virtual Linux lab, you should be able to find the files that you created for it.

Type logout and press enter to close your connection to the campus Linux computers. Then type exit and press enter to exit Windows PowerShell.

#### macOS¶

An SSH client comes pre-installed. However, you should check that it works as expected before moving on.

Press Command-Space to open Spotlight Search. Begin typing Terminal, and click on the option when it appears.

At the prompt, type

ssh username@linux.cs.uchicago.edu

where username should be replaced by your CNetID.

You should be prompted for your password. Type the password associated with your CNetID and press enter (nothing will appear on the screen as you type your password, but this is normal; your keypresses are still being registered).

You should see a message about when you last logged on, followed by a prompt that looks like

username@linuxX:~$
where username is replaced by your CNetID, and X is replaced by a number from 1 to 5. You are now connected to the Linux computers on campus. Try running a few terminal commands, like pwd, ls and cd. If you already did the Virtual Linux lab, you should be able to find the files that you created for it.
Type logout and press enter to close your connection to the campus Linux computers and return to your own computer’s terminal prompt.
#### Linux¶
Debian/Ubuntu: Run sudo apt-get install openssh-client
RHEL/Fedora/CentOS: Run sudo yum install openssh-clients
After installing, you should verify that you can connect to the Linux computers on campus. In the terminal, type,
ssh username@linux.cs.uchicago.edu
where username is replaced by your CNetID. You should be prompted for the password associated with your CNetID. Then you should be able to run terminal commands on the campus Linux computers.
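Optionally, on macOS and Linux you can set up key-based login so that you are not asked for your password on every connection. This is a generic OpenSSH feature, not something this course requires, and it assumes the department's login servers accept public-key authentication; if you are still prompted for a password afterwards, they may not, and you can simply skip this.

ssh-keygen -t ed25519                          # create a key pair; pressing enter at each prompt accepts the defaults
ssh-copy-id username@linux.cs.uchicago.edu     # copy your public key to the server (username is your CNetID)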
### Step 3: Install Extensions for VS Code¶
At this point, Visual Studio Code should be among your installed applications. Open it. In the left sidebar, there is an icon consisting of four squares, with one square separated off from the other three. This is the icon for VS Code extensions. Click it (alternatively, you can press Ctrl-Shift-X, or Command-Shift-X on macOS).
This opens the Extensions panel. From here, you can search for and install extensions. You should install the following extensions:
• Python (Microsoft)
• Remote - SSH (Microsoft)
To do this, click in the search bar (“Search Extensions in Marketplace”) and start typing the name of the extension. When it appears, make sure the name and publisher matches exactly, and click Install.
## Using Visual Studio Code and SSH¶
You will be able to use Visual Studio Code to replicate the two most important features from the Virtual Desktop. You will be able to remotely connect to the Linux computers on campus to (1) use the terminal (to execute shell commands, run Python code, and conduct automated tests), and (2) edit text files (usually Python code).
Open Visual Studio Code now.
### Remotely connecting to the CS Department Linux computers¶
Initial setup
You only need to follow the steps in this section once (or more accurately, once per computer). If you’ve already done this part, you can continue to “Connecting”.
In the lower-left corner of VS Code, there should be a rectangle with an icon that looks like ><, but skewed. This rectangle is typically green, but depending on the color scheme you select for VS Code, it may be purple or a different color. If you do not see this icon, check that you have completed all the installation steps above. Click on this icon.
In the menu that appears, click Remote-SSH: Connect to Host….
You should see the heading Select configured SSH host or enter user@host.
Click + Add New SSH Host….
A textbox will appear with the heading Enter SSH Connection Command. In the box, type
ssh username@linux.cs.uchicago.edu
with username replaced by your CNetID, and press enter.
Next, you will see the heading Select SSH configuration file to update. Press enter to select the first option (which should contain the string “User” or “home”).
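For reference, this step simply writes a short entry into the configuration file you selected (usually ~/.ssh/config in your home directory). The entry should look roughly like the following, with username standing in for your CNetID; you can edit the file by hand later if you ever need to change it.

Host linux.cs.uchicago.edu
    HostName linux.cs.uchicago.edu
    User username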
Connecting
Click the green rectangle in the lower-left corner with the >< icon. Click Remote-SSH: Connect to Host…. You should see the heading Select configured SSH host or enter user@host. This time, you should see the option linux.cs.uchicago.edu (if not, you should retry “Initial Setup”). Click on this option.
A new VS Code Window will open. After a moment, you will see a pop-up.
You may see a pop-up prompting Select the platform of the remote host; if so, click Linux. You will then see a box with the heading Enter password for username@linux.cs.uchicago.edu (with username replaced by your CNetID). Enter the password corresponding to your CNetID, and press enter.
If the connection is not successful, you may be given an option to try again; click Retry.
If you succeed at connecting, there will be a green box in the lower-left corner of the window with the text SSH: linux.cs.uchicago.edu.
Getting Disconnected
If at any point you get disconnected from the server unintentionally, this will be indicated in the green box in the lower-left corner (with text such as “Disconnected from SSH”).
VS Code may show a pop-up asking if you want to reconnect. You can follow the prompts to reconnect. If that does not work, go back and follow the steps under Connecting again.
If you would like to disconnect from the server intentionally, click the green box in the lower-left corner with the text SSH: linux.cs.uchicago.edu, then click Close Remote Connection.
### Using the terminal¶
Have your VS Code window open, and check that you are connected to SSH. Open the View menu from the menu bar and click Terminal (as a shortcut, you can instead press Ctrl-Backtick, even on macOS). This will split the window into two panes. The top pane will be empty for now (or may have some “welcome” text). The bottom pane has the terminal.
You will see the bottom pane has several tabs: Terminal, Debug Console, Problems, and Output (if your window is narrow, some of these may be hidden under a three-dots menu icon). We only care about Terminal for now, so make sure that is selected. To the right of these tabs, you will see a dropdown menu and some additional icons. You will use these later, but you won’t need them for now.
In the body of the bottom pane, you will see a Linux prompt of the form
username@computer:~$
You can use this terminal pane to complete the Virtual Linux lab, if you haven’t already.
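The commands from the lab behave exactly as they do in the virtual desktop. For example (the directory name below is just an illustration, not something the lab asks for):

pwd               # print the directory you are currently in
ls                # list the files in that directory
mkdir practice    # create a new directory
cd practice       # move into it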
### Editing text files¶
When you get down to the section of the Virtual Linux lab titled Using an Editor, you will see it asks you to open a file in the editor by running
code test.txt
You can run this command (so if you had previously completed the lab up to this point, you can now continue). You will see the file open in the top pane of your VS Code window.
Working with VS Code via SSH works almost the same as using VS Code with the virtual desktop (except if you are using macOS, replace Ctrl with Command in most shortcuts — so Command-s instead of Ctrl-s). When you save, you are saving to the Linux computers on campus (it may take a few moments). Make sure to save often!
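As a quick end-to-end check that editing and running both work over SSH, you can try something like the following in the terminal pane; hello.py is a throwaway example file invented for this check, not part of any assignment.

code hello.py                 # opens (or creates) the file in the editor pane
# add the line  print("hello from the CS machines")  in the editor, save with Ctrl-s (Command-s on macOS), then run:
python3 hello.py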
Optional Note
The code terminal command works from within the virtual desktop, and also works from within VS Code when you are connected to the campus Linux computers by SSH. In both cases, you are opening files stored on the Linux computers on campus, not files stored locally on your own computer. While not necessary for this class, it is also possible to use the code command in your computer’s own terminal to open files on your own computer (or just to launch VS Code).
To enable this feature…
• …on Windows: This feature is enabled by default. If you are familiar with Windows PowerShell or Command Prompt, you can open VS Code by typing code at the prompt. If you are not familiar with Windows PowerShell or Command Prompt, you do not need to learn them for this class; while they look a bit like the Linux terminal, they use different commands.
• …on macOS: Open VS Code, then press Command-Shift-P to open the Command Palette. Begin typing Shell Command: Install ‘code’ command in PATH, and click on the option when it appears. From this point on, you will be able to open VS Code from the macOS terminal by typing code.
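Once the command is available, usage is the same on every platform. For example:

code .             # open the current folder in a VS Code window
code notes.txt     # open (or create) a single file
code --version     # confirm the command is installed and on your PATH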
### Running multiple instances of the terminal¶
When working on assignments, you will want to have two instances of the terminal running, one for testing code by hand, and the other for running automated tests.
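For instance, a typical layout while working on an assignment might be the following; the file names and the test command here are only illustrative assumptions, and each assignment's instructions will tell you exactly what to run.

# In one terminal instance, experiment with your code by hand:
python3 -i mycode.py     # -i drops you into an interactive prompt with your file loaded

# In the other, run the automated tests:
py.test                  # or "pytest", depending on what the assignment specifies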
Make sure you are connected to SSH, and open the Terminal pane if it is not yet open. To the right of the tab names (Terminal, Debug Console, etc.), you will see a dropdown menu and some icons. Here is what these do:
• The dropdown menu lets you select between the instances of the terminal that you currently have running. Right now, 1: bash will be selected, since we only have one instance of the terminal running, but…
• Clicking the + icon allows you to create a new instance of the terminal (the equivalent of opening another terminal window).
• To the right of this is an icon of a rectangle divided vertically in half; this allows you to see two terminal instances at once. You probably do not need to use this.
• Next is an icon of a trash can; clicking this will close the current terminal instance.
• Clicking the ^ icon will allow the terminal pane to take up the entire window.
• Clicking the x will close the terminal pane.
|
2021-02-28 09:53:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17286264896392822, "perplexity": 2607.279516176368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00080.warc.gz"}
|
https://math.meta.stackexchange.com/questions/19377/why-close-old-questions-with-accepted-answers-using-the-no-context-reason?cb=1
|
# Why close old questions with accepted answers using the “no context” reason?
Recently, I have noticed there is an increasing number of closings (or at least, close votes) on questions like this one. That is, archaic questions with accepted answers, which do not match the "effort" criterion practically imposed on most new questions nowadays.
I stress new, because this criterion is used mainly to discourage homework-grabbing practices, and to make it easier to write helpful answers for the OP (which is not by definition identical to great answers in my book, although they overlap naturally).
But for the type of question I describe, these motives do not apply. I am curious to hear about reasons for these close-votes.
I can think of one good reason to vote-to-close these questions: to indicate that they are not indicative of what constitutes a good question on MSE today. However, this reason is IMO mitigated by the other main purpose of MSE, namely that of a mathematical knowledge repository. Moreover, if it is for this reason that these questions ought to go, then the close reason should indicate this; I don't think that the close-votes really envisage reopening the question after "context" has been added.
N.B. I read this thread, but I contend that the scope is different enough for this not to be a duplicate.
• I don't see how the two purposes are in contradiction. Closing an old question does not make the knowledge go away; it's still visible to absolutely everybody. – Rahul Jan 23 '15 at 22:14
• @Rahul, unless closing is preparation for deletion. – Gerry Myerson Jan 23 '15 at 22:17
• Is that a self-answered question? As you state yourself, the only point of keeping old questions and answers is to build a math knowledge repository. If a question is of no value to such a repository, it should be closed. There can be multiple reasons for that to happen, and the people who closed that question thought one of them applied. You might disagree with that assessment, but that's a different matter. The distinction between old and new questions, or questions with or without accepted answers, is irrelevant. – Najib Idrissi Jan 24 '15 at 9:03
• @NajibIdrissi I see your point, but in that case the close-votes should state "of no future value", instead of "no context". That would be a different story altogether. But now, there is no amount of context that would improve e.g. the question I linked, therefore the close reason is wrong (and the purpose of "no context" is very time-dependent). – Lord_Farin Jan 24 '15 at 9:07
• So your only concern here is the wrong close reason? As an aside, there are plenty of ways to add context. And the purpose is not necessarily time dependent: context can improve searchability (by adding new keywords), help users understand what is difficult about the problem (the answer can just sweep difficulties under the rug by using clever techniques, and then you wonder how the answerer thought of that...), why such a question is interesting at all, or simply understand the problem at all by including relevant definitions, etc. – Najib Idrissi Jan 24 '15 at 9:15
• @Najib No, my concern is in the why; I can't see why we would like to close this type of question with this reason. I agree that my perception of "context" was a bit narrower than what you see (thanks!). But if this is all, the effort put in closing these should IMO be accompanied by creating stellar abstract duplicates that go above and beyond any specific example. The closing should then be as a duplicate. – Lord_Farin Jan 24 '15 at 10:48
• @Lord_Farin closing as duplicate also creates push-back; sometimes to unreasonable extents. Though also reasonable. In particular, many an asker here, especially of these questions, does have perfectly fine "duplicates" at their disposal anyway, typically some abstract, some concrete (I mean, their lecture notes, their textbooks, similar examples seen in class, etc.). Those do not help them, whence it might be somewhat pointless to point them at ours. (ed ajf) – quid Jan 24 '15 at 11:49
• @quid That's true for new questions, I agree to an extent. But it does not apply to old questions, which my point was about. – Lord_Farin Jan 24 '15 at 11:56
• Why doesn't it apply to old question? Just that nobody did empty some forgotten trash-bin for a year does not mean the garbage needs to stay with us forever or transformed into some artefact. – quid Jan 24 '15 at 11:59
• @quid A question needn't remain open for OP's understanding's sake if it's old. So it should be closed as a duplicate. More so since otherwise the garbage will simply be replaced by new instances of the same garbage; it's just not a "solution" for the long run. – Lord_Farin Jan 24 '15 at 12:03
The only reason old questions are kept is to serve as repository of mathematical knowledge. Very poor questions, regardless of the quality of the answer, do not deserve to be thus enshrined, and reflect poorly on the site.
I wish the excellent teachers on this site had spent their time on worthy questions, rather than rewarding very poor ones. But, if your old answer is at risk of deletion due to being associated with a question of suspect quality, there is an easy fix: edit the question until it is as good as the answers accompanying it. Finally, if appropriate, add your revised question and answer to the list of abstract duplicates, so that your answer will maximally benefit future students with similar questions.
• This makes little sense for the "no context" reason. The extent to which a question is useful to people visiting the site in future has little to do with what thoughts the OP may have had. In fact, it is better for that purpose to keep the question free of that kind of clutter. – user208259 Jan 25 '15 at 2:15
• @user208259 please see various forms of context Even a humble user like me feels able to do improvements along these lines in various cases. It should be quite trivial for the "best teachers." But even if only excessively poor typesetting and alike would be fixed it were already an improvement. (ed ajf) – quid Jan 25 '15 at 2:20
• @quid If you would prefer to see the questions edited, then do that instead of deleting them. – user208259 Jan 25 '15 at 2:27
• @user208259 I did not express any personal preference but commented on feasibility and potential usefulness that you denied based on, or so it seems to me, incomplete understanding of what "context" means. Further, I cannot even vote to delete, so your comment seems somewhat out of place for this reason too. – quid Jan 25 '15 at 2:35
• @quid It's not about you personally. The point is, that if for a particular question, editing is feasible, that's a good reason not to delete it. But if the objection is that the OP didn't include his inexpert attempts at a solution, that certainly isn't something that should be edited in by anyone if the purpose is to have a useful question and answer for future visitors to the site. On the contrary, it's better not to have that in most cases. – user208259 Jan 25 '15 at 2:38
• I have had to downvote. Unlike other SE sites, on this site we tend to reserve editing of questions for typos, formatting, and similar things. We don't generally rewrite the OP's question to meet our own standards - that is up to the OP. – Carl Mummert Jan 25 '15 at 3:11
• @Carl This happens often on old questions. Example: closed, deleted, edited by someone else than the OP, undeleted, reopened. – Najib Idrissi Jan 25 '15 at 8:39
• @user208259 Everything but the OP's background can be supplied in a later edit (the origin of the question can be tricky to find but not impossible). Once again, see this for an example. "Include your work" can be "this question is difficult because usual techniques to solve similar problems don't apply", for example, something that someone else can add (and there are other ways still). "Motivation" isn't hard to find for questions at this level. References and definitions are obvious ones. – Najib Idrissi Jan 25 '15 at 12:32
• People are free to disregard whatever they want. As Willie says these are suggestions. If someone who is able to ask clear mathematical questions can find another way to add context that's good too. The list is meant to be a guide for new users who don't know how to ask suitable questions. It's also a bit weird to see you assert what a policy was meant for when it was written more than a year before you created your account (10 days ago). – Najib Idrissi Jan 25 '15 at 12:42
• Do you want to talk about "useful"? That question is unfindable. Users who seek the answer to that question and somehow manage to find it will not understand why it should be difficult to answer. A user who stumbles upon the question will not understand why it should be interesting, where it comes from. It's unlinked to any similar problem that one could encounter; there's no general strategy, just an ad-hoc solution. If a student came to you and asked that question, would you simply hand them a paper with the answer on it and expect every new student to refer to that sheet of paper? – Najib Idrissi Jan 25 '15 at 12:52
• @user208259 the analogy with problem books is flawed, as those books, at least those I know, typically provide some context. Being collections, they may not do so for each problem individually, but there typically is some context; perhaps in some cases it is very reduced, yet even the title of the book normally provides some context on its own. – quid Jan 25 '15 at 14:11
• @user208259 Yes, my position is that it's the responsibility of the people who don't want a question to be deleted to improve the question so that it meets the website's standards. – Najib Idrissi Jan 25 '15 at 14:43
• In response to the comments above about what value certain isolated questions carry, I contend that abstract duplicates are the way to go. I will make a separate thread about this as one of the digestions of this discussion. The current options advocated are all considered lacking by a significant portion of the user base, so we ought to contemplate alternatives - and these are most likely going to require more effort. Quality comes at a price, so we shouldn't really expect the current, quite lazy practice to stand the test of time. Something must be done. – Lord_Farin Jan 25 '15 at 15:01
• @NajibIdrissi You are obviously not able to perceive that enforcing these "standards" is counterproductive when they result in bad content being saved (unreadable because of an OP's rambling) and good content being deleted (clearly stated problem and solution). The standards are doubly illogical when they are applied retroactively, placing an unrealistic burden on answerers to save good content. – user208259 Jan 25 '15 at 15:04
• @user208259 I explained this above already; typically lack of context is not the only reason for deletion. The presence of context is one thing to consider. A general assessment of the overall quality is made. If it is not good enough it is deleted; added context can improve the overall quality and thus prevent deletion. – quid Jan 25 '15 at 16:19
I understand your point of view, Lord_Farin (but maybe not in the same manner), and I see this as double jeopardy. Older questions had to pass different standards to make it past the court of public opinion when they were first posed, and the ones that are still around passed. Now, we are prescribing new standards to questions that were already tried and found not guilty, i.e. not closed or deleted. Some or many OPs may not be around any more, so they aren't here to put down what they attempted. Also, for the really old posts, I doubt the OP will even remember what their working was when they originally asked.
I noticed that quid pointed out that many of the educators are still answering the same questions (I am not calling you out here, quid); there has always been a solution to that problem though: close as a duplicate of an older post with acceptable answers (not necessarily accepted answers, since some people don't accept them).
• The question, though, is how many versions of a proof that $5 \mid n^4 - 1$ we want. Shall we dupe-merge until we have thirty, fifty, a hundred answers? I think at some point the marginal benefit is not only negligible but negative. As documented, this is also not only a question of visibility. (ed ajf) – quid Jan 25 '15 at 3:18
• @quid the question should be closed before it receives additional answers, or the non-dupe answers should be merged to the old post. – dustin Jan 25 '15 at 3:22
• Yes, I know it should; but in practice it is quite tricky to get it closed so quickly. – quid Jan 25 '15 at 3:39
• @quid that is why the non-dupe answers should be merged to the original post and the newer post then closed as a dupe. – dustin Jan 25 '15 at 3:40
• The SE dupe mechanism is completely brain-dead. The worst part is not possible duplication but, rather, inhibition of new answers by new experts that have recently joined the site. Rarely do old questions receive newer, better answers - they are stagnant. Big design flaw. – Bill Dubuque Jan 25 '15 at 3:42
• @BillDubuque maybe a meta request needs to be proposed then so SE can consider doing something if the post receives enough hype. – dustin Jan 25 '15 at 3:44
• @Bill Please explain how new experts are prohibited from answering old questions. – Najib Idrissi Jan 25 '15 at 8:12
• @Najib This has been explained at length elsewhere. Anyone who has been here long enough knows that only rarely do older questions receive new better answers. The quality of answers rarely improves over time because SE platform does a very poor job of re-exposing older questions and re-motivating potential new answerers. Many questions from the early days of the site are frozen in time with low-quality answers. They will probably never receive better answers. The recent rapid close and delete campaign greatly exacerbates these stagnancy problems. – Bill Dubuque Jan 25 '15 at 15:23
• The key to the stagnancy issue is to ignore the claims by SE.com that this site is primarily an archive. A few questions may be useful as an archive, but it seems clear from experience that the main benefit of this site (math.SE) is to provide answers to the OP of each question. Not only are old questions not re-exposed, the search feature is nearly useless for mathematics. @Bill Dubuque – Carl Mummert Mar 2 '15 at 12:31
• @CarlMummert I would agree the search isn't great, but I have used it with a good amount of success, so it can be done. – dustin Mar 2 '15 at 16:06
|
2019-07-22 09:31:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41933244466781616, "perplexity": 1035.3432628196053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527907.70/warc/CC-MAIN-20190722092824-20190722114824-00250.warc.gz"}
|
https://www.otexts.org/node/637
|
4.2.4 Bayesian approach
By abandoning the traditional notion of the p-value and using the concept of the likelihood, the LR approach to measuring the strength of evidence in support of one hypothesis, relative to another, takes one step away from the traditional hypothetico-deductive framework and the frequentist notion of probability. Bayesian approaches, on the other hand, take a giant leap away. In fact, Bayesian approaches represent a complete abandonment of traditional approaches [8].
In the Bayesian approach, parameters explicitly are considered random variables, and the data are considered fixed. The basic steps, modified from [11], are:
1. develop a hypothesis or hypotheses
2. represent the hypothesis or hypotheses as models
3. specify the parameters of those models as random variables
4. specify a prior probability distribution
5. calculate the likelihood (this is Fisher’s likelihood which was just described in the previous section)
6. calculate the posterior probability distribution
7. interpret the results
Intuitively, the Bayesian approach is rather simple. It is founded on the following relation, derived from Bayes’ Theorem. We will use $\theta$ to represent a hypothesized parameter value.
$$P(\theta \mid data) \propto P(\theta )P(data \mid \theta )$$
The posterior probability distribution (usually called the “posterior” and denoted $P(\theta \mid data)$), is the outcome of interest. It is the probability distribution of our parameter (our hypothesis), given the observed data, and is proportional to the prior probability distribution (usually called the “prior” and denoted $P(\theta )$) multiplied by $P(data \mid \theta )$. The prior represents the biologist’s belief about the possible values of the parameter of interest, and is determined prior to any analysis. As described by [9], the prior can be interpreted in three different ways; a frequency distribution based on a synthesis of previous data, an objective statement about the distribution based on ignorance (i.e., an uninformative prior), or a subjective statement of belief based on previous experience. The likelihood (denoted $P(data \mid \theta )$) is derived from Fisher’s likelihood methods as described above.
Thus, the Bayesian approach abandons the frequentist notion of probability altogether by incorporating notions of belief (specifically via the prior). Although this can be a bit alarming to many scientists, it is actually more in line with common usage of the term ’probability.’ Moreover, instead of a p-value (and, at least superficially, a nicely packaged decision to reject/fail to reject an arbitrary null hypothesis), the result in a Bayesian analysis is a posterior probability distribution. This can roughly be translated as representing the probability of different values of the parameter of interest, given the observed data. This language may be better suited to communication with decision makers in a policy context because it is more closely aligned with common usage, and it more directly addresses the actual question of interest [6]. In addition, because it includes the prior, it contains a mechanism to incorporate prior knowledge. In fact, one potential benefit of the Bayesian approach is the manner in which knowledge about a tangible biological phenomenon (e.g., an effect size) can be continually updated as new information or observations are discovered. As a result, some have argued that it is ideally suited to environmental sciences, especially where adaptive management is being considered.
As a simple example, consider our hypothesis about $\mu$. In a Bayesian approach, we first characterize our belief about this parameter, based on prior knowledge or data. Let’s assume that we do in fact believe that the original population is normal with $\mu = 5$. We will also continue to assume that $\sigma = 3$. Thus, our prior is $y \sim N(5,3)$. We then gather a sample of 10 items and find $\bar y = 6.6$. The next step would be to calculate the likelihood and use it to derive a posterior distribution that describes our new belief about $\mu$. In Fig.4.3a, the prior, likelihood, and posterior for this example are illustrated. Based on our sample, our updated estimate of $\mu$ is actually 6.45. We can also determine the limits within which 95% of possible values of $\mu$ may occur, given the data. However, to avoid confusion with the confidence interval, we will call this interval the 95% credible interval. For this example, it is the interval from 4.68 to 8.22.
(a) Prior: $y \sim N(5,3)$
(b) Prior: $y \sim N(1,1)$
(c) Prior: $y \sim$Uniform
Figure 4.3: Illustration of the Bayesian approach. Here, the impact of three different prior distributions on the posterior distribution is shown. In all cases, the prior is shown in blue, the likelihood is in green, and the posterior is in black. The distributions have been re-scaled to facilitate the illustration, and the posterior estimate of $\mu$ is shown at the dashed line. When the prior distribution is uninformative (Panel C), the posterior primarily depends on the likelihood.
To illustrate the impact of the prior distribution, Fig.4.3 includes two other prior distributions. For example, with the prior $y \sim N(1,1)$, the posterior estimate and 95% credible interval are 3.95 (2.6 to 5.29). In some cases, you may not have good prior information on which to base an estimate of the prior. In these cases, an uninformative prior, such as a uniform distribution (where all values have equal density), can be used1. This is illustrated in Fig.4.3c, where the posterior estimate and 95% credible interval are 6.6 (4.74 to 8.45). Here, the lack of an informative prior means that the posterior mainly is influenced by the likelihood. Given these changes in the prior, notice how the posterior has changed. The fact that the prior (which, by definition, is somewhat subjective) can influence the outcome of a Bayesian analysis has been a major criticism of Bayesian methods. On the other hand, the impact of the prior on the posterior can be evaluated in a quantitative fashion.
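For the conjugate normal-normal case used in this running example (known $\sigma$, normal prior on $\mu$), the posterior is available in closed form, so the numbers quoted above can be checked directly. The following is a minimal Python sketch, assuming the "3" and "1" in the priors are standard deviations; the printed values reproduce the estimates of Fig. 4.3a and 4.3b up to rounding.

```python
import math

sigma = 3.0          # known population standard deviation (as assumed in the text)
n, ybar = 10, 6.6    # sample size and observed sample mean

def normal_posterior(prior_mean, prior_sd):
    """Conjugate normal-normal update of the mean when sigma is known."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sigma**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * ybar)
    half_width = 1.96 * math.sqrt(post_var)       # central 95% credible interval
    return post_mean, (post_mean - half_width, post_mean + half_width)

print(normal_posterior(5, 3))   # ~ (6.45, (4.68, 8.23))  -- Fig. 4.3a
print(normal_posterior(1, 1))   # ~ (3.95, (2.60, 5.30))  -- Fig. 4.3b
```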
Although Bayes' Theorem is conceptually simple to understand, the calculation of the posterior distribution based on the prior and likelihood is difficult. Moreover, this difficulty is compounded when classical distributions such as the normal PDF do not adequately describe the distribution of the parameter of interest. In fact, in most cases, instead of trying to analytically determine the posterior distribution, sampling methods such as Markov chain Monte Carlo (MCMC) simulation are used. This fact is probably a major reason why we have yet to see more widespread adoption of Bayesian methods. Although it is beyond the scope of this text, R is a great choice for pursuing Bayesian calculations. [1] provides a useful introduction that focuses on R, and [5] is a thorough introduction to the use of Bayesian models within the context of more complicated ecological models. There also is a parallel R manual [6].
1. Actually, the uniform distribution introduces some mathematical difficulties that make calculation of the posterior very difficult. Thus, a very flat normal distribution (i.e., very large $\sigma$) usually is used to approximate a uniform distribution.
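As noted above, posteriors are usually obtained by sampling rather than analytically. Purely as an illustration (not the method used for Fig. 4.3), here is a minimal Metropolis sampler for the same example; since the sample mean is sufficient for $\mu$, the likelihood of $\bar y$ is used, and the mean of the draws should land near the analytic posterior mean of about 6.45.

```python
import math
import random

sigma, n, ybar = 3.0, 10, 6.6        # data summary from the running example
prior_mean, prior_sd = 5.0, 3.0      # the N(5, 3) prior of Fig. 4.3a

def log_posterior(mu):
    # log prior + log likelihood of the sample mean, both normal, up to constants
    log_prior = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
    log_lik = -0.5 * n * ((ybar - mu) / sigma) ** 2
    return log_prior + log_lik

def metropolis(n_draws=20000, step=1.0, start=5.0):
    draws, mu = [], start
    for _ in range(n_draws):
        proposal = mu + random.gauss(0.0, step)
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(mu):
            mu = proposal                      # accept the proposed value
        draws.append(mu)
    return draws

draws = metropolis()[2000:]                    # discard burn-in
print(sum(draws) / len(draws))                 # close to the analytic posterior mean
```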
|
2017-03-28 19:47:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8271581530570984, "perplexity": 393.86234832347185}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189884.21/warc/CC-MAIN-20170322212949-00509-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://mathoverflow.net/questions/330264/signed-variant-of-the-flint-hills-series
|
# Signed variant of the Flint Hills series
I asked my Calculus 2 students to come up with a series the convergence of which they are unable to decide. One of the students, Denis Zelent, invented a very interesting one: $$\sum_{n = 1}^\infty \frac{1}{n^2 \sin n} \, . \tag{1}$$
Question (short version): Has convergence of this series been studied in literature?
My immediate answer was that this must have something to do with the irrationality measure $$\mu$$ of $$\pi$$. Obviously, $$\mu \geqslant 2$$, and the best currently known upper bound for $$\mu$$ is $$\mu \leqslant 7.6063\!\ldots\,$$, due to Salikhov; see [V. Kh. Salikhov, On the Irrationality Measure of $$\pi$$. Russ. Math. Surv 63(3):570–572, 2008]. It is widely believed that $$\mu = 2$$.
In fact, the inequality $$\mu < 3$$ is equivalent to convergence of $$1 / (n^2 \sin n)$$ to zero. Thus if we knew that $$\mu \geqslant 3$$, the series (1) would diverge. On the other hand, if we had $$\mu < 2$$ (which is of course absurd), then one could easily show that $$1 / (n^2 \sin n) = O(n^{-1 - \varepsilon})$$ for some $$\varepsilon > 0$$, which would imply absolute convergence of the series (1).
My student searched the web and realized that his question is related to the well-known open problem, asking whether the Flint Hills series $$\sum_{n = 1}^\infty \frac{1}{n^3 \sin^2 n} \tag{2}$$ converges. An extension of this problem asks for convergence of a more general series $$\sum_{n = 1}^\infty \frac{1}{n^p |\sin n|^q} , \tag{3}$$ which is equivalent to absolute convergence of the series (1) when $$p = 2$$ and $$q = 1$$. For more details, see [Max. A. Alexeyev, On convergence of the Flint Hills series, arXiv:1104.5100, 2011].
To summarise, this is what we have found so far:
• lack of convergence of $$1 / (n^2 \sin n)$$ to zero would imply that $$\mu \geqslant 3$$, which is very unlikely;
• convergence (in particular: absolute convergence) of the series (1) would imply $$\mu \leqslant 3$$, which means it is certainly an open problem;
• lack of absolute convergence of the series (1) would not have any consequences for the estimates of $$\mu$$.
Question (long version)
1. Does absolute convergence of the series (1) imply any tighter bounds on the estimates of the irrationality measure $$\mu$$ of $$\pi$$?
2. Vice versa: Assuming that $$\mu$$ is known, can one tell whether the series (1) converges absolutely?
3. Same questions with absolute convergence changed into convergence. In other words: are cancellations of any help here?
4. Does the series (1) have a fancy name, similar to the Flint Hills series and Cookson Hills series? (And if not: can Denis choose an appropriate mountain range?)
Edited: I just noticed David Simmons's answer to an MO question on the Flint Hills series, which reduces the question of its convergence to a similar question for a series involving convergents of $$\pi$$. The same argument should work for the absolute convergence of the series (1), but I do not see right away if it leads to an answer to question 1.
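None of the questions above can be settled numerically, but a quick partial-sum experiment (a sketch, using mpmath so that $\sin n$ is evaluated reliably near multiples of $\pi$) shows where the difficulty lies: the terms spike exactly at the good rational approximations of $\pi$, e.g. at $n = 355$ because $355/113$ approximates $\pi$ unusually well, which is how the irrationality measure enters.

```python
from mpmath import mp, mpf, sin, fabs

mp.dps = 60   # extra precision so sin(n) is reliable even when n is close to a multiple of pi

partial = mpf(0)
largest = (mpf(0), 0)               # (|term|, n) of the largest term seen so far
for k in range(1, 100001):
    term = 1 / (k**2 * sin(k))
    partial += term
    if fabs(term) > largest[0]:
        largest = (fabs(term), k)

print(partial)    # partial sum of series (1) up to N = 100000 -- says nothing about convergence
print(largest)    # the largest |term| in this range occurs at n = 355
```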
|
2019-08-23 18:47:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 26, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8730572462081909, "perplexity": 254.16696070706084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318952.90/warc/CC-MAIN-20190823172507-20190823194507-00502.warc.gz"}
|
https://zbmath.org/?q=an:0674.10027
|
## Local coefficients as Artin factors for real groups.(English)Zbl 0674.10027
This paper deals with the local constant that appears in the functional equation of the automorphic L-function of Langlands. Let G be the real points of a quasisplit reductive algebraic group over $${\mathbb{R}}$$, $$P=MAN$$ a standard parabolic of G, $$\sigma$$ a generic representation of M; the local constant $$C(\sigma)$$ (ignoring other data) is defined by the Schiffmann-Knapp-Stein intertwining operator and Whittaker function. The author proves that $C(\sigma)=(i)^{2m+p}\prod^{n}_{i=1}\epsilon(a_i s, r_i)\frac{L(1-a_i s,\tilde r_i)}{L(a_i s, r_i)}$ where L is the Artin L-function and $$\epsilon$$ is the root number. As an application the author gives a new proof of the functional equation of the Rankin-Selberg L-function attached to pairs of cusp forms.
### MSC:
11F70 Representation-theoretic methods; automorphic representations over local and global fields
22E30 Analysis on real and complex Lie groups
11R39 Langlands-Weil conjectures, nonabelian class field theory
11F67 Special values of automorphic $$L$$-series, periods of automorphic forms, cohomology, modular symbols
|
2022-08-16 12:12:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8327614068984985, "perplexity": 803.3215376806484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572304.13/warc/CC-MAIN-20220816120802-20220816150802-00006.warc.gz"}
|
https://www.physicsoverflow.org/7654/vanishing-ricci-flow-on-a-curved-manifold
|
Vanishing Ricci flow on a curved manifold
+ 3 like - 0 dislike
154 views
If I understand this right the Ricci flow on a compact manifold given by
$\partial_t g_{\mu \nu} = - 2R_{\mu \nu} + \frac{2}{n}\!R_{\alpha}^{\alpha} \,g_{\mu \nu}$
tends to expand negatively curved regions and to shrink positively curved regions.
Looking at the above definition, I'm wondering if the parameter n can be used to achieve $\partial_t g_{\mu \nu} = 0$ even if the Ricci tensor is not zero, such that the validity of physics that depends on the metric being constant (as a precondition) could be extrapolated to curved manifolds to describe an expanding universe with a positive cosmological constant?
retagged Mar 25, 2014
I've just seen that a related question has been asked at Theoretical Physics SE: theoreticalphysics.stackexchange.com/questions/675/… So I'll observe both places for answers.
This post imported from StackExchange Physics at 2014-03-17 03:35 (UCT), posted by SE-user Dilaton
+ 3 like - 0 dislike
I get the impression that OP is referring to Normalized Ricci Flow (NRF):
$$\frac{1}{2} \partial_t g_{\mu\nu} ~=~ -R_{\mu\nu} + \frac{\langle R \rangle}{n} g_{\mu\nu}~.$$
Here $\langle R \rangle$ is the average scalar curvature over the full space-time $M$. The average procedure is often weighted with an Einstein-Hilbert Boltzmann factor. It is just a number (as opposed to a space-time dependent scalar quantity).
Also $n$ is the space-time dimension, which is fixed, and hence cannot be easily varied as OP suggests.
This post imported from StackExchange Physics at 2014-03-17 03:35 (UCT), posted by SE-user Qmechanic
answered Dec 12, 2011 by (3,110 points)
Thanks @Qmechanics You are right ...
This post imported from StackExchange Physics at 2014-03-17 03:35 (UCT), posted by SE-user Dilaton
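A small numerical sanity check of the point behind the question and the answer above: for an Einstein metric, i.e. one satisfying $R_{\mu\nu} = \frac{\langle R\rangle}{n} g_{\mu\nu}$ (imposed by hand below rather than computed from an actual curvature tensor), the right-hand side of the normalized flow vanishes even though the Ricci tensor does not. This is only an algebraic illustration, not a statement about any particular spacetime.

```python
import numpy as np

n = 4                                  # dimension (fixed by the manifold, as noted in the answer)
g = np.diag([1.0, 2.0, 3.0, 4.0])      # any nondegenerate metric, purely for illustration
K = 0.7                                # a constant chosen so that Ric = (n - 1) * K * g below

ricci = (n - 1) * K * g                # impose the Einstein condition by hand
R_avg = n * (n - 1) * K                # scalar curvature is constant, so it equals its average

rhs = -ricci + (R_avg / n) * g         # right-hand side of the normalized Ricci flow
print(np.allclose(rhs, 0.0))           # True: the flow is stationary
print(np.allclose(ricci, 0.0))         # False: the Ricci tensor itself is nonzero
```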
|
2021-09-23 18:01:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7116069793701172, "perplexity": 1324.607473012109}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00434.warc.gz"}
|
https://www.fatiando.org/verde/v1.1.0/api/generated/verde.Vector.html
|
# verde.Vector
class verde.Vector(components)[source]
Fit an estimator to each component of multi-component vector data.
Provides a convenient way of fitting and gridding vector data using scalar gridders and estimators.
Each data component provided to fit is fitted to a separated estimator. Methods like grid and predict will operate on the multiple components simultaneously.
Warning
Never pass code like this as input to this class: [vd.Trend(1)]*3. This creates 3 references to the same instance of Trend, which means that they will all get the same coefficients after fitting. Use a list comprehension instead: [vd.Trend(1) for i in range(3)].
Parameters:
    components : tuple or list
        A tuple or list of the estimator/gridder instances used for each component. The estimators will be applied for each data component in the same order that they are given here.
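A minimal sketch of the two constructions contrasted in the warning above, using vd.Trend(1) exactly as written there (the import alias vd is an assumption):

```python
import verde as vd

# Shares ONE Trend instance across all three components -- after fitting, every
# component ends up with the same coefficients (the pitfall the warning describes).
shared = vd.Vector([vd.Trend(1)] * 3)

# Three independent Trend instances, one per vector component (the recommended form).
independent = vd.Vector([vd.Trend(1) for i in range(3)])
```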
verde.Chain
Chain filtering operations to fit on each subsequent output.
Attributes:
    components : tuple
        Tuple of the fitted estimators on each component of the data.
    region_ : tuple
        The boundaries ([W, E, S, N]) of the data used to fit the interpolator. Used as the default region for the grid and scatter methods.
Methods
filter(coordinates, data[, weights]): Filter the data through the gridder and produce residuals.
fit(coordinates, data[, weights]): Fit the estimators to the given multi-component data.
get_params([deep]): Get parameters for this estimator.
grid([region, shape, spacing, dims, …]): Interpolate the data onto a regular grid.
predict(coordinates): Evaluate each data component on a set of points.
profile(point1, point2, size[, dims, …]): Interpolate data along a profile between two points.
scatter([region, size, random_state, dims, …]): Interpolate values onto a random scatter of points.
score(coordinates, data[, weights]): Score the gridder predictions against the given data.
set_params(**params): Set the parameters of this estimator.
## Examples using verde.Vector
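No gallery examples survived extraction here, so the following is a hedged usage sketch based only on the method descriptions below: two synthetic vector components are fitted with independent first-degree trends and then gridded. The synthetic data, point count, and spacing are arbitrary illustration values.

```python
import numpy as np
import verde as vd

# Synthetic scattered observations of a 2-component (east, north) vector field.
rng = np.random.RandomState(0)
coordinates = (rng.uniform(0, 10, 500), rng.uniform(-5, 5, 500))
east = 1.5 * coordinates[0] + 0.1 * coordinates[1]
north = -0.5 * coordinates[0] + 2.0 * coordinates[1]

interp = vd.Vector([vd.Trend(1) for i in range(2)])
interp.fit(coordinates, (east, north))          # data must be a tuple of arrays

grid = interp.grid(spacing=0.5)                 # region defaults to the fitted data region
print(grid)   # xarray.Dataset with east_component and north_component variables
```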
Vector.filter(coordinates, data, weights=None)
Filter the data through the gridder and produce residuals.
Calls fit on the data, evaluates the residuals (data - predicted data), and returns the coordinates, residuals, and weights.
Not very useful by itself, but this interface makes gridders compatible with other processing operations and is used by verde.Chain to join them together (for example, so you can fit a spline on the residuals of a trend).
Parameters:
    coordinates : tuple of arrays
        Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …).
    data : array or tuple of arrays
        The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component).
    weights : None or array or tuple of arrays
        If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None).
Returns:
    coordinates, residuals, weights
        The coordinates and weights are the same as the input. Residuals are the input data minus the predicted data.
Vector.fit(coordinates, data, weights=None)[source]
Fit the estimators to the given multi-component data.
The data region is captured and used as default for the grid and scatter methods.
All input arrays must have the same shape. If weights are given, there must be a separate array for each component of the data.
Parameters:
    coordinates : tuple of arrays
        Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). Only easting and northing will be used, all subsequent coordinates will be ignored.
    data : tuple of array
        The data values of each component at each data point. Must be a tuple.
    weights : None or tuple of array
        If not None, then the weights assigned to each data point of each data component. Typically, this should be 1 over the data uncertainty squared.
Returns:
    self
        Returns this estimator instance for chaining operations.
Vector.get_params(deep=True)
Get parameters for this estimator.
Parameters:
    deep : boolean, optional
        If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns:
    params : mapping of string to any
        Parameter names mapped to their values.
Vector.grid(region=None, shape=None, spacing=None, dims=None, data_names=None, projection=None, **kwargs)
Interpolate the data onto a regular grid.
The grid can be specified by either the number of points in each dimension (the shape) or by the grid node spacing. See verde.grid_coordinates for details. Other arguments for verde.grid_coordinates can be passed as extra keyword arguments (kwargs) to this method.
If the interpolator collected the input data region, then it will be used if region=None. Otherwise, you must specify the grid region.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output xarray.Dataset. Default names will be provided if none are given.
Parameters: region : list = [W, E, S, N] The boundaries of a given region in Cartesian or geographic coordinates. shape : tuple = (n_north, n_east) or None The number of points in the South-North and West-East directions, respectively. spacing : tuple = (s_north, s_east) or None The grid spacing in the South-North and West-East directions, respectively. dims : list or None The names of the northing and easting data dimensions, respectively, in the output grid. Defaults to ['northing', 'easting']. NOTE: This is an exception to the “easting” then “northing” pattern but is required for compatibility with xarray. data_names : list of None The name(s) of the data variables in the output grid. Defaults to ['scalars'] for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data. projection : callable or None If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected northing and easting coordinate arrays. This function will be used to project the generated grid coordinates before passing them into predict. For example, you can use this to generate a geographic grid from a Cartesian gridder. grid : xarray.Dataset The interpolated grid. Metadata about the interpolator is written to the attrs attribute.
verde.grid_coordinates
Generate the coordinate values for the grid.
Vector.predict(coordinates)[source]
Evaluate each data component on a set of points.
Requires a fitted estimator (see fit).
Parameters:
    coordinates : tuple of arrays
        Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). Only easting and northing will be used, all subsequent coordinates will be ignored.
Returns:
    data : tuple of array
        The values for each vector component evaluated on the given points. The order of components will be the same as was provided to fit.
Vector.profile(point1, point2, size, dims=None, data_names=None, projection=None, **kwargs)
Interpolate data along a profile between two points.
Generates the profile along a straight line assuming Cartesian distances. Point coordinates are generated by verde.profile_coordinates. Other arguments for this function can be passed as extra keyword arguments (kwargs) to this method.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output pandas.DataFrame. Default names are provided.
Includes the calculated Cartesian distance from point1 for each data point in the profile.
Parameters: point1 : tuple The easting and northing coordinates, respectively, of the first point. point2 : tuple The easting and northing coordinates, respectively, of the second point. size : int The number of points to generate. dims : list or None The names of the northing and easting data dimensions, respectively, in the output dataframe. Defaults to ['northing', 'easting']. NOTE: This is an exception to the “easting” then “northing” pattern but is required for compatibility with xarray. data_names : list of None The name(s) of the data variables in the output dataframe. Defaults to ['scalars'] for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data. projection : callable or None If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected northing and easting coordinate arrays. This function will be used to project the generated profile coordinates before passing them into predict. For example, you can use this to generate a geographic profile from a Cartesian gridder. table : pandas.DataFrame The interpolated values along the profile.
Vector.scatter(region=None, size=300, random_state=0, dims=None, data_names=None, projection=None, **kwargs)
Interpolate values onto a random scatter of points.
Point coordinates are generated by verde.scatter_points. Other arguments for this function can be passed as extra keyword arguments (kwargs) to this method.
If the interpolator collected the input data region, then it will be used if region=None. Otherwise, you must specify the grid region.
Use the dims and data_names arguments to set custom names for the dimensions and the data field(s) in the output pandas.DataFrame. Default names are provided.
Parameters: region : list = [W, E, S, N] The boundaries of a given region in Cartesian or geographic coordinates. size : int The number of points to generate. random_state : numpy.random.RandomState or an int seed A random number generator used to define the state of the random permutations. Use a fixed seed to make sure computations are reproducible. Use None to choose a seed automatically (resulting in different numbers with each run). dims : list or None The names of the northing and easting data dimensions, respectively, in the output dataframe. Defaults to ['northing', 'easting']. NOTE: This is an exception to the “easting” then “northing” pattern but is required for compatibility with xarray. data_names : list of None The name(s) of the data variables in the output dataframe. Defaults to ['scalars'] for scalar data, ['east_component', 'north_component'] for 2D vector data, and ['east_component', 'north_component', 'vertical_component'] for 3D vector data. projection : callable or None If not None, then should be a callable object projection(easting, northing) -> (proj_easting, proj_northing) that takes in easting and northing coordinate arrays and returns projected northing and easting coordinate arrays. This function will be used to project the generated scatter coordinates before passing them into predict. For example, you can use this to generate a geographic scatter from a Cartesian gridder. table : pandas.DataFrame The interpolated values on a random set of points.
Vector.score(coordinates, data, weights=None)
Score the gridder predictions against the given data.
Calculates the R^2 coefficient of determination between the predicted values and the given data values. A maximum score of 1 means a perfect fit. The score can be negative.
If the data has more than 1 component, the scores of each component will be averaged.
Parameters: coordinates : tuple of arrays Arrays with the coordinates of each data point. Should be in the following order: (easting, northing, vertical, …). data : array or tuple of arrays The data values of each data point. If the data has more than one component, data must be a tuple of arrays (one for each component). weights : None or array or tuple of arrays If not None, then the weights assigned to each data point. If more than one data component is provided, you must provide a weights array for each data component (if not None). score : float The R^2 score
Vector.set_params(**params)
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.
Returns: self
|
2022-06-28 16:01:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3943597078323364, "perplexity": 2330.804701215896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00733.warc.gz"}
|
https://www.physicsforums.com/threads/adiabatic-expansion-of-an-ideal-gas.454396/
|
# Adiabatic expansion of an ideal gas.
Problem: One mole of a diatomic ideal gas, initially having pressure P and volume V, expands so as to have pressure 2P and volume 4V. Determine the entropy change of the gas in the process.
Attempt: I thought this would just be R ln(V2/V1)... So, I said (8.314)*ln(4) but it's wrong...
There's an example almost exactly like it in my text book, and I don't see where I'm going wrong. The example they use just leaves the answer as 4R... Could I be using the wrong value for R?
I have a feeling that PV^gamma fits in there somehow, since we're given the fact that it's diatomic (gamma=1.4), but I don't know how... R ln(V2/V1) is an expression for the entropy change for an adiabatic process, right?
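The state change here is not adiabatic at all: both P and V increase, so T and S must change, and R ln(V2/V1) alone (which is the isothermal expression) cannot be the whole answer. A hedged sketch of one standard route, using the ideal-gas entropy change $\Delta S = n C_V \ln(T_2/T_1) + n R \ln(V_2/V_1)$ with $T \propto PV$ for fixed n, and assuming the diatomic gas has $C_V = \tfrac{5}{2}R$ (no vibrational contribution):

```python
import math

R = 8.314            # J / (mol K)
Cv = 2.5 * R         # diatomic ideal gas, translation + rotation only (assumption)

# State change (P, V) -> (2P, 4V) for one mole, so T2/T1 = (P2*V2)/(P1*V1) = 8.
T_ratio = 2 * 4
V_ratio = 4

dS = Cv * math.log(T_ratio) + R * math.log(V_ratio)   # entropy is a state function
print(dS)            # about 54.7 J/K, i.e. (19/2) * R * ln 2
```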
|
2022-05-28 11:38:08
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8358961343765259, "perplexity": 527.909852610702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016373.86/warc/CC-MAIN-20220528093113-20220528123113-00612.warc.gz"}
|
http://cvgmt.sns.it/paper/1920/
|
# Willmore Spheres in Compact Riemannian Manifolds
created by mondino on 18 Sep 2012
modified by paolini on 14 Jun 2013
[BibTeX]
Published Paper
Inserted: 18 sep 2012
Last Updated: 14 jun 2013
The paper is devoted to the variational analysis of the Willmore, and other $L^2$ curvature functionals, for 2-d surfaces immersed in a compact Riemannian $m$-manifold $(M^m,h)$ with $m\geq 3$; the double goal of the paper is on one hand to give the right setting for doing the calculus of variations (including min-max methods) of such functionals for immersions into manifolds and on the other hand to prove existence of possibly branched Willmore spheres under curvature or topological conditions. For this purpose, using integrability by compensation, we develop the regularity theory for the critical points of such functionals; a crucial step consists in writing the Euler-Lagrange equation (which is a system), first in a conservative form making sense for weak $W^{1,\infty}\cap W^{2,2}$ immersions, then as a system of conservation laws. Exploiting this new form of the equations we are able on one hand to prove full regularity of weak solutions to the Willmore equation in any codimension, on the other hand to prove a rigidity theorem concerning the relation between CMC and Willmore spheres. One of the main achievements of the paper is that for every non-null 2-homotopy class $0\neq \gamma \in \pi_2(M^m)$ we produce a canonical representative given by a Lipschitz map from the 2-sphere into $M^m$ realizing a connected family of conformal smooth (possibly branched) area-constrained Willmore spheres (as explained in the introduction, this comes as a natural extension of the immersed spheres in homotopy classes constructed in a celebrated paper by Sacks and Uhlenbeck, in situations when they do not exist); moreover for every ${\cal A}>0$ we minimize the Willmore functional among connected families of weak, possibly branched, immersions of $S^2$ having total area ${\cal A}$ and we prove full regularity for the minimizer. Finally, under a mild curvature condition on $(M^m,h)$, we minimize $\int(|\mathbb I|^2+1)$, where ${\mathbb I}$ is the second fundamental form, among weak possibly branched immersions of $S^2$ and we prove the regularity of the minimizer.
|
2019-08-22 06:43:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8391921520233154, "perplexity": 420.4228462224824}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00060.warc.gz"}
|
http://gssi.infn.it/seminars/impact-seminars-2015/item/416-lorenz-goedel-and-penrose-new-perspective-in-causality-and-determinism-in-fundamental-physics
|
## Lorenz, Goedel and Penrose: New Perspective in Causality and Determinism in Fundamental Physics
• Date February 19, 2015
• Hour 5 pm
• Room GSSI Main Lecture Hall
• Speaker Tim Palmer (University of Oxford)
Abstract:
A novel theory of quantum physics is developed which synthesises the role of symbolism in two distinct areas of physics: the symbolic algebra of quantum measurement and symbolic dynamics on fractal invariant sets in nonlinear dynamical systems theory. In this synthesis, the universe $U$ is treated as an isolated deterministic dynamical system evolving precisely on a measure-zero fractal invariant subset $I_U$ of its state space. By treating the geometry of $I_U$ as more primitive than differential (or finite-difference) evolution equations on $I_U$, a non-classical approach to the fundamental physics of $U$ is developed. In particular, using symbolic notation, a specific topological representation of $I_U$ is constructed which encodes quaternionic multiplication and from which the statistical properties of the complex Hilbert Space are emergent.
In the realistic setting of Invariant Set Theory, the non-commutativity of Hilbert Space observables is manifest from the number-theoretic incommensurateness of $\phi/\pi$ and $\cos \phi$ for angular coordinate $\phi$; physically this describes the precise sense in which the measure-zero set $I_U$ is counterfactually incomplete. Such incompleteness allows reinterpretations of familiar quantum phenomena, consistent with realism, local causality and effective experimenter free will. By construction, Invariant Set Theory implies the existence of a much stronger synergy between cosmological and quantum physics than is the case in contemporary physical theory. As such, the theory suggests an approach to synthesising gravitational and quantum physics, quite different from current approaches.
As a result, Invariant Set Theory provides new perspectives on key problems (such as the nature of the dark universe and information loss in black holes) in contemporary physics.
|
2017-05-29 15:22:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6900181770324707, "perplexity": 1186.0018568495918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612399.20/warc/CC-MAIN-20170529145935-20170529165935-00020.warc.gz"}
|
http://crypto.stackexchange.com/tags/randomness/new
|
# Tag Info
0
Well, all representations of the field $GF(2^8)$ are isomorphic. What that means is that there is a mapping between one representation of that field to another, where that mapping preserves all field properties. That is, if we had two representations $A$ and $B$, there exists a mapping $M$ from elements of $A$ to elements of $B$ such that, for any two ...
-2
So far as I know, Windows does not (but you never know, it has an endless series of APIs so anything is possible). However, Intel hardware does. The Microsoft C/C++ compiler (and the Intel compiler) has intrinsics for obtaining a hardware-generated random value. That value is obtained by running two circuits backwards so that they are unstable. They generate a ...
0
My algorithm so far is entirely built on bases. The point here is NOT to get the minimal complexity, but rather a random complexity - preferably highly non-affine. Here is a rough sketch of the algorithm so far. Does this look like it would do the trick? So far as I can see, everything is covered that defines a finite field. In order to construct a random ...
5
GF$(2^8)$ or $\mathbb F_{2^8}$ can also be viewed as the vector space $\mathbb F_2^8$ of $8$-bit vectors (or bytes) over GF$(2)$ or $\mathbb F_2$. Suppose $\{\beta_0, \beta_1, \cdots, \beta_7\}$ is a basis of $\mathbb F_2^8$ over $\mathbb F_2$, that is, the sum $$a_0\beta_0 \oplus a_1\beta_1 \oplus \cdots \oplus a_7\beta_7, ~ a_i \in \mathbb F_2$$ equals ...
-1
You can begin by enumerating all the irreducible polynomials of degree 8. This gives you all the possible field representations. If I remember correctly, the Eisenstein criterion is one of the algorithms for testing irreducibility of polynomials. All these fields are isomorphic to each other.
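For readers who want to experiment with the byte-as-vector view described in the GF$(2^8)$ answers above, here is a small Python sketch (my own illustration, not taken from the answers): addition is coordinate-wise XOR, and multiplication is carry-less multiplication reduced modulo one chosen irreducible polynomial. The AES polynomial $x^8+x^4+x^3+x+1$ is used purely as an example; any other degree-8 irreducible polynomial gives an isomorphic representation, as noted above.

```python
# A minimal sketch: bytes viewed as bit-vectors over GF(2), multiplied modulo
# one particular irreducible polynomial.  The choice 0x11B (the AES polynomial)
# is arbitrary; other irreducible degree-8 polynomials give isomorphic fields.

def to_bits(a: int) -> list[int]:
    """Coordinates of a byte w.r.t. the standard basis {1, x, x^2, ..., x^7}."""
    return [(a >> i) & 1 for i in range(8)]

def gf_add(a: int, b: int) -> int:
    """Addition in GF(2^8) is bitwise XOR (coordinate-wise addition in F_2^8)."""
    return a ^ b

def gf_mul(a: int, b: int, mod: int = 0x11B) -> int:
    """Carry-less multiplication followed by reduction modulo `mod`."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:          # degree reached 8 -> reduce
            a ^= mod
    return result

if __name__ == "__main__":
    x, y = 0x57, 0x83
    print(to_bits(x))          # basis coordinates of 0x57
    print(hex(gf_add(x, y)))   # 0xd4
    print(hex(gf_mul(x, y)))   # 0xc1 (the well-known example from the AES spec)
```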
2
No, of course this is not a good idea. CLCG's were not designed for cryptographic purposes, and there's no reason to expect them to provide cryptographic security. Why would you do that, when there are perfectly good cryptographic-strength PRNGs available? As one simple example, if you use a CLCG built out of two linear congruential generators with the ...
2
Assuming that nobody's screwed up the implementation, it should not matter what kind of RNG you get. This is because all java.security.SecureRandom implementations are supposed to be cryptographically strong, as defined in RFC 1750 §6.3 (emphasis mine): 6.3 Cryptographically Strong Sequences In cases where a series of random quantities must be ...
3
The key thing here is that even in the case that the final algorithm used is "SHA1PRNG", some entropy will be collected somehow for generating the seed that initializes the PRNG. So, it all depends on the seed. In this case, you can see in the code of sun.security.provider.SecureRandom that the seed is generated by the class ...
|
2015-01-26 16:44:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8479496836662292, "perplexity": 352.0313684708398}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115864313.15/warc/CC-MAIN-20150124161104-00185-ip-10-180-212-252.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1322840/how-is-a-formal-system-including-only-a-first-order-axiomatization-of-induction
|
# How is a formal system including only a first-order axiomatization of induction stronger than a system without?
Stumbled upon another aspect of Peano arithmetic that I find confusing...
I understand that what I write in the title is in fact the case: certain statements provable in PA are not provable in Robinson arithmetic, for example.
I can also see how a second-order axiomatization of induction (such as in Peano's original axioms) makes a crucial difference by excluding non-standard models of the axioms.
But I don't quite see how a first-order axiomatization of induction allows us to show anything about $\mathbb{N}$ beyond what the same theory without induction does.
May I ask you to show me where I go wrong with the following informal argument:
(I will refer to the axioms of PA excluding the axiom schema of induction as the 'basic axioms' of PA, i.e. "There exists an x s.t. 0 = x", "For all x, there exists a y s.t. S(x) = y", and so on.)
1. Consider any first-order definable property P that holds for all n in $\mathbb{N}$. This property is fully characterized by the 'basic axioms' of PA, since these define the relevant properties of the natural numbers - even if they do not uniquely characterize them.
2. Now consider PA without induction: the basic axioms ensure that all elements of the domain have these characteristics, both in the standard model and the non-standard models. In particular, in the non-standard model, the elements that are not natural numbers still must follow the basic axioms, since the axioms range over the entire domain of interpretation.
3. But then, in all models, the basic axioms hold for all elements, so any relevant first-order property defined on the basis of these axioms will hold. Since PA (with or without induction) is a first order theory, there must be a proof of the statement, by first-order completeness.
To paraphrase, I don't grasp how all models of PA (including the non-standard ones) can both satisfy the basic properties of natural numbers outlined by the non-induction axioms, and at the same time contain elements for which not all first-order properties of the natural numbers hold. For example, the Wikipedia article on Robinson arithmetic quotes Burgess, Fixing Frege:
Similarly, one cannot prove that Sx ≠ x
This property already seems to be defined by the axioms that demand that every element has a successor, and that the successor function is an injection, so regardless of the induction axiom, it seems it should universally hold (for both standard and non-standard elements of the domain). But then, it seems it should also be provable, by completeness of our first-order theory.
• It is unclear what you mean by "fully characterized by the basic axioms of PA". What does it mean for a property to be "characterized by" a set of axioms? In which sense do the Robinson axioms "characterize" the property $\varphi(n)\equiv \exists x (n=n\land x=S(x))$, for example? – hmakholm left over Monica Jun 12 '15 at 16:59
• @Henning Point taken. My confusion is seems to be the result of not properly thinking through my intuition of 'fully characterized by' vs. what I formally learned about first-order definability. That said, I can now pinpoint my earlier confusion a bit more precisely: in order to prove the induction steps of, say, the property I mention above (Sx ≠ x), we only make use of the basic axioms. So what I really needed to do was to think about what the induction axiom (schema) "adds" then in terms of excluding models that are not excluded otherwise. – Bert Zangle Jun 13 '15 at 11:12
• @zanglebert: Semantically, "excluding models that are not excluded otherwise" is all any axiom ever does. By the way, your examples ("0 exists", "$S(x)$ exists for all $x$") would not count as "axioms" in contemporary presentations of mathematical logic -- they're just properties of the language you're working with. – hmakholm left over Monica Jun 13 '15 at 12:09
What goes wrong with your argument is that what you call the "basic axioms" of PA admit models that are excluded by the induction axioms. For example, let $\mathbb{N}_{\infty}$ be the structure for the language of PA with underlying set $\mathbb{N} \cup \{\infty\}$ with the usual arithmetic on $\mathbb{N}$ and with $\infty + x = x + \infty = \infty$, $0\times \infty = \infty\times 0 = 0$, $S(x)\times\infty=\infty\times S(x) = \infty$ for any $x$, and $S(\infty) =\infty$. Then $\mathbb{N}_{\infty}$ is a model of Robinson's axioms that does not satisfy $S(x) \neq x$. So $\mathbb{N}_{\infty}$ is a model of Robinson's axioms that is not a model of PA.
• I think I see the point, but I'd like to ask to be sure. I am not familiar with the notation (and possibly, the concept) of: "For example, let $\mathbb{N}$ be the structure for the language of PA with underlying set $\mathbb{N} \cup \{\infty\}$" Here, $\infty$ stands for a limit ordinal, in particular: the smallest ordinal greater than every natural number? – Bert Zangle Jun 13 '15 at 11:44
• @zanglebert: No, $\infty$ here just stand for something that is not already an element of $\mathbb N$, and the text specifies explicitly how $S$, $+$, and $\times$ works on it. It doesn't act like an ordinal. – hmakholm left over Monica Jun 13 '15 at 12:02
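To make the counterexample concrete, here is a small Python sampling check (an illustration only, not a proof, since the domain is infinite): it implements the arithmetic on $\mathbb{N}\cup\{\infty\}$ exactly as defined in the answer and confirms that two sampled Robinson-style recursion identities hold while $S(x)\neq x$ fails at $\infty$.

```python
# Sketch only: a finite sampling check of the structure N ∪ {∞} defined in the
# answer above.  It cannot verify the axioms (the domain is infinite), but it
# shows concretely that S(∞) = ∞, so "S(x) ≠ x" fails in this model.
INF = float("inf")

def S(x):
    return INF if x == INF else x + 1

def add(x, y):
    return INF if INF in (x, y) else x + y

def mul(x, y):
    if x == 0 or y == 0:       # 0·∞ = ∞·0 = 0, as stipulated
        return 0
    return INF if INF in (x, y) else x * y

sample = [0, 1, 2, 3, 7, INF]

# Spot-check two recursion identities of Robinson arithmetic on the sample.
for x in sample:
    for y in sample:
        assert add(x, S(y)) == S(add(x, y))
        assert mul(x, S(y)) == add(mul(x, y), x)

# The PA-provable sentence "for all x, S(x) ≠ x" is false here:
print([x for x in sample if S(x) == x])   # [inf]
```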
|
2020-03-31 20:46:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8215599060058594, "perplexity": 344.5770553515806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370503664.38/warc/CC-MAIN-20200331181930-20200331211930-00130.warc.gz"}
|
https://mathics-development-guide.readthedocs.io/en/latest/extending/developing-code/extending/case-studies/Curl/in-mathics.html
|
# Writing Curl in Mathics
Mathics is extensible; it allows creation of new functions, and new builtin functions all using the Mathics language.
So before writing this in Python or in SymPy, let us see how to do this in Mathics itself. We are only going to try the first two forms, curl in two and three dimensions.
Note: by the time you are reading this, Curl has already been added to Mathics. So we will instead define a “builtin” function MCurl (Mathics Curl) so that we don’t have function name clashes.
## Two-dimensional Mathematical Definition
Curl is defined as:
$\partial f_2 / \partial x_1 - \partial f_1 / \partial x_2$
for two-dimensional vectors.
## Two-dimensional Mathics Function
Translating the above definition into Mathics:
MCurl[{f1_, f2_}, {x1_, x2_}] := D[f2, x1] - D[f1, x2]
Now let’s try that inside Mathics using the two-dimensional example that can be found in the WMA reference for Curl
$ mathics
Mathics 5.0.3dev0

In[1]:= MCurl[{f1_, f2_}, {x1_, x2_}] := D[f2, x1] - D[f1, x2]
Out[1]= None

In[2]:= (* Test the 2D definition: *) MCurl[{y, -x}, {x, y}]
Out[2]= -2

In[3]:= v[x_, y_] := {Cos[x] Sin[y], Cos[y] Sin[x]}
Out[3]= None

In[4]:= MCurl[v[x, y], {x, y}]
Out[4]= 0

## Three-dimensional Mathematical Definition

In three dimensions, things are a little more involved:

$( \partial f_3 / \partial x_2 - \partial f_2 / \partial x_3, \ \ \partial f_1 / \partial x_3 - \partial f_3 / \partial x_1, \ \ \partial f_2 / \partial x_1 - \partial f_1 / \partial x_2 )$

## Three-dimensional Mathics Function

Translating the above definition into Mathics:

In[5]:= MCurl[{f1_, f2_, f3_}, {x1_, x2_, x3_}] := {
          D[f3, x2] - D[f2, x3],
          D[f1, x3] - D[f3, x1],
          D[f2, x1] - D[f1, x2]
        }
Out[5]= None

In[6]:= (* An example from WMA VectorAnalysis: *) MCurl[{y, -x, 2 z}, {x, y, z}]
Out[6]= {0, 0, -2}

## Adding Curl as an autoloaded function

The above code was done in an interactive session. Below we extract the function definitions and package this.

(* Two- and three-dimensional Curl, taken from the mathematical definitions *)

Begin["System"] (* Add definition in System namespace *)

(* Set Information[] or ? help *)
MCurl::usage = "returns the curl of a two- or three-dimensional vector field";

(* Curl in two dimensions *)
MCurl[{f1_, f2_}, {x1_, x2_}] := D[f2, x1] - D[f1, x2]

(* Curl in three dimensions *)
MCurl[{f1_, f2_, f3_}, {x1_, x2_, x3_}] := {
  D[f3, x2] - D[f2, x3],
  D[f1, x3] - D[f3, x1],
  D[f2, x1] - D[f1, x2]
}

Protect[MCurl] (* Make sure the function cannot easily be changed *)

End[]

Place the above code in mathics-core/mathics/autoload/rules/Curl.m, and when Mathics starts up, this code will be evaluated.

## Testing autoloaded Curl function

Now let us try MCurl in a mathics session:

$ mathics
Mathics 5.0.3dev0
on CPython 3.8.12 (heads/v2.3.4.1_release:4a6b4d3504, Jun 3 2022, 15:46:12)
...
In[1]:= ?MCurl
returns the curl of a two- or three-dimensional vector field
Out[1]= Null
In[2]:= Attributes[MCurl]
Out[2]= {Protected}
In[3]:= MCurl[{y, -x}, {x, y}]
Out[3]= -2
In[4]:= v[x_, y_] := {Cos[x] Sin[y], Cos[y] Sin[x]}
Out[4]= None
In[5]:= MCurl[v[x, y], {x, y}]
Out[5]= 0
In[6]:= MCurl[{y, -x, 2 z}, {x, y, z}]
Out[6]= {0, 0, -2}
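As a cross-check outside Mathics (my own sketch, not part of the Mathics documentation), the same component formulas can be evaluated with SymPy, which the introduction mentioned as an alternative, and give the same results:

```python
# SymPy cross-check of the MCurl results above (a sketch, not Mathics code).
from sympy import symbols, diff, cos, sin

x, y, z = symbols("x y z")

def curl2d(f, vars):
    (f1, f2), (x1, x2) = f, vars
    return diff(f2, x1) - diff(f1, x2)

def curl3d(f, vars):
    (f1, f2, f3), (x1, x2, x3) = f, vars
    return [diff(f3, x2) - diff(f2, x3),
            diff(f1, x3) - diff(f3, x1),
            diff(f2, x1) - diff(f1, x2)]

print(curl2d((y, -x), (x, y)))                          # -2
print(curl2d((cos(x)*sin(y), cos(y)*sin(x)), (x, y)))   # 0
print(curl3d((y, -x, 2*z), (x, y, z)))                  # [0, 0, -2]
```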
|
2022-12-02 06:16:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8662629723548889, "perplexity": 7961.662036533667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710898.93/warc/CC-MAIN-20221202050510-20221202080510-00647.warc.gz"}
|
http://inpa-old.lbl.gov/INPA/Abstracts/030411.html
|
### Status of the KamLAND experiment
Thomas O'Donnell
LBNL
Abstract:
KamLAND is a one kiloton liquid scintillator detector which studies neutrino oscillation with reactor-antineutrinos at an average baseline of 180km. The experiment was the first to report reactor-antineutrino disappearance consistent with the neutrino mass splitting favored by the LMA-MSW solution to the Solar Neutrino Problem. Furthermore, KamLAND observed distortion of the reactor spectrum -- the fingerprint of mass-driven flavor oscillation -- and is uniquely sensitive to the mass splitting $\Delta m^{2}_{21}$. In this talk I will describe the experiment and present the results of the most recent data set which amounts to a total exposure of $3.49 \times 10^{32}$ proton-years and includes data collected with more favorable background conditions achieved by a detector radiopurity upgrade. Under the assumption of CPT invariance, a three-flavor analysis combining KamLAND and solar data yields best-fit values of the oscillation parameters: $\Delta m^{2}_{21} = 7.50^{+0.19}_{-0.20} \times 10^{-5} \rm{eV}^{2}$, $\tan^{2} \theta_{12} = 0.452^{+0.035}_{-0.033}$, and weak constraints on $\theta_{13}$.
Finally, as the current phase of data taking draws to an end, I will briefly describe KamLAND-Zen --- a plan to repurpose the detector to search for neutrino-less double beta decay of $^{136}$Xe.
|
2017-03-25 07:49:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.759300947189331, "perplexity": 1884.5374005322064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188891.62/warc/CC-MAIN-20170322212948-00274-ip-10-233-31-227.ec2.internal.warc.gz"}
|
https://www.futurelearn.com/courses/advanced-machine-learning/1/steps/239573
|
# Expected Loss, the Bias-Variance Decomposition & Overfitting
We will discuss the idea that training a statistical model is equivalent to finding values for its parameters that optimize a loss function. In doing this, we will be able to discover the value for the loss function on the training data.
It is important to understand, though, that optimizing the performance of the model on the training data is not our goal. What we normally want is, rather, to optimize the performance of the model on new data. This is the expected loss on new data.
#### Aside: Transductive Learning
Very occasionally it will be of interest to us that the model do well only on a particular set of new data. In such a case the expected loss of the model on all possible data loses importance. Instead we want to know the expected loss on that particular set of new data.
Before considering what this desire to minimize expected loss on new data means for us when training a model, let us first analyze the expected loss, hereafter the expected error, of a model on data.
To do this, we introduce three concepts:
1. The Irreducible Error. This is the inherent randomness in the system being modelled.
2. The Bias of the Model. This is the extent to which the expected value of the system differs from the expected value of the model.
3. The Variance of the Model. This is just what is stated: The variance of the model function. Intuitively it can be understood as the extent to which the model moves around its mean.
The bias-variance decomposition is the decomposition of the expected error of a model into these three concepts. For squared-error loss it takes the form:
$E\big[(y - \hat{f}(x))^2\big] = \text{Bias}\big[\hat{f}(x)\big]^2 + \text{Var}\big[\hat{f}(x)\big] + \sigma^2$
where $\sigma^2$ denotes the irreducible error.
Interested students can see a formal derivation of the bias-variance decomposition in the Deriving the Bias Variance Decomposition document available in the related links at the end of the article.
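To see the decomposition in action, here is a small Python simulation (the target function, noise level, and linear model are arbitrary choices for illustration, not part of the course materials): it estimates the bias, variance, and irreducible error of a linear model at a single test point and checks that their sum approximates the expected squared error there.

```python
# Illustrative simulation of the bias-variance decomposition (all choices of
# target function, noise level and model are assumptions for this example).
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2                # true deterministic function
sigma = 0.5                       # std dev of irreducible noise
x_train = np.linspace(-1, 1, 20)
x0 = 0.7                          # point at which we decompose the error

preds = []
for _ in range(2000):             # many independent training sets
    y_train = f(x_train) + rng.normal(0, sigma, x_train.size)
    b1, b0 = np.polyfit(x_train, y_train, 1)   # fit a straight line
    preds.append(b1 * x0 + b0)
preds = np.array(preds)

bias_sq = (preds.mean() - f(x0)) ** 2
variance = preds.var()
irreducible = sigma ** 2

y_new = f(x0) + rng.normal(0, sigma, preds.size)  # fresh noisy observations
expected_error = np.mean((y_new - preds) ** 2)

print(f"bias^2 + variance + irreducible = {bias_sq + variance + irreducible:.3f}")
print(f"estimated expected error        = {expected_error:.3f}")
```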
Since there is nothing we can do about irreducible error, our aim in statistical learning must be to find models that minimize variance and bias. Now consider the 'sophistication' or 'complexity' of a function, considered informally as simply its curviness. Consider, for example, the following set of curves:
The curves in this example are:
Black: $-2+x$
Blue: $-1-5x+1.1x^2$
Green: $15x+10x^2-x^3$
Red: $1-15x+5x^2-3x^4+x^5-.01x^7+.000003*x^{10}$
Clearly in terms of curviness Black < Blue < Green < Red, and so we specify that whatever complexity is, the same is true of it.
Simple models will struggle to model complex functions. Consider a very curvy function being modeled by a linear function. No matter how hard we seek to model this curvy function, our linear model will never get it exactly right. It is, we would say, biased. An example is given below:
Here the red curve is the true function (which includes noise). The blue curve is the deterministic component of the function, and the black curve is our linear approximation. This is a good example not only of the inability of the linear model to accurately model a curvy function, but also of irreducible error: No model can do better than the blue curve since the difference between the red and blue curves is due entirely to random noise.
As the models we use are allowed to get more complex (more curvy in our discussion) they will be able to model more and more real world systems with reasonable degrees of accuracy.
Intuitively, however, as a function gets more complex (curvy) it will 'move around' more too. And so we might expect that its variance will increase. This is, in fact, exactly what does occur. Accordingly, we can envision the components of expected error as a function of complexity: irreducible error stays constant, bias decreases, and variance increases.
This graph tells us that we cannot adjust the complexity of a model to both reduce bias and variance simultaneously: In seeking an optimal model we seek the best trade-off between bias and variance. Notice that the optimal point is not necessarily where the bias and variance curves cross.
It must be emphasised that the graph shown holds fixed the amount of training data. It is possible to reduce both bias and variance by increasing the amount of training data used, and often altering the complexity of the model as the training data is increased is essential to finding the optimal model.
A common, though imperfect, measure of the complexity of a statistical model is the number of parameters it has. We will adopt such a definition here (though we will have reason to adjust it later).
## Overfitting
When we utilize a model that is too complex for the function being modelled (given the amount of learning data we have available) we see that the variance component of the expected error increases. In popular parlance this is known as overfitting.
It is important to note that increasing the complexity of a model will always lead to better performance on the training data. Essentially, by increasing complexity (increasing the number of parameters that can be fit to the training data) we increase the ability of the model to customize itself to the training data. This is great if what it is doing is fitting itself to patterns in the training data that are present in the entire population (i.e. in all data). But it is not good if what is actually going on is that the model is fitting itself to random noise present only in the training data. An example is given in the following graph:
The blue line is a linear regression (OLS) model. The red is a polynomial regression model of third order. The form of these models (written more clearly than in the legend) is:
Linear Regression Model: $\hat y=\beta_0 + \beta_1x$
Polynomial Regression Model: $\hat y=\beta_0 + \beta_1x+ \beta_2x^2+ \beta_3x^3$
Note that the linear model has two parameters and the polynomial model has four. These models have been fit to minimize the MSE loss function on the training data. Clearly the polynomial model has a lower MSE on the training data. In fact its training MSE is zero: it fits the training data perfectly. But looking at the true function we know that it will perform worse on new data. It will not generalize well. It has overfitted the training data.
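The comparison described above is easy to reproduce. The following sketch (with an invented linear-plus-noise ground truth, so the numbers are only illustrative) fits a straight line and a cubic to four noisy training points and reports training versus test MSE; the cubic interpolates the training points exactly but does worse on fresh data.

```python
# Sketch of the overfitting comparison described above; the true function and
# noise level are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: 1.5 * x + 0.5          # simple underlying trend
noise = 0.4

x_train = np.array([-1.0, -0.3, 0.4, 1.0])          # only 4 training points
y_train = true_f(x_train) + rng.normal(0, noise, 4)

x_test = np.linspace(-1, 1, 200)                     # plenty of new data
y_test = true_f(x_test) + rng.normal(0, noise, 200)

for degree in (1, 3):                                # linear vs cubic model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```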
To emphasise the fact that complexity causing overfitting is related to the amount of training data, examine the effect of training the two models on 100 points instead of 4:
Overfitting is one of the most constant dangers of advanced statistical machine learning. The models we work with tend to be very complex, and so able to overfit even when trained on reasonably large sets of training data.
How do we choose models that have low expected error? We will discuss this in detail in week 2. But already it should be obvious that one thing we can do is see how models perform on data that they have not been trained on. This is the basis of validation techniques for model evaluation and selection.
|
2018-11-19 20:16:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6925899982452393, "perplexity": 331.04620861714363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746110.52/warc/CC-MAIN-20181119192035-20181119214035-00280.warc.gz"}
|
http://mathhelpforum.com/geometry/11154-camp-site-angle-problem-print.html
|
# camp site angle problem
• Feb 4th 2007, 08:31 PM
rcmango
camp site angle problem
heres a pic of what were looking at: http://img183.imageshack.us/img183/8004/untitledqa0.jpg
A hiker needs to cross a sandy area in order to get from point A to a camp site at point B. He can do so by crossing the sand perpendicular to the trail and then walking along the trail, or by crossing the sand at an angle theta up to the trail, and then walking along the trail. The hiker walks 3.5 km/h in the sand and 5 km/h on the trail. Determine the time it will take him to reach the camp site for theta = 0, 10, 20, 30, 40, 50 and 60 degrees.
not sure how to do this one. help is needed please.
• Feb 5th 2007, 03:53 AM
topsquark
Quote:
Originally Posted by rcmango
heres a pic of what were looking at: http://img183.imageshack.us/img183/8004/untitledqa0.jpg
A hiker needs to cross a sandy area in order to get from point A to a camp site at point B. He can do so by crossing the sand perpendicular to the trail and then walking along the trail, or by crossing the sand at an angle theta up to the trail, and then walking along the trail. The hiker walks 3.5 km/h in the sand and 5 km/h on the trail. Determine the time it will take him to reach the camp site for theta = 0, 10, 20, 30, 40, 50 and 60 degrees.
not sure how to do this one. help is needed please.
Well, d = vt, so $t = \frac{d}{v}$, which you can calculate for each part of the path.
That is, you can calculate it if you are given any distances. :eek:
-Dan
• Feb 5th 2007, 08:14 AM
rcmango
oh, forgive me, i forgot to add the distances of w = 4.5km and u = 14km.
now i see, thanks alot!
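Following up on topsquark's hint (t = d/v for each leg), here is a short Python sketch of the table the problem asks for. Since the picture link is dead, it assumes the standard setup for this problem: theta is measured from the perpendicular crossing, so the sand leg is w/cos(theta) and the remaining trail leg is u - w*tan(theta), with w = 4.5 km and u = 14 km.

```python
# Sketch of the computation described above.  Assumed geometry (the original
# picture is gone): w = 4.5 km is the perpendicular width of the sand, u = 14 km
# is the distance to camp along the trail, and theta is measured from the
# perpendicular, so the sand leg is w/cos(theta) and the trail leg u - w*tan(theta).
import math

w, u = 4.5, 14.0            # km
v_sand, v_trail = 3.5, 5.0  # km/h

for deg in range(0, 70, 10):
    th = math.radians(deg)
    t = (w / math.cos(th)) / v_sand + (u - w * math.tan(th)) / v_trail
    print(f"theta = {deg:2d} deg: t = {t:.2f} h")
```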
|
2016-09-30 17:35:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2563055157661438, "perplexity": 843.1903460413139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662321.82/warc/CC-MAIN-20160924173742-00135-ip-10-143-35-109.ec2.internal.warc.gz"}
|
http://www.commens.org/dictionary/term/ens-rationis
|
# Ens Rationis
1903 | C.S.P.'s Lowell Lectures of 1903 2nd Draught of 3rd Lecture | MS [R] 462:34-36
An ens rationis is something whose being consists in the possibility of something being true about something else.
1903 | Lowell Lectures. 1903. Lecture 5. Vol. 1 | MS [R] 469:8
An ens rationis may be defined as a subject whose being consists in a Secondness, or fact, concerning something else. Its being is thus of the nature of Thirdness, or thought. Any abstraction, such as Truth or Justice, is an ens rationis. That does not prevent Truth and Justice from being real powers in the world without any figure of speech.
|
2021-12-09 07:01:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8465539813041687, "perplexity": 9325.720439726363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00253.warc.gz"}
|
http://vermontveterinarycardiology.com/index.php/for-cardiologists/for-cardiologists?id=137
|
## Poiseuille Flow
This video depicts a side view in a straight circular tube with steady ( time invariant ) flow occurring from left to right. The parabolic velocity profile of Poiseuille flow is shown by the vectors and distortion of the fluid within the tube is suggested by the grid. DOWNLOAD a larger avi file.
#### What's the point?
The Poiseuille flow relationship is included in basic physiology courses. Cardiologists are often familiar with the formula for the Poiseuille resistance. This article deals with the origins of this relationship and the assumptions and limitations inherent for Poiseuille flow. The Poiseuille relationship for unaccelerated (fully developed), steady (time invariant) flow in a straight circular tube provides an excellent opportunity for understanding some profound aspects of fluid dynamics.
#### Where does it come from?
The Poiseuille relationship comes not from experimental data but from a story problem or mathematical derivation. This is a purely intellectual endeavor derived from 3 basic assumptions: 1) Newton's fundamental law of motion, i.e. $$\bar{F} = m\bar{a}$$ ( force = mass times acceleration); 2) a geometric assumption about the nature of the flow, i.e. a circular cross-sectional geometry in which the fluid elements flow straight down the tube with no acceleration (steady or time invariant, fully developed flow); and 3) an assumed relationship between relative fluid motion and shear stress (a Newtonian fluid is assumed where shear stress is proportional to velocity gradient through the Newtonian viscosity). While an understanding of the Poiseuille relationship is of great value, let us realize from the start that there is no location in the circulation where the last 2 assumptions are met. However, the first assumption ( Newton's laws of motion ) is more reliable in this vicinity of the universe than any medical or physiological "fact" (and that includes the one about death).
The derivation of the Poiseuille relationship is simple enough to provide an inkling of the flavor of fluid dynamics and engineering science in general. The derivation requires ruthless logic, scrupulous mathematical accounting of forces acting on fluid elements, and courage that we will be able to find our way to the result.
#### Getting the facts on to "paper"
To this end, consider a straight circular tube of radius $$r_0$$ and an annulus of fluid inside the tube. The annulus consists of a thin rim of fluid with mean radius $$r$$ and thickness $$\Delta r$$ in the radial direction and length $$\Delta z$$ in the tube's axial direction, $$z$$ .
We begin by actually assuming how the flow will occur, i.e. by moving in a straight line in the axial direction only, and at a constant speed (steady flow, not pulsatile or oscillatory). A moment's thought will surely convince you that this is also a statement that the acceleration of every fluid element is zero (acceleration is a change in velocity, a change in speed and/or direction of motion). Newton's laws of motion tell us that an object having zero acceleration also must experience zero net force. Hence the sum of all forces acting on each and every fluid element ( each fluid annulus) must equal zero.
Our next step is to enumerate and account for the forces acting on the fluid annulus (each and every one). Here we use a simple, but powerful conceptual tool called a free body diagram where the free body is the annulus of fluid. The free body diagram allows us to isolate an arbitrary part of the fluid and define rigorously the forces acting on the fluid. This process is equally valid for solid structures.
There are two basic kinds of forces when discussing mechanics of materials, fluid or solid: 1) Surface forces occur where the surface of the free body touches its surroundings, other fluid elements in this case. 2) Body forces are forces that seem to act from afar, to reach across space to affect an object. The most physiologically relevant of these in cardiovascular applications is gravity (until we start accelerating people in various kinds of transportation gadgetry). Even so, gravity is not very important to the discussion and cardiologists typically go out of their way to leave gravity out of the equation in a literal sense. Consequently, we only have surface forces in this problem.
$$\Large p A = p|_{\small{z-\Delta z/2}} 2 \pi r \Delta r$$
In the following figures, the surface under discussion is highlighted in color. The first surface force to consider is the pressure acting on the upstream surface of the annulus. Pressure is an example of a stress which has physical units of force/area. We calculate the value of this force by multiplying the pressure by the surface area it acts upon. Because of the way the problem is formulated, pressure has a different value at each axial location in the tube ( each value of $$z$$ ). For the annulus in question, we are assuming its position at arbitrary axial location $$z$$ with axial thickness $$\Delta z$$ , so that the upstream surface is at axial coordinate $$z -\Delta z/2$$ . Using common notation, we designate the pressure at this location as $$p|_{\small{z-\Delta z/2}}$$ .
The surface area that this pressure acts upon is equal to a difference in areas between two circles; the outer one has radius $$r + \Delta r/2$$ and the inner one has radius $$r - \Delta r/2$$ . This comes out to $$2 \pi r \Delta r$$ . Without doing the math, you can think of this circular rim of surface as a rectangle that is been warped into a circle; the width of the rectangle is $$\Delta r$$ and the length is $$2 \pi r$$ , the perimeter length of the circle.
The above pressure force is opposed by a similar one acting on the downstream rim of the annulus. The area of action is identical to the last calculation, but the pressure is different, occurring at a location $$\Delta z$$ downstream of the previous pressure determination and represented by the notation $$p|_{\small{z+\Delta z/2}}$$ .
$$\Large p A = p|_{\small{z+\Delta z/2}} 2 \pi r \Delta r$$
The first above pressure acts in the $$z$$ direction whereas the second acts in the $$-z$$ direction; these forces will have the opposite sign in the final accounting of forces.
Now we turn to the determination of shear stresses acting on the annulus. These are designated as $$\tau$$ in general but we are specifically interested in shear stresses that translate to forces in the axial direction. Mechanical stress is a second order tensor. Vectors are first order tensors and have 3 components in a three-dimensional space, e.g. the $$x$$ , $$y$$ , and $$z$$ directions. A second-order tensor has 9 components. The notation $$\tau_{rz}$$ can be read as the "shear stress on the $$r$$ surface in the $$z$$ direction"; you can readily imagine how we would obtain 9 components in a three-dimensional space by having a stress on each of the 3 face orientations and in each of the 3 directions.
For the inner surface of the annulus, we must form an expression for the force $$\tau_{rz}A$$ , i.e. the shear stress multiplied by the area it acts upon. We can designate the shear stress at the location $$(r-\Delta r/2)$$ as $$\tau_{rz}|_{\small{r-\Delta r/2}}$$ in keeping with previous notation. For a Newtonian fluid, shear stress is equal to the viscosity multiplied by the appropriate velocity gradient and this translates to $$\mu \Large\frac{\partial u}{\partial r}|_{\small{r-\Delta r/2}}$$ for the problem at hand. We are using $$u$$ to represent axial velocity and $$\Large\frac{\partial u}{\partial r}$$ is the velocity gradient, mathematically the derivative of $$u$$ with respect to the radial coordinate, $$r$$ . $$\mu$$ is the Newtonian viscosity.
$$\Large \tau_{rz}A = \tau_{rz}|_{\small{r-\Delta r/2}} 2 \pi r|_{\small{r-\Delta r/2}} \Delta z=\mu \Large\frac{\partial u}{\partial r}|_{\small{r-\Delta r/2}} 2 \pi r|_{\small{r-\Delta r/2}} \Delta z$$
The surface area that the shear stress acts upon at this location is slightly tricky to express because the area changes with radius, just as the stress itself does. Nevertheless, the area expression is simply that of a rectangle with thickness $$\Delta z$$ and length $$2 \pi (r - \Delta r/2)$$ so that the area expression is $$A|_{r-\Delta r/2} = 2 \pi \Delta z (r - \Delta r/2)$$ or $$2 \pi \Delta z r|_{\small{r - \Delta r/2}}$$ . The entire expression for the force is shown adjacent to the figure above. The shear stress acting on the outer rim of the annulus is derived similarly (below).
$$\Large \tau_{rz}A = \tau_{rz}|_{\small{r+\Delta r/2}} 2 \pi r|_{\small{r+\Delta r/2}} \Delta z=\mu \Large\frac{\partial u}{\partial r}|_{\small{r+\Delta r/2}} 2 \pi r|_{\small{r+\Delta r/2}} \Delta z$$
In case it has not yet occurred to you, it's (utterly) important to recognize that equations here are to express physical concepts. "Equations" used in cardiology are sometimes the result of statistical regressions, or they may express a concept or approximation in a general way. We can admit no conceptual or mathematical slack for the endeavor at hand! We're deriving an equation whose physical units (thus far) are force. Pressure must be expressed in terms of force/area, e.g. dyne/cm² (a dyne is a gram-cm/sec², which is a mass multiplied by an acceleration). If you can only think of pressure in terms of mmHg, you've been spending too much time in the cath lab. When we multiply all of these things together, they have to have comparable physical units or we've got nothing but muck! The Newtonian viscosity, for example, is a physical quantity that multiplies the velocity gradient and yields a shear stress. If your shear stress has units of dyne/cm² and your velocity gradient is in cm/sec/cm (i.e. sec⁻¹), then your viscosity had better have units of dyne-sec/cm² (or gram-cm⁻¹-sec⁻¹) so that the physical units are correct! It just so happens that the physical unit of viscosity just named is called the Poise after the gentleman under discussion.
#### Putting it all together
We are now in a position to begin combining (mathematical ) terms into a coherent expression (have courage!). As noted previously, the upstream and downstream pressure forces oppose each other:
$$\Large 2\pi r\Delta r[p|_{\small{z-\Delta z/2}}-p|_{\small{z+\Delta z/2}}]$$
Similarly, shear stresses at the inner and outer surfaces of the annulus oppose each other:
$$\Large 2\pi \Delta z \mu[(r \Large\frac{\partial u}{\partial r})|_{\small{r+\Delta r/2}}-(r \Large\frac{\partial u}{\partial r})|_{\small{r-\Delta r/2}}]$$
The net sum of all the forces acting on the annulus is equal to zero since the annulus is not accelerating:
$$\Large 2\pi r\Delta r[p|_{\small{z-\Delta z/2}}-p|_{\small{z+\Delta z/2}}]+2\pi \Delta z \mu[(r \Large\frac{\partial u}{\partial r})|_{\small{r+\Delta r/2}}-(r\Large\frac{\partial u}{\partial r})|_{\small{r-\Delta r/2}}]=0$$
We divide the entire equation by $$2\pi \Delta r \Delta z$$ and simplify:
$$\Large r\Large\frac{p|_{\small{z-\Delta z/2}}-p|_{\small{z+\Delta z/2}}}{\Delta z}+\mu \Large\frac{(r \Large\frac{\partial u}{\partial r})|_{\small{r+\Delta r/2}}-(r \Large\frac{\partial u}{\partial r})|_{\small{r-\Delta r/2}}}{\Delta r}=0$$
(The equation just acquired new physical units: force/cm².) The next step is to allow $$\Delta r$$ and $$\Delta z$$ to approach zero and to recognize the definition of the derivative as defined in calculus:
$$\Large\frac{df}{dx} \equiv \lim_{\Delta x \rightarrow 0}\Large\frac{f|_{x+\Delta x} - f|_{x}}{\Delta x}$$
This allows us to express the work thus far as a differential equation:
$$\Large -r\frac{\partial p}{\partial z}+\mu \Large\frac{\partial }{\partial r} (r \Large\frac{\partial u}{\partial r})=0$$
or
$$-\Large\frac{\partial p}{\partial z}+\Large\frac{\mu}{r}\Large\frac{\partial}{\partial r}(r \Large\frac{\partial u}{\partial r})=0$$
(The equation just acquired new physical units: force/cm³.) It may not seem like it, but this is tremendous progress. This is a linear, ordinary differential equation whose solution will tell us how the velocity changes as a function of radius. It may look like there are 2 unknown functions in the equation, $$p(z)$$ and $$u(r)$$ , but there's really only one. $$-\partial p/\partial z$$ is a given in this problem, a constant that you supply.
$$-\Large\frac{\partial p}{\partial z} = \Large\frac{p_1-p_2}{z_2-z_1}=\Large\frac{\Delta p}{L}$$
By convention, $$\Delta p$$ here is the upstream pressure minus the downstream and has the opposite sign of $$\Large\frac{\partial p}{\partial z}$$ . (Note: I'm playing a little bit fast and loose with notation. $$\partial/\partial r$$ is more appropriately used to denote the partial derivative of a function that depends on more than one variable, e.g. $$\partial u(r,z)/\partial r$$ . We will soon tackle this very situation.)
#### Solving the Equation of Motion
The F = ma equation in a physics problem is often denoted the equation of motion and is typically a differential equation like the one above. You may not be familiar with such equations, but the solution of this one is relatively simple and can be looked up in a book; there is also software that would solve the equation for you if you know how to pose the question correctly. The full solution is as follows:
$$\Large u(r)=-\Large\frac{r^2 \Delta p}{4 \mu L}+C_1+C_2\ln(r)$$
$$C_1$$ and $$C_2$$ in this case are known as arbitrary constants of integration. We would find that the original differential equation is satisfied no matter what values we choose for these constants. Consequently we are free to choose any values we like; more specifically we are free to choose the values that satisfy the boundary conditions of the problem. The boundary condition at $$r = r_0$$ is that the velocity goes to zero. This is called the no slip condition, that the fluid element at the wall moves at the same velocity as the wall (zero). This may seem like an arbitrary and unjustified choice, but it is well verified experimentally. Furthermore, it can be thought of as a microcosm of a much broader property of fluids – that they are continuous. (We will explore this issue in much greater detail elsewhere.) The boundary condition at $$r=0$$ is that the velocity remains finite. You will notice that we cannot even evaluate $$\ln(r)$$ at $$r=0$$ because it approaches $$-\infty$$ as $$r$$ approaches zero; hence the value of $$C_2$$ is zero and we are free to choose $$C_1$$ to fulfill the first boundary condition.
$$\Large C_1=\Large\frac{r_0^2 \Delta p}{4 \mu L}$$
$$\Large u(r)=-\Large\frac{r^2 \Delta p}{4 \mu L}+\Large\frac{r_0^2 \Delta p}{4 \mu L}$$
$$\Large u(r)=\Large\frac{(r_0^2-r^2) \Delta p}{4 \mu L}$$
This equation for the axial velocity $$u(r)$$ describes a parabola such that the maximal velocity occurs at the tube centerline and tapers off parabolically (in proportion to $$r^2$$ ) with radius to a value of zero at the wall where $$r=r_0$$ . It will turn out that the centerline velocity is twice the average velocity across the tube. A schematic/graphic of the velocity profile for Poiseuille flow is shown:
#### What do we mean by the "solution"?
The solution to a differential equation like the one above is most typically a function, not a specific number. $$u(r)$$ defines the parabolic profile that occurs for Poiseuille flow. The velocity profile is parabolic regardless of the flow rate, tube radius, viscosity, or tube length. The axial velocity is maximal at the tube centerline and equal to $$r_0^2 \Delta p/(4 \mu L)$$ (obtained by substituting $$r=0$$ into the solution); the velocity at the wall is zero by design where $$r_0^2-r^2=0$$.
We can verify that this is the correct solution by plugging the function back in to the differential equation.
$$\Large\frac{\mu}{r}\Large\frac{\partial}{\partial r}(r \Large\frac{\partial u}{\partial r})=-\Large\frac{\Delta p}{L}$$
The equation of motion can be expanded as follows for better understanding. Recall from calculus that the derivative of the product of two functions involves finding the derivative of each as follows:
$$\Large\frac{\partial}{\partial r}[f(r) g(r)]=f(r)\Large\frac{\partial g}{\partial r}+\Large\frac{\partial f}{\partial r}g(r)$$
In our problem, $$f(r)$$ and $$g(r)$$ are $$r$$ and $$\Large\frac{\partial u}{\partial r}$$ and their derivatives are $$1$$ and $$\Large\frac{\partial^2 u}{\partial r^2}$$ respectively. The expanded equation of motion:
$$\Large\frac{\mu}{r}(\Large\frac{\partial u}{\partial r} + r \Large\frac{\partial^2 u}{\partial r^2})=-\Large\frac{\Delta p}{L}$$
$$\Large \mu (\frac{1}{r}\Large\frac{\partial u}{\partial r} + \Large\frac{\partial^2 u}{\partial r^2})=-\Large\frac{\Delta p}{L}$$
Now we need to compute derivatives of $$u(r)$$ and plug them into the equation:
$$\Large\frac{\partial u}{\partial r} = \Large\frac{\partial}{\partial r} [\Large\frac{(r_0^2-r^2) \Delta p}{4 \mu L}] = \Large\frac{- r \Delta p}{2 \mu L}$$
$$\Large\frac{\partial^2 u}{\partial r^2} = \Large\frac{\partial^2}{\partial r^2} [\Large\frac{(r_0^2-r^2) \Delta p}{4 \mu L}] = \Large\frac{-\Delta p}{2 \mu L}$$
You will find that substituting these results back into the equation of motion yields the intended $$-\Large\frac{\Delta p}{L}$$ on the right-hand side.
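If you prefer to let a computer do the calculus, the following SymPy sketch (my own check, not part of the original article) confirms that the parabolic profile satisfies the equation of motion, reproduces the flow-rate integral carried out in the next section, and verifies that the centerline velocity is twice the average velocity:

```python
# SymPy check (a sketch, not part of the original article) that the parabolic
# profile satisfies the equation of motion, and that integrating it over the
# cross-section gives the flow rate used for the Poiseuille resistance below.
from sympy import symbols, diff, integrate, simplify, pi

r, r0, mu, L, dp = symbols("r r_0 mu L Delta_p", positive=True)

u = (r0**2 - r**2) * dp / (4 * mu * L)          # the solution u(r)

# Equation of motion: (mu/r) d/dr ( r du/dr ) should equal -dp/L
lhs = (mu / r) * diff(r * diff(u, r), r)
print(simplify(lhs + dp / L))                    # 0  -> equation satisfied

# Volumetric flow rate q = integral of u(r) over the cross-section
q = integrate(2 * pi * u * r, (r, 0, r0))
print(simplify(q - pi * r0**4 * dp / (8 * mu * L)))   # 0

# Centerline velocity is twice the average velocity u_mean = q / (pi r0^2)
u_mean = q / (pi * r0**2)
print(simplify(u.subs(r, 0) / u_mean))           # 2
```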
#### The Poiseuille Resistance
The final component in the problem is to determine the formula for the Poiseuille resistance which first involves calculating the flow produced by the pressure gradient. To accomplish this, we must integrate the axial velocity over the cross-sectional area of the tube.
$$\Large \int_0^{2\pi}\int_0^{r_0} u(r) r dr d\theta = \int_0^{2\pi}\int_0^{r_0} \Large\frac{(r_0^2-r^2) \Delta p}{4 \mu L} r dr d\theta=\Large\frac{\Delta p}{4 \mu L}\int_0^{2\pi}\int_0^{r_0}(r_0^2-r^2)r dr d\theta$$
$$\Large r dr d\theta$$ is the differential area element in the polar coordinate system we are dealing with. The integration with respect to $$\theta$$ :
$$\Large\frac{\Delta p}{4 \mu L}\int_0^{2\pi}\left[\int_0^{r_0}(r_0^2-r^2)r dr\right] d\theta=\Large\frac{\Delta p}{4 \mu L}\left[\theta\right]^{2\pi}_0\int_0^{r_0}(r_0^2-r^2)r dr=\Large\frac{\pi \Delta p}{2 \mu L}\int_0^{r_0}(r_0^2 r-r^3) dr$$
And, with respect to $$r$$ :
$$\Large\frac{\pi \Delta p}{2 \mu L}\int_0^{r_0}(r_0^2 r-r^3) dr = \Large\frac{\pi \Delta p}{2 \mu L}\left[r_0^2 \Large\frac{r^2}{2}-\Large\frac{r^4}{4}\right]^{r_0}_0 = \Large\frac{\pi \Delta p}{2 \mu L}\left[\Large\frac{r_0^4}{4}\right]$$
$$\Large q=\int_0^{2\pi}\int_0^{r_0} u(r) r dr d\theta = \Large\frac{\pi r_0^4 \Delta p}{8 \mu L}$$
This is the expression for the volumetric flow rate, $$q$$ , in the tube. To obtain the Poiseuille resistance, we divide $$\Delta p$$ by $$q$$ (by definition).
$$\Large\frac{\Delta p}{q} \equiv R = \Delta p \Large\frac{8 \mu L}{\pi r_0^4 \Delta p}=\Large\frac{8\mu L}{\pi r_0^4}$$
#### Shear Stress
Now that we know a lot more about this flow, it turns out that we can calculate the frictional force on the fluid without having to do any more calculus. We know that the net force on the fluid is zero. This applies to each and every bit of the fluid, but it also applies to the fluid in whole. Pressure exerts a force on the fluid that can be calculated by multiplying the pressure by the cross-sectional area it acts upon. If we do this for the curved walls of the tube, the forces simply cancel out. However the pressures are different at the upstream and downstream ends of the tube and the force acting on the fluid due to these pressures can be readily determined from the resistance formula:
$$\Large \Delta p=q R=q \Large\frac{8 \mu L}{\pi r_0^4}=\Large\frac{8q\mu L}{\pi r_0^4}$$
$$\Large F_p=\Delta p A=\Large\frac{8q\mu L}{\pi r_0^4}A=\Large\frac{8q\mu L}{\pi r_0^4}\pi r_0^2=\Large\frac{8q\mu L}{r_0^2}$$
$$\Large F_p$$ is the force due to pressure acting on the fluid where $$A$$ is the cross-sectional area equal to $$\pi r_0^2$$ . This force must be balanced by the frictional force due to shear stress, $$\tau$$ , acting at the wall. The area that this shear stress acts on is the surface area of the tube, $$2\pi r_0 L$$ .
$$\Large F_p=F_\tau=\Large\frac{8q\mu L}{r_0^2}$$
$$\Large \tau=F_\tau /2\pi r_0 L=\Large\frac{8q\mu L}{r_0^2}/[2\pi r_0 L]=\Large\frac{4q\mu}{\pi r_0^3}$$
Apparently shear stress is linearly proportional to the flow rate and the viscosity, but inversely proportional to the cube of the tube radius.
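Packaging these results, the following short Python function (example values are arbitrary, in CGS units, and chosen only to exercise the formulas) evaluates the Poiseuille resistance, the pressure drop for a given flow, and the wall shear stress derived above.

```python
# Sketch: direct use of the formulas derived above (CGS units; the example
# numbers are arbitrary, chosen only to exercise the formulas).
import math

def poiseuille(q, mu, L, r0):
    """Return (resistance, pressure drop, wall shear stress) for Poiseuille flow.

    q  : volumetric flow rate [cm^3/s]
    mu : Newtonian viscosity  [Poise = dyne*s/cm^2]
    L  : tube length          [cm]
    r0 : tube radius          [cm]
    """
    R = 8 * mu * L / (math.pi * r0**4)     # Poiseuille resistance
    dp = q * R                             # pressure drop, dyne/cm^2
    tau = 4 * q * mu / (math.pi * r0**3)   # wall shear stress, dyne/cm^2
    return R, dp, tau

R, dp, tau = poiseuille(q=1.0, mu=0.03, L=10.0, r0=0.2)
print(f"R = {R:.1f} dyne*s/cm^5, dp = {dp:.1f} dyne/cm^2, tau = {tau:.2f} dyne/cm^2")
```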
#### Why does it do that?
In the foregoing the Poiseuille flow profile and resistance were derived mathematically. It was found that the velocity profile for a Newtonian fluid in a straight circular tube approaches a parabola with maximal velocity at the center of the tube and a velocity of zero at the wall; the centerline velocity is twice the average velocity.
Why does it do that?! To look at that question more closely we'll use some figures that have to do with flow in a straight circular tube where the flow velocity profile is not a parabola. These figures are derived from computational fluid dynamics solutions where the centerline is at the lower part of the computational figure and the wall of the tube is at the upper part. To orient you to these figures, consider the following figure which is from a computation of stenosis flow where flow is from left to right through an orifice plate – a circular ridge or ring within the tube.
Hopefully this orients you well enough for the subsequent figures where we'll look at a simple straight tube without a stenosis.
The following figure shows the development of the velocity profile in a straight tube where the flow starts on the left with a uniform velocity; i.e. fluid elements have the same velocity all the way across the inside of the tube. The figure shows the development of velocity proceeding left to right.
This solution was computed at a relatively low Reynolds number ( discussed elsewhere) which has the effect of allowing the velocity profile to develop into the parabola over a small number of tube diameters. Hence you can see the development of Poiseuille flow in a relatively small figure. Color depicts the absolute value of the velocity which is also depicted by the vectors of course. Fluid elements near the wall ( upper edge of the figure ) decelerate whereas those near the center line ( lower edge of the figure ) accelerate.
The changing of velocity as fluid elements flow through the tube is synonymous with acceleration. Acceleration is a vector so the fact that elements near the wall are slowing down simply means that acceleration is towards the left-hand side of the figure. Why do they accelerate? Well, we know that all of the fluid elements obey Newton's laws of motion. Fluid elements accelerate because there is a net force acting upon them. In a nutshell:
Flow of a Newtonian fluid in a straight circular tube achieves a parabolic profile at a sufficient distance downstream in the tube because that is the only velocity profile ( for that geometry ) where net forces on all the fluid elements sum to zero. If the profile is not a parabola, then there will be a net force on fluid elements. Fluid element velocities will change ( accelerate ) until the parabolic profile is achieved.
Fluid at the wall necessarily moves at the same velocity as the wall itself, i.e. 0 - the no slip condition. Consequently / subsequently the slow moving fluid near the wall exerts a "drag" force (shear stress, exerted through viscosity and relative motion of the fluid elements) on neighboring fluid elements which leads to the parabolic profile. Slow moving fluid near the wall requires that fluid near the centerline moves faster than the average. The slow moving fluid near the wall causes a decrease of the effective cross sectional area of the tube.
The Poiseuille resistance derives from the situation where a pressure force ( higher pressure upstream than downstream ) is exactly balanced by frictional forces of fluid elements ( fluid layers or laminae ) sliding against each other. The Poiseuille resistance was derived above:
$$\Large\frac{\Delta p}{q} \equiv R =\Large\frac{8\mu L}{\pi r_0^4}$$
The value of this resistance is inextricably related to the parabolic profile itself. The parabola leads to the lowest possible resistance and the lowest pressure gradient to push the fluid through the tube. Since there is no place in the circulation where the parabolic profile exists, resistance is always greater than the Poiseuille value.
In the subsequent figures we're looking at the velocity profile ( from tube centerline to wall as before ) somewhere far enough downstream from the inlet so that the profile has obtained the Poiseuille parabola.
Here now is a figure that shows the forces acting on the fluid elements. The red arrows show the force due to friction ( through fluid viscosity and relative velocity ); the blue arrows show the force due to the pressure gradient.
This shouldn't be much of a surprise to you, particularly if you slogged through the math given above. We started this problem with an assumption that fluid elements were not accelerating and hence we should find that the net force on each element is exactly equal to zero. This is what the parabolic profile accomplishes for a Newtonian fluid. It's the only profile in a circular tube that accomplishes this feat.
Next we're looking at the shear stresses on the fluid elements. You can see by looking at the velocity profile that elements near the centerline are moving at nearly the same velocity as neighboring elements whereas velocity near the wall changes rapidly with the radial coordinate; the velocity gradient near the wall is greater than the velocity gradient at the centerline. Shear stress is the force that fluid elements exert on one another as a result of relative motion; relative motion is expressed as a velocity gradient, i.e. the velocity changes with location. Consequently there is greater shear stress on fluid elements near the wall than at the centerline; shear stress at the centerline is actually zero. The following vectors show the shear stress vectors on the fluid elements ( centerline at the bottom of the figure, tube wall at the top).
This shows us that the parabolic velocity profile results in a linear shear stress profile. If you consider any one of the fluid elements, you'll realize that there is greater shear stress on one side ( the side closer to the wall ) than on the other ( the side closer to the centerline ). The difference in shear stress constitutes a force. This shear stress (when correctly multiplied by the surface area it acts upon ) exactly counterbalances the pressure force acting to push the fluid elements in the downstream direction. That's how we get the figure above showing the balance of forces between friction and pressure.
#### Vorticity
If you take just a moment to consider, you'll recognize that individual fluid elements are spinning in Poiseuille flow! Each element has a greater velocity on its centerline side than on the wall side. When fluid elements spin, they possess vorticity. This is not the same thing as a vortex where the gross fluid motion is circular. Vorticity is a vector, with magnitude and direction, that is determined from the velocity field of the fluid motion; in general it varies from point to point within the flow and also likely with time. The sense of the vorticity vector is defined by the "right-hand rule". Using your right hand with your fingers curling in the direction of spin, your thumb points in the direction of the vorticity vector. Of course there is a much more specific and quantitative definition of this aspect of the flow field.
$$\omega=\frac{1}{2}\left(\frac{\partial w}{\partial y}-\frac{\partial v}{\partial z}\right)\mathbf{i}+\frac{1}{2}\left(\frac{\partial u}{\partial z}-\frac{\partial w}{\partial x}\right)\mathbf{j}+\frac{1}{2}\left(\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}\right)\mathbf{k}$$
Here, $$u$$ , $$v$$ , and $$w$$ are the $$x$$ , $$y$$ , and $$z$$ components of the velocity. $$i$$ , $$j$$ , and $$k$$ are unit vectors that correspond to those directions. Looking at this math may not mean much to you, but you can see that each of the components of the vorticity vector are due to velocity gradients ( derivatives of velocity with respect to location ).
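As a minimal check of the formula above, the sketch below evaluates the $k$ (out-of-screen) component by central differences for a simple shear flow $u = k\,y$, for which the analytic value is $-k/2$; the flow, sample point, and step size are assumed purely for illustration.

```java
// Minimal sketch (plain Java): evaluate the z-component of the rotation/vorticity formula above,
// omega_z = 0.5*(dv/dx - du/dy), by central differences for a simple shear flow u = k*y, v = 0.
// The analytic value is -k/2; the shear rate k, sample point and step h are assumed values.
public class VorticitySketch {
    static double u(double x, double y) { return 2.0 * y; } // u = k*y with k = 2 (assumed)
    static double v(double x, double y) { return 0.0; }

    public static void main(String[] args) {
        double x = 0.3, y = 0.7, h = 1e-4; // sample point and finite-difference step, assumed
        double dvdx = (v(x + h, y) - v(x - h, y)) / (2.0 * h);
        double dudy = (u(x, y + h) - u(x, y - h)) / (2.0 * h);
        double omegaZ = 0.5 * (dvdx - dudy);
        System.out.println("omega_z = " + omegaZ + " (expected -1.0 for k = 2)");
    }
}
```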
The vorticity vector is a little difficult to show for Poiseuille flow; it points straight out of the screen at you for the figures shown above. In the figure above of the shear stress, vorticity has been represented by color ( warmer color, more vorticity ) so you can see that maximal vorticity occurs at the wall of the tube. ( These figures were made with computational fluid dynamics viewing software that you can download from this site. The software allows you to display vorticity vectors associated with various flow solutions.) Vorticity usually originates where the fluid flows past a solid surface. Obviously the tube wall is that solid surface in Poiseuille flow. While vorticity is a kinematic feature of the flow ( having to do with the motion ), it behaves like a substance – almost as if the inner surface of the tube were exuding a dye that can be traced. Vorticity flows along with the fluid and is transported just like everything else that is contained within the fluid. It also diffuses through the fluid as if viscosity were the diffusion coefficient.
|
2017-06-23 19:05:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8409086465835571, "perplexity": 554.310179870461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320130.7/warc/CC-MAIN-20170623184505-20170623204505-00372.warc.gz"}
|
https://math.stackexchange.com/questions/2290971/continuous-maps-between-covering-spaces-are-surjective
|
# Continuous maps between covering spaces are surjective
This is Exercise 10.16 from Rotman's An Introduction to Algebraic Topology:
Let $(\widetilde{Y}, q)$ and $(\tilde{X}, p)$ be covering spaces of $X$. If there exists a continuous $h: \widetilde{Y} \rightarrow \tilde{X}$ with $ph = q$, then $h$ is a surjection. (Hint: Use unique path lifting.)
Here is what I have so far:
Given $\tilde{x} \in \tilde{X}$, let $c_{\tilde{x}}$ be the constant map at $\tilde{x}$. Then $pc_{\tilde{x}}$ is the constant path at $x \in X$ and so lifts to the constant path $c_{\tilde{y}} \subseteq \widetilde{Y}$ for some $\tilde{y} \in \widetilde{Y}$. Further, $c_{x} = qc_{\tilde{y}} = phc_{\tilde{y}}$, so $hc_{\tilde{y}}$ lies in the fiber over $x$.
However, I'm not really sure how to control the starting point of $hc_{\tilde{y}}$, so I'm not sure how to incorporate unique path lifting.
• You must have some connectivity assumption. – Amitai Yuval May 21 '17 at 19:37
• @AmitaiYuval Rotman assumes covering spaces to be path connected. – Jacob Bond May 21 '17 at 19:41
Let $y\in \tilde Y$ and $x\in \tilde X$. There exists a path $c$ such that $c(0)=h(y)$ and $c(1)=x$. The unique lifting property implies that there exists a unique path $d:[0,1]\rightarrow \tilde Y$ such that $d(0)=y$ and $q(d)=p(c)$. The path $h(d)$ satisfies $h(d)(0)=c(0)$ and $p(h(d))=p(c)$. The unique path lifting property then implies that $h(d)=c$, and hence that $h(d(1))=x$.
There is at least one point in the image of $h$, call it $x$. You want to prove that any other point is also in the image. So let $x'$ be another point in $\tilde{X}$. Take a path connecting $x$ to $x'$ (we assume path connectivity). Push this path forward to the base $X$, then pull it back to $\tilde{Y}$. The path you get in $\tilde{Y}$ is pushed forward by $h$ to the path you took in $\tilde{X}$ to begin with. In particular, the endpoint $x'$ is in the image.
|
2020-02-19 20:34:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171584248542786, "perplexity": 83.84365979254518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00284.warc.gz"}
|
https://asmedigitalcollection.asme.org/lettersdynsys/article/1/1/011010/1075675/Human-Driver-Modeling-Based-on-Analytical-Optimal
|
## Abstract
Safe and energy-efficient driving of connected and automated vehicles (CAVs) must be influenced by human-driven vehicles. Thus, to properly evaluate the energy impacts of CAVs in a simulation framework, a human driver model must capture a wide range of real-world driving behaviors corresponding to the surrounding environment. This paper formulates longitudinal human driving as an optimal control problem with a state constraint imposed by the vehicle in front. Deriving analytically optimal solutions by employing optimal control theory can capture longitudinal human driving behaviors with low computational burden, and adding the state constraint can assist with describing car-following features while anticipating behaviors of the vehicle in front. We also use on-road testing data collected by an instrumented vehicle to validate the proposed human driver model for stop scenarios at intersections. Results show that vehicle stopping trajectories of the proposed model are well matched with those of experimental data.
## Introduction
Thanks to advanced vehicle technologies, vehicles can be connected with other vehicles and roadside infrastructure through communication, and they can also be equipped with different levels of automation (e.g., Levels 1–5). These connected and automated vehicles (CAVs) can be aware of the surrounding environment continuously and predict future situations accurately; thus, they can reduce collisions due to human errors through anticipative and cooperative car-following, and they can save further energy through energy-efficient driving and powertrain operation. In recent years, this topic has been growing rapidly, and many researchers have presented energy-efficient driving solutions for CAVs [1]. To evaluate the energy impacts of CAVs systematically, we have developed a new multi-vehicle tool, RoadRunner [2], which allows CAVs to interact with surrounding vehicles and infrastructure on real-world routes in a closed-loop fashion. Moreover, RoadRunner is based on autonomie [3], an established tool for examining vehicle energy consumption and performance whose powertrain models have been validated over 10 years using chassis dynamometer test data [4]. RoadRunner development is especially timely now, as it allows researchers to perform large-scale analyses of the energy impacts of CAVs with high accuracy.
As a part of RoadRunner, human driver model development and validation are critical because simplified models may not capture detailed real-world vehicle state trajectories, and their unrealistic behaviors may exaggerate the energy-saving potential of CAVs. In the traffic flow research area, efforts have been devoted since the 1950s to developing microscopic and macroscopic traffic models that result from human driving behaviors. Generally, microscopic models describe traffic flow from the point of view of individual drivers and vehicles, whereas macroscopic models describe the collective state in terms of spatiotemporal fields of the local density, speed, and flow [5]. A car-following component is a fundamental part of microscopic models, and this component must be simple enough to compute quickly while describing individual driving behaviors. Several papers provide comprehensive and excellent surveys of car-following models [6–8]. According to these papers, car-following models are divided into several types: stimulus-response models, desired measures models, safety-distance models, optimal velocity models, fuzzy logic models, and psycho-physical models, among others. Most of the models that adjust speed and/or distance to the vehicle in front work so well that they capture macroscopic aspects of traffic dynamics for a certain condition by aggregating individual trajectories. Furthermore, consideration of human factors resulting from imperfect control has led developers to improve their own models and to present other types of car-following models [9,10].
In microscopic traffic simulation, there is no need to capture a wide range of individual trajectories if the macroscopic aspects are well captured within an acceptable level of accuracy. However, several papers have pointed out that car-following models are not well matched with experimental data even after calibration [11,12]. RoadRunner requires a simple but high-fidelity dynamic human driver model that can capture a wide range of different driving behaviors corresponding to individual driving style as well as the surrounding environment (e.g., traffic signal phase and timing), not limited to car-following behaviors. To this end, we model longitudinal human driving as an optimal control problem that minimizes jerk (the derivative of acceleration) energy. We derived analytical state-constrained optimal solutions as a function of driving-related parameters through Pontryagin's minimum principle (PMP) [13], thereby remaining simple enough to be computationally efficient while replicating real-world human-driven vehicles, including car-following features if necessary. Unlike existing car-following models that update acceleration by using the relative distance and speed, the proposed model based on the optimal solutions updates jerk by using more driving-related parameters, which diversifies driving behaviors and guarantees continuity of the acceleration trajectory when road events occur.
The paper is organized as follows: the section on the human driving problem introduces assumptions and formulates human driving as an optimal control problem. In the section on analytical solutions, we address derivations of analytical optimal solutions and show several case studies. In the section on validation, the proposed model is validated through experimental data. Finally, in the last section, the conclusions and future work resulting from this paper are discussed.
## Longitudinal Human Driving Problem
### Assumptions.
We assume that drivers basically prioritize driving comfort, while avoiding any collision with the vehicle in front and obeying traffic rules; maximizing driving comfort is treated as minimizing total jerk energy. Since human drivers are able to anticipate the behavior of the vehicle in front of them, they plan and apply their control decision accordingly; this anticipation is based on the assumption that the vehicle in front travels at constant acceleration over the predictive time interval. Furthermore, vehicle longitudinal dynamics is simplified to the triple-integrator model by neglecting aerodynamic drag, road grade, etc. These assumptions facilitate the derivation of analytical optimal solutions, which can be computed quickly. Anticipation, planning, and the control decision are made at every time instant until the vehicle arrives at its destination.
### Optimal Control Problem Formulation.
Let us define a control variable u as jerk j. A cost function to minimize total jerk energy over the predictive time interval T is
$$J=\min_{u}\int_{t_k}^{t_k+T}\Big(l=\tfrac{1}{2}u^{2}\Big)\,dt$$
(1)
where tk is a current time. Furthermore, we consider a vehicle longitudinal dynamics model as the triple-integrator model
$$\dot{s}=v,\quad \dot{v}=a,\quad \dot{a}=j=u$$
(2)
where the system dynamics is $f=[v,a,u]^{\top}$ and the state vector is $x=[s,v,a]^{\top}$.
To avoid collisions with the vehicle in front, the position of the driver's vehicle plus a desired distance gap $s_d$ must be smaller than the position of the vehicle in front $s_p$. Thus, a state variable inequality constraint (SVIC) $h$ is defined as
$$h(t)=s+s_d-s_p=s+(v\tau+s_s)-s_p\le 0$$
(3)
where $s_d=s_s+v\tau$, $\tau$ is a desired time headway, which is the arriving time difference at the same point between successive vehicles, and $s_s$ is the minimum safety distance gap at standstill conditions. Note that the desired speed gap and acceleration gap are $v_d=s_s+a\tau$ and $a_d=u\tau$, respectively. Using the assumptions made in the previous section, $s_p$ can be expressed as $s_p=s_{p.k}+v_{p.k}t+\tfrac{1}{2}a_{p.k}t^{2}$, where $s_{p.k} = s_p(t_k)$, $v_{p.k} = v_p(t_k)$, and $a_{p.k} = a_p(t_k)$.
Boundary conditions are
$$s(t_k)=s_k,\quad v(t_k)=v_k,\quad a(t_k)=a_k,\quad s(t_k+T)=s_f,\quad v(t_k+T)=v_f,\quad a(t_k+T)=a_f$$
(4)
where T is fixed. For simplicity, without loss of generality, tk = 0 and k = 0 in the remainder of the paper.
### Pontryagin’s Minimum Principle With SVIC.
To handle the SVIC, we use a direct adjoining method [14], as it directly adjoins the SVIC in the Hamiltonian and provides the optimality conditions for the optimal solution through PMP; its optimality conditions are independent of the order of a pure SVIC of the form $h(x, t) \le 0$, in which $u$ does not explicitly appear, where the order $p$ is defined by $h_u^{(p)}=\frac{\partial}{\partial u}\big(\frac{d^{p}}{dt^{p}}h\big)\neq 0$ and $h_u^{(i)}=0$ for $i = 1, \ldots, p-1$. When the SVIC is active, there may exist a sub-interval satisfying $h(x, t) = 0$ for $t \in [t_1, t_2]$ with $t_1 < t_2$, called a boundary interval ($t_1$ and $t_2$ are the entry and exit times, respectively), or a point satisfying $h(x, t_1) = 0$, called a contact point ($t_1$ is the contact time). Note that the entry, exit, and contact times are called junction times.
Hamiltonian H is defined first, and then a Lagrangian multiplier $η$ is used to directly adjoin the SVIC to H in order to form Lagrangian L
$$H=l+\lambda^{\top}f,\qquad L=H+\eta h$$
(5)
where $λ$ are co-state variables and l has the same definition as in Eq. (1). The optimality condition is
$$L_u=0,\qquad \dot{\lambda}=-L_x$$
(6)
where $\eta\ge 0$ and $\eta h=0$. Jump conditions at junction times $t_j$ with a jump parameter $\pi$ are
$$\lambda(t_j^-)=\lambda(t_j^+)+\pi h_x(x,t_j),\qquad H(t_j^-)=H(t_j^+)-\pi h_t(x,t_j)$$
(7)
where $\pi\ge 0$, $\pi h=0$, and $t_j^-$ and $t_j^+$ indicate the left-hand side and the right-hand side of $t_j$, respectively.
## Analytical Solutions
From the optimal control problem formulated by Eqs. (1), (2), (3), and (4), we define H and L as
$$H=\tfrac{1}{2}u^{2}+\lambda_s v+\lambda_v a+\lambda_a u$$
$$L=H+\eta\left(s+v\tau+s_s-s_{p.0}-v_{p.0}t-\tfrac{1}{2}a_{p.0}t^{2}\right)$$
(8)
Then, the optimal control policy derived from the optimality condition is $u^{*}(t)=-\lambda_a$. The co-state dynamics with jump conditions are derived as
$$\dot{\lambda}_s=-\eta \quad\text{with}\quad \lambda_s(t_j^-)=\lambda_s(t_j^+)+\pi$$
$$\dot{\lambda}_v=-\lambda_s-\eta\tau \quad\text{with}\quad \lambda_v(t_j^-)=\lambda_v(t_j^+)+\pi\tau$$
$$\dot{\lambda}_a=-\lambda_v \quad\text{with}\quad \lambda_a(t_j^-)=\lambda_a(t_j^+)$$
(9)
where $h = 0$ and $\pi \neq 0$ if the SVIC is active; that is, the position and speed co-states must have a discontinuity at junction times, whereas the acceleration co-state is always continuous. Otherwise, $\eta=0$ and $\pi = 0$.
From h = 0 and the last condition in Eq. (7), we add the constraints
$$s(t_j)+\left(s_s+v(t_j)\tau\right)-s_{p.0}-v_{p.0}t_j-\tfrac{1}{2}a_{p.0}t_j^{2}=0$$
$$v(t_j)+a(t_j)\tau-v_{p.0}-a_{p.0}t_j=0$$
(10)
The SVIC here is of the second order; thus, when the SVIC becomes tighter, a contact point occurs first and then a boundary interval occurs (e.g., a decrease in $s_{p.0}$ gives the same conditions), which is theoretically proven in Ref. [15]. Therefore, to retain the states on the boundary interval, the following constraint must also be satisfied:
$$a(t_j)+u(t_j)\tau-a_{p.0}=0$$
(11)
Lastly, terminal state equality constraints (TSECs) from Eq. (4) are
$$s(T)-s_f=0,\qquad v(T)-v_f=0,\qquad a(T)-a_f=0$$
(12)
where vf = af = 0 for stop scenarios.
In summary, boundary conditions determine whether and how SVIC is active, thus analytical solutions consist of three types: type 1 (inactive SVIC), type 2 (active SVIC at contact point), and type 3 (active SVIC on boundary interval). We heuristically determine an appropriate solution type. If type 1 does not violate the SVIC, then it is applied; otherwise, we check if there exists one feasible contact point for type 2. After this identification, if type 2 is feasible then it is applied; otherwise, type 3 is applied. The detailed description for all types of solutions is written as follows:
### Type 1: Inactive SVIC.
When the SVIC is inactive, $\eta=0$. Thus, the optimal co-states derived using Eq. (9) are
$$\lambda_s^{*}(t)=\lambda_{s.0}$$
$$\lambda_v^{*}(t)=-\lambda_{s.0}t+\lambda_{v.0}$$
$$\lambda_a^{*}(t)=\tfrac{1}{2}\lambda_{s.0}t^{2}-\lambda_{v.0}t+\lambda_{a.0}$$
(13)
where $\lambda_{s.0}=\lambda_s^{*}(0)$, $\lambda_{v.0}=\lambda_v^{*}(0)$, and $\lambda_{a.0}=\lambda_a^{*}(0)$, while the superscript * indicates optimality. Using the optimal control policy, it is possible to integrate the state dynamics in Eq. (2). Then, we obtain a system of three linear equations with three unknown variables ($\lambda_{s.0}$, $\lambda_{v.0}$, and $\lambda_{a.0}$). By enforcing the TSECs in Eq. (12), the solution to this system is
$$\lambda_{s.0}=\frac{720(s_0-s_f)}{T^{5}}+\frac{360(v_0+v_f)}{T^{4}}+\frac{60(a_0-a_f)}{T^{3}}$$
$$\lambda_{v.0}=\frac{360(s_0-s_f)}{T^{4}}+\frac{192v_0+168v_f}{T^{3}}+\frac{36a_0-24a_f}{T^{2}}$$
$$\lambda_{a.0}=\frac{60(s_0-s_f)}{T^{3}}+\frac{36v_0+24v_f}{T^{2}}+\frac{9a_0-3a_f}{T}$$
(14)
Finally, we get $a^{*}(t)=-\int\lambda_a\,dt+a_0$, $v^{*}(t)=\int a^{*}\,dt+v_0$, and $s^{*}(t)=\int v^{*}\,dt+s_0$. Note that the above solution was also used in Ref. [16].
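As a minimal numerical sketch of the type 1 solution, the following plain Java snippet evaluates the coefficients of Eq. (14) as reconstructed above and the closed-form trajectories obtained by integrating $u^{*}=-\lambda_a^{*}$ for the triple-integrator model; the boundary values are the host-vehicle numbers used in the Simulation Studies section, and the snippet is only an illustration of the formulas, not the authors' implementation.

```java
// Minimal sketch (plain Java): evaluate the Type 1 (unconstrained) minimum-jerk solution.
// Initial co-states follow Eq. (14) as reconstructed above; the trajectories are the
// closed-form integrals of u*(t) = -lambda_a*(t) with the triple-integrator dynamics.
public class Type1Solution {
    public static void main(String[] args) {
        double s0 = 0, v0 = 20, a0 = -0.2;     // initial position, speed, acceleration
        double sf = 100, vf = 0, af = 0;       // final (stop) conditions
        double T = 10.0;                       // predictive time interval [s]

        double ls0 = 720*(s0 - sf)/Math.pow(T,5) + 360*(v0 + vf)/Math.pow(T,4) + 60*(a0 - af)/Math.pow(T,3);
        double lv0 = 360*(s0 - sf)/Math.pow(T,4) + (192*v0 + 168*vf)/Math.pow(T,3) + (36*a0 - 24*af)/Math.pow(T,2);
        double la0 = 60*(s0 - sf)/Math.pow(T,3) + (36*v0 + 24*vf)/Math.pow(T,2) + (9*a0 - 3*af)/T;

        for (double t = 0.0; t <= T + 1e-9; t += 1.0) {
            double u = -(0.5*ls0*t*t - lv0*t + la0);                                   // jerk
            double a = a0 - (ls0*t*t*t/6 - lv0*t*t/2 + la0*t);                          // acceleration
            double v = v0 + a0*t - (ls0*Math.pow(t,4)/24 - lv0*t*t*t/6 + la0*t*t/2);    // speed
            double s = s0 + v0*t + a0*t*t/2
                     - (ls0*Math.pow(t,5)/120 - lv0*Math.pow(t,4)/24 + la0*t*t*t/6);    // position
            System.out.printf("t=%4.1f  s=%7.2f  v=%6.2f  a=%6.2f  u=%6.2f%n", t, s, v, a, u);
        }
    }
}
```

At $t=T$ the printed state reaches $s_f$, $v_f$, and $a_f$, which is a quick consistency check of the reconstructed coefficients.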
### Type 2: Active SVIC at Contact Point.
When the SVIC is active at a contact time $t_1$, the position and speed co-states jump to different values; thus, the resulting co-state trajectories with $\eta=0$ consist of two intervals as follows:
$$\lambda_s^{*}(t)=\begin{cases}\lambda_{s.0} & t\in[0,t_1^-]\\[2pt] \lambda_{s.1}^{+} & t\in[t_1^+,T]\end{cases}$$
$$\lambda_v^{*}(t)=\begin{cases}-\lambda_{s.0}t+\lambda_{v.0} & t\in[0,t_1^-]\\[2pt] -\lambda_{s.1}^{+}(t-t_1)+\lambda_{v.1}^{+} & t\in[t_1^+,T]\end{cases}$$
$$\lambda_a^{*}(t)=\begin{cases}\tfrac{\lambda_{s.0}}{2}t^{2}-\lambda_{v.0}t+\lambda_{a.0} & t\in[0,t_1^-]\\[2pt] \tfrac{\lambda_{s.1}^{+}}{2}(t-t_1)^{2}-\lambda_{v.1}^{+}(t-t_1)+\lambda_{a.1}^{+} & t\in[t_1^+,T]\end{cases}$$
(15)
where $\lambda_{s.1}^{+}=\lambda_s^{*}(t_1^+)=\lambda_s^{*}(t_1^-)-\pi$, $\lambda_{v.1}^{+}=\lambda_v^{*}(t_1^+)=\lambda_v^{*}(t_1^-)-\pi\tau$, and $\lambda_{a.1}^{+}=\lambda_a^{*}(t_1^+)=\lambda_a^{*}(t_1^-)$ from the jump conditions in Eq. (9). In the same way as in type 1, the state dynamics are integrated; but here, two interior-point constraints at $t_1$ from Eq. (10) must additionally be satisfied. For this reason, we obtain a system of five nonlinear equations with five unknown variables ($\lambda_{s.0}$, $\lambda_{v.0}$, $\lambda_{a.0}$, $\pi$, and $t_1$) as follows:
$$s^{*}(t_1)+s_s+v^{*}(t_1)\tau-s_{p.0}-v_{p.0}t_1-\tfrac{1}{2}a_{p.0}t_1^{2}=0$$
$$v^{*}(t_1)+a^{*}(t_1)\tau-v_{p.0}-a_{p.0}t_1=0$$
$$\text{TSECs in Eq. (12)}$$
(16)
By solving the first four equations, we express the four variables ($\lambda_{s.0}$, $\lambda_{v.0}$, $\lambda_{a.0}$, $\pi$) as functions of $t_1$, and then substitute them into the last equation to obtain a ninth-order polynomial in the single variable $t_1$. From this polynomial, we can compute a feasible $t_1$ satisfying $0 \le t_1 \le T$.
### Type 3: Active SVIC on Boundary Interval.
When the SVIC is active on the boundary interval $[t_1, t_2]$, there are two jumps in the position and speed co-states, with jump parameters $\pi_1$ and $\pi_2$ at $t_1$ and $t_2$, respectively. The multiplier $\eta$ is no longer zero and dominates the co-state dynamics, since the three constraints in Eqs. (10) and (11) must hold on $[t_1, t_2]$. After differentiating the third constraint once with respect to $t$ and substituting $u=-\lambda_a$ from the optimal control policy, we obtain the following explicit solution:
$$\lambda_a^{*}(t)=\lambda_{a.1}^{+}e^{-(t-t_1)/\tau}$$
(17)
where $\lambda_{a.1}^{+}=\lambda_a^{*}(t_1^+)=\lambda_a^{*}(t_1^-)$. Using the above formula and the co-state dynamics in Eq. (9), we obtain the explicit solutions of the other co-states as well:
$$\lambda_s^{*}(t)=\lambda_{s.1}^{+}e^{(t-t_1)/\tau}+c\,\lambda_{a.1}^{+}\left(e^{-(t-t_1)/\tau}-e^{(t-t_1)/\tau}\right)$$
$$\lambda_v^{*}(t)=\frac{1}{\tau}\lambda_{a.1}^{+}e^{-(t-t_1)/\tau}$$
(18)
with $\eta^{*}(t)=-\frac{1}{\tau}\lambda_s^{*}(t)+\frac{1}{\tau^{3}}\lambda_a^{*}(t)$, where $c=\frac{1}{2\tau^{2}}$ and $\lambda_{s.1}^{+}=\lambda_s^{*}(t_1^-)-\pi_1$. As $\lambda_v^{*}(t_1^+)=\frac{1}{\tau}\lambda_{a.1}^{+}$, the jump parameter $\pi_1$ from Eq. (9) is obtained as $\pi_1=\frac{1}{\tau}\lambda_{v.1}^{-}-\frac{1}{\tau^{2}}\lambda_{a.1}^{+}$. Based on the above boundary-interval control, we summarize
$$\lambda_s^{*}(t)=\begin{cases}\lambda_{s.0} & t\in[0,t_1^-]\\[2pt] \text{1st of Eq. (18)} & t\in[t_1^+,t_2^-]\\[2pt] \lambda_{s.2}^{+} & t\in[t_2^+,T]\end{cases}$$
$$\lambda_v^{*}(t)=\begin{cases}-\lambda_{s.0}t+\lambda_{v.0} & t\in[0,t_1^-]\\[2pt] \text{2nd of Eq. (18)} & t\in[t_1^+,t_2^-]\\[2pt] -\lambda_{s.2}^{+}(t-t_2)+\lambda_{v.2}^{+} & t\in[t_2^+,T]\end{cases}$$
$$\lambda_a^{*}(t)=\begin{cases}\tfrac{1}{2}\lambda_{s.0}t^{2}-\lambda_{v.0}t+\lambda_{a.0} & t\in[0,t_1^-]\\[2pt] \text{Eq. (17)} & t\in[t_1^+,t_2^-]\\[2pt] \tfrac{1}{2}\lambda_{s.2}^{+}(t-t_2)^{2}-\lambda_{v.2}^{+}(t-t_2)+\lambda_{a.2}^{+} & t\in[t_2^+,T]\end{cases}$$
(19)
where $\lambda_{s.2}^{+}=\lambda_s^{*}(t_2^-)-\pi_2$, $\lambda_{v.2}^{+}=\lambda_v^{*}(t_2^-)-\pi_2\tau$, and $\lambda_{a.2}^{+}=\lambda_a^{*}(t_2^-)$. Similarly, we obtain a system of six nonlinear equations with six unknown variables ($\lambda_{s.0}$, $\lambda_{v.0}$, $\lambda_{a.0}$, $\pi_2$, $t_1$, and $t_2$):
$$s^{*}(t_1)+s_s+v^{*}(t_1)\tau-s_{p.0}-v_{p.0}t_1-\tfrac{1}{2}a_{p.0}t_1^{2}=0$$
$$v^{*}(t_1)+a^{*}(t_1)\tau-v_{p.0}-a_{p.0}t_1=0$$
$$a^{*}(t_1)+u^{*}(t_1)\tau-a_{p.0}=0$$
$$\text{TSECs in Eq. (12)}$$
(20)
After replacing four variables ($\lambda_{s.0}$, $\lambda_{v.0}$, $\lambda_{a.0}$, and $\pi_2$) in the last two equations, we can compute the remaining variables $t_1$ and $t_2$ satisfying $0 \le t_1 \le t_2 \le T$.
### Simulation Studies.
This section considers typical stop scenarios in the presence of a vehicle in front. We assume that the vehicle in front decelerates at a constant value with initial setup $a_{p.0} = -0.5$ m/s², $v_{p.0} = 10$ m/s, and $s_{p.0}=[60,50,40]^{\top}$ m. For the host vehicle, the initial setup is $a_0 = -0.2$ m/s², $v_0 = 20$ m/s, $s_0 = 0$ m, $a_f = 0$ m/s², $v_f = 0$ m/s, $s_f = 100$ m, $T = 10$ s, $\tau=1.2$ s, and $s_s = 2$ m.
In Fig. 1, the vehicle in front does not affect the driving behavior of the host vehicle. All co-state trajectories of type 1 are continuous, and the resulting optimal control policy enables the host vehicle to decelerate its speed for a stop in a way that minimizes total jerk energy. However, as initial distance gap decreases (i.e., sp.0 decreases), the desired position trajectory (s + sd) of the host vehicle starts surpassing the position trajectory of the vehicle in front (sp) and then an actual rear-collision event occurs unless state-constrained solutions are used. As shown in Fig. 2, both position and speed co-states of type 2 are discontinuous at the contact time, which allows the desired position and speed of the host vehicle to match with the position and speed of the vehicle in front, respectively. On the other hand, the acceleration co-state is always continuous, which leads to a continuous optimal control policy. Lastly, in Fig. 3, type 3 builds up the boundary interval, jumps in the position, and speed co-states occur at both entry and exit times, whereas the acceleration co-state is continuous. Notably, co-state dynamics on the boundary interval are influenced by $η$, which are different from that on other intervals, so that desired states of the host vehicle behave in exactly the same way as the vehicle in front.
## Validation With Experimental Data
Experimental data were collected from an instrumented vehicle driven by a human driver and equipped with a dash video camera, a global positioning system tracker, and a radar, and were then processed to make the data usable for validation. On-road testing was performed on a specific suburban route near Argonne; the conditions at the time were light traffic, a mostly sunny day, dry roads, and a normal driving style.
In this paper, we considered the braking regime only, for stop scenarios near intersections, including situations with and without a vehicle in front. From the experimental data, the parameters required by the human driver model (i.e., the boundary conditions) were obtained: braking distance ($s_f - s_0$), braking time ($T$), and the vehicle states at the instant braking begins ($a_0$ and $v_0$). Because drivers respond differently to the surrounding environment (e.g., traffic signal phase and timing), various stopping trajectories are obtained, as shown in Fig. 4. For the car-following parameters, $s_s = 2$ m and $\tau=\min\!\left(1,\ \frac{s_{p.0}-s_s-s_0}{v_0}\right)$ s. Note that $\tau$ could instead be optimized to minimize the error between the model and the data.
To validate the model, we used two measures: (1) normalized cross correlation power (NCCP) [17] and (2) normalized root-mean-squared error (NRMSE) [11]. The NCCP and NRMSE are defined as
$$\text{NCCP}=\frac{\max\left[R_{xy}(\tau)\right]}{\max\left[R_{xx}(\tau),R_{yy}(\tau)\right]}\times 100$$
$$\text{NRMSE}=\frac{\sqrt{\sum_{i=0}^{T}(x_i-y_i)^{2}/T}}{y_{\max}-y_{\min}}\times 100$$
(21)
where $R_{xy}(\tau)=\lim_{T\to\infty}\int_{0}^{T}x(t)\,y(t-\tau)\,dt$, and $x$ and $y$ represent the speed signals of the model and the data, respectively, while $y_{\max}$ and $y_{\min}$ are the maximum and minimum values, respectively. Note that an NCCP value larger than 90% indicates that the two speed signals are highly correlated. Moreover, we also compared the proposed model with the intelligent driver model (IDM), a well-known car-following model, with prescribed parameters: the comfortable acceleration and deceleration are set to 1 and 1.5 m/s², the minimum spacing, desired time headway, and desired speed are set to $s_s$, $\tau$, and $v_0$, and the exponent is set to 4.
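As a minimal sketch of how the two measures can be evaluated on sampled speed traces, the following plain Java snippet implements discrete versions of Eq. (21); the example traces are made-up numbers (not the paper's data), and the discrete correlation sums only approximate the integral definition of $R_{xy}$.

```java
// Minimal sketch (plain Java) of the validation measures as read from Eq. (21):
// NRMSE compares two equally sampled speed traces; NCCP takes the peak of the discrete
// cross-correlation relative to the peaks of the two autocorrelations.
public class ValidationMeasures {
    static double corrPeak(double[] x, double[] y) {
        double best = Double.NEGATIVE_INFINITY;
        int n = x.length;
        for (int lag = -(n - 1); lag <= n - 1; lag++) {
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                int j = i - lag;
                if (j >= 0 && j < n) sum += x[i] * y[j];
            }
            best = Math.max(best, sum);
        }
        return best;
    }

    public static void main(String[] args) {
        double[] model = {20, 18, 15, 11, 7, 3, 0};       // assumed model speed trace [m/s]
        double[] data  = {20, 17.5, 14, 11.5, 6, 2.5, 0}; // assumed measured trace [m/s]

        double sq = 0, yMax = data[0], yMin = data[0];
        for (int i = 0; i < data.length; i++) {
            sq += (model[i] - data[i]) * (model[i] - data[i]);
            yMax = Math.max(yMax, data[i]);
            yMin = Math.min(yMin, data[i]);
        }
        double nrmse = Math.sqrt(sq / data.length) / (yMax - yMin) * 100.0;
        double nccp = corrPeak(model, data)
                / Math.max(corrPeak(model, model), corrPeak(data, data)) * 100.0;
        System.out.printf("NRMSE = %.2f %%, NCCP = %.2f %%%n", nrmse, nccp);
    }
}
```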
In Fig. 5, for the proposed model and IDM, all NCCP values are larger than 90% and all NRMSE values are less than 10% (except for IDM in case number 28). The proposed model outperforms IDM, with average values of 98.6% versus 95.4% for NCCP and 2.33% versus 5.39% for NRMSE. Figure 6 shows two specific cases including car-following situations to see how the host vehicle adjusts its driving behavior, where the (NCCP, NRMSE) pairs are (97.8%, 4.68%) versus (91.5%, 9.94%) for case number 18 (left) and (99.9%, 1.58%) versus (98.6%, 2.41%) for case number 22 (right). In case number 18, the surrounding vehicle changes lanes at about 10 s and drives in front of the equipped vehicle, which causes a discontinuity in the trajectories (red line). Note that the relative position is set to 250 m and the relative speed to zero if the radar sensor detects nothing in the same lane. If there is no vehicle in front, IDM maintains constant speed from 0 s to 3 s, but it then starts braking, jumping from zero to a negative acceleration, because it treats a red traffic light as a standing object. At about 10 s, IDM is too close to the cut-in vehicle, and so it applies its maximum braking rate, which is set to 4 m/s², leading to a large discrepancy relative to the testing data.
However, the proposed model can set different braking times (e.g., 37 s and 23 s in case numbers 18 and 22, respectively), and its trajectory is quite close to that of the experimental data, while describing the car-following feature without a discontinuity in the acceleration trajectory when a road event occurs (e.g., the traffic signal switches to red or a vehicle appears in front), as shown in Fig. 6. Note that a simple model using only the type 1 solution would unrealistically overtake the vehicle in front in the same lane. As shown in Fig. 7, the boundary conditions must be adjusted to ensure that they are feasible for computing analytical optimal solutions given the driving behavior of the vehicle in front (e.g., the final position plus the desired distance gap cannot be larger than the anticipated final position of the vehicle in front if there is no lane change) [18]. Both cases adjust the boundary conditions; however, case number 18 must use the state-constrained solutions, whereas case number 22 does not.
## Conclusions and Future Work
In this paper, we present a new approach for human driver modeling using optimal control theory. The human driver is modeled based on analytical optimal solutions that maximize driving comfort for given boundary conditions, while considering the state constraint imposed by the vehicle in front. The proposed model is not only computationally efficient but also captures various stopping trajectories, including car-following to keep the desired distance gap, because its inputs (the boundary conditions) directly represent aspects of driving behavior (e.g., braking time indicates braking level). Using experimental data, the proposed model is validated and also compared with IDM. Results show that the average values of NCCP and NRMSE for 54 braking-to-stop cases are about 98.6% and 2.33%, respectively, which indicates that the stopping behavior of the proposed model is highly correlated with that of the experimental data. The proposed model improves accuracy over IDM and avoids the discontinuity seen in IDM when a road event occurs.
In future work, we would like to expand driving regimes (e.g., accelerating and cruising), rather than only focusing on braking, and we also consider road characteristics (such as curvature) to capture the speed reduction. We also would like to investigate if the proposed model can capture macroscopic aspects under high traffic conditions. Furthermore, another future research direction is to develop a model of perception and decision that can provide the timing and duration of each driving regime.
## Acknowledgment
This report and the work described were sponsored by the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) under the Systems and Modeling for Accelerated Research in Transportation (SMART) Mobility Laboratory Consortium, an initiative of the Energy Efficient Mobility Systems (EEMS) Program. DOE Office of Energy Efficiency and Renewable Energy (EERE) manager David Anderson played an important role in establishing the project concept, advancing implementation, and providing ongoing guidance.
## Conflict of Interest
The submitted article has been created by UChicago Argonne, LLC, Operator of Argonne National Laboratory (“Argonne”). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DEAC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly, and display publicly, by or on behalf of the Government.
## Nomenclature
• $T$ = predictive time interval
• $s_s$ = minimum safety distance gap at standstill conditions
• $t_i$ = contact time if $i = 1$, or entry and exit times if $i = 1, 2$
• $s, v, a, j$ = position, speed, acceleration, and jerk
• $s_p, v_p, a_p$ = position, speed, and acceleration of the vehicle in front
• $s_d, v_d, a_d$ = desired distance, speed, and acceleration gaps
• $s_f, v_f, a_f$ = final position, final speed, and final acceleration
• $u$ (:= $j$) = control input variable (:= jerk)
• $H, L$ = Hamiltonian and Lagrangian
• $\eta$ = Lagrange multiplier to directly adjoin the state constraint
• $\lambda_s, \lambda_v, \lambda_a$ = position, speed, and acceleration co-state variables
• $\pi$ = jump parameter in co-state variables
• $\tau$ = desired time headway
## References
1. Vahidi, A., and Sciarretta, A., 2018, "Energy Saving Potentials of Connected and Automated Vehicles," Transp. Res. Part C: Emerg. Technol., 95, pp. 822–843. 10.1016/j.trc.2018.09.001
2. Kim, N., Karbowski, D., and Rousseau, A., 2018, "A Modeling Framework for Connectivity and Automation Co-Simulation," SAE Technical Paper 2018-01-0607.
3. Argonne National Laboratory, 2017, autonomie (computer software).
4. Kim, N., Duoba, M., Kim, N., and Rousseau, A., 2013, "Validating Volt PHEV Model With Dynamometer Test Data Using Autonomie," SAE Int. J. Passenger Cars Mech. Syst., 6(2), pp. 985–992. 10.4271/2013-01-1458
5. Treiber, M., and Kesting, A., 2013, Traffic Flow Dynamics, Springer, Berlin.
6. Brackstone, M., and McDonald, M., 1999, "Car-Following: A Historical Review," Transp. Res. Part F: Traffic Psychol. Behav., 2(4), pp. 181–196. 10.1016/S1369-8478(00)00005-X
7. Toledo, T., 2007, "Driving Behaviour: Models and Challenges," Transp. Rev., 27(1), pp. 65–84. 10.1080/01441640600823940
8. Saifuzzaman, M., and Zheng, Z., 2014, "Incorporating Human-Factors in Car-Following Models: A Review of Recent Developments and Research Needs," Transp. Res. Part C: Emerg. Technol., 48, pp. 379–403. 10.1016/j.trc.2014.09.008
9. Ro, J. W., Roop, P. S., Malik, A., and Ranjitkar, P., 2018, "A Formal Approach for Modeling and Simulation of Human Car-Following Behavior," IEEE Trans. Intell. Transp. Syst., 19(2), pp. 639–648. 10.1109/TITS.2017.2759273
10. Lindorfer, M., Mecklenbrauker, C. F., and Ostermayer, G., 2018, "Modeling the Imperfect Driver: Incorporating Human Factors in a Microscopic Traffic Model," IEEE Trans. Intell. Transp. Syst., 19(9), pp. 2856–2870. 10.1109/TITS.2017.2765694
11. Wagner, P., 2005, "Empirical Description of Car-Following," Traffic and Granular Flow '03, Hoogendoorn, S. P., Luding, S., Bovy, P. H. L., Schreckenberg, M., and Wolf, D. E., eds., Springer, Berlin, pp. 15–27.
12. Sangster, J., Rakha, H., and Du, J., 2013, "Application of Naturalistic Driving Data to Modeling of Driver Car-Following Behavior," Transp. Res. Record: J. Transp. Res. Board, 2390(1), pp. 20–33. 10.3141/2390-03
13. Pontryagin, L., 1987, Mathematical Theory of Optimal Processes, 1st ed., Routledge, London.
14. Hartl, R. F., Sethi, S. P., and Vickson, R. G., 1995, "A Survey of the Maximum Principles for Optimal Control Problems With State Constraints," SIAM Rev., 37(2), pp. 181–218. 10.1137/1037043
15. Hamilton, W., 1972, "On Nonexistence of Boundary Arcs in Control Problems With Bounded State Variables," IEEE Trans. Automat. Control, 17(3), pp. 338–343. 10.1109/TAC.1972.1099982
16. Da Lio, M., Mazzalai, A., Gurney, K., and Saroldi, A., 2018, "Biologically Guided Driver Modeling: The Stop Behavior of Human Car Drivers," IEEE Trans. Intell. Transp. Syst., 19(8), pp. 2454–2469. 10.1109/TITS.2017.2751526
17. Meng, Y., Jennings, M., Tsou, P., Brigham, D., Bell, D., and Soto, C., 2011, "Test Correlation Framework for Hybrid Electric Vehicle System Model," SAE Int. J. Eng., 4(1), pp. 1046–1057. 10.4271/2011-01-0881
18. Han, J., Sciarretta, A., Ojeda, L. L., De Nunzio, G., and Thibault, L., 2018, "Safe- and Eco-Driving Control for Connected and Automated Electric Vehicles Using Analytical State-Constrained Optimal Solution," IEEE Trans. Intell. Veh., 3(2), pp. 163–172. 10.1109/TIV.2018.2804162
|
2021-04-20 00:50:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 88, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6387664675712585, "perplexity": 1465.700334322978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00362.warc.gz"}
|
https://calculus-do.com/calculus-wang-ke-dai-xiu-integral-calculus-dai-kao-2/
|
# Calculus Online Course Help | Integral Calculus Assignment Help | MA1030C Larger exceptional sets
• Single-variable calculus
• Multivariable calculus
• Fourier series
• Riemann integration
• ODE
• Differential calculus
## Calculus Assignment Help | Larger exceptional sets
As integration theory developed over the centuries since Newton it became clear that the theory required quite large exceptional sets, certainly larger than just a few points. But this also requires us to characterize those sets that can be so neglected and also to describe what we must require of an indefinite integral so that these sets can be ignored.
At a calculus level we can easily go one step further, even if we cannot quite approach the full generality needed. The key is to push Lemma $1.2$ and Lemma $1.3$ much further. We cannot do this with help from the mean-value theorem as before: indeed the proof is deferred to the next chapter where we introduce a new technique needed for the integration theory.
LEMMA 1.5. Suppose that $F$ and $G$ are both continuous functions on an interval $[a, b]$. Suppose that there is a sequence $e_{1}, e_{2}, e_{3}, \ldots$ of points from $[a, b]$ and that $F^{\prime}(x)=G^{\prime}(x)$ for all $a<x<b$ except possibly at the points $e_{1}, e_{2}, e_{3}, \ldots$. Then
$$F(b)-F(a)=G(b)-G(a)$$
## Calculus Assignment Help | A version of the Newton integral for elementary calculus
Assuming Lemma $1.5$ for the moment we can introduce a much improved version of the Newton integral.
Definition $1.6$ (Modified Newton Integral). Suppose that $f$ is a function defined on an interval $(a, b)$ except possibly at the points of a sequence $e_{1}, e_{2}, e_{3}$, .. from $[a, b]$. Suppose that we can find a continuous function $F:[a, b] \rightarrow \mathbb{R}$ so that $F^{\prime}(x)=f(x)$ for every $x$ with $a<x<b$ with perhaps the exception of the points $e_{1}, e_{2}, e_{3}, \ldots$. Then we will say that $F$ is an indefinite integral of $f$ on $[a, b]$ and we will write
$$\int_{a}^{b} f(x) d x=F(b)-F(a)$$
and call the latter the definite integral of $f$ on $[a, b]$.
The only justification needed would be to use Lemma $1.5$ to check that if $F$ and $G$ both qualify to be indefinite integrals of $f$ on an interval $[a, b]$, then $F$ and $G$ differ by a constant so that $F(b)-F(a)=G(b)-G(a)$. Thus the definite integral is unambiguously defined.
There is one subtle point here that might be missed. Suppose that two functions $F$ and $G$ both qualify to be indefinite integrals of $f$. That means that there is some sequence of points $e_{1}, e_{2}, e_{3}, \ldots$ from $[a, b]$ and that $F^{\prime}(x)=f(x)$ provided $x$ is not one of the points in this sequence. It also means that there is some sequence of points $e_{1}^{\prime}, e_{2}^{\prime}, e_{3}^{\prime}, \ldots$ from $[a, b]$ [not necessarily the same sequence as before] and that $G^{\prime}(x)=f(x)$ provided $x$ is not one of the points in this sequence.
Accordingly, we observe that $F^{\prime}(x)=G^{\prime}(x)$ except possibly at points belonging to the combined sequence
$$e_{1}, e_{1}^{\prime}, e_{2}, e_{2}^{\prime}, e_{3}, e_{3}^{\prime}, e_{4}, e_{4}^{\prime}, \ldots$$
allowing us to apply Lemma 1.5. From that lemma we deduce that $F$ and $G$ differ by a constant and that $F(b)-F(a)=G(b)-G(a)$. Thus the definite integral is unambiguously defined no matter what indefinite integral we choose to use.
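As a simple illustration of the definition, take $f(x)=\operatorname{sgn}(x)$ on $[-1,1]$. The continuous function $F(x)=|x|$ satisfies $F^{\prime}(x)=f(x)$ for every $x$ in $(-1,1)$ except at the single point $e_{1}=0$, so $F$ qualifies as an indefinite integral in the modified sense and
$$\int_{-1}^{1} \operatorname{sgn}(x)\, d x=F(1)-F(-1)=1-1=0$$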
|
2022-09-24 18:48:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9665566682815552, "perplexity": 190.59882843259297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00428.warc.gz"}
|
https://imagej.github.io/ImgLibProcessor
|
# ImgLibProcessor
The following article describes a method of ImageJ1/ImageJ2 integration we explored in 2010, revolving around an ij.process.ImageProcessor extension called ImgLibProcessor which would enable additional transparent usage of ImgLib2 from within ImageJ1, thus greatly expanding the available pixel types and storage strategies. However, after discussion with Wayne Rasband, we settled on a different method of backwards compatibility known as ImageJ Legacy. The text below is preserved only for historical reasons.
## Design
As much as possible ImgLibProcessor utilizes operations to implement its functionality. Operation is not a class here but just a concept. If you imagine a processor class the operations are really the methods that act upon the processor’s data and live as separate classes rather than within the processor class methods. This was done to reduce the complexity of ImgLibProcessor. Originally there was a motivation of having operations that could be chained together. (Not sure how well this motivation was realized)
An operation ties together the concept of iteration and action on data over a user specified region. As an example of a typical operation I’ll discuss imagej.process.operation.SingleCursorRoiOperation. This abstract class is responsible for managing the iteration over a single imglib Image. Here is its main iteration loop:
/** runs the operation. does the iteration and calls subclass methods as appropriate */
public void execute()
{
if (this.observer != null)
observer.init();
final LocalizableByDimCursor<T> imageCursor =
this.image.createLocalizableByDimCursor();
final RegionOfInterestCursor<T> imageRoiCursor =
new RegionOfInterestCursor<T>( imageCursor, this.origin, this.span );
beforeIteration(imageRoiCursor.getType());
//iterate over all the pixels, of the selected image plane
for (T sample : imageRoiCursor)
{
// note that the include() method call below passes null as position. This
// operation is not positionally aware for efficiency. Use a positional
// operation in the imagej.process.operation package if needed.
if ((this.selector == null) ||
(this.selector.include(null, sample.getRealDouble())))
insideIteration(sample);
if (this.observer != null)
observer.update();
}
afterIteration();
imageRoiCursor.close();
imageCursor.close();
if (this.observer != null)
observer.done();
}
The key ideas include:
• The iteration is encapsulated in this class. Subclasses of this class implement beforeIteration(), insideIteration(), and afterIteration(). (I’ll discuss inheritance later)
• The iteration can be constrained by a function that uses the current value and position of the sample pointed to by the iterator to determine whether to call insideIteration().
• The iteration can be observed by other classes
As an example we’ll look at imagej.process.operation.MinMaxOperation. Here is its entire implementation:
public class MinMaxOperation<T extends RealType<T>> extends SingleCursorRoiOperation<T>
{
private double min, max, negInfinity, posInfinity;
public MinMaxOperation(Image<T> image, int[] origin, int[] span)
{
super(image,origin,span);
}
public double getMax() { return this.max; }
public double getMin() { return this.min; }
@Override
protected void beforeIteration(RealType<T> type)
{
this.min = type.getMaxValue();
this.max = type.getMinValue();
// CTR: HACK: Workaround for compiler issue with instanceof and generics.
//if (type instanceof FloatType)
if (FloatType.class.isAssignableFrom(type.getClass()))
{
this.posInfinity = Float.POSITIVE_INFINITY;
this.negInfinity = Float.NEGATIVE_INFINITY;
}
else
{
this.posInfinity = Double.POSITIVE_INFINITY;
this.negInfinity = Double.NEGATIVE_INFINITY;
}
}
@Override
protected void insideIteration(RealType<T> sample)
{
double value = sample.getRealDouble();
if (value >= this.posInfinity) return;
if (value <= this.negInfinity) return;
if ( value > this.max )
this.max = value;
if ( value < this.min )
this.min = value;
}
@Override
protected void afterIteration()
{
}
}
You can see that a MinMaxOperation is pretty simple. It finds the current min and max values of the iterated values. Defining operations in this way is simple and they end up being well encapsulated. However, striving for composition over inheritance I’ve tried to minimize the number of operation classes.
Using composition to enhance the capabilities of the various operations becomes possible with the definition of SelectionFunctions. Recall from the execute() method of SingleCursorRoiOperation that you can constrain which samples will have further processing done on them using a selector. The selector in the loop is a SelectionFunction. Its signature is defined in imagej.selection.SelectionFunction:
public interface SelectionFunction
{
boolean include(int[] position, double sample);
}
You can define any function you like that discriminates samples based upon their value and position within their parent Image. SelectionFunctions are then attached to an operation via operation.setSelector(selectionFunction).
Note that SelectionFunction is not in the imagej.process package. I view the ability to discriminate samples based upon their value and position as a broad need in ImageJ. I can see how this would be useful in the support of Rois. There is currently code for composing composite selection functions in the imagej.selection package. So arbitrarily complex selections can be made.
Once we define selection functions we can utilize them with operations in powerful ways. For example define this selection function:
class Selector implements SelectionFunction
{
    // concrete values standing in for "range I desire" and the ellipse parameters
    private double minValue = 10, maxValue = 200;        // sample value range of interest
    private double cx = 50, cy = 50;                      // ellipse center (x, y)
    private double a = 30, b = 15, theta = Math.PI / 6;   // semi-axes and rotation angle

    public boolean include(int[] position, double sample)
    {
        if (sample < minValue || sample > maxValue)
            return false;
        // rotate the point into the ellipse's own frame and test the ellipse equation
        double dx = position[0] - cx, dy = position[1] - cy;
        double px =  dx * Math.cos(theta) + dy * Math.sin(theta);
        double py = -dx * Math.sin(theta) + dy * Math.cos(theta);
        return (px * px) / (a * a) + (py * py) / (b * b) <= 1.0;
    }
}
Create a MinMaxOperation and attach this selection function. When you run operation.execute() you can get the min and max values found within your selection criteria.
There are operations that change the underlying data as well. Imagej.process.operation.UnaryTransformOperation is one example. It will change an underlying Image’s data by replacing the data with the computation of a function using the current Image as input. The function is defined as a UnaryFunction and is passed in to the UnaryTransformOperation. Imagej.process.function.unary.UnaryFunction looks like this:
public interface UnaryFunction {
double compute(double input);
}
An example of a UnaryFunction would be a sqr() function whose compute() method would return the square of its input. To square the values of an image you would create a UnaryTransformOperation passing it a sqr() UnaryFunction and then run operation.execute(). It is important to note that a UnaryFunction can be arbitrarily complex with its own sets of parameters provided it relies on one input value from an Image.
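For instance, the sqr() function just mentioned could be written as follows; the class name is made up for illustration, and only the UnaryFunction interface and its package come from the text above.

```java
import imagej.process.function.unary.UnaryFunction;

/** hypothetical example: a UnaryFunction that squares each sample value */
public class SqrUnaryFunction implements UnaryFunction
{
    public double compute(double input)
    {
        return input * input; // replace every sample with its square
    }
}
```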
There are operations defined that do not change data. Imagej.process.operation.QueryOperation is such an operation. A QueryOperation applies an InfoCollector function to the user specified data. Imagej.process.query.InfoCollector looks like this:
/** the InfoCollector interface is used to define queries that can be passed to an
* imagej.process.operation.QueryOperation.*/
public interface InfoCollector
{
/** this method called before the actual query takes place allowing the InfoCollector to initialize itself */
void init();
/** this method is called at each position of the original dataset allowing data to be collected */
void collectInfo(int[] position, double value);
/** this method is called when the query is done allowing cleanup and tabulation of results */
void done();
}
When one uses a QueryOperation one can then collect information in whatever way desired.
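As an illustration, a query that counts samples above a threshold and remembers where the largest value occurred might look like the sketch below; the class name and accessors are hypothetical, and only the InfoCollector interface and its package come from the text above.

```java
import imagej.process.query.InfoCollector;

/** hypothetical example: an InfoCollector that counts samples above a threshold
 *  and remembers the position of the largest value seen */
public class AboveThresholdCollector implements InfoCollector
{
    private final double threshold;
    private long count;
    private double maxValue;
    private int[] maxPosition;

    public AboveThresholdCollector(double threshold)
    {
        this.threshold = threshold;
    }

    public void init()
    {
        this.count = 0;
        this.maxValue = Double.NEGATIVE_INFINITY;
        this.maxPosition = null;
    }

    public void collectInfo(int[] position, double value)
    {
        if (value > this.threshold)
            this.count++;
        if (value > this.maxValue)
        {
            this.maxValue = value;
            this.maxPosition = position.clone();
        }
    }

    public void done()
    {
        // nothing to clean up; results are read via the accessors below
    }

    public long getCount() { return this.count; }
    public double getMaxValue() { return this.maxValue; }
    public int[] getMaxPosition() { return this.maxPosition; }
}
```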
Note that all operations (transforms, queries, etc.) can be modified to only work on a user defined region and then further constrained by value and position selection functions. Any function a user can define can be applied.
Operations are not limited to one dataset. There are operation classes defined that work with various combinations of synchronized datasets (1, 2, and N).
These concepts are utilized throughout the implementation of ImgLibProcessor:
A simple unary transform operation -
public void abs()
{
AbsUnaryFunction function = new AbsUnaryFunction();
nonPositionalTransform(function); // a private method that does quickest SingRoiOp
}
A more complex example that uses a selection function -
/** fills the current ROI area of the current plane of data with the fill color wherever the input mask is nonzero */
@Override
public void fill(ImageProcessor mask)  // signature assumed from the overridden IJ1 fill(mask) method
{
    if (mask == null)
    {
        fill();
        return;
    }
int[] origin = originOfRoi();
int[] span = spanOfRoiPlane();
FillUnaryFunction fillFunction = new FillUnaryFunction(this.fillColor);
UnaryTransformPositionalOperation<T> transform =
new UnaryTransformPositionalOperation<T>(this.imageData, origin, span,
fillFunction);
transform.setSelectionFunction(selector);
transform.execute();
}
A two Image operation that uses a SelectionFunction -
/** sets the current ROI area data to that stored in the snapshot wherever the mask is nonzero */
@Override
public void reset(ImageProcessor mask)  // signature assumed from the overridden IJ1 reset(mask) method
{
    if (mask == null || this.snapshot == null)  // guard assumed: skip when there is no mask or no snapshot
        return;
Rectangle roi = getRoi();
Image<T> snapData = this.snapshot.getStorage();
int[] snapOrigin = Index.create(roi.x, roi.y,
new int[snapData.getNumDimensions()-2]);
int[] snapSpan = Span.singlePlane(roi.width, roi.height,
snapData.getNumDimensions());
int[] imageOrigin = originOfRoi();
int[] imageSpan = spanOfRoiPlane();
CopyInput2BinaryFunction copyFunction = new CopyInput2BinaryFunction();
BinaryTransformPositionalOperation<T> resetOp =
new BinaryTransformPositionalOperation<T>(this.imageData, imageOrigin,
imageSpan, snapData, snapOrigin, snapSpan, copyFunction);
resetOp.execute();
if (!this.isUnsignedByte)
{
this.min = this.snapshotMin;
this.max = this.snapshotMax;
}
}
ImgLibProcessor also exposes a functional API for further use. Specifically the various assign() and transform() methods allows one to change an ImgLibProcessor’s Image data passing functions as needed.
This can all be tied together in a plugin demo. The following code works on a float image whose values range between 0 and 1. When the plugin is run the data of the current window image is transformed. Someone who knows more of what a user would really like to do on an image can extend this as desired.
import ij.IJ;
import ij.ImagePlus;
import ij.WindowManager;
import ij.plugin.PlugIn;
import imagej.function.UnaryFunction;
import imagej.ij1bridge.process.ImgLibProcessor;
import imagej.selection.SelectionFunction;
import java.util.Random;
public class FunctionalPlugin implements PlugIn {
private class MyFunction implements UnaryFunction
{
Random rng = new Random();
public double compute(double value)
{
return rng.nextDouble();
}
}
private class MySelector implements SelectionFunction
{
public boolean include(int[] position, double sample)
{
if (sample < 0.2) return false;
if (sample > 0.8) return false;
if (position[0] % 3 != 0) return false;
if (position[1] % 2 != 0) return false;
return true;
}
}
public void run(String arg) {
ImagePlus imp = WindowManager.getCurrentImage();
ImgLibProcessor<?> proc = (ImgLibProcessor<?>)imp.getProcessor();
MyFunction function = new MyFunction();
MySelector selector = new MySelector();
proc.transform(function, selector);
imp.updateAndDraw();
}
}
## Miscellaneous notes
• if desired we can likely eliminate inheritance from the operations (in SingleCursorRoiOperation for example) by passing in a class that implements an interface with before(), inside(), and after() methods; a sketch of such an interface follows this list. This is similar to Observer and InfoCollector and we may be able to do some simplification here
• there are a number of different operations based upon how you are iterating and how many datasets you are simultaneously working with. There is also the built in limitation that iterators are synchronized. I have written proof of concept code to generalize iteration, allowing composition of iterators into either synchronized or nested iterators, eliminating the split between Unary/Binary/NAry functions, etc. Unfinished/untested but close to working.
• We may want to break out SelectionFunction into ValueFunction and PositionFunction. Need to think about more
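One possible shape for the interface mentioned in the first note might be the following; the interface and method names are hypothetical, chosen only to mirror the beforeIteration()/insideIteration()/afterIteration() hooks described earlier.

```java
/** hypothetical sketch: the iteration hooks become an interface that is passed
 *  to an operation by composition instead of being supplied through inheritance */
public interface IterationHooks<T>
{
    /** called once before the iteration starts */
    void before(T firstSample);

    /** called for each sample visited by the operation */
    void inside(T sample);

    /** called once after the iteration completes */
    void after();
}
```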
# Required changes to IJ1 to accommodate ImgLibProcessor
This document describes changes required to ImageJ 1.x source code that will facilitate correct behavior when passed ImgLib-backed data. It is divided into 5 sections.
Section 1 outlines changes we’ve already made to our local copy of IJ 1.44l9 source code. These changes can be integrated into baseline ImageJ as needed.
Section 2 outlines further changes needed to fully support the new Image type ImagePlus.OTHER.
Section 3 outlines further changes needed to compatibly support a new processor type.
Section 4 outlines further changes needed related to case logic switching on ImagePlus::getBitDepth().
Section 5 contains miscellaneous notes
## CHANGES ALREADY MADE to allow ImgLib data to be correctly updated by IJ1
Reflects source code changes as of 12-17-10
**Package ij:**
ImagePlus
Added another image type : ImagePlus.OTHER
Updated getBitDepth() to calc bits per pixel for OTHER type images
Updated getBytesPerPixel() to calc number of bytes per pixel for OTHER type images
Added double getActualBytesPerPixel() to support non-byte-aligned pixel types
Updated setType() to allow OTHER type
Updated getFileInfo() to populate self when dealing with OTHER type images
Updated copy(boolean cut) to use getActualBytesPerPixel() in data byte use calculations
Updated getPixel() to encode pixel data for OTHER type images
**Package ij.gui:**
ImageCanvas
Updated setDrawingColor() to have a subcase for OTHER type images
ImageWindow
Updated createSubtitle() to calc bit depth and image size from ImagePlus rather than by type
Wand
Change code to not use primitive array access for obtaining pixel values. To do so needed to make
minor changes to constructor, minor change to autoOutline(), and rewrote getPixel().
**Package ij.io:**
FileInfo
Modified getBytesPerPixel() to support GRAY64_SIGNED and GRAY12_UNSIGNED
Modified getType() to return values for GRAY64_SIGNED and GRAY12_UNSIGNED
ImportDialog
Added “12-bit Unsigned” to static class variable “types”.
Updated getFileInfo() to identify GRAY12_UNSIGNED type files
**Package ij.measure:**
Calibration
Added a method called isSameAs(Calibration other). We rely on this for numerous tests.
**Package ij.plugin:**
FolderOpener
Made minor change to the run() method to support OTHER type images
Modified setStackInfo() to use new bytesPerPixel calculation methods
ListVirtualStack
Updated showDialog() to use new bytesPerPixel calculation methods
**Package ij.plugin.filter:**
ImageMath
Many small edits to use setf()/getf() rather than direct float[] access. Also rather than instanceof
FloatProcessor use ip.isFloatingType().
Modify applyMacro case logic to test instanceof SomeProcessor rather than using getBitDepth()
ParticleAnalyzer
Moved away from direct primitive array access for pixel values and rather use getf()/etc. as needed.
There are some places tagged with "WAYNE PLEASE CHECK" for further review
Changed setThresholdLevels() to identify images of OTHER type and also set fillColor correctly
Changed getStatistics() to delegate to ip.getStatistics() rather than checking image type
PluginFilterRunner
Updated checkImagePlus() to have a switch case for images of type OTHER
**Package ij.plugin.frame:**
Minor edit of setupNewImage() case logic to support OTHER type images
Minor edit of reset() case logic to support OTHER type images
Update the calculation of decimal places to display for OTHER type images in setMinAndMax()
Update the calculation of decimal places to display for OTHER type images in setWindowLevel()
**Package ij.process:**
ImageProcessor
Changed visibility of showProgress to public. We have a ProgressTracker class in IJ2 that updates an ip’s progress indicator.
Changed visibility of getBilinearInterpolatedPixel() to public
Changed visibility of resetPixels() to protected
Changed visibility of create8BitImage() to protected
Added abstract methods for all processors to support:
int getBitDepth();
double getBytesPerPixel();
ImageStatistics getStatistics(int mOptions, Calibration cal);
boolean isFloatingType();
boolean isUnsignedType();
double getMinimumAllowedValue();
double getMaximumAllowedValue();
String getTypeName();
double getd(int x, int y);
double getd(int index);
Added a couple set/get methods so our new ImageProcessor type can manipulate instance variables as needed
protected boolean getSnapshotCopyMode()
public int getFgColor()
public void setFgColor()
public Color getDrawingColor()
Added a method that is only called on processors of OTHER type by ImagePlus::getPixel()
public void encodePixelInfo(int[] destination, int x, int y)
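To make the new abstract methods concrete, here is a sketch of plausible implementations for an unsigned 8-bit processor. It is written as a standalone wrapper (so it compiles against stock IJ1) rather than as the actual ByteProcessor edits, and the bodies are guesses:

```java
import ij.process.ByteProcessor;

// Sketch: what some of the new abstract methods might look like for 8-bit data.
// In the real patch these bodies live inside ByteProcessor itself.
public class ByteProcessorInfo {
    private final ByteProcessor bp;

    public ByteProcessorInfo(ByteProcessor bp) { this.bp = bp; }

    public int getBitDepth()               { return 8; }
    public double getBytesPerPixel()       { return 1.0; }
    public boolean isFloatingType()        { return false; }
    public boolean isUnsignedType()        { return true; }
    public double getMinimumAllowedValue() { return 0; }
    public double getMaximumAllowedValue() { return 255; }
    public String getTypeName()            { return "8-bit unsigned"; }

    // getd(): double-valued reads that simply widen the existing int accessors
    public double getd(int x, int y)       { return bp.get(x, y); }
    public double getd(int index)          { return bp.get(index); }
}
```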
ByteProcessor
Implemented the new abstract methods declared in ImageProcessor
ColorProcessor
Implemented the new abstract methods declared in ImageProcessor
FloatProcessor
Implemented the new abstract methods declared in ImageProcessor
ShortProcessor
Implemented the new abstract methods declared in ImageProcessor
ImageStatistics
Made a few methods with package access into protected methods
calculateStdDev(), setup(), fitEllipse(), calculateMedian()
Changed getStatistics() to delegate to the passed-in ImageProcessor’s getStatistics() method rather than
switching on processor type and hatching a type-appropriate ImageStatistics
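The delegation idea can be pictured as follows; the measurement options chosen here are arbitrary, and ip.getStatistics(int, Calibration) is the new abstract method listed above (so this compiles only against the patched sources):

```java
import ij.measure.Calibration;
import ij.measure.Measurements;
import ij.process.ImageProcessor;
import ij.process.ImageStatistics;

// Sketch: instead of `if (ip instanceof ByteProcessor) ...` hatching a
// type-specific statistics object, ask the processor itself.
public class StatsDelegation {
    public static ImageStatistics stats(ImageProcessor ip, Calibration cal) {
        int mOptions = Measurements.MIN_MAX | Measurements.MEAN;
        return ip.getStatistics(mOptions, cal);   // new abstract ImageProcessor method
    }
}
```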
TypeConverter
Added support for OTHER image types with new package level access methods:
ByteProcessor convertOtherToByte()
ShortProcessor convertOtherToShort()
FloatProcessor convertOtherToFloat().
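One way such a conversion could work, assuming the only safe way to read an OTHER-type processor is through the new getd() accessor and that values are rescaled linearly from the current display range into 0–255 (the real package-level methods may do something different):

```java
import ij.process.ByteProcessor;
import ij.process.ImageProcessor;

// Sketch of an OTHER -> byte conversion: read every pixel as a double via getd()
// and map the current min..max range onto 0..255.
public class OtherToByte {
    public static ByteProcessor convert(ImageProcessor ip) {
        int w = ip.getWidth(), h = ip.getHeight();
        ByteProcessor bp = new ByteProcessor(w, h);
        double min = ip.getMin(), max = ip.getMax();
        double scale = (max > min) ? 255.0 / (max - min) : 1.0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double v = (ip.getd(x, y) - min) * scale;   // getd() is a new accessor
                if (v < 0) v = 0;
                if (v > 255) v = 255;
                bp.set(x, y, (int) (v + 0.5));
            }
        }
        return bp;
    }
}
```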
## PLACES WHERE USES OF ImagePlus::getType() NEED UPDATING
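Most entries in this section boil down to the same edit: wherever code switches on imp.getType(), a case (or default) for the new ImagePlus.OTHER constant has to be added. Schematically (a sketch, not actual IJ1 code; getTypeName() is the new processor method described above):

```java
import ij.ImagePlus;

// The recurring pattern: extend getType() case logic so OTHER images are handled
// (often like 32-bit float, sometimes with their own subcase).
public class TypeSwitchPattern {
    public static String describe(ImagePlus imp) {
        switch (imp.getType()) {
            case ImagePlus.GRAY8:     return "8-bit";
            case ImagePlus.GRAY16:    return "16-bit";
            case ImagePlus.GRAY32:    return "32-bit float";
            case ImagePlus.COLOR_RGB: return "RGB";
            case ImagePlus.OTHER:     return imp.getProcessor().getTypeName(); // new case
            default:                  return "unknown";
        }
    }
}
```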
• ij.gui.Roi - showStatus() – the number of decimal places displayed would be incorrect for some ImgLib types without a simple fix.
• ij.io.FileOpener - setCalibration() minor change needed to make sure min and max set correctly for the processor.
• ij.io.FileSaver – saveAsJpeg() and getDescriptionString() need minor case logic changes. Should check that the various saveAsXXX() plugins work for ImgLibProcessor-backed types.
• ij.macro.Functions - setPixel() and getPixel() - need minor changes to case logic to support OTHER type
• ij.measure.Calibration - setImage() needs minor case logic change to support OTHER type
• ij.plugin.filter.Calibrator - run(), calibrate(), and doCurveFitting() - minor changes to case logic needed to support OTHER type
• ij.plugin.filter.Filters - setup() has minor case logic change needed to support OTHER type
• ij.plugin.filter.Info - getInfo() needs a subcase for ImagePlus::OTHER. Small localized change.
• ij.plugin.filter.RankFilters - showDialog() needs minor case logic change for setting number of decimal places if image is a float type
• ij.plugin.frame.ContrastAdjuster – updateLabels(), plotHistogram(), and possibly apply() need small case logic adjustments
• ij.plugin.filter.ThresholdAdjuster - setup() - minor change to determine not an 8 bit image
• ij.plugin.Concatenator needs more thorough type checking to support OTHER type images. As it stands now it is possible to try to concatenate two images of type OTHER that have totally different pixel formats. Also, it cannot concatenate an ImagePlus::GRAY16 and an ImagePlus::OTHER whose backing data is 16-bit.
• ij.plugin.GelAnalyzer - plotLanes() has one line that needs to be changed to support OTHER types
• ij.plugin.RGBStackConverter – run() needs some nontrivial changes to support OTHER types
• ij.plugin.Slicer – run() needs a float check rather than GRAY32. Simple fix.
• ij.plugin.StackCombiner has issues similar to Concatenator.
• ij.plugin.StackInserter has issues similar to Concatenator
• ij.plugin.Thresholder – applyThreshold() needs minor change to support OTHER types
• ij.plugin.XYCoordinates – run() tests GRAY32 rather than isFloat(). Simple to fix.
• ij.process.ImageConverter needs a good amount of work to support OTHER types
• ij.process.StackConverter needs a good amount of work to support OTHER types
• ij.IJ – doWand() needs a minor change (from a GRAY32 test to an isFloatingType() test)
## PLACES WHERE USES OF instanceof SomeProcessor NEED UPDATING
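The items below mostly share one fix: a type-identity test such as `ip instanceof FloatProcessor` only matches the concrete IJ1 class, while an ImgLib-backed float processor needs the capability test instead. Roughly:

```java
import ij.process.FloatProcessor;
import ij.process.ImageProcessor;

// Sketch of the recurring replacement in this section.
public class FloatCheck {
    public static boolean isFloatOld(ImageProcessor ip) {
        return ip instanceof FloatProcessor;   // misses ImgLibProcessor-backed float data
    }
    public static boolean isFloatNew(ImageProcessor ip) {
        return ip.isFloatingType();            // new abstract method, covers both
    }
}
```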
• ij.io.TextEncoder – minor change needed (use !ip.isFloatingType()) to support OTHER types
• ij.macro.Functions – getStatistics() assumes you only have 8 & 16 bit images/histograms. Needs some reworking to support OTHER types.
• ij.plugin.filter.BackgroundSubtracter – needs some substantial work to be extended to support processors of OTHER type
• ij.plugin.Convolver – various methods make assumptions about which kinds of processors can exist. Also seems to rely on FloatProcessor. Needs some nontrivial work to support OTHER types
• ij.plugin.filter.ImageMath – in run() method there is an unsafe check for signed data. Simple fix. There are also a couple unsafe checks for floating point data. Again a simple fix.
• ij.plugin.filter.MaximumFinder – simple fix needed for determining whether data is floating type
• ij.plugin.filter.ParticleAnalyzer – makes unsafe assumptions. I have mostly updated it already. Wayne may need to make bigger changes. Will talk to Wayne about this one.
• ij.plugin.filter.PluginFilterRunner – tests versus FloatProcessor. May not need any changes. May just work but may be inefficient for ImgLibProcessors of float type. Study more.
• ij.plugin.frame.ContrastAdjuster – makes some type assumptions. I think I’ve already fixed this in _ij1-patches.
• ij.plugin.frame.ThresholdAdjuster – updateLabels() does a test on ShortProcessor. May need to be fixed. doSet() needs to replace instanceof FloatProcessor with ip.isFloatingType(). setHistogram() needs to replace instanceof FloatProcessor with ip.isFloatingType().
• ij.plugin.ContrastEnhancer – makes a few type assumptions. May need some larger rework.
• ij.plugin.FITS_Writer – setup of header relies on either Short or Float. May need to ask what is needed for OTHER types. writeData() only does float and short. Will not support OTHER type images backed by floats and shorts.
• ij.plugin.OrthogonalViews – makes many assumptions on processor type. Needs nontrivial updates to support OTHER types.
• ij.plugin.ZProjector – makes assumptions that only the current processors will ever exist. Needs nontrivial updating to work.
• ij.process.FloodFiller – simple change needed to constructor to use ip.isFloatingType()
• ij.process.ImageStatistics – I think I already made all changes needed and in _ij1-patches
• ij.process.TypeConverter – I think I already made all changes needed and in _ij1-patches
• ij.ImagePlus - I think I already made all changes needed and in _ij1-patches
## PLACES WHERE USES OF ImagePlus::getBitDepth() NEED UPDATING
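Again, most entries below are the same two small edits: stop treating a particular bit depth as a proxy for the processor class (ask getType() or the processor itself), and use the new byte-count methods for size arithmetic. A sketch of both halves (isFloatingType() and getActualBytesPerPixel() are the new methods described in this document):

```java
import ij.ImagePlus;

// Sketch of the two recurring replacements in this section.
public class BitDepthFixes {

    // (1) Don't infer "float image" from getBitDepth() == 32; ask the processor.
    public static boolean isFloatImage(ImagePlus imp) {
        return imp.getProcessor().isFloatingType();
    }

    // (2) Don't compute memory use from getBitDepth()/8; the new accessor also
    //     copes with non-byte-aligned types such as 12-bit data.
    public static double approxBytes(ImagePlus imp) {
        return (double) imp.getWidth() * imp.getHeight()
                * imp.getStackSize() * imp.getActualBytesPerPixel();
    }
}
```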
• ij.io.FileSaver – okForFits() should test imp.getType() rather than imp.getBitDepth(). Simple.
• ij.io.ImportDialog – rather than testing bitDepth() it should test that getType() is not Byte or Color. Simple.
• ij.macro.Functions – setColor() needs minor case logic change for 16-bit signed data to avoid throwing an exception unnecessarily. getHistogram(), setLut() and setMinAndMax() should test getType() and not getBitDepth(). Simple.
• ij.plugin.filter.FFTCustomFilter – doInverseTransform() makes some assumptions about bit depth implying certain types of processors. Needs a closer look.
• ij.plugin.filter.FFTFilter – filter() makes some assumptions about bit depth implying certain types of processors. Needs a closer look.
• ij.plugin.filter.ImageMath – the “div” case of run() assumes 32-bit implies floating-type data. Simple fix. applyMacro() and showDialog() should test versus imp.getType() rather than getBitDepth(). Simple.
• ij.plugin.filter.Info – getInfo() should test ip.isFloatingType() rather than imp.getBitDepth(). Simple.
• ij.plugin.filter.ParticleAnalyzer – setup() tests bit depth when it should test imp.getType(). Simple.
• ij.plugin.filter.RankFilters - run() tests bit depth when it should test imp.getType(). Simple.
• ij.plugin.filter.RGBStackSplitter - setup() tests bit depth when it should test imp.getType(). Simple.
• ij.plugin.filter.Rotator – uses bitDepth when it should use getType() in a few places. Simple fixes.
• ij.plugin.frame.Channels – itemStateChanged() assumes 24-bit implies Color. Simple fix.
• ij.plugin.frame.ColorThresholder – sample(), checkImage(), windowActivated(), RGBTpLab(), and RGBToYUV() test bit depth when they should test imp.getType(). Simple.
• ij.plugin.frame.ContrastAdjuster – setMinAndMax() and setWindowLevel() assume 32-bit is Float. Simple fixes to use isFloatingType().
• ij.plugin.frame.ThresholdAdjuster – constructor should test getType(). doSet() and apply() assume 32-bit is float. Use isFloatingType() instead. Simple.
• ij.plugin.AVI_Reader – run() has some very minor special case logic for 16-bit. Not sure why. Will need to investigate further.
• ij.plugin.BatchConverter – run() uses getBitDepth() when it should use getType(). Simple.
• ij.plugin.BatchProcessor – processVirtualStack() and processFolder() use getBitDepth() when they should use getType(). Simple.
• ij.plugin.BMP_Writer - writeImage() uses getBitDepth() when it should use getType(). Simple.
• ij.plugin.CompositeConverter - run() uses getBitDepth() when it should use getType(). Simple.
• ij.plugin.ContrastEnhancer – some issues; investigate further.
• ij.plugin.FFT - doInverseTransform() uses getBitDepth() when it should use getType(). Still a bit more broken as it uses fht.bitDepth which is copied from elsewhere. We want to remove reliance on bit depth determining what kind of processor we have.
• ij.plugin.FITS_Writer – run() uses bitDepth when it could use getType(). This method documented problems with instanceof. Method requires closer inspection.
• ij.plugin.FolderOpener – run() relies on bitDepth numerous times. Need to investigate further.
• ij.plugin.Histogram – run() relies on bitdepth when it should use getType(). Simple to fix.
• ij.plugin.HyperStackConverter – convertStackToHS() relies on bitdepth when it should use getType(). Simple to fix.
• ij.plugin.HyperStackReducer – will not work for images of type OTHER as it relies on IJ.createImage() which only knows the few predefined image types. Might need a way to override the IJ that is in place so we can hook in our own createImage(). Also relies on ImagePlus::createHyperStack() which also has limited bit depth support. Otherwise bitDepth use is fine for this class.
• ij.plugin.ImageCalculator – calculate() relies on bit depth rather than getProcessor().isFloatingType(). Simple to fix.
• ij.plugin.ImagesToStack – a lot of reliance on bit depth. As is, it won’t work with OTHER type images. Look at this more closely.
• ij.plugin.ListVirtualStack – a lot of reliance on bit depth. Assumes only a few processor types exist. Needs to create processors. We might need Wayne to create a processor factory that we can override. Look at this more closely.
• ij.plugin.LUT_Editor – run() uses bit depth when it can be pretty simply avoided.
• ij.plugin.PNM_Writer – run() uses bit depth when it can be pretty simply avoided.
• ij.plugin.Resizer – zScale() uses bit depth unnecessarily. Simple. resizeZ() and zScaleHyperStack() both use bit depth to call IJ.createImage(). So again we’ll need to override somehow.
• ij.plugin.RGBStackMerge – mergeStacks() and mergeHyperStacks() need some bit depth access. But they also assume 24-bit is RGB. Simple to remove this assumption.
• ij.plugin.Scaler – showDialog() uses bit depth when it could use getType(). Simple.
• ij.plugin.Slicer – resliceHyperStack() calls createHyperStack() with bit depth. Need an override. createOutputStack() calls NewImage.createImage() with bit depth. Again an override is needed. getOutputStackSize() uses bitDepth to calculate data use sizes. Can use the new byte calc methods.
• ij.plugin.Straightener – straighten(), straightenLine(), and rotateLine() all assume 24 bit == RGB. Simple to fix.
• ij.plugin.SurfacePlotter – drawAndLabelAxis() uses bitDepth where it could use getType(). Simple.
• ij.plugin.Thresholder – convertStackToBinary() uses bitDepth where it could use getType(). Simple.
• ij.plugin.WandToolsOptions - run() uses bitDepth where it could use getType(). Simple.
• ij.plugin.XYCoordinates - run() uses bitDepth where it could use getType(). Simple.
• ij.plugin.ZProjector - doHyperStackProjection() uses bitDepth where it could use getType(). Simple.
• ij.process.StackStatistics – constructor and doCalculations() rely on bit depth. The 24 bit stuff can use getType instead. But the 8 & 16 cases may be fine. Investigate further.
• ij.CompositeImage – constructor assumes 24-bit == RGB. Simple fix.
• ij.ImagePlus – revert() and show() use bit depth, but only in 8-bit cases, so they may be safe; still, best to replace. Simple fix.
• ij.VirtualStack – getProcessor() switches on bit depth and can only support 8, 16, 24, and 32. This is probably okay from the looks of it.
## MISCELLANEOUS NOTES
Additional methods desired in ImageProcessor and subclasses:
• double support: setting via setd() by (x,y) or by index
• long support: getting/setting via getl()/setl() by (x,y) or by index
Further changes:
• replace use of ImagePlus::getBytesPerPixel() with ImagePlus::getActualBytesPerPixel() where needed
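For concreteness, the accessor additions wished for above would presumably look like the following declarations on ImageProcessor; the signatures are a guess modeled on the existing getf()/setf() and new getd() pattern, not part of the patched sources yet:

```java
// Proposed additions to ij.process.ImageProcessor (sketch only).
public abstract class ProposedAccessors {
    // double support: setting counterpart to the new getd()
    public abstract void setd(int x, int y, double value);
    public abstract void setd(int index, double value);

    // long support: getting/setting for 64-bit integer backed data
    public abstract long getl(int x, int y);
    public abstract long getl(int index);
    public abstract void setl(int x, int y, long value);
    public abstract void setl(int index, long value);
}
```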
http://eprints.adm.unipi.it/1472/
# Field theoretical approach to the study of theta dependence in Yang-Mills theories on the lattice
D'Elia, Massimo (2003) Field theoretical approach to the study of theta dependence in Yang-Mills theories on the lattice. Nuclear Physics B - Proceedings Supplements, 661, pp. 139-152. ISSN 1873-3832
Full text not available from this repository.
## Abstract
We discuss the extension of the field theoretical approach, already used in the lattice determination of the topological susceptibility, to the computation of further terms in the expansion of the ground state energy $F(\theta)$ around $\theta = 0$ in SU(N) Yang-Mills theories. In particular we determine the fourth order term in the expansion for SU(3) pure gauge theory and compare our results with previous cooling determinations. In the last part of the paper we make some considerations about the nature of the ultraviolet fluctuations responsible for the renormalization of the lattice topological charge correlation functions; in particular we propose and test an ansatz which leads to improved estimates of the fourth and higher order terms in the expansion of $F(\theta)$.
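For orientation, the expansion referred to here is conventionally parametrized as $$F(\theta) - F(0) = \frac{1}{2}\,\chi\,\theta^{2}\bigl(1 + b_2\,\theta^{2} + b_4\,\theta^{4} + \cdots\bigr),$$ where $\chi$ is the topological susceptibility and $b_2$ is the dimensionless fourth-order coefficient; the precise normalization used in the paper may differ from this common convention.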
Item Type: Article (imported from arXiv)
Area 02 – Physical sciences > FIS/02 – Theoretical physics, mathematical models and methods
Departments (from 2013) > DIPARTIMENTO DI FISICA
dott.ssa Sandra Faita
29 Jan 2014 23:00
29 Jan 2014 23:00
http://eprints.adm.unipi.it/id/eprint/1472
https://zbmath.org/serials/?q=se%3A492
## Journal of Functional Analysis
Short Title: J. Funct. Anal.
Publisher: Elsevier, Amsterdam
ISSN: 0022-1236
Online: https://www.sciencedirect.com/journal/journal-of-functional-analysis/issues
Comments: Indexed cover-to-cover
Documents Indexed: 8,860 Publications (since 1967)
References Indexed: 8,108 Publications with 196,018 References.
### Latest Issues
283, No. 11 (2022) 283, No. 10 (2022) 283, No. 9 (2022) 283, No. 8 (2022) 283, No. 7 (2022) 283, No. 6 (2022) 283, No. 5 (2022) 283, No. 4 (2022) 283, No. 3 (2022) 283, No. 2 (2022) 283, No. 1 (2022) 282, No. 12 (2022) 282, No. 11 (2022) 282, No. 10 (2022) 282, No. 9 (2022) 282, No. 8 (2022) 282, No. 7 (2022) 282, No. 6 (2022) 282, No. 5 (2022) 282, No. 4 (2022) 282, No. 3 (2022) 282, No. 2 (2022) 282, No. 1 (2022) 281, No. 12 (2021) 281, No. 11 (2021) 281, No. 10 (2021) 281, No. 9 (2021) 281, No. 8 (2021) 281, No. 7 (2021) 281, No. 6 (2021) 281, No. 5 (2021) 281, No. 4 (2021) 281, No. 3 (2021) 281, No. 2 (2021) 281, No. 1 (2021) 280, No. 12 (2021) 280, No. 11 (2021) 280, No. 10 (2021) 280, No. 9 (2021) 280, No. 8 (2021) 280, No. 7 (2021) 280, No. 6 (2021) 280, No. 5 (2021) 280, No. 4 (2021) 280, No. 3 (2021) 280, No. 2 (2021) 280, No. 1 (2021) 279, No. 12 (2020) 279, No. 11 (2020) 279, No. 10 (2020) 279, No. 9 (2020) 279, No. 8 (2020) 279, No. 7 (2020) 279, No. 6 (2020) 279, No. 5 (2020) 279, No. 4 (2020) 279, No. 3 (2020) 279, No. 2 (2020) 279, No. 1 (2020) 278, No. 12 (2020) 278, No. 11 (2020) 278, No. 10 (2020) 278, No. 9 (2020) 278, No. 8 (2020) 278, No. 7 (2020) 278, No. 6 (2020) 278, No. 5 (2020) 278, No. 4 (2020) 278, No. 3 (2020) 278, No. 2 (2020) 278, No. 1 (2020) 277, No. 12 (2019) 277, No. 11 (2019) 277, No. 10 (2019) 277, No. 9 (2019) 277, No. 8 (2019) 277, No. 7 (2019) 277, No. 6 (2019) 277, No. 5 (2019) 277, No. 4 (2019) 277, No. 3 (2019) 277, No. 2 (2019) 277, No. 1 (2019) 276, No. 12 (2019) 276, No. 11 (2019) 276, No. 10 (2019) 276, No. 9 (2019) 276, No. 8 (2019) 276, No. 7 (2019) 276, No. 6 (2019) 276, No. 5 (2019) 276, No. 4 (2019) 276, No. 3 (2019) 276, No. 2 (2019) 276, No. 1 (2019) 275, No. 12 (2018) 275, No. 11 (2018) 275, No. 10 (2018) 275, No. 9 (2018) 275, No. 8 (2018) ...and 688 more Volumes
### Authors
46 Albeverio, Sergio A. 45 Sukochev, Fedor Anatol’evich 41 Röckner, Michael 39 Simon, Barry 30 Xia, Jingbo 28 Jørgensen, Palle E. T. 23 Kondrat’yev, Yuriĭ Grygorovych 23 Strichartz, Robert S. 22 Raeburn, Iain 21 Wei, Juncheng 20 Lau, Anthony To-Ming 20 Malliavin, Paul 19 Gaveau, Bernard 19 Paulsen, Vern Ival 19 Popescu, Gelu 18 Dynkin, Evgeniĭ Borisovich 18 Kaniuth, Eberhard 17 Baggett, Lawrence Wasson 17 Cruzeiro, Ana-Bela 17 Smith, Roger R. 17 Størmer, Erling 17 Ustunel, Ali Suleyman 17 Zheng, Dechao 16 Bratteli, Ola 16 Brézis, Haïm 16 Davies, Edward Brian 16 Lin, Huaxin 16 Ørsted, Bent 16 Wang, Feng-Yu 16 Zanin, Dmitriy V. 15 Aida, Shigeki 15 Chen, Zhen-Qing 15 Curto, Raúl Enrique 15 Robinson, Derek W. 15 Segal, Irving Ezra 15 Stroock, Daniel W. 15 van den Berg, Michiel 15 Varopoulos, Nicholas Theodore 14 Bañuelos, Rodrigo 14 Damanik, David 14 Fang, Shizan 14 Foiaş, Ciprian Ilie 14 Hadwin, Donald W. 14 Kishimoto, Akitaka 14 Lin, Chang-Shou 14 Mastyło, Mieczysław 14 Pearcy, Carl Mark Jr. 14 Peller, Vladimir Vsevolodovich 14 Ricci, Fulvio 14 Ruan, Zhongjin 13 Alpay, Daniel Aron 13 Carey, Alan L. 13 Da Prato, Giuseppe 13 Davidson, Kenneth R. 13 Dym, Harry 13 Elliott, George A. 13 Guo, Kunyu 13 Haagerup, Uffe Valentin 13 Helton, John William 13 Pedersen, Gert Kjærgård 13 Popa, Sorin Teodor 13 Véron, Laurent 12 Bismut, Jean-Michel 12 Blecher, David P. 12 Gesztesy, Fritz 12 Han, Deguang 12 Helffer, Bernard 12 König, Hermann 12 Loy, Richard J. 12 Ludwig, Jean 12 Milman, Vitali D. 12 Ólafsson, Gestur 12 Penney, Richard C. 12 Power, Stephen Charles 12 Sims, Aidan 12 Vega, Luis 12 Zegarlinski, Boguslaw 11 Archbold, Robert J. 11 Batty, Charles J. K. 11 Brudnyi, Alexander 11 Crandall, Michael G. 11 Dadarlat, Marius 11 Douglas, Ronald George 11 Driver, Bruce K. 11 Duong, Xuan Thinh 11 Dykema, Kenneth J. 11 Gohberg, Israel 11 Guido, Daniele 11 Kappeler, Thomas 11 Larson, David Royal 11 Lions, Pierre-Louis 11 Mitrea, Marius 11 Musso, Monica 11 Nualart, David 11 Radjavi, Heydar 11 Rigoli, Marco 11 Rodríguez-Piazza, Luis 11 Shul’man, Viktor Semënovich 11 Teplyaev, Alexander 11 Turowska, Lyudmila B. ...and 7,623 more Authors
### Fields
3,167 Functional analysis (46-XX) 2,458 Operator theory (47-XX) 2,083 Partial differential equations (35-XX) 981 Probability theory and stochastic processes (60-XX) 859 Global analysis, analysis on manifolds (58-XX) 750 Topological groups, Lie groups (22-XX) 622 Abstract harmonic analysis (43-XX) 566 Harmonic analysis on Euclidean spaces (42-XX) 518 Quantum theory (81-XX) 417 Differential geometry (53-XX) 290 Several complex variables and analytic spaces (32-XX) 288 Functions of a complex variable (30-XX) 268 Measure and integration (28-XX) 226 Calculus of variations and optimal control; optimization (49-XX) 225 Dynamical systems and ergodic theory (37-XX) 222 Real functions (26-XX) 195 Ordinary differential equations (34-XX) 194 Statistical mechanics, structure of matter (82-XX) 189 Potential theory (31-XX) 156 Fluid mechanics (76-XX) 119 Convex and discrete geometry (52-XX) 118 Integral transforms, operational calculus (44-XX) 105 Nonassociative rings and algebras (17-XX) 105 Group theory and generalizations (20-XX) 90 Linear and multilinear algebra; matrix theory (15-XX) 81 Special functions (33-XX) 80 Number theory (11-XX) 73 $$K$$-theory (19-XX) 67 General topology (54-XX) 66 Approximations and expansions (41-XX) 66 Manifolds and cell complexes (57-XX) 59 Integral equations (45-XX) 56 Combinatorics (05-XX) 56 Systems theory; control (93-XX) 49 Difference and functional equations (39-XX) 45 Associative rings and algebras (16-XX) 39 Numerical analysis (65-XX) 35 Algebraic topology (55-XX) 33 Algebraic geometry (14-XX) 31 Mathematical logic and foundations (03-XX) 31 Biology and other natural sciences (92-XX) 31 Information and communication theory, circuits (94-XX) 27 Mechanics of deformable solids (74-XX) 27 Optics, electromagnetic theory (78-XX) 21 Mechanics of particles and systems (70-XX) 18 Category theory; homological algebra (18-XX) 18 Operations research, mathematical programming (90-XX) 16 Geometry (51-XX) 16 Relativity and gravitational theory (83-XX) 15 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 10 Order, lattices, ordered algebraic structures (06-XX) 10 Statistics (62-XX) 9 Commutative algebra (13-XX) 9 Sequences, series, summability (40-XX) 6 General and overarching topics; collections (00-XX) 6 History and biography (01-XX) 6 Computer science (68-XX) 5 Field theory and polynomials (12-XX) 5 Geophysics (86-XX) 3 Astronomy and astrophysics (85-XX) 2 Classical thermodynamics, heat transfer (80-XX)
### Citations contained in zbMATH Open
7,992 Publications have been cited 137,595 times in 81,137 Documents Cited by Year
Dual variational methods in critical point theory and applications. Zbl 0273.49063
Ambrosetti, Antonio; Rabinowitz, Paul H.
1973
Some global results for nonlinear eigenvalue problems. Zbl 0212.16504
Rabinowitz, P. H.
1971
Bifurcation from simple eigenvalues. Zbl 0219.46015
Crandall, M. G.; Rabinowitz, P. H.
1971
Combined effects of concave and convex nonlinearities in some elliptic problems. Zbl 0805.35028
Ambrosetti, Antonio; Brézis, Haïm; Cerami, Giovanna
1994
Stability theory of solitary waves in the presence of symmetry. I. Zbl 0656.35122
Grillakis, Manoussos; Shatah, Jalal; Strauss, Walter
1987
Nonlinear elliptic and parabolic equations involving measure data. Zbl 0707.35060
Boccardo, Lucio; Gallouët, Thierry
1989
Nonspreading wave packets for the cubic Schrödinger equation with a bounded potential. Zbl 0613.35076
Floer, Andreas; Weinstein, Alan
1986
A discrete transform and decompositions of distribution spaces. Zbl 0716.46031
Frazier, Michael; Jawerth, Björn
1990
The Schrödinger-Poisson equation under the effect of a nonlinear local term. Zbl 1136.35037
Ruiz, David
2006
Interpretation of AF $$C^*$$-algebras in Łukasiewicz sentential calculus. Zbl 0597.46059
Mundici, Daniele
1986
Groundstates of nonlinear Choquard equations: existence, qualitative properties and decay asymptotics. Zbl 1285.35048
Moroz, Vitaly; Van Schaftingen, Jean
2013
On extensions of the Brunn-Minkowski and Prekopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. Zbl 0334.26009
Brascamp, Herm Jan; Lieb, Elliott H.
1976
The inhomogeneous Dirichlet problem in Lipschitz domains. Zbl 0832.35034
Jerison, David; Kenig, Carlos E.
1995
Layer potentials and regularity for the Dirichlet problem for Laplace’s equation in Lipschitz domains. Zbl 0589.31005
Verchota, Gregory
1984
Affine systems in $$L_ 2(\mathbb{R}^d)$$: The analysis of the analysis operator. Zbl 0891.42018
Ron, Amos; Shen, Zuowei
1997
Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality. Zbl 0985.58019
Otto, F.; Villani, C.
2000
Notes on non-commutative integration. Zbl 0292.46030
Nelson, Edward
1974
On a class of nonlinear Schrödinger equations. I. The Cauchy problem, general case. Zbl 0396.35028
Ginibre, J.; Velo, G.
1979
Banach spaces related to integrable group representations and their atomic decompositions. I. Zbl 0691.46011
Feichtinger, Hans G.; Gröchenig, K. H.
1989
Operators with dense, invariant, cyclic vector manifolds. Zbl 0732.47016
Godefroy, Gilles; Shapiro, Joel H.
1991
Ground state solutions for some indefinite variational problems. Zbl 1178.35352
Szulkin, Andrzej; Weth, Tobias
2009
Abstract $$L^ p$$ estimates for the Cauchy problem with applications to the Navier-Stokes equations in exterior domains. Zbl 0739.35067
Giga, Yoshikazu; Sohr, Hermann
1991
On the theory of $${\mathcal L}_{p, \lambda}$$ spaces. Zbl 0175.42602
Peetre, J.
1969
Stability theory of solitary waves in the presence of symmetry. II. Zbl 0711.58013
Grillakis, Manoussos G.; Shatah, Jalal; Strauss, Walter
1990
Nonstationary flows of viscous and ideal fluids in $$R^3$$. Zbl 0229.76018
Kato, Tosio
1972
The role of the Green’s function in a nonlinear elliptic equation involving the critical Sobolev exponent. Zbl 0786.35059
Rey, Olivier
1990
Gevrey class regularity for the solutions of the Navier-Stokes equations. Zbl 0702.35203
Foias, C.; Temam, R.
1989
Commuting self-adjoint partial differential operators and a group theoretic problem. Zbl 0279.47014
Fuglede, Bent
1974
The free Markoff field. Zbl 0273.60079
Nelson, Edward
1973
On approximation of approximately linear mappings by linear mappings. Zbl 0482.47033
Rassias, John M.
1982
Generalized resolvents and the boundary value problems for Hermitian operators with gaps. Zbl 0748.47004
Derkach, V. A.; Malamud, M. M.
1991
Nonlinear scattering theory at low energy. Zbl 0466.47006
Strauss, Walter A.
1981
Generalized Strichartz inequalities for the wave equation. Zbl 0849.35064
Ginibre, J.; Velo, G.
1995
Some new function spaces and their applications to harmonic analysis. Zbl 0569.42016
Coifman, R. R.; Meyer, Yves; Stein, Elias M.
1985
Controlling rough paths. Zbl 1058.60037
Gubinelli, M.
2004
Exponential integrability and transportation cost related to logarithmic Sobolev inequalities. Zbl 0924.46027
Bobkov, S. G.; Götze, F.
1999
Analysis of the Laplacian on a complete Riemannian manifold. Zbl 0515.58037
Strichartz, Robert S.
1983
A sharp Trudinger-Moser type inequality for unbounded domains in $$\mathbb R^2$$. Zbl 1119.46033
Ruf, Bernhard
2005
$$W^{1,p}$$-quasiconvexity and variational problems for multiple integrals. Zbl 0549.46019
Ball, J. M.; Murat, F.
1984
Symétrie et compacité dans les espaces de Sobolev. Zbl 0501.46032
Lions, Pierre-Louis
1982
Essential self-adjointness of powers of generators of hyperbolic equations. Zbl 0263.35066
Chernoff, Paul R.
1973
Regularity of the moments of the solution of a transport equation. Zbl 0652.47031
Golse, François; Lions, Pierre-Louis; Perthame, Benoît; Sentis, Rémi
1988
Factoring weakly compact operators. Zbl 0306.46020
Davis, W. J.; Figiel, T.; Johnson, W. B.; Pełczyński, Aleksander
1974
Injectivity and operator spaces. Zbl 0341.46049
Choi, Man-Duen; Effros, Edward G.
1977
Extremals of determinants of Laplacians. Zbl 0653.53022
Osgood, B.; Phillips, R.; Sarnak, P.
1988
Conditional expectations in von Neumann algebras. Zbl 0245.46089
Takesaki, Masamichi
1972
Dispersion of small amplitude solutions of the generalized Korteweg-de Vries equation. Zbl 0743.35067
Christ, F. M.; Weinstein, M. I.
1991
Hardy spaces with variable exponents and generalized Campanato spaces. Zbl 1244.42012
Nakai, Eiichi; Sawano, Yoshihiro
2012
The Hardy inequality and the asymptotic behaviour of the heat equation with an inverse-square potential. Zbl 0953.35053
Vazquez, Juan Luis; Zuazua, Enrike
2000
Endpoint estimates for commutators of singular integral operators. Zbl 0831.42010
Pérez, Carlos
1995
On the Cauchy problem for the Zakharov system. Zbl 0894.35108
Ginibre, J.; Tsutsumi, Y.; Velo, G.
1997
$$C^*$$-algebras of real rank zero. Zbl 0776.46026
Brown, Lawrence G.; Pedersen, Gert K.
1991
Potential theory on Hilbert space. Zbl 0165.16403
Gross, L.
1967
Ultracontractivity and the heat kernel for Schrödinger operators and Dirichlet Laplacians. Zbl 0568.47034
Davies, E. B.; Simon, B.
1984
Graphs, groupoids, and Cuntz-Krieger algebras. Zbl 0929.46055
Kumjian, Alex; Pask, David; Raeburn, Iain; Renault, Jean
1997
Semi-classical states for nonlinear Schrödinger equations. Zbl 0887.35058
Del Pino, Manuel; Felmer, Patricio L.
1997
Contrôle dans les inéquations variationelles elliptiques. Zbl 0364.49003
Mignot, F.
1976
The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. Zbl 0188.20502
Dudley, R. M.
1967
Addition of certain non-commuting random variables. Zbl 0651.46063
Voiculescu, Dan
1986
The structure of finitely generated shift-invariant spaces in $$L_ 2(\mathbb{R}^ d)$$. Zbl 0806.46030
de Boor, Carl; DeVore, Ronald A.; Ron, Amos
1994
Meromorphic extension of the resolvent on complete spaces with asymptotically constant negative curvature. Zbl 0636.58034
Mazzeo, Rafe R.; Melrose, Richard B.
1987
A joint spectrum for several commuting operators. Zbl 0233.47024
Taylor, Joseph L.
1970
On existence and scattering with minimal regularity for semilinear wave equations. Zbl 0846.35085
1995
Nonlinear ground state representations and sharp Hardy inequalities. Zbl 1189.26031
Frank, Rupert L.; Seiringer, Robert
2008
Global weak solutions and blow-up structure for the Degasperis-Procesi equation. Zbl 1126.35053
Escher, Joachim; Liu, Yue; Yin, Zhaoyang
2006
Remarks on the Euler equation. Zbl 0279.58005
Bourguignon, Jean-Pierre; Brézis, Haïm
1974
Ricci curvature of Markov chains on metric spaces. Zbl 1181.53015
Ollivier, Yann
2009
On the global existence and wave-breaking criteria for the two-component Camassa-Holm system. Zbl 1189.35254
Gui, Guilong; Liu, Yue
2010
Note on product formulas for operator semigroups. Zbl 0157.21501
Chernoff, P. R.
1968
Maximal functions associated to filtrations. Zbl 0974.47025
Christ, Michael; Kiselev, Alexander
2001
Orlicz-Sobolev spaces and imbedding theorems. Zbl 0216.15702
Donaldson, T. K.; Trudinger, N. S.
1971
Continuity properties for modulation spaces, with applications to pseudo-differential calculus. I. Zbl 1083.35148
Toft, Joachim
2004
Nonlinear mappings of monotone type in Banach spaces. Zbl 0249.47044
Browder, Felix E.; Hess, Peter
1972
Equations stochastiques du type Navier-Stokes. Zbl 0265.60094
Bensoussan, A.; Temam, R.
1973
Some existence results for superlinear elliptic boundary value problems involving critical exponents. Zbl 0614.35035
Cerami, G.; Solimini, S.; Struwe, M.
1986
The scalar-curvature problem on the standard three-dimensional sphere. Zbl 0722.53032
Bahri, Abbas; Coron, Jean-Michel
1991
Interior estimates in Morrey spaces for strong solutions to nondivergence form equations with discontinuous coefficients. Zbl 0822.35036
Di Fazio, Giuseppe; Ragusa, M. A.
1993
Composite media and asymptotic Dirichlet forms. Zbl 0808.46042
Mosco, Umberto
1994
A regularity result for the Stokes problem in a convex polygon. Zbl 0317.35037
Kellogg, R. B.; Osborn, J. E.
1976
Function spaces and reproducing kernels on bounded symmetric domains. Zbl 0718.32026
Faraut, J.; Koranyi, A.
1990
Differential systems with strongly indefinite variational structure. Zbl 0793.35038
Hulshof, Josephus; van der Vorst, Robertus
1993
Nonlinear parabolic equations with measure data. Zbl 0887.35082
Boccardo, Lucio; Dall’Aglio, Andrea; Gallouët, Thierry; Orsina, Luigi
1997
Fractional Laplacian phase transitions and boundary reactions: a geometric inequality and a symmetry result. Zbl 1163.35019
Sire, Yannick; Valdinoci, Enrico
2009
A critical point theorem related to the symmetric mountain pass lemma and its applications to elliptic equations. Zbl 1081.49002
Kajikiya, Ryuji
2005
Integrated semigroups. Zbl 0689.47014
Kellerman, Hermann; Hieber, Matthias
1989
Existence and asymptotic behavior of nodal solutions for the Kirchhoff-type problems in $$\mathbb{R}^3$$. Zbl 1343.35081
Deng, Yinbin; Peng, Shuangjie; Shuai, Wei
2015
Potential and scattering theory on wildly perturbed domains. Zbl 0293.35056
Rauch, Jeffrey; Taylor, Michael
1975
Null Lagrangians, weak continuity, and variational problems of arbitrary order. Zbl 0459.35020
Ball, J. M.; Currie, J. C.; Olver, P. J.
1981
Existence of positive solutions of the equation $$-\Delta u+a(x)u=u^{(N+2)/(N-2)}$$ in $${\mathbb{R}}^ N$$. Zbl 0705.35042
Benci, Vieri; Cerami, Giovanna
1990
On a class of nonlinear Schrödinger equations. II. Scattering theory, general case. Zbl 0396.35029
Ginibre, J.; Velo, G.
1979
Interpolation problems in nest algebras. Zbl 0309.46053
Arveson, William
1975
Multiplicity results for some nonlinear elliptic equations. Zbl 0852.35045
Ambrosetti, Antonio; Garcia Azorero, Jesus; Peral, Ireneo
1996
The structure of shift-invariant subspaces of $$L^2(\mathbb{R}^n)$$. Zbl 0986.46018
Bownik, Marcin
2000
Global existence and finite time blow-up for a class of semilinear pseudo-parabolic equations. Zbl 1279.35065
Xu, Runzhang; Su, Jia
2013
Asymmetric affine $$L_p$$ Sobolev inequalities. Zbl 1180.46023
Haberl, Christoph; Schuster, Franz E.
2009
On the well-posedness of the Degasperis-Procesi equation. Zbl 1090.35142
Coclite, Giuseppe M.; Karlsen, Kenneth H.
2006
Harmonic analysis on real reductive groups. I: The theory of the constant term. Zbl 0315.43002
Harish-Chandra
1975
On the Bourgain, Brezis, and Mironescu theorem concerning limiting embeddings of fractional Sobolev spaces. Zbl 1028.46050
Maz’ya, V.; Shaposhnikova, T.
2002
On the inverse spectral problem for the Camassa-Holm equation. Zbl 0907.35009
1998
On spectral Cantor measures. Zbl 1016.28009
Łaba, Izabella; Wang, Yang
2002
A vector Riemann-Hilbert approach to the Muttalib-Borodin ensembles. Zbl 07474676
Wang, Dong; Zhang, Lun
2022
Invariant subspaces of elliptic systems I: pseudodifferential projections. Zbl 1486.58014
Capoferri, Matteo; Vassiliev, Dmitri
2022
Classifying the level set of principal eigenvalue for time-periodic parabolic operators and applications. Zbl 1481.35297
Liu, Shuang; Lou, Yuan
2022
Normalized solutions for Schrödinger equations with critical Sobolev exponent and mixed nonlinearities. Zbl 07550018
Wei, Juncheng; Wu, Yuanze
2022
Competing nonlinearities in NLS equations as source of threshold phenomena on star graphs. Zbl 1486.35400
Adami, Riccardo; Boni, Filippo; Dovetta, Simone
2022
On rectifiable measures in Carnot groups: Marstrand-Mattila rectifiability criterion. Zbl 07510683
Antonelli, Gioacchino; Merlo, Andrea
2022
Enhanced dissipation, hypoellipticity for passive scalar equations with fractional dissipation. Zbl 07436532
He, Siming
2022
Threshold solutions in the focusing 3D cubic NLS equation outside a strictly convex obstacle. Zbl 1490.35410
Duyckaerts, Thomas; Landoulsi, Oussama; Roudenko, Svetlana
2022
On the ergodic Waring-Goldbach problem. Zbl 1489.37008
Anderson, Theresa C.; Cook, Brian; Hughes, Kevin; Kumchev, Angel
2022
Characterizations of predual spaces to a class of Sobolev multiplier type spaces. Zbl 07457881
Ooi, Keng Hao; Phuc, Nguyen Cong
2022
Support of the Brown measure of the product of a free unitary Brownian motion by a free self-adjoint projection. Zbl 1484.60088
Demni, Nizar; Hamdi, Tarek
2022
Sharp decay estimates for massless Dirac fields on a Schwarzschild background. Zbl 07457887
Ma, Siyuan; Zhang, Lin
2022
Minimising hulls, p-capacity and isoperimetric inequality on complete Riemannian manifolds. Zbl 07573804
Fogagnolo, Mattia; Mazzieri, Lorenzo
2022
Thermalisation for Wigner matrices. Zbl 1484.60004
Cipolloni, Giorgio; Erdős, László; Schröder, Dominik
2022
A Riesz representation theorem for functionals on log-concave functions. Zbl 07474685
Rotem, Liran
2022
Principal series of Hermitian Lie groups induced from Heisenberg parabolic subgroups. Zbl 07474688
Zhang, Genkai
2022
Balanced metrics for Kähler-Ricci solitons and quantized Futaki invariants. Zbl 07474689
Ioos, Louis
2022
Szlenk index of $$C(K)\hat{\otimes}_\pi C(L)$$. Zbl 07489492
Causey, R. M.; Galego, E. M.; Samuel, C.
2022
Propagation phenomena for time-space periodic monotone semiflows and applications to cooperative systems in multi-dimensional media. Zbl 1485.35106
Du, Li-Jun; Li, Wan-Tong; Shen, Wenxian
2022
Optimal transport pseudometrics for quantum and classical densities. Zbl 1485.49053
Golse, François; Paul, Thierry
2022
(Non-)Dunford-Pettis operators on noncommutative symmetric spaces. Zbl 1491.46059
Huang, Jinghao; Pliev, Marat; Sukochev, Fedor
2022
Optimal variance-Gamma approximation on the second Wiener chaos. Zbl 1490.60051
Azmoodeh, Ehsan; Eichelsbacher, Peter; Thäle, Christoph
2022
Perturbed Fourier uniqueness and interpolation results in higher dimensions. Zbl 1486.42001
Ramos, João P. G.; Stoller, Martin
2022
Function spaces of Lorentz-Sobolev type: atomic decompositions, characterizations in terms of wavelets, interpolation and multiplications. Zbl 1490.46028
Besoy, Blanca F.; Cobos, Fernando
2022
Nonlocal trace spaces and extension results for nonlocal calculus. Zbl 07505259
Du, Qiang; Tian, Xiaochuan; Wright, Cory; Yu, Yue
2022
Sobolev spaces on p.c.f. self-similar sets I: critical orders and atomic decompositions. Zbl 1487.28007
Cao, Shiping; Qiu, Hua
2022
Sharp decay estimates for Oldroyd-B model with only fractional stress tensor diffusion. Zbl 1481.35345
Wang, Peixin; Wu, Jiahong; Xu, Xiaojing; Zhong, Yueyuan
2022
On tameness of zonoids. Zbl 07457102
Lerario, Antonio; Mathis, Léo
2022
On singular values of Hankel operators on Bergman spaces. Zbl 07528102
Bourass, M.; El-Fallah, O.; Marrhich, I.; Naqos, H.
2022
Enhanced dissipation and Hörmander’s hypoellipticity. Zbl 1489.35021
Albritton, Dallas; Beekie, Rajendra; Novack, Matthew
2022
A unified divergent approach to Hardy-Poincaré inequalities in classical and variable Sobolev spaces. Zbl 07538291
Di Fratta, Giovanni; Fiorenza, Alberto
2022
The rectifiability of the entropy defect measure for Burgers equation. Zbl 07550012
Marconi, Elio
2022
The annealed Calderón-Zygmund estimate as convenient tool in quantitative stochastic homogenization. Zbl 07557253
Josien, Marc; Otto, Felix
2022
The Dirichlet problem for degenerate curvature equations. Zbl 1489.35153
Jiao, Heming; Wang, Zhizhang
2022
Temperature patches for the subcritical Boussinesq-Navier-Stokes system with no diffusion. Zbl 1490.35328
Khor, Calvin; Xu, Xiaojing
2022
Some sharp Schwarz-Pick type estimates and their applications of harmonic and pluriharmonic functions. Zbl 1479.31003
2022
Beurling quotient modules on the polydisc. Zbl 1487.46056
Bhattacharjee, Monojit; Krishna Das, B.; Debnath, Ramlal; Sarkar, Jaydeb
2022
Quasi-invariance of low regularity Gaussian measures under the gauge map of the periodic derivative NLS. Zbl 1483.35210
Genovese, Giuseppe; Lucà, Renato; Tzvetkov, Nikolay
2022
The zero inertia limit from hyperbolic to parabolic Ericksen-Leslie system of liquid crystal flow. Zbl 1477.35016
Jiang, Ning; Luo, Yi-Long
2022
Thin-shell concentration for random vectors in Orlicz balls via moderate deviations and Gibbs measures. Zbl 1487.46013
Alonso-Gutiérrez, David; Prochno, Joscha
2022
Spectrality of Sierpinski-type self-affine measures. Zbl 1485.28016
Lu, Zheng-Yi; Dong, Xin-Han; Liu, Zong-Sheng
2022
Resolvent and spectral measure for Schrödinger operators on flat Euclidean cones. Zbl 1484.42025
Zhang, Junyong
2022
Some remarks on a formula for Sobolev norms due to Brezis, Van Schaftingen and Yung. Zbl 1479.35029
2022
Multigraph limits, unbounded kernels, and Banach space decorated graphs. Zbl 1479.05342
Kunszenti-Kovács, Dávid; Lovász, László; Szegedy, Balázs
2022
Spatial ergodicity and central limit theorems for parabolic Anderson model with delta initial condition. Zbl 1485.60058
Chen, Le; Khoshnevisan, Davar; Nualart, David; Pu, Fei
2022
Constants of the Kahane-Salem-Zygmund inequality asymptotically bounded by 1. Zbl 07437673
Pellegrino, Daniel; Raposo, Anselmo
2022
Divergent operator with degeneracy and related sharp inequalities. Zbl 1479.35027
Dou, Jingbo; Sun, Liming; Wang, Lei; Zhu, Meijun
2022
Best approximation of functions by log-polynomials. Zbl 07457872
Alonso-Gutiérrez, David; González Merino, Bernardo; Villa, Rafael
2022
Linear stability and enhanced dissipation for the two-jet Kolmogorov type flow on the unit sphere. Zbl 07567817
Miura, Tatsu-Hiko
2022
Stability of Couette flow for 2D Boussinesq system with vertical dissipation. Zbl 1479.35606
Deng, Wen; Wu, Jiahong; Zhang, Ping
2021
Energy on spheres and discreteness of minimizing measures. Zbl 1462.31009
Bilyk, Dmitriy; Glazyrin, Alexey; Matzke, Ryan; Park, Josiah; Vlasiuk, Oleksandr
2021
The $$L_p$$ dual Minkowski problem and related parabolic flows. Zbl 1469.35115
Chen, Haodi; Li, Qi-Rui
2021
Embedding of $$\mathrm{RCD}^\ast (K,N)$$ spaces in $$L^2$$ via eigenfunctions. Zbl 1478.53021
Ambrosio, Luigi; Honda, Shouhei; Portegies, Jacobus W.; Tewodrose, David
2021
Equivalence of the local and global versions of the $$L^p$$-Brunn-Minkowski inequality. Zbl 1461.52010
Putterman, Eli
2021
Normalized ground states of the nonlinear Schrödinger equation with at least mass critical growth. Zbl 1465.35151
Bieganowski, Bartosz; Mederski, Jarosław
2021
A fully cross-diffusive two-component evolution system: existence and qualitative analysis via entropy-consistent thin-film-type approximation. Zbl 1471.35278
Tao, Youshan; Winkler, Michael
2021
Solutions for nonlinear Fokker-Planck equations with measures as initial data and Mckean-Vlasov equations. Zbl 1458.35415
Barbu, Viorel; Röckner, Michael
2021
On self-similar spectral measures. Zbl 1454.28009
An, Lixiang; Wang, Cong
2021
Normalized solutions to the Chern-Simons-Schrödinger system. Zbl 1455.35080
Gou, Tianxiang; Zhang, Zhitao
2021
A formula for the time derivative of the entropic cost and applications. Zbl 1462.35476
Conforti, Giovanni; Tamanini, Luca
2021
Entropy numbers and Marcinkiewicz-type discretization. Zbl 1469.41013
Dai, F.; Prymak, A.; Shadrin, A.; Temlyakov, V.; Tikhonov, S.
2021
Large deviations, moderate deviations, and the KLS conjecture. Zbl 1486.60049
Alonso-Gutiérrez, David; Prochno, Joscha; Thäle, Christoph
2021
Bilinear decomposition and divergence-curl estimates on products related to local Hardy spaces and their dual spaces. Zbl 1457.42036
Yang, Dachun; Yuan, Wen; Zhang, Yangyang
2021
The dimensional Brunn-Minkowski inequality in Gauss space. Zbl 1456.52011
Eskenazis, Alexandros; Moschidis, Georgios
2021
On dual molecules and convolution-dominated operators. Zbl 1470.43005
Romero, José Luis; van Velthoven, Jordy Timo; Voigtlaender, Felix
2021
Extension theorems and a connection to the Erdős-Falconer distance problem over finite fields. Zbl 1468.42006
Koh, Doowon; Pham, Thang; Vinh, Le Anh
2021
Strong ill-posedness of logarithmically regularized 2D Euler equations in the borderline Sobolev space. Zbl 1458.35314
Kwon, Hyunju
2021
Reachable states and holomorphic function spaces for the 1-D heat equation. Zbl 1458.93021
Orsoni, Marcu-Antone
2021
On the Hölder regularity of signed solutions to a doubly nonlinear equation. Zbl 1473.35083
Bögelein, Verena; Duzaar, Frank; Liao, Naian
2021
Geometry driven type II higher dimensional blow-up for the critical heat equation. Zbl 1451.35031
del Pino, Manuel; Musso, Monica; Wei, Juncheng
2021
Sharp Hardy-Leray inequality for curl-free fields with a remainder term. Zbl 1452.26016
Hamamoto, Naoki; Takahashi, Futoshi
2021
Level-set inequalities on fractional maximal distribution functions and applications to regularity theory. Zbl 1454.35038
Nguyen, Thanh-Nhan; Tran, Minh-Phuong
2021
The Beurling-Lax-Halmos theorem for infinite multiplicity. Zbl 1470.47014
Curto, Raúl E.; Hwang, In Sung; Lee, Woo Young
2021
Embeddings of Lipschitz-free spaces into $$\ell_1$$. Zbl 1465.46019
Aliaga, Ramón J.; Petitjean, Colin; Procházka, Antonín
2021
Widths of resonances above an energy-level crossing. Zbl 1472.35253
Fujiié, S.; Martinez, A.; Watanabe, T.
2021
A limiting absorption principle for Helmholtz systems and time-harmonic isotropic Maxwell’s equations. Zbl 1479.35850
Cossetti, Lucrezia; Mandel, Rainer
2021
Spatial propagation in nonlocal dispersal Fisher-KPP equations. Zbl 1470.35118
Xu, Wen-Bing; Li, Wan-Tong; Ruan, Shigui
2021
Combinatorial Calabi flow on 3-manifolds with toroidal boundary. Zbl 1465.53101
Xu, Xu
2021
A spinorial analogue of the Brezis-Nirenberg theorem involving the critical Sobolev exponent. Zbl 1472.53057
Bartsch, Thomas; Xu, Tian
2021
Estimates and asymptotics for the stress concentration between closely spaced stiff $$C^{1, \gamma }$$ inclusions in linear elasticity. Zbl 1471.35267
Chen, Yu; H. G. Li, Haigang
2021
Quasi-invariance of Gaussian measures transported by the cubic NLS with third-order dispersion on T. Zbl 1464.35317
Debussche, Arnaud; Tsutsumi, Yoshio
2021
Existence of solutions of the abstract Cauchy problem of fractional order. Zbl 1469.34017
Henríquez, Hernán R.; Mesquita, Jaqueline G.; Pozo, Juan C.
2021
$$L^r$$-Helmholtz-Weyl decomposition for three dimensional exterior domains. Zbl 1472.35152
Hieber, Matthias; Kozono, Hideo; Seyfert, Anton; Shimizu, Senjo; Yanagisawa, Taku
2021
Non-autonomous rough semilinear PDEs and the multiplicative sewing lemma. Zbl 07401189
Gerasimovičs, Andris; Hocquet, Antoine; Nilssen, Torstein
2021
Liouville type theorem for critical order Hénon-Lane-Emden type equations on a half space and its applications. Zbl 1473.35082
Dai, Wei; Qin, Guolin
2021
Schwartz homologies of representations of almost linear Nash groups. Zbl 1478.22008
Chen, Yangyang; Sun, Binyong
2021
Blow-up phenomena in nonlocal eigenvalue problems: when theories of $$L^1$$ and $$L^2$$ meet. Zbl 1458.35070
Chan, Hardy; Gómez-Castro, David; Vázquez, Juan Luis
2021
Inequalities for the block projection operators. Zbl 1469.46056
Bikchentaev, A.; Sukochev, F.
2021
CLT for circular beta-ensembles at high temperature. Zbl 1469.60032
2021
Quasi-greedy bases in $$\ell_p$$ ($$0 < p < 1$$) are democratic. Zbl 1469.46014
Albiac, Fernando; Ansorena, José L.; Wojtaszczyk, Przemysław
2021
Nuclearity of semigroup $$C^\ast$$-algebras. Zbl 1467.46049
an Huef, Astrid; Nucinkis, Brita; Sehnem, Camila F.; Yang, Dilian
2021
Global hypoellipticity and global solvability for vector fields on compact Lie groups. Zbl 1465.35381
Kirilov, Alexandre; de Moraes, Wagner A. A.; Ruzhansky, Michael
2021
Singular solutions for the constant $$Q$$-curvature problem. Zbl 1455.53059
Hyder, Ali; Sire, Yannick
2021
On stable and finite Morse index solutions of the fractional Toda system. Zbl 1455.35284
Fazly, Mostafa; Yang, Wen
2021
$$C^\ast$$-algebras of extensions of groupoids by group bundles. Zbl 07290161
Ionescu, Marius; Kumjian, Alex; Renault, Jean N.; Sims, Aidan; Williams, Dana P.
2021
Global solution to the wave and Klein-Gordon system under null condition in dimension two. Zbl 1475.35203
Dong, Shijie
2021
The support of dually epi-translation invariant valuations on convex functions. Zbl 1487.52023
Knoerr, Jonas
2021
Reconstruction of bandlimited functions from space-time samples. Zbl 1462.94024
Ulanovskii, Alexander; Zlotnikov, Ilya
2021
Empirical spectral measures of quantum graphs in the Benjamini-Schramm limit. Zbl 1461.81046
Anantharaman, Nalini; Ingremeau, Maxime; Sabri, Mostafa; Winn, Brian
2021
Strain tensors on hyperbolic surfaces and their applications. Zbl 1464.74004
Yao, Peng-Fei
2021
...and 1425 more Documents
### Cited by 39,934 Authors
207 Yang, Dachun 196 Albeverio, Sergio A. 194 Papageorgiou, Nikolaos S. 177 Jørgensen, Palle E. T. 172 Sukochev, Fedor Anatol’evich 164 Wei, Juncheng 159 Röckner, Michael 150 Tang, Xianhua 138 Rădulescu, Vicenţiu D. 120 Alves, Claudianor Oliveira 108 Simon, Barry 106 Wang, Feng-Yu 97 Gesztesy, Fritz 95 Chen, Zhen-Qing 89 Kondrat’yev, Yuriĭ Grygorovych 88 Ma, Ruyun 87 Byun, Sun-Sig 86 Guo, Boling 85 Alpay, Daniel Aron 84 Zou, Wenming 83 Miyagaki, Olimpio Hiroshi 81 Ruzhansky, Michael V. 80 Helffer, Bernard 80 Lu, Guozhen 80 Yuan, Wen 77 O’Regan, Donal 76 Yin, Zhaoyang 74 Figueiredo, Giovany Malcher 74 Vazquez, Juan Luis 73 Dolbeault, Jean 72 Sawano, Yoshihiro 71 Ozawa, Tohru 70 Hou, Jinchuan 70 Strichartz, Robert S. 70 Valdinoci, Enrico 69 Bogachev, Vladimir Igorevich 69 Mitrea, Marius 69 Shi, Junping 69 Song, Renming 69 Zhang, Xicheng 68 Zhang, Tusheng S. 67 Sims, Aidan 66 Do Ó, João M. Bezerra 66 Hayashi, Nakao 66 Park, Choonkil 66 Peng, Shuangjie 65 Musso, Monica 65 Pellegrino, Daniel Marinho 65 Squassina, Marco 65 Wu, Tsungfang 64 Duong, Xuan Thinh 64 Kenig, Carlos Eduardo 64 Raeburn, Iain 63 Da Prato, Giuseppe 62 Nualart, David 62 Pistoia, Angela 62 Wang, Zhi-Qiang 62 Xiao, Jie 61 Accardi, Luigi 61 Cianchi, Andrea 61 Ding, Yanheng 61 Yang, Minbo 60 Ambrosio, Vincenzo 60 Del Pino, Manuel A. 60 Frank, Rupert L. 60 Gröchenig, Karlheinz 60 Kim, Panki 60 Lin, Chang-Shou 59 Carrillo de la Plata, José Antonio 59 Dong, Hongjie 59 Flandoli, Franco 59 Guliyev, Vagif Sabir 59 Lions, Pierre-Louis 58 Li, Ji 58 López-Gómez, Julián 58 Thangavelu, Sundaram 57 Dvurečenskij, Anatolij 57 Malamud, Mark M. 57 Yan, Shusen 56 Carey, Alan L. 56 Colombo, Fabrizio 56 Titi, Edriss Saleh 56 Zanin, Dmitriy V. 55 Boccardo, Lucio 55 Damanik, David 55 Junge, Marius 55 Merle, Frank 55 Pucci, Patrizia 55 Wang, Mingxin 54 Curto, Raúl Enrique 54 Liu, Yue 54 Sire, Yannick 53 Astashkin, Sergeĭ Vladimir 53 Lin, Huaxin 53 Neeb, Karl-Hermann 53 Peral Alonso, Ireneo 53 Ricker, Werner Joseph 53 Rossi, Julio Daniel 53 Sreenadh, Konijeti 53 Wang, Jian ...and 39,834 more Authors
### Cited in 1,090 Journals
5,441 Journal of Functional Analysis 4,319 Journal of Mathematical Analysis and Applications 2,578 Journal of Differential Equations 1,984 Proceedings of the American Mathematical Society 1,940 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1,697 Transactions of the American Mathematical Society 1,659 Communications in Mathematical Physics 1,515 Journal of Mathematical Physics 1,254 Advances in Mathematics 1,010 Calculus of Variations and Partial Differential Equations 1,004 Integral Equations and Operator Theory 810 Mathematische Annalen 782 Communications in Partial Differential Equations 776 Mathematische Zeitschrift 740 The Journal of Geometric Analysis 702 Archive for Rational Mechanics and Analysis 693 Linear Algebra and its Applications 683 Discrete and Continuous Dynamical Systems 580 Israel Journal of Mathematics 561 Stochastic Processes and their Applications 529 Journal de Mathématiques Pures et Appliquées. Neuvième Série 521 Potential Analysis 499 Comptes Rendus. Mathématique. Académie des Sciences, Paris 492 Applicable Analysis 486 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 477 Acta Mathematica Sinica. English Series 456 Communications on Pure and Applied Analysis 454 The Journal of Fourier Analysis and Applications 438 Annali di Matematica Pura ed Applicata. Serie Quarta 437 Duke Mathematical Journal 436 Nonlinear Analysis. Theory, Methods & Applications 430 Complex Analysis and Operator Theory 418 Probability Theory and Related Fields 413 Communications in Contemporary Mathematics 404 SIAM Journal on Mathematical Analysis 401 Journal d’Analyse Mathématique 393 Journal of Approximation Theory 392 The Annals of Probability 389 Annales de l’Institut Fourier 386 Infinite Dimensional Analysis, Quantum Probability and Related Topics 384 Complex Variables and Elliptic Equations 379 Nonlinear Analysis. Real World Applications 377 Mathematische Nachrichten 370 International Journal of Mathematics 364 Journal of Statistical Physics 362 Science China. Mathematics 361 ZAMP. Zeitschrift für angewandte Mathematik und Physik 358 Archiv der Mathematik 356 Inventiones Mathematicae 354 Annales Henri Poincaré 349 Proceedings of the Royal Society of Edinburgh. Section A. Mathematics 346 Boundary Value Problems 339 Applied Mathematics Letters 338 Abstract and Applied Analysis 334 Letters in Mathematical Physics 334 Rocky Mountain Journal of Mathematics 323 Reviews in Mathematical Physics 323 Journal of Evolution Equations 320 Monatshefte für Mathematik 313 NoDEA. Nonlinear Differential Equations and Applications 310 Mathematical Methods in the Applied Sciences 308 Journal of Geometry and Physics 305 Bulletin of the Australian Mathematical Society 299 Journal of Mathematical Sciences (New York) 299 Journal of Inequalities and Applications 293 Manuscripta Mathematica 288 Bulletin des Sciences Mathématiques 286 Mediterranean Journal of Mathematics 281 Positivity 272 Acta Applicandae Mathematicae 271 Mathematical Notes 262 Applied Mathematics and Computation 253 Advanced Nonlinear Studies 250 Publications of the Research Institute for Mathematical Sciences, Kyoto University 250 Revista Matemática Iberoamericana 242 Applied and Computational Harmonic Analysis 233 Discrete and Continuous Dynamical Systems. 
Series B 231 Ergodic Theory and Dynamical Systems 225 Journal of Function Spaces 222 Mathematical Proceedings of the Cambridge Philosophical Society 221 Computers & Mathematics with Applications 218 Journal of Algebra 215 Studia Mathematica 214 Results in Mathematics 214 Banach Journal of Mathematical Analysis 210 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 210 Journal of the European Mathematical Society (JEMS) 206 Arkiv för Matematik 204 Journal of Theoretical Probability 203 Functional Analysis and its Applications 199 Journal für die Reine und Angewandte Mathematik 195 Applied Mathematics and Optimization 195 Journal of Dynamics and Differential Equations 194 Nonlinearity 189 Linear and Multilinear Algebra 189 Proceedings of the Edinburgh Mathematical Society. Series II 187 Czechoslovak Mathematical Journal 187 Forum Mathematicum 187 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 182 Stochastic Analysis and Applications ...and 990 more Journals
### Cited in 64 Fields
30,522 Partial differential equations (35-XX) 17,211 Functional analysis (46-XX) 16,705 Operator theory (47-XX) 9,223 Probability theory and stochastic processes (60-XX) 6,192 Global analysis, analysis on manifolds (58-XX) 6,142 Harmonic analysis on Euclidean spaces (42-XX) 5,548 Quantum theory (81-XX) 3,781 Fluid mechanics (76-XX) 3,751 Ordinary differential equations (34-XX) 3,704 Differential geometry (53-XX) 3,455 Dynamical systems and ergodic theory (37-XX) 3,384 Topological groups, Lie groups (22-XX) 3,347 Calculus of variations and optimal control; optimization (49-XX) 2,788 Abstract harmonic analysis (43-XX) 2,681 Statistical mechanics, structure of matter (82-XX) 2,562 Functions of a complex variable (30-XX) 2,349 Numerical analysis (65-XX) 2,290 Several complex variables and analytic spaces (32-XX) 2,109 Real functions (26-XX) 1,940 Measure and integration (28-XX) 1,500 Potential theory (31-XX) 1,485 Biology and other natural sciences (92-XX) 1,463 Linear and multilinear algebra; matrix theory (15-XX) 1,343 Systems theory; control (93-XX) 1,104 Mechanics of deformable solids (74-XX) 1,076 Group theory and generalizations (20-XX) 1,049 Nonassociative rings and algebras (17-XX) 1,043 Convex and discrete geometry (52-XX) 1,018 Approximations and expansions (41-XX) 999 Number theory (11-XX) 925 Combinatorics (05-XX) 921 Difference and functional equations (39-XX) 912 Integral equations (45-XX) 756 Information and communication theory, circuits (94-XX) 745 Integral transforms, operational calculus (44-XX) 727 General topology (54-XX) 725 Associative rings and algebras (16-XX) 697 Special functions (33-XX) 634 Operations research, mathematical programming (90-XX) 624 Statistics (62-XX) 544 Optics, electromagnetic theory (78-XX) 538 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 529 Order, lattices, ordered algebraic structures (06-XX) 489 Algebraic geometry (14-XX) 482 Manifolds and cell complexes (57-XX) 471 $$K$$-theory (19-XX) 468 Relativity and gravitational theory (83-XX) 453 Mathematical logic and foundations (03-XX) 441 Mechanics of particles and systems (70-XX) 305 Computer science (68-XX) 286 Algebraic topology (55-XX) 266 Category theory; homological algebra (18-XX) 190 Geometry (51-XX) 175 Classical thermodynamics, heat transfer (80-XX) 172 Geophysics (86-XX) 120 Sequences, series, summability (40-XX) 113 Commutative algebra (13-XX) 104 History and biography (01-XX) 87 Astronomy and astrophysics (85-XX) 78 General and overarching topics; collections (00-XX) 50 General algebraic systems (08-XX) 41 Field theory and polynomials (12-XX) 6 Mathematics education (97-XX) 1 (04-XX)
|
2022-10-04 01:07:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6167487502098083, "perplexity": 6658.596529614695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00430.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-p-section-p-2-exponents-and-scientific-notation-exercise-set-page-30/38
|
## Precalculus (6th Edition) Blitzer
$x^{40}$
Start with the given expression: $\frac{x^{30}}{x^{-10}}$. Simplify using the quotient rule: $x^{30-(-10)}=x^{40}$.
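A quick sanity check, not part of the original GradeSaver solution: a computer algebra system (sympy is used here purely as a convenient choice) reproduces the same result.

```python
import sympy as sp

x = sp.symbols('x', nonzero=True)   # x != 0 so that x**(-10) is defined
expr = x**30 / x**(-10)             # the given expression
print(sp.simplify(expr))            # prints x**40, matching x**(30 - (-10))
```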
|
2018-07-16 16:45:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8989288210868835, "perplexity": 5374.2289257344755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589404.50/warc/CC-MAIN-20180716154548-20180716174548-00551.warc.gz"}
|
http://umj.imath.kiev.ua/volumes/issues/?lang=en&year=2017&number=9
|
2019
Том 71
№ 7
# Volume 69, № 9, 2017
Article (English)
### Estimation of the generalized Bessel – Struve transform in a certain space of generalized functions
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1155-1165
We investigate the so-called Bessel – Struve transform on a certain class of generalized functions called Boehmians. By using different convolution products, we generate the Boehmian spaces where the extended transform is well defined. We also show that the Bessel – Struve transform of a Boehmian is an isomorphism which is continuous with respect to a certain type of convergence.
Article (Ukrainian)
### Exact solutions of the nonlinear equation $u_{tt} = a(t) uu_{xx} + b(t) u_x^2 + c(t) u$
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1180-1186
Ansätze that reduce the equation $u_{tt} = a(t) uu_{xx} + b(t) u_x^2 + c(t) u$ to a system of two ordinary differential equations are defined. It is also shown that the problem of constructing exact solutions of the form $u = \mu_1(t)x^2 + \mu_2(t)x^{\alpha}$, $\alpha \in \mathbb{R}$, to this equation reduces to integrating a system of linear equations $\mu''_1 = \Phi_1(t)\mu_1$, $\mu''_2 = \Phi_2(t)\mu_2$, where $\Phi_1(t)$ and $\Phi_2(t)$ are arbitrary predefined functions.
Article (English)
### Boundedness of Riesz-type potential operators on variable exponent Herz – Morrey spaces
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1187-1197
We show the boundedness of the Riesz-type potential operator of variable order $\beta(x)$ from the variable exponent Herz – Morrey spaces $M\dot{K}^{\alpha(\cdot),\lambda}_{p_1,q_1(\cdot)}(\mathbb{R}^n)$ into the weighted space $M\dot{K}^{\alpha(\cdot),\lambda}_{p_2,q_2(\cdot)}(\mathbb{R}^n,\omega)$, where $\alpha(x) \in L^{\infty}(\mathbb{R}^n)$ is log-Hölder continuous both at the origin and at infinity, $\omega = (1+|x|)^{\gamma(x)}$ with some $\gamma(x) > 0$, and $1/q_1(x) - 1/q_2(x) = \beta(x)/n$ when $q_1(x)$ is not necessarily constant at infinity. It is assumed that the exponent $q_1(x)$ satisfies the logarithmic continuity condition both locally and at infinity and $1 < (q_1)_{\infty} \leq q_1(x) \leq (q_1)_+ < \infty$, $x \in \mathbb{R}^n$.
Article (Russian)
### Asymptotic representation of solutions of differential equations with rightly varying nonlinearities
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1198-1216
Conditions for the existence of some types of power-mode solutions of a binomial nonautonomous ordinary differential equation with regularly varying nonlinearities are established.
Article (Russian)
### Reconstruction of the Sturm – Liouville operator with nonseparated boundary conditions and a spectral parameter in the boundary condition
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1217-1223
We study the inverse problem for the Sturm – Liouville operator with nonseparated boundary conditions, one of which contains a spectral parameter. The uniqueness theorem is presented and sufficient conditions for the solvability of the inverse problem are obtained.
Article (English)
### Points of upper and lower semicontinuity of multivalued functions
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1224-1231
We investigate joint upper and lower semicontinuity of two-variable set-valued functions. More precisely, among other results, we show that, under certain conditions, a two-variable lower horizontally quasicontinuous mapping $F : X \times Y \rightarrow \mathscr{K}(Z)$ is jointly upper semicontinuous on sets of the form $D \times \{ y_0\}$, where $D$ is a dense $G_{\delta}$ subset of $X$ and $y_0 \in Y$. A similar result is obtained for the joint lower semicontinuity of upper horizontally quasicontinuous mappings. These results improve some known results on the joint continuity of single-valued functions.
Article (Ukrainian)
### Lie algebras associated with modules over polynomial rings
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1232-1241
Let $\mathbb{K}$ be an algebraically closed field of characteristic zero. Let $V$ be a module over the polynomial ring $\mathbb{K}[x, y]$. The actions of $x$ and $y$ determine linear operators $P$ and $Q$ on $V$ as a vector space over $\mathbb{K}$. Define the Lie algebra $L_V = \mathbb{K}\langle P,Q\rangle \rightthreetimes V$ as the semidirect product of two abelian Lie algebras with the natural action of $\mathbb{K}\langle P,Q\rangle$ on $V$. We show that if $\mathbb{K}[x, y]$-modules $V$ and $W$ are isomorphic or weakly isomorphic, then the corresponding associated Lie algebras $L_V$ and $L_W$ are isomorphic. The converse is not true: we construct two $\mathbb{K}[x, y]$-modules $V$ and $W$ of dimension 4 that are not weakly isomorphic but whose associated Lie algebras are isomorphic. We characterize such pairs of $\mathbb{K}[x, y]$-modules of arbitrary dimension over $\mathbb{K}$. We prove that indecomposable modules $V$ and $W$ with $\dim_{\mathbb{K}} V = \dim_{\mathbb{K}} W \geq 7$ are weakly isomorphic if and only if their associated Lie algebras $L_V$ and $L_W$ are isomorphic.
Article (Ukrainian)
### Differential equations with small stochastic summands under the Lévy approximating conditions
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1242-1249
The proposed methods enable us to study a model of stochastic evolution that includes Markov switchings and to identify the diffusion component and big jumps of the perturbing process in the limiting equation. Big jumps of this type may describe rare catastrophic events in different applied problems. We consider the case where the perturbation of the system is determined by an impulse process in the nonclassical approximation scheme. Special attention is given to the asymptotic behavior of the generator of the analyzed evolutionary system.
Article (Russian)
### Total differential
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1250-1256
We present necessary and sufficient conditions for a continuous differential form to be the total differential.
Article (English)
### On q-congruences involving harmonic numbers
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1257-1265
We give some congruences involving $q$-harmonic numbers and alternating $q$-harmonic numbers of order $m$. Some of them are $q$-analogues of several known congruences.
Anniversaries (Ukrainian)
### Mykhailo Pylypovych Kravchuk (27.09.1892 – 09.03.1942), famous Ukrainian mathematician (on the 125th anniversary of his birth)
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1265-1269
Brief Communications (English)
### Well-posedness of the Dirichlet problem in a cylindrical domain for three-dimensional elliptic equations with degeneration of type and order
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1270-1274
The paper shows the unique solvability of the classical Dirichlet problem in a cylindrical domain for three-dimensional elliptic equations with degeneration of type and order.
Brief Communications (Russian)
### Finite groups with 2pqr elements of the maximal order
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1275-1279
Let $3 < p < q < r$ be odd prime numbers. In this paper, we prove that finite groups with exactly $2pqr$ elements of maximal order are solvable.
Brief Communications (Ukrainian)
### Descriptive complexity of the sizes of subsets of groups
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1280-1283
We study the Borel complexity of some basic families of subsets of a countable group (large, small, thin, rarefied, etc.) determined by the sizes of their elements. The obtained results are applied to the Stone – Čech compactification $\beta G$ of the group $G$. In particular, it is shown that the closure of the minimal ideal of $\beta G$ has the $F_{\sigma \delta}$ type.
Brief Communications (Ukrainian)
### Some holomorphic generalizations of loxodromic functions
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1284-1288
The functional equation of the form $f(qz) = p(z)f(z)$, $z \in \mathbb{C}\setminus \{ 0\}$, $q \in \mathbb{C}\setminus \{ 0\}$, $| q| < 1$, is considered. For certain fixed elementary functions $p(z)$, holomorphic solutions of this equation are found. These solutions are some generalizations of loxodromic functions. Some of the solutions are represented via the Schottky – Klein prime function.
Brief Communications (Ukrainian)
### Karamata integral representations for functions generalizing regularly varying functions
Ukr. Mat. Zh. - 2017. - 69, № 9. - pp. 1289-1296
We consider classes of functions that generalize regularly varying functions and obtain Karamata-type integral representations for these functions.
|
2019-08-19 00:16:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974690079689026, "perplexity": 2531.9962689703216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314353.10/warc/CC-MAIN-20190818231019-20190819013019-00419.warc.gz"}
|
http://chemistry.stackexchange.com/questions/15571/why-is-%CE%94e-described-as-q-w-and-qw-in-different-contexts
|
# Why is ΔE described as Q-W and Q+W in different contexts?
Depending on the textbook, the energy change is described as either Q+W or, with the sign changed, Q-W. Which is correct? Is it solely context dependent? Could anyone explain the background?
The difference in sign between the two versions of the first law of thermodynamics is there to handle the two ways in which work can be defined.
The work done (assuming only pressure-volume work) can be defined as
$$w=P\Delta V$$
This is the definition often used in scenarios where we care about the fate of the work. This definition is about the surroundings. Thus, when the system expands, it does positive work on the surroundings, which is a good thing. Think about an internal combustion engine. When the reaction occurs in a cylinder, gas is produced, causing the volume to expand and move the piston, turning the drive shaft, etc. Since we care about the work after it leaves the cylinder, it is positive because the surroundings are gaining energy.
The work done by a system can also be defined as:
$$w=-P\Delta V$$
We use this definition often in chemistry and in general when we care more about how much energy is left in the system than we care about what the energy that left the system is doing. This definition is about the system. In the case of the engine above, energy has left the system, so the work done is negative to reflect the fact that the system has less energy than it used to.
The sign convention in the definition of the change in internal energy (and I prefer $\Delta U$ to $\Delta E$) reflects that we can define work done with respect to the system or the surroundings. The sign in front of $w$ guarantees that we always get
$$\Delta U=q-P\Delta V$$
With respect to the surroundings:
$$w=P\Delta V$$ $$\Delta U = q-w=q-P\Delta V$$
With respect to the system:
$$w=-P\Delta V$$ $$\Delta U = q+w=q-P\Delta V$$
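A minimal numerical sketch (not from the original answer; the pressure, volume change, and heat values below are made up for illustration) shows that the two conventions are just different bookkeeping for the same physics:

```python
# Both sign conventions give the same change in internal energy.
# Hypothetical example: constant external pressure, an expansion, and heat absorbed.
P = 1.0e5      # external pressure in Pa (1 bar)
dV = 2.0e-3    # volume change in m^3 (positive: the gas expands)
q = 500.0      # heat added to the system in J

# Convention 1: w = P*dV is the work done BY the system, so dU = q - w
w_by_system = P * dV
dU_1 = q - w_by_system

# Convention 2: w = -P*dV is the work done ON the system, so dU = q + w
w_on_system = -P * dV
dU_2 = q + w_on_system

print(dU_1, dU_2)   # 300.0 300.0 -- identical, as expected
```

Either way, $\Delta U = q - P\Delta V$; only the label attached to $w$ changes.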
A better question might be
Why then do we not change the sign of $q$ when we switch reference frames from system to surroundings?
Thanks @Ben Norris, and what would be your thoughts on your closing remark? :) – MattKneale Aug 24 '14 at 16:45
My guess is that $q=C\Delta T$ and it is much easier to determine the heat capacity of the system than that of the surroundings, thus $q$ is always defined for the system. – Ben Norris Aug 24 '14 at 20:21
|
2016-05-30 06:53:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7452666759490967, "perplexity": 297.60642288116895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464050919950.49/warc/CC-MAIN-20160524004839-00091-ip-10-185-217-139.ec2.internal.warc.gz"}
|
http://mathoverflow.net/revisions/59184/list
|
Let $F$ be an infinite field and let $f: F \times F \to F$ be a function of two variables such that $f(x_0,y)$ is a polynomial in $y$ for every $x_0 \in F$ and $f(x,y_0)$ is a polynomial in $x$ for every $y_0 \in F$. (Of course, being a polynomial for a function $f: F \to F$ means there exists $p(x) \in F[x]$ such that $f(x)=p(x)$ for all $x \in F$.) Now, is $f$ itself necessarily a polynomial?
Surprisingly the answer depends on the cardinality of $F$. It is negative when $F$ is countable and positive when $F$ is uncountable. For countable $F$, enumerate the elements as $a_1, a_2, \dots$ and consider $$f(x,y)=\sum_{i=1}^{\infty} (x-a_1)(x-a_2)\cdots (x-a_i)(y-a_1)\cdots (y-a_i)$$
It is obvious that $f$ satisfies the condition, and not hard to show that it is not a polynomial.
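To see why each section is a polynomial: for a fixed $x_0=a_k$, every term with $i\geq k$ contains the factor $(x_0-a_k)=0$, so the series truncates to a finite sum. The following sketch (not part of the original post) illustrates the truncation symbolically, using the hypothetical enumeration $a_i=i$ for the first few elements of a countable field such as $\mathbb{Q}$:

```python
import sympy as sp

y = sp.symbols('y')
a = [1, 2, 3, 4, 5]   # hypothetical start of an enumeration a_1, a_2, ... of a countable field

def f_section(x0, n_terms=5):
    """Partial sum of f(x, y) at x = x0; all terms with i >= k vanish when x0 = a_k."""
    total = sp.Integer(0)
    for i in range(1, n_terms + 1):
        cx = sp.Mul(*[x0 - a[j] for j in range(i)])   # (x0 - a_1) * ... * (x0 - a_i)
        cy = sp.Mul(*[y - a[j] for j in range(i)])    # (y - a_1) * ... * (y - a_i)
        total += cx * cy
    return sp.expand(total)

print(f_section(3))   # only the i = 1, 2 terms survive: 2*y**2 - 4*y + 2, a polynomial in y
```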
|
2013-05-19 07:23:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9729112982749939, "perplexity": 29.91355914250228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696384213/warc/CC-MAIN-20130516092624-00080-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://sherif.era-solutions.com/bvx3z0vwz/ap-physics-1-2022-practice-exam-1-mcq-answers.html
|
Ap physics 1 2022 practice exam 1 mcq answers. Note: Due to recent changed in the AP Curriculum from College Board, the order of testing can vary in this class. 585 lei. AP Physics: Circular Motion Solved Problems. (D) It depends on the ratio of m/f. Engineering Physics Multiple Choice Questions and Answers PDF download, a book to practice quiz questions and answers on chapters: Alternating fields 1996-m3-scoring-guideline-ap-physics 1/4 Downloaded from edocs. The JSSC PGT Teacher Physics free online test series is accessible in School Entrance Exam 2022 For Class 11 Arihant Experts 2022-03-05 1. which of the following best describes the passenger's linear and angular velocity while passing point A? Click the card to flip 👆 Definition 1 / 43 angular velocity constant and linear velocity changing PHYSICS MCQS TEST NO: 1. 2022 by Suny n Grant side of modern physics. Easy. 12%. ford workshop manuals gt focus 2004 75 07 2004. The + Q charge is fixed in position, and the + Q charge is brought close to + Q . AP Physics 1 Practice Test 7 | 1. It is the student's responsibility to practice the concepts and ask questions when needed. The horizontal component of the ball’s velocity remains constant during its entire trajectory because. Physics mock tests help you remember basic concepts and perform better in the actual The 2022 AP Physics 1 exam format will be: Section I: Multiple Choice 50% of the score, 1 hour and 30 mins to complete. GCSE 9-1 Physics Exam Practice Workbook, with Practice Test Paper (Letts GCSE 9-1 Revision Success) Letts GCSE 2019-05-16 Exam Board: Edexcel, AQA, and OCR Gateway Level: GCSE Grade 9-1 Subject: Physics First Teaching: September 2016, First Exams: June 2018 SSC GK GENERAL AWARENESS SSC MULTIPLE CHOICE QUESTIONS An object of volume 2 × 10 -3 m 3 and weight 6 N is placed into a tank of water, where it floats. The magnitude of the gravitational force that they exert on each other is F 1 Facts about the test: The AP Physics 1 exam has 50 multiple choice questions (45 single-select and 5 multiple-select) and you will be given 90 minutes to complete the section. Problem Sets The Facts about the test: The AP Physics 1 exam has 50 multiple choice questions (45 single-select and 5 multiple-select) and you will be given 90 minutes to 5 Questions | 1 Hour 30 Minutes | 50% of Exam Score This section contains 5 free-response questions of the following types: Experimental Design (1 question) Forces Practice Problems: AP Physics. Time Limit. Also each chapter in the book begins with a summary of the chapter which will help in effective understanding of the theme of the chapter and to make sure that the students will be able to answer all popular questions concerned to a particular chapter whether it is Long Answer Type or Short Answer Type Question. Number of Questions. A short time later, Rock Y is . 86 (34%) Our completely free AP Physics 1 practice tests are the perfect way to brush up your skills. Physics- The section was moderate in difficulty level. Single-select questions are each followed by four possible responses, only one of which is correct. The other end of the . D) The magnitude of the change in momentum cannot be determined without knowing the mass of the object. The following questions were not written by College Board and, although they Unit 4 ap environmental science practice exam answers. 19 terms. PHYSICS MCQS TEST NO: 1 is a Online MCQs Based Test. 
-Speeding up The AP Physics 1 Exam Format is: Multiple Choice Section -50 Multiple Choice questions -90 minutes -50% of exam score Part A of the multiple choice section has 45 The universe has joined us together through some unexplainable magic of which I can only assume came from your beauty. Set filtre hidraulice, cutie e vit. All these resources help you get at least 30-40 percent more marks in your exams. The multiple-choice section consists of two question types. 8 General Suggestions for the FRQs of Any AP Physics 1 Practice Exam and Notes Effective Fall 2014 2 About the College Board The College Board is a mission-driven not-for-profit organization that connects students to college success and opportunity. Get Albert's free 2022 AP® Physics 1 review guide to help with your exam prep here. . utsa. Kami Export - ap-physics-1-2022-practice-exam-1-frq. Groups, Systems and Many-Body . AP Physics 1 Mock Test #1. D: the ball is not acted upon by any . Chapter 1 Module 1 AP 1. Founded in 1900, the College Board was created to expand access to higher education. asherkort. Speeding up vs. Multi-select questions are a new addition to the AP Physics Exam, and require two of the listed answer choices to be selected to answer the question correctly. ”. These Practice test files are clean and can be directly downloaded. 6dct450 dual clutch transmission parts catalogue. AP Physics Over 20 multiple-choice questions on circular motion and gravitation which appear on the AP Physics 1 exam are provided with detailed explanations. Hard. Find out more! Unit 1 | Kinematics Ask the key questions — How fast? How far? How long? — about the "geometry of motion". Experimental Design Questions for AP Physics Explained. papers, solved MCQs. The universe has joined us together through some unexplainable magic of which I can only assume came from your beauty. Normal force D. Social Services . There are plenty of great AP Physics 1 practice exams to choose from. The JSSC PGT Teacher Physics free online test series is accessible in This item: Princeton Review AP Physics 1 Prep, 2022: Practice Tests + Complete Content Review + Strategies & Techniques (2022) (College School Entrance Exam 2022 For Class 11 Arihant Experts 2022-03-05 1. ap central . What is the displacement of the jogger? 4. Show your work for each part in the space provided after that part. Kinetic friction B. When using the same reference height, it doesn’t fall to the “0” line, so there should be some gravitational potential energy. . marisolkim. The following chapters in the online OpenStax textbook cover the exam content: Chapters 1-4, Chapter 5 (just section 5-1 . AP Physics 1 Practice Test 6 | Answer Key | Answer Explanations. Unit 4 ap environmental science practice exam answers. School Entrance Exam 2022 For Class 11 Arihant Experts 2022-03-05 1. Section 1 Type: Multiple Choice Time: 90 minutes Number of Questions: 50 Score: 50% of the total score Section 2 Type: Free Response Time: 90 Minutes Number of Questions: 5 School Entrance Exam 2022 For Class 11 Arihant Experts 2022-03-05 1. 1. Hewitt 1992 Sterling Test Prep AP Physics 1 Practice Questions: High Yield AP Physics 1 Practice Questions with Detailed Explanations give you MCQs, assignments and an exam preparation kit. Instructions for the this test are given below, containing the detail about . 
Engineering Physics Multiple Choice Questions and Answers PDF download, a book to practice quiz questions and answers on chapters: Alternating fields GCSE 9-1 Physics Exam Practice Workbook, with Practice Test Paper (Letts GCSE 9-1 Revision Success) Letts GCSE 2019-05-16 Exam Board: Edexcel, AQA, and OCR Gateway Level: GCSE Grade 9-1 Subject: Physics First Teaching: September 2016, First Exams: June 2018 SSC GK GENERAL AWARENESS SSC MULTIPLE CHOICE QUESTIONS 1996-m3-scoring-guideline-ap-physics 1/4 Downloaded from edocs. For the Features of the Toppersexam JSSC PGTTCE Physics Online Test Series 2022. Reflections on the 2015 Exam. 6. ap physics practice test answers Term 1 / 43 An amusement park ride consists of a large wheel of radius R that rotates. C. Over 20 multiple-choice questions on circular motion and gravitation which appear on the AP Physics 1 exam are provided with detailed explanations. So, this choice is wrong. QQT and PASA for AP Physics Explained. Total Playlist Time: 269 Minutes. Explore how things move with kinematic equations describing motion in one or two dimensions. getrag 6dct450 free search pdf doc live. Engineering Physics Multiple Choice Questions and Answers PDF download, a book to practice quiz questions and answers on chapters: Alternating fields AP Economics: For Micro and Macro we have links to several great AP practice exams including some great textbook chapter tests. practice ap physics exams. If the force of gravity between the Moon and the Earth were to stop, which statement best describes the resulting motion of the moon? (A) It would continue rotating on its axis, and it would revolve around the Earth as usual. getrag AP Microeconomics, Macroeconomics, Calculus AB, Calculus BC, Statistics, Language and Composition, Literature and Composition, United States History, United States Government and Politics, Physics I, Physics C: Mechanics, Physics C: Electricity and Magnetism, Spanish Language and Culture . 99 Details Save: $6. Features of the Toppersexam JSSC PGTTCE Physics Online Test Series 2022. Designed by the teachers at SAVE MY EXAMS for the Edexcel IGCSE Physics syllabus. Select the one that is best in each case and then ®ll in the corresponding circle on the answer sheet. Im a little stuck Short Free Response Side View Figure 1 A sphere of mass M is attached to one end of a rigid stick of negligible mass. * Each practice exam includes: * 75 multiple-choice questions * Free-response questions in 2 parts 11 English Core, Physics, Chemistry & Biology Exams 2022-2023, getting familiar with the areas that need your focus and the areas which are your strength becomes easier. Section II: Free Response. A runner runs halfway around a circular path of radius 2. getrag 6dct450 mps6 alltranz co nz. bjerknes. Download free-response questions from past exams along with scoring guidelines, sample responses from exam takers, and scoring distributions. Section I: Multiple Choice. Question. 68. If you are using assistive technology and need help accessing these PDFs in another format, contact Services for Students with Disabilities at 212-713-8333 or by email at [email protected] . Your smile can make a score of 1 feel like a 5. Structured Questions. C: the ball is not acted upon by any force. Multiple Choice Questions. B: the net force acting on the ball is zero. Start your test prep right. IIT JAM solved papers and Practice sets are the preparatory guides for Physics, Chemistry, Biotechnology and Mathematics 2. 
5\,{\rm m}$ at a constant (A) Block 1 (B) Block 2 (C) They had equal power provided. Indicate all of your answers to the multiple-choice questions on the answer sheet. Find the individual forces being applied to all objects in a system and calculate work for each AP Physics 1 2022 Exam Questions & Answers Physics- The section was moderate in difficulty level. You’ll be asked to analyze data and create and explain various experiments. This PDF practice test includes 60 questions along with an answer key. Get your test prep started with this free AP Macroeconomics practice exam from the College Board. The sphere is pulled back until the string is horizontal and then released from rest. Conceptual Physics Paul G. 1 | Position, Velocity, and Acceleration Pick one of our AP Physics 1 practice tests now and begin! To pass the AP Physics 1 Exam, you’ll need to understand a variety of scientific and mathematical principles. Download free-response questions from past exams along with scoring guidelines, sample Questions 1, 4, and 5 are short free-response questions that require about 13 minutes each to answer and are worth 7 points each. Engineering Physics MCQ PDF book helps to practice test questions from exam prep notes. Instructions Section I of this examination contains 50 multiple-choice questions. ‘Till we meet again, my dear. The graphs of speed v versus time t for both cars are shown above. cainer. What force keeps the car, when turning on the curve, from skidding? A. cob. EES-150 Review for Exam 1; Chapter 2 notes - Summary The Real World: an Introduction to Sociology; Maternal Newborn Scenarios; Chapter One Outline - Summary Campbell Biology Concepts and Connections; Mid term HIS 104 - Exam Questions and notes; IS2080 - Chapter 10 Practice; Chapter 1 Notes; Chapter 1 - BANA 2081 - Lecture notes 1,2 quantum-numbers-practice-problems-with-answers 1/18 Downloaded from desk. Medium. These AP Macro multiple choice questions are great for test prep. Rock X is released from rest at the top of a cliff that is on Earth. 32, 2016 was 2. Students can also read AP 9th Class Social Important GED Social Studies Practice Test 2022 Question Answers (Free Printable PDF) Download the GED Exam Social Studies review test prep worksheet or participate in free quiz with an explanation. As you can see, this component varies with time. The magnitude of the gravitational force that they exert on each other is F 1 More information. There are more tests are also available related to the Physics Subject. AP Physics Instructions Section I of this examination contains 50 multiple-choice questions. automata Volvo S80 2 (2006->)[124] #2 1589089. -Measured in meters per second squared (m/s^2) average a = Δv/t. 33, 2017 was 2. You, my AP Physics 1 Exam Free-Response Questions and Scoring Information Archive. Physics Multiple attempts of Physics mock test will help you revise the entire syllabus. Practice Test. AP Physics: Unit 3 Progress Check: MCQ Part A. 40, 2018 was 2. The AP Physics 1 Exam consists of two sections: a multiple-choice section and a free-response section. Centripetal force Answer: This one is tricky! Download free-response questions from past exams along with scoring guidelines, sample responses from exam takers, and scoring distributions. 145 terms. Multiple choice questions Questions 1, 4, and 5 are short free-response questions that require about 13 minutes each to answer and are worth 7 points each. IIT JAM Physics Solved Papers and Practice sets 2022 Atique Hasan 2021-05-12 1. 
What percentage of the object's volume is above the surface of the water? A. 70%. Engineering Physics Multiple Choice Questions and Answers PDF download, a book to practice quiz questions and answers on chapters: Alternating fields The current version of AP® Physics 1 has only been offered since the 2014-2015 school year. B) 30 kg/m. This is just one of the solutions for you to be successful. (Round your answer to a whole number) 57. The JSSC PGT Teacher Physics free online test series is accessible in Make sure you’re studying with the most up-to-date prep materials! Look for the newest edition of this title, The Princeton Review AP Physics 1 Prep, 2023 (ISBN: 9780593450840, on-sale August 2022). Mar 2021 - Apr 2022 1 year 2 months. Mode of exam- Online; Duration of Exam- 3 Getrag mps6. The current version of AP® Physics 1 has only been offered since the 2014-2015 school year. Which of the following is true at time t = 20 seconds? Car Y is behind car X. You, my dear, aren’t even on the AP scale. (a) the vertical component of velocity of a projectile is given by v_y=v_0\sin\theta-gt vy = v0 sinθ −gt. physics entrance exam previous year papers, as one of the most keen sellers here will no question be in the midst of the best options to review. Covers a lot of . » Best AP Physics 1 Books. 50 multiple choice questions (1 hour, 30 minutes), 50% of exam score. Slowing Down. Oswaal NCERT Problems - Solutions (Textbook + Exemplar) Class 7 Mathematics Book (For 2022 Exam) Oswaal Editorial Board 2021-06-16 • Chapter wise & Topic wise presentation for ease of learning • Quick . 60%. Electricity . AP Physics 1 -Algebra-Based- MCQ - All Topics. AP Physics 1 FRQ Solutions. edu on November 18, 2022 by guest WebNov 10, 2022The exam pattern of IIT JAM 2023 has been given below: Test papers-Mathematical Statistics, Biotechnology, Geology, Chemistry, Economics, Mathematics, Physics. 5\,{\rm m}$at a constant A) 20 kg/m. NEET Mock Test 2022 & Chapter wise Online Practice Test Type of Questions: Multiple Choice Questions: Number of Questions: 200; Candidatess have to attempt 180 questions. A 5 is the highest score you can receive, but what I have received is a 10. AP AP Physics 1: Algebra-Based Sample Exam Questions Sample Multiple-Choice Questions RR 1. Multiple choice questions AP Physics 1: Algebra-Based Sample Exam Questions Sample Multiple-Choice Questions RR 1. ezbubu files wordpress com. Net force time distance 3. 51 and 2. 36, 2019 was 2. GCSE Revision Notes IGCSE Revision Notes A Level Revision Notes Biology Chemistry AP Physics 1 2015 Free Response Solutions. If your teacher uses AP classroom, you could also try the progress checks at the end of each unit (if your teacher assigns them), or a practice test on there as well (if your teacher assigns it to you). CALCULATORS MAY BE USED ON BOTH SECTIONS OF THE AP PHYSICS 1 EXAM. Total Marks: 720 Marks: Questions in Each Section: Physics - 50, Chemistry - 50, Botany - 50, Zoology - 50: Marking Scheme +4 for Each Correct Answer and-1 for Each Incorrect . These online tests include hundreds of free practice questions along with detailed explanations. 4 Moments. That means it should take you around 15 minutes to complete 8 questions. This website has 11 AP Physics 1 multiple choice quizzes. Here's what you need to know about them. 
GCSE 9-1 Physics Exam Practice Workbook, with Practice Test Paper (Letts GCSE 9-1 Revision Success) Letts GCSE 2019-05-16 Exam Board: Edexcel, AQA, and OCR Gateway Level: GCSE Grade 9-1 Subject: Physics First Teaching: September 2016, First Exams: June 2018 SSC GK GENERAL AWARENESS SSC MULTIPLE CHOICE QUESTIONS Both cars then travel on two parallel lanes of the same straight road. 11 hours ago · Dear Students, you can free download social studies worksheet for grade 1, 2020. A car is moving around a curve on an interstate highway at 55 mph. Here is a link to lots of practice multiple choice created by a teacher, Michael Friedman (not by me). A performance report will be given to the applicants following the submission of each test. Electromagnetism, Modern physics, Optics topics dominated the section. no on November 18, 2022 by . These scores are very consistent but may increase as students and teachers become more accustomed to 1996-m3-scoring-guideline-ap-physics 1/4 Downloaded from edocs. Solution: The best approach to answer these kinds of kinematics questions in the AP physics exam is to write down the projectile motion formulas . AP Physics 2 Practice Test | Answer Key | Answer Explanations Free-Response Questions. pl If you are search for ap bio unit 2 progress check frq answers, simply check out our info below : Unit 2 ap bio frq unit 2 ap bio frq ap biology practice enzyme frq answers / ap gov unit 2 practice test / zimsec past exam papers a level pdf / introduction to sociology 2eCh. You will receive incredibly detailed scoring results at the end of your AP Physics 1 practice test to help you identify your strengths and weaknesses. XI Physics Examination. com: Princeton Review AP Physics 1 Prep, 2022: Practice Tests + Complete Content Review + Strategies & Techniques (2022) (College Test Preparation): 9780525570707: The Princeton Review: Books Books › Teen & Young Adult › Education & Reference Buy new:$13. 65 in 2020. Short Answer for AP Physics Explained. Two solid spheres of radius R made of the same type of steel are placed in contact, as shown in the figures above. AP® Physics 1 | Practice Exam #1 Suggested Time Limit: 180 minutes This exam contains 2 sections: Multiple Choice and Free Response, both of which are allotted 90 min and count for 50% of your score. Questions 2 and 3 are long free-response questions that require about 25 minutes each to answer and are worth 12 points each. » Download AP Physics 1 Practice Tests. AP Physics 1 Free Response Questions The free response section consists of five multi-part questions, which require you to write out your solutions, showing your work. 13 List Price: $19. Has some good practice questions for quick review. Fill in only the ovals for numbers 1 through 50 on your answer sheet. Test 01 - Constant Velocity Test 02 - Constant Acceleration Test 03 - Vectors Test 04 - Projectiles Test 05 - Forces Test 06 - Circular Circular Motion Equations Test 07 - Energy Test 08 - Momentum Test 09 - Rotation Test 10 - Waves Test 11 - Circuits Solutions Test 01 - Constant Velocity Test 02 - Constant Acceleration Test 03 - Vectors 1. Includes multiple choice and FRQ. D. 
GCSE 9-1 Physics Exam Practice Workbook, with Practice Test Paper (Letts GCSE 9-1 Revision Success) Letts GCSE 2019-05-16 Exam Board: Edexcel, AQA, and OCR Gateway Level: GCSE Grade 9-1 Subject: Physics First Teaching: September 2016, First Exams: June 2018 SSC GK GENERAL AWARENESS SSC MULTIPLE CHOICE QUESTIONS AP Physics 1 Practice Questions (All units) Term 1 / 122 The average acceleration is the ratio of which of the following quantities? Click the card to flip 👆 Definition 1 / 122 ∇v:∇t Click the card to flip 👆 Flashcards Learn Test Match Created by alejandro_ad Terms in 1) A ball rolls off the edge of the table. » Do AP Physics 1 Practice Tests. acs-inorganic-chemistry-exam-practice-questions 4/6 Downloaded from cobi. 22 terms. Right-click on the file below and click “Save As”. The three-hour test requires solid problem solving skills. Practice Online AP Physics 1: Unit 1: Kinematics- Position, Velocity, and Acceleration-Exam Style questions with Answer- MCQ . Candidates can give Tests to analyze their preparation. A: the ball is not acted upon by a force in the horizontal direction. If you are using assistive technology and need help accessing these PDFs in another format, contact Services for Students with Disabilities at 212-713-8333 or by email at ssd@info . Today, the membership association is made up of Pick one of our AP Physics 1 practice tests now and begin! To pass the AP Physics 1 Exam, you’ll need to understand a variety of scientific and mathematical principles. Expertauto1 7458. 34 m. He has also provided the answers and explanations. Don't know how to answer these. These all tests are based on the different topics of Physics. AP English: Our AP English resources I am continuously uploading AP Physics 2 Tests from various resources available on the internet, so that the AP Exam takers will focus only on exam preparation. The magnitude of the gravitational force that they exert on each other is F 1 View 2021 Physics 1 Mock Exam MCQ (003). “ The Organic Chemistry Tutor ” - Despite the confusing name, this youtuber has great physics videos showing a number of physics concepts and problem solving approaches. The order of tests will be the same as below HOWEVER, some topics might be condensed or combined with other topics. 13. The JSSC PGT Teacher Physics free online test series is accessible in FREE Physics revision notes on Distance-Time Graphs. pdf from PHYSICS 1 SC421 at Ridge Point High School. 5 free-response questions (1 AP Physics 1 2022 Exam Questions & Answers. Ella_Maulden. Problem (1): A motorcycle weighing$200\,{\rm kg}$turning around an unbanked circular track of radius$12. We also feature prior year free response questions and some videos with free response tips. We also have a large assortment of notes, cram packets and exam review videos. B. uib. The AP Physics 1: Algebra-Based Exam will test your understanding of the scientific concepts covered in the course units, as well as your ability to use algebra when solving problems related to Newtonian mechanics, energy, and more. pdf. The figure above shows two positively charged particles. All My Solutions to the AP Physics 1 Free Response Questions. and Acceleration-Exam Style questions with Answer- MCQ. These reports show us that the mean score in 2015 was 2. AP Physics 1: Algebra-Based Sample Exam Questions Sample Multiple-Choice Questions RR 1. Directions: Each of the questions or incomplete statements below is followed by four suggested answers or completions. 
Section IA: Single-select 45 questions Section II AP PHYSICS 1 SECTION II Directions: Questions 1–5 here are as follows: one experimental design question (worth 12 points), one quantitative/qualitative translation Acceleration (a) -Rate of change of velocity w/ respect to time. Unlike the multiple-choice section, which is scored by a computer, the free-response section is graded by high school and college teachers. CALCULATORS MAY BE USED IN THIS PART OF THE EXAMINATION. Questions 2 and 3 are long free-response AP Physics 1 2022 Exam Questions (pdf) Corrections for Question 1: part (c): Because there’s a loss of energy, compared to Diagram A, the block in Diagram B does not fall as far. Publisher's Note: Products purchased from third-party sellers are not guaranteed by the publisher for quality or authenticity, and may not Like with all AP exams, the AP Physics 1 exam has two sections: the multiple choice and the free-response section. edu on November 18, 2022 by guest 1996 M3 Scoring Guideline Ap Physics Yeah, reviewing a book 1996 m3 scoring guideline ap physics could go to your near associates listings. Amazon. All forces questions on the AP Physics 1 exams, AP Physics 1 2022 Exam Questions (pdf) Corrections for Question 1: part (c): Because there’s a loss of energy, compared to Diagram A, the block in Diagram B does not fall as AP Physics 1: Algebra-Based Past Exam Questions Free-Response Questions Download free-response questions from past exams along with scoring guidelines, sample AP Physics 1 Practice Test 5 | Answer Key | Answer Explanations. Final kinetic energy minus initial kinetic energy 2. Static friction C. There are hundreds of questions along with an answers page for each unit that provides the solution. Take one of our many AP Physics 1 practice tests for a run-through of commonly asked questions. Practice with the whole set or by topic. From time (Round your answer to a whole number) 57. 1996-m3-scoring-guideline-ap-physics 1/4 Downloaded from edocs. Download file or read online AP past exam paper 2015 AP Physics 1 Exam MCQ Multiple Choice Questions with Answers and FRQ Free Response Questions with Scoring Guidelines - Collegeboard Advanced Placement. A simple pendulum consists of a sphere tied to the end of negligible mass. 2. You can use a four-function, scientific, or graphing calculator throughout the exam, and you will be provided . AP Physics 1 Exam Multiple-Choice Questions NOTE: To simplify calculations, you may use g = 10 m/s2 in all problems. As understood, skill does not recommend that . 55 questions Not started Multiple Choice Suggested Time Limit: 90 minutes This section of the exam contains 50 multiple choice questions. C) 40 kg/m. 30%. ap physics 1 2022 practice exam 1 mcq answers
|
2023-02-08 11:04:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27587082982063293, "perplexity": 2280.9306914867702}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500758.20/warc/CC-MAIN-20230208092053-20230208122053-00129.warc.gz"}
|
http://motls.blogspot.com/2008/02/problems-with-independence-of-kosovo.html?m=1
|
## Sunday, February 17, 2008
### Problems with the independence of Kosovo
Kosovo has unilaterally declared independence from Serbia. Some people outside Kosovo celebrate the decision. I think that this reaction is irresponsible.
A millennium ago, Kosovo was a part of the Serbian state. Slavs used to live there. Of course, the Slavs had moved to their favorite regions just a few centuries earlier, and there was a lot of traffic in Europe 1000-1500 years ago. But there exist both historical and modern reasons to consider the territory to be a part of the Serbian domain of influence. Some people call it the cultural heartland of Serbia.
The Patriarchate of Peć in Kosovo, where Serbian Orthodox Patriarchs are officially enthroned. Should this 13th century heritage site belong to the Albanians, too? How exactly did it happen? And is it OK to point out that Kosovo authorities have unfortunately begun to falsify the history?
Later, there was also a lot of influence from the Turks, Islam, and Albanians, who currently represent over 90% of the population (2+ million) of Kosovo. Places with this kind of complicated history are always sources of tension and - in some cases - wars. I believe that the best working strategy in similar cases is to try to preserve the status quo as much as possible and to convince both (or all) sides that such a new beginning is acceptable. Compensations shouldn't be about complete control over a territory.
The question of the Kosovo independence splits Europe in a very serious way. Most importantly, Serbia vigorously opposes it. However, Bulgaria, Cyprus, Greece, Romania, Russia, Slovakia, and Spain disagree with it, too. That's a rather impressive collection of countries in the region, including a major nuclear power, that no responsible politician should try to ignore. Add Bosnia and Herzegovina, China, Georgia, Indonesia, Sri Lanka, South Africa, Vietnam, and many others that are not too happy about the independence; Bulgaria, Egypt, Malta, the Netherlands, Portugal, Sweden, and others want to be cautious or leave the decisions to the U.N. If someone thinks that it will be straightforward for the world to accept this new country, his mistake could have grave consequences. This question splits the world and the region into two comparably strong groups.
Why does the EU so self-confidently ignore the opinion of its 6 (out of 27) members? Why does the U.S. follow?
I don't believe that one can honestly say that Serbia is bad and Albania is good. At the same time, I feel that some people approach these disputes in similarly naive and dangerous ways. I feel that this is the simple reason why both the EU and the U.S. plan to recognize the new country. Why do they dislike Serbia so much?
Well, because Serbia has recently had some aggressive communist leaders; if this is your thinking, get rid of these silly stereotypes: Serbia currently has a pro-West president. A related anti-Serbian sentiment arises because the U.S. and other countries recently fought against this nation. The problems in Yugoslavia have been temporarily solved for the price of treating the Albanians (and a few others) as the good guys and the Serbians as the bad guys. That's the main reason why the Albanians love America so much today. Does it mean that this is how the two nations should be viewed forever, even in cases when it begins to look obviously unfair?
There is one more general reason why some people support the independence of Kosovo: because these folks systematically enjoy supporting an underdog, and the Albanians in Greater Serbia may count as an example.
It is damn dangerous to support underdogs who create serious enough problems for more powerful - and at least partly justifiable - groups and nations in the region.
The independence of Kosovo is likely to lead to a new wave of escalating tension in the region. This development certainly can't be compared to the Velvet Divorce - the peaceful dissolution of Czechoslovakia. While it is true that most Czechs used to be afraid of this split, pretty much everyone agreed that it was a legitimate choice (and the right of the Slovak nation) when the politicians were actually negotiating about it. At the end of 1992, both sides agreed with the plan and they had a framework that treated both nations as equal. There existed no significant territorial dispute. The political representations did their best to proceed in a perfectionist and smooth fashion. That's why it worked and why the divorce has actually improved the Czecho-Slovak relations in the end.
The dissolution of the Soviet Union was less peaceful but it was still understood by a significant part of the Russian people that most of those nations simply had the right to be independent (again). The territories have never quite belonged to Russia. In the case of the Baltic and a few other countries, this statement is obvious. In the case of e.g. Ukraine or Belarus, it is less obvious but the consequences of the independence are less significant because of the proximity of those Eastern Slavic nations, which is guaranteed to last.
The Serbia-Montenegro split in 2006 can be viewed as another example of a Velvet Divorce. At the end, everyone agreed with it. Moreover, both regions are controlled by Southern Slavic nations.
However, these observations don't work in the case of Serbia and Kosovo. From a certain perspective, its territory can be legitimately viewed as a historical part of Serbia and the difference between the Albanians and Serbians is simply more indisputable and more explosive than the difference between Russians and Ukrainians (or even Czechs and Slovaks).
A hypothetical new Kosovo state can't be "neutral" in any way. Because this region uses the euro as its currency, it can be a seed of a war inside the de facto eurozone - something that we normally think of as a region of peace and stability.
I feel that a more acceptable solution would be to divide the territory in an arbitrary way (see the map above for an example) into a Serbian territory and an Albanian territory and merge these parts with Serbia and Albania, respectively, giving them (temporarily?) the status of autonomous regions within Serbia and Albania, and attempting to create a plan to peacefully transfer most of the Albanians from the Serbian portion within a few years and to convince both sides that it is a fair compromise.
It is hard to believe that Serbia will accept losing the territory completely. It's just too big a change. And I think it is absolutely silly to imagine that an independent Kosovo doesn't mean that Greater Albania increases in size. The declaration doesn't really mean anything else. A future unification of Albania and the independent Kosovo (plus a portion of Macedonia that may separate later) would only be a matter of time and formalities. This dispute is about a battle of Serbian and Albanian blood to control this territory and it would be foolish to pretend otherwise.
Milošević's ethnic cleansing in Bosnia more than a decade ago (8,000 dead in the Srebrenica massacre, and they were Slavs) was horrible but it also shows how strongly the Serbians think that Bosnia was a part of their broader realm. The situation with Kosovo is analogous. And sorry to say, they have a point. Human lives are precious but I feel that the unhappy and undeserved fate of 8,000 lives just can't change the character of similar territories. In the past, millions of lives have been paid for comparable territories in wars.
Finally, an independent Kosovo would become a dangerous example for a large number of similar regions in Europe with similar separatist tendencies. For example, Spain opposes Kosovo's independence because of fears of a future Basque state. If Russia had to give in over Kosovo, it would probably start to support the independence (=a step towards incorporation) of "its" regions in Georgia (South Ossetia and Abkhazia) and Moldova (the Trans-Dniester area).
It is kind of paradoxical that certain people who love ever closer European unification also love the separation of the European countries into ever smaller pieces. Yugoslavia has also been a smaller, local counterpart of the European Union. In fact, it was a reasonably working federal state and Josip Tito wasn't even Serbian: he was 50% Slovenian, 50% Croatian (that's probably why he was creating autonomous regions in Serbia but not in other subcountries of Yugoslavia).
Unless additional reasons are carefully explained, it is inconsistent to support a tight unification in one case and a complete separation in the other case. I think that some people want to social-engineer difficult things in Europe and they build on naive, black-and-white descriptions of various complex controversies while they misunderstand the values and sentiments that really matter in these conflicts.
What do you think?
1. It's interesting to see a Czech's opinion on this... As a Romanian, I've also blogged about this: http://corinamurafa.wordpress.com/2008/02/17/thoughts-on-kosovo/
The solution you're proposing is creative, yet I believe unfeasible. Kosovars will never give up on Pristina, and the situation has - unfortunately - become a zero sum game.
2. There is a strong prejudice against Serbs in the West. Western liberals are always looking for "conservatives" to dishonestly demonize (and also for bad guys they can dishonestly label as "conservative"). What I found most remarkable about this situation is how stealthily the ruling elites of Western countries like the USA were able to reverse their position on Kosovo independence without anyone publicly noticing. (I knew all along that they were insincere about Kosovo staying part of Serbia, that they were just saying that in order to justify the war; but nobody seems to remember this now, I have not seen a single story in the Western mainstream media pointing out that a reversal has occurred.)
3. Lubos ... You marshal a pretty good list of arguments in favour of your position. However, they all crash like a hadron collider against this one statistic: 90%.
Where on the earth can the will of 10% of the actual living people be more legitimate than the 90% regardless of what the ancient history of the place is ?
Think of Northern Ireland, where the Protestants barely had a 51% majority over a Catholic populace, and think of what troubles were stirred up for over 30 years because of some strange 400-year-old politics and the pride of some ridiculous Orange men.
And this Kosovo situation is not even about one Christian denomination warring against another Christian denomination but Christians versus Muslims and vice versa which of course is much more volatile.
Given these facts, countries like Russia and other Slavic brothers would do well not to unnecessarily pour gasoline on any sparks that may be flying.
All sides (Muslim and Orthodox) need to grow up and enjoy eurozone prosperity and not throw it all away over some 1000-year-old nonsense.
4. It is kind of paradoxical that certain people who love ever closer European unification also love the separation of the European countries into ever smaller pieces.
Is it really paradoxical? I thought one of the schemes of the EU colleagues was the dilution of the strong powers by the proliferation of little ones, each with voting power far beyond its economic significance.
5. Albanians have historic rights to Kosovo. Albanians are descendants of the Illyrians. A tribe of Illyrians which lived in modern-day Kosovo was called "Dardania", which means "Land of Pears" in Albanian. They were conquered by the Romans; later on, when Western Rome split from Eastern Constantinople, the land of Dardania fell under Byzantine rule, and during this period many churches were built, e.g. Gracanica and the Patriarchate of Peja. When the barbarians (Slavs) raided the Byzantine Empire, the Emperor (Justinian), who was born close to Ulpiana (ancient Pristina), tried to defend the land, but soon the Byzantine Empire was crushed and modern Kosovo fell under the Serbian Kingdom, which spanned from Rascia to northern Greece. Many of the destroyed churches that had been built during the Byzantine Empire, e.g. Gracanica (evidence was found that it was actually built in the 7th century), were rebuilt and turned into Serbian Orthodox ones. Albanians in Kosovo during this period were Serbian Orthodox and had the same rights as the Serbs (although the law on Albanians differed in villages, they were equal in the cities). With the coming of the Ottoman Empire, many Christian Albanians in Kosovo converted to Islam due to pressure caused by taxes; the Albanians that remained Serbian Orthodox assimilated into Serbs.
6. Yeah yeah, we all know that albanian patriotism about them being descendants of the Illyrians, but does anyone know when that idea came to be? Obviously not. That idea was propagated by albanian communists in their small-time rule of albania. In that horrific time the commies needed something to fire up the people, and use the weakening of the ottoman empire to form an independent state that never actually existed. Although the albanians keep claiming that their language is related to the Illyrian, they should explore a bit more on that subject, because the albanian dialect and language is a satem-shaped language, and that has no connection with the Illyrians, since the satem-shaped dialect was in the Balto- Slavic states. The Illyrian languages are: Ardiaei, Delmatae, Pannonii, Autariates, Taulanti. And who gave you the idea that orthodox albanians assimilated to the Serbian people?!? The Christians in albania are located in the south of the country. The south of albania is bordered with Greece and partially Macedonia. And no, the albanians didn't convert to islam because of high taxes. The ottoman empire wanted a high tribute, but those who wouldn't give the tribute would be slaughtered. Why do you think "bosnians" converted to islam? Because of high taxes??? No, they did it to keep their heads on their shoulders.
7. @Joe Shipman
I am glad there is someone who actually researches something before believing everything the media says. In today's world the strongest state is the one with the most powerful media, not an army!
@Celal Braider
It's not regardless. Let me give you a comparison. The British and French came to North America, which was inhabited by Native Americans. They crusaded against them like against satan himself. Today the Native Americans, who were known as a partially peaceful people (pay attention to the word PARTIALLY), live in reservations. How dare you even compare the numbers in KosMet. The Serbs populated over 80 percent of KosMet until the NATO aggression, which was said to be a heroic act. Yeah right. Serbia was having a "Stand Alone" battle against: US, UK, France, Germany, Canada, Italy, Belgium, Netherlands, Denmark, Turkey, Norway, Portugal, Iceland, Greece, Luxembourg, Poland, Czech Republic, Hungary. Accomplices were: Romania, Bulgaria, Slovakia, Macedonia, albania, Slovenia, Bosnia & Herzegovina, and croatia.
8. Which brings us to the question: WHY is the US actually helping this country? The answer is simple, you just have to do a little research. The FORMER Serbian base Urosevac, now inhabited by the US (although they keep saying it's UNMIK and the Kosovo police), is one of the places with the most concentrated uranium in Europe, ironically, because Serbia was bombed with depleted uranium shells. Also, Kosovo, and actually Serbia, is located in a perfect strategic AND transit location, which the ones controlling the area would most certainly use.
9. The Philadelphia Inquirer journalists Rebeca Chamberlain and David E. Powell published the "Serbian Rape System", in which Serbian people would gather albanian and other women in town squares and rape them, while the terrified people were watching from the sides. The Serbs were so satanized in the western world that general Short, with the help of Wesley Clark, could bomb anything in Serbia. Today no one in the western world has the guts to tell the horrific tales of the Serbian people and what they suffered during that bombing. They were accidentally bombing civilian buildings because of "old maps", they said. Isn't the US the most powerful state in the world, with agents everywhere? Unfortunately, they are, and no one even WANTS to stand up to them, because they know they will end up even worse than Serbia. One interesting detail: after the capitulation of Germany in 1945, the first victims killed by the Germans were Serbians. Two Serbs in Prizren, or Peja as its shqip name goes, tried to run away from the NATO tanks that entered the city. They were shot with 14 shells from a T-72. Not one of them survived. The first victims of Germany after WWII were Serbs. Think about that.
In the end I'd like to comment on the picture the author made. A divide will not please either state. The albanians (shqip, as they like to call themselves) want the entire Kosovo because of their ambitions to spawn Greater albania, which existed during WWII and whose roots were given by fascist Italy and mussolini & hitler. Not only Kosovo, but also western Macedonia, which is overpopulated by them, and eastern/southeastern Montenegro, also with a lot of albanian population, and they can talk about it openly, because the US stands behind their back.
10. And one question for the author: You mentioned that 8000 bosnians died in the falling apart of Yugoslavia, but did you also mention the abnormal number of Serbian people slaughtered by the "domobran" army?!? No, you didn't. The Ustase, as they call themselves, and even today they use it as a patriotic name, killed, raped and pillaged all over Bosnia. Why? Because 45% of Bosnia were Serbs, and ARE today, another 45% were Bosnians, and 10% were Croats. They wanted to cleanse Croatia of Serbs & Bosnians and they did it. But it wasn't enough for them. They wanted to control both Bosnia and Serbia by assimilating Bosnians into catholics - well, those that are Muslims and Orthodox Christians. But they were stopped, not fully, but stopped, by the europeans who waited for Serbia or Bosnia to counter-attack and then proposed a peace treaty. So why don't you ask about the Serbs who were slaughtered there? Let me remind you that the Croat independent state came into being when the nazis conquered Serbia, and it was made so that people, primarily Jews, THEN Serbians, got sent there to be murdered in camps which, as confirmed by nazi officers, involved worse torment than Auschwitz itself. Think about the foundation of that state. It is founded on killing innocent souls. God watches all, my friend.
|
2017-06-28 10:39:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2906910181045532, "perplexity": 4060.8925154172634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323604.1/warc/CC-MAIN-20170628101910-20170628121910-00601.warc.gz"}
|
http://altaaqaglobal.com/e6fpsq/uses-of-parallelogram-law-of-forces-04d606
|
# uses of parallelogram law of forces
If two forces acting simultaneously at a point are represented in magnitude and direction by the two adjacent sides of a parallelogram drawn from that point, then their resultant is represented in magnitude and direction by the diagonal of the parallelogram passing through the same point.
Parallelogram law of vectors explained: let two vectors P and Q act simultaneously on a particle O at an angle. If they are represented in magnitude and direction by the adjacent sides OA and OB of a parallelogram OACB drawn from the point O, then the diagonal OC through O represents the resultant R in magnitude and direction.
Parallelogram method: to add any two vectors we can use either the triangle law or the parallelogram law of vector addition. If both vectors have the same origin, a line parallel to each vector is drawn from the tip of the other, and the diagonal of the resulting parallelogram is the sum. This construction has the same result as moving F2 so that its tail coincides with the head of F1 and taking the net force as the vector joining the tail of F1 to the head of F2, and the procedure can be repeated to add F3 to the resultant F1 + F2, and so forth. When more than two forces are involved, the geometry is no longer parallelogrammatic, but the same principles apply.
The law is most easily understood in the two-dimensional model, where forces are modelled as Euclidean vectors, i.e. members of R^2. If a particle is given two velocities simultaneously, then, accounting for both motions, it traces the diagonal AC of the parallelogram built on them; the two velocities are equivalent to a single velocity along AC. By Newton's second law, this vector is also a measure of the force which would produce that velocity, so the two forces are equivalent to a single force. Several classical proofs of the law were developed (chiefly Duchayla's and Poisson's), and it is discussed in d'Alembert's Traité de Dynamique (second edition, 1758) and in L. Lagrange, Théorie des fonctions analytiques (Paris, 1797), part 3; in many treatments, however, the parallelogram of forces is simply accepted as an empirical fact, non-reducible to Newton's first principles.
Uses of the parallelogram law of forces: it is used to find the resultant of two vector quantities, like force and velocity: for example the velocities involved when a ball is kicked towards a steep hill, or the launching of a stunt person from a cannon in a circus. Its limitation is that it cannot be used to determine the resultant of scalar quantities, and the two vectors should act at a point with a specified angle between them.
In geometry, the related parallelogram law states that the sum of the squares of the lengths of the four sides of a parallelogram (AB, BC, CD, DA) equals the sum of the squares of the lengths of the two diagonals; a parallelogram necessarily has opposite sides equal, i.e. if ABCD is a parallelogram then AB = DC and AD = BC.
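As a quick numerical illustration (not part of the original text; the force values and angle below are hypothetical), the magnitude of the diagonal follows from the law of cosines, R = sqrt(F1^2 + F2^2 + 2 F1 F2 cos θ), and its direction from the triangle it forms with F1:
```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hypothetical example: forces of 3 N and 4 N acting at 60 degrees. */
    const double pi = acos(-1.0);
    double f1 = 3.0, f2 = 4.0;
    double theta = 60.0 * pi / 180.0;            /* angle between the forces */

    /* Diagonal of the parallelogram = magnitude of the resultant. */
    double r = sqrt(f1 * f1 + f2 * f2 + 2.0 * f1 * f2 * cos(theta));

    /* Angle of the resultant measured from the direction of F1. */
    double alpha = atan2(f2 * sin(theta), f1 + f2 * cos(theta));

    printf("Resultant: %.3f N at %.2f degrees from F1\n", r, alpha * 180.0 / pi);
    return 0;
}
```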
|
2021-07-30 19:40:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232547283172607, "perplexity": 919.3100284638313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153980.55/warc/CC-MAIN-20210730185206-20210730215206-00126.warc.gz"}
|
https://scicomp.stackexchange.com/questions/36958/who-uses-finite-elements-with-higher-continuity
|
# Who uses finite elements with higher continuity?
Lagrange elements of any polynomial degree describe continuous, piecewise polynomial functions. Typically, those functions are not differentiable across element boundaries.
Mixed finite element methods use vector fields of even less continuity, such as normal continuity. With some great oversimplification, one might argue that discontinuous Galerkin methods, hybridized methods, etc., completely toss out continuity concerns.
What are the major applications for finite elements with higher smoothness? I am thinking here of standard textbook elements such as the Argyris, Hermite, or Morley elements. While I can see that one might be tempted to use them for higher-order partial differential equations, they do not seem to dominate the literature on those topics.
Are finite elements with $$C^1$$ continuity or higher widely adopted? What are their main applications in real life?
This paper by Kirby and Mitchell describes the implementation of $$C^1$$ elements in the Firedrake package*. One of the main use cases is biharmonic problems, which show up in the elastic deformation of thin plates, or other higher-order PDEs like Cahn-Hilliard. My impression from reading the paper and talking to Rob Kirby is that $$C^1$$ elements are better but used relatively less often because they're difficult to implement. For example, one of the main technical innovations of that paper was around the scheme used to map the physical triangle to the reference triangle. For Lagrange elements this is easy -- you just use an affine transformation. For $$H(\textrm{div})$$-conforming elements you need to use a Piola transformation, and for Argyris and related elements the transformations are even more unusual. One of the findings of that paper (see figure 15) was that $$C^1$$ elements produce a more favorable sparsity pattern that takes less time to factor than non-conforming methods based on polynomials of the same order. So basically you get the same accuracy at less cost compared to using a penalty or DG formulation with polynomials of the same order.
*Disclaimer, I contribute to Firedrake and develop an application based on it for my job.
$$C^1$$ elements are mostly a historic relic. In the finite element method, the traditional view is that the best methods are "conforming", i.e., methods where the finite element space $$V_h$$ is a subspace of the space $$V$$ in which the solution lies. For second-order elliptic equations, $$V=H^1$$ and functions that are continuous and piecewise polynomial (but not necessarily continuously differentiable) are a subspace of $$V$$.
But that is not the case for fourth-order equations such as the biharmonic equation. There, $$V=H^2$$, which contains only functions that are continuously differentiable (i.e., $$C^1$$). So in that case, the usual Lagrange elements are not a subspace of $$V$$ and it is not clear how to implement the bilinear form with these elements. So people, going back to the 1960s, developed elements that are $$C^1$$ and for which consequently $$V_h \subset V$$. This works, but the elements are quite difficult to implement for non-conforming meshes and they just don't quite fit into the systematic view we have of elements today, the paper by Kirby and Mitchell mentioned in one of the other answers notwithstanding.
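To see concretely why the biharmonic problem asks for $$C^1$$ spaces (a standard textbook remark, not part of the original answer): multiplying $$\Delta^2 u = f$$ by a test function and integrating by parts twice gives the weak form
$$\int_\Omega \Delta u \, \Delta v \, \mathrm{d}x = \int_\Omega f\, v \, \mathrm{d}x \qquad \text{for all } v \in H^2_0(\Omega),$$
which contains second derivatives of both $$u$$ and $$v$$. A piecewise polynomial lies in $$H^2$$ only if its first derivatives are continuous across element faces, i.e. only if it is $$C^1$$; with plain $$C^0$$ Lagrange elements one can only evaluate the integral element by element, and consistency then has to be restored with extra face terms, which is exactly what the interior-penalty (C0IP) and DG approaches mentioned below supply.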
But starting in the 1990s, we learned how to use non-conforming elements more efficiently -- first in the form of discontinuous Galerkin methods for elliptic equations and then also how to use Lagrange elements for biharmonic equations. I would specifically refer you to the 2005 paper by Sue Brenner and Sung on the $$C^0$$ Interior Penalty ("C0IP") method for biharmonic problems that is also used in the step-47 tutorial program of deal.II and that shows how relatively easy it is to solve these kinds of problems with just the usual elements. (Disclaimer: I'm one of the authors of deal.II and of step-47 in particular.)
Now, it is true that the paper by Kirby and Mitchell shows that the $$C^1$$ elements have advantages regarding condition numbers and solver speeds. But at least in my opinion, I don't think this outweighs the very substantial pain of implementing them on unstructured meshes and meshes that potentially contain hanging nodes. I had a long discussion with Rob Kirby about that paper at some point and have to admit that he's one of my heroes for undertaking this kind of project -- he's the only person I know who had the gumption to implement $$C^1$$ elements in the last 20 years, and I think I know a substantial fraction of the people who implement finite elements :-)
Two libraries that I know of, besides Firedrake, that use $$C^1$$ elements are:
Another application where I found it was in Magnetohydrodynamics, in the following paper:
Jardin, S. C. (2004). A triangular finite element with first-derivative continuity applied to fusion MHD applications. Journal of Computational Physics, 200(1), 133-152.
But I could not make it work even for the Laplace equation.
|
2021-04-13 10:05:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 19, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6724677085876465, "perplexity": 451.6530013890512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00146.warc.gz"}
|
http://jmre.ijournals.cn/en/ch/reader/view_abstract.aspx?file_no=20090324&flag=1
|
The Equivalence of Two Convergent Sequence of Bounded Sequences in Normed Space
Received:March 13, 2007 Revised:October 30, 2007
Key Words: almost convergence quasi-almost convergence.
Fund Project:the National Natural Science Foundation of China (No.10871101); the Research Fund for the Doctoral Program of Higher Education (No.20060055010).
Author Name Affiliation WANG Rui Dong School of Mathematical Sciences, Nankai University, Tianjin 300071, China Department of Mathematics, Tianjin University of Technology, Tianjin 300191, China
Hits: 2967
Two kinds of convergent sequences on the real vector space ${\bf m}$ of all bounded sequences in a real normed space $X$ were discussed in this paper, and we prove that they are equivalent, which improved the results of [1].
|
2023-04-01 23:41:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36585354804992676, "perplexity": 513.2835847497693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950363.89/warc/CC-MAIN-20230401221921-20230402011921-00034.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/158814/if-someone-dropped-a-black-hole-into-the-sun-when-and-how-will-we-notice-it
|
# If someone dropped a black hole into the Sun, when and how will we notice it?
Suppose someone dropped a black hole into our lovely Sun a few million years ago. It was big enough from the start (far bigger than the minimum required) to eat matter faster than it radiates it away, and it kept growing and growing. At some time in the future, it would consume most of the star and the change would be obvious to most. But there should be some less noticeable changes before that, something that we could detect with all the various instruments, earthbound and in space, directed at the Sun.
So, that is my question: what would be the first thing we would notice? Would it be increase in x-rays from accretion? Would it be some change in size or temperature due to slightly disrupted fusion? Or maybe something else? To be honest, I have no idea how to even start tackling this question, so any pointers are welcome.
• Assume current-day measurement tech. It is acceptable to include currently planned missions/projects (or those on their way, like Solar Orbiter). Nothing beyond year 2050 though.
• All that matters is that we notice something is not normal with our Sun, not figuring out that it's the black hole inside it. Anything that makes us go "Huh, stars don't do that" is preferable to things that have reasonable explanations, even if rare.
• Bonus if you'll also calculate how much time we would have left till the Sun is gone, but I can calculate that on my own (thanks to HDE 226868).
• Notice that since the black hole was added long ago, any measurements of Sun's mass included it, and since accretion doesn't destroy mass, gravity (probably) won't change in any noticeable way.
• I do not care how we arrived to the current situation. All the question is about is: we detect something today because of a black hole growing inside the Sun. What would that something be?
• possible duplicate worldbuilding.stackexchange.com/q/74809/30492
– L.Dutch
Oct 20, 2019 at 4:30
• to avoid being locked as duplicate, define the black hole... the previous question and answer are only valid for that specific blackhole Oct 20, 2019 at 4:52
• @L.Dutch This is specifically about the case when the black hole is big enough to start accretion, as opposed to that question. The only answer to the linked question pretty much says "nothing happens" which is the opposite of what happens here. Oct 20, 2019 at 4:54
• @V.Sim I can't define it; it depends on the answers to this question. If we can't detect a 1% $M_s$ black hole, you can say it started there; if we can, then maybe 0.1%? 0.01%? As far as I can understand, I don't think the hole's starting point and past history matter much; all I'm asking about is its final few thousand? million? hundred million? years. Oct 20, 2019 at 5:00
• To answer this question we first need to know what the minimum size is that allows the BH to eat more mass than it radiates. There is a question on the physics stack and one of mine that deal with this problem, but they have no answer. It's not just that the BH needs to run into enough mass to keep going; that mass also has to overcome the pressure the Hawking radiation exerts to actually reach the BH. We don't know where that point is (or not on this site anyway). That said, the added weight would likely be detected first, assuming the BH is on the border of being able to suck in enough mass. Oct 20, 2019 at 6:57
When you said "big enough", you pointed to a question that mentions a black hole of about $$6 \times 10^{8}$$ kg. The answer marked as correct in that question says that matter will mostly be unable to interact with the black hole at all.
Someone asked the scientists at Cornell how long a black hole more than three orders of magnitude more massive than yours would take to consume the Earth. Christopher Springob, a PhD in the subject, provided the following answer:
(...) A black hole that weighs a billion tons would have an event horizon that's only about $$10^{-15}$$ meters. So it would be so small that it would really only eat particles that happened to run into it, which wouldn't happen very often. If you were to plant it in the center of the Earth, it would just sit there forever, never consuming enough matter for anyone to notice.
If instead of setting it in the Earth's core, you were to drop it from the surface of the Earth, it would sink down through the middle, pop out the other side, and slide back and forth through the Earth for all eternity. If you assume that the black hole would only consume atoms that it happens to run into, then I calculate that it would take about $$10^{28}$$ years for it to consume the entire Earth, far longer than the age of the Universe. This assumes that the black hole wouldn't lose any mass due to Hawking radiation. If you factor that in, it would probably never consume the whole Earth.
... Which is consistent with the answer to the first question you linked. And though the sun may be very dense at its core, it is not dense enough to change the situation. We will never notice this black hole. In fact there might be a lot of those in the sun right now and we would never know.
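As a rough sanity check on those figures (my own back-of-the-envelope sketch, not part of the quoted answer), the Schwarzschild radius $$r_s = 2GM/c^2$$ reproduces the quoted $$10^{-15}$$ m scale for a billion-tonne ($$10^{12}$$ kg) black hole:
```c
#include <stdio.h>

int main(void) {
    /* Schwarzschild radius r_s = 2 G M / c^2 for a few illustrative masses. */
    const double G = 6.674e-11;   /* m^3 kg^-1 s^-2 */
    const double c = 2.998e8;     /* m/s */

    double masses_kg[] = { 6.0e8,      /* mass from the linked question      */
                           1.0e12,     /* "a billion tons" from the quote    */
                           2.0e30 };   /* roughly one solar mass, for scale  */
    const char *labels[] = { "6e8 kg", "1e12 kg", "~1 solar mass" };

    for (int i = 0; i < 3; ++i) {
        double rs = 2.0 * G * masses_kg[i] / (c * c);
        printf("%-14s -> r_s = %.3e m\n", labels[i], rs);
    }
    return 0;
}
```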
• I thought it was obvious from the following words that I mean a black hole way bigger than the one in the linked question, as I specifically said "big enough to eat matter faster than radiating it away". But I clarified the question nonetheless. Oct 20, 2019 at 5:36
• At the mass mentioned in my answer it will eat matter faster than it radiates, but at an abysmally slow rate. "It would really only eat particles that happened to run into it, which wouldn't happen very often (...)" Oct 20, 2019 at 6:06
• Look, that's not what the question is about. I'm asking what we would see when the black hole is big enough for us to notice something. Saying "well, for some black holes 'when' is 'never'" doesn't answer my question. Oct 20, 2019 at 6:31
• @Alice if so then you need to specify your black hole's mass in another question. Oct 20, 2019 at 14:19
|
2022-05-17 15:09:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5191986560821533, "perplexity": 471.2109515108143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662517485.8/warc/CC-MAIN-20220517130706-20220517160706-00712.warc.gz"}
|
https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/vkGetPhysicalDeviceImageFormatProperties.html
|
## C Specification
To query additional capabilities specific to image types, call:
// Provided by VK_VERSION_1_0
VkResult vkGetPhysicalDeviceImageFormatProperties(
VkPhysicalDevice physicalDevice,
VkFormat format,
VkImageType type,
VkImageTiling tiling,
VkImageUsageFlags usage,
VkImageCreateFlags flags,
VkImageFormatProperties* pImageFormatProperties);
## Parameters
• physicalDevice is the physical device from which to query the image capabilities.
• format is a VkFormat value specifying the image format, corresponding to VkImageCreateInfo::format.
• type is a VkImageType value specifying the image type, corresponding to VkImageCreateInfo::imageType.
• tiling is a VkImageTiling value specifying the image tiling, corresponding to VkImageCreateInfo::tiling.
• usage is a bitmask of VkImageUsageFlagBits specifying the intended usage of the image, corresponding to VkImageCreateInfo::usage.
• flags is a bitmask of VkImageCreateFlagBits specifying additional parameters of the image, corresponding to VkImageCreateInfo::flags.
• pImageFormatProperties is a pointer to a VkImageFormatProperties structure in which capabilities are returned.
## Description
The format, type, tiling, usage, and flags parameters correspond to parameters that would be consumed by vkCreateImage (as members of VkImageCreateInfo).
If format is not a supported image format, or if the combination of format, type, tiling, usage, and flags is not supported for images, then vkGetPhysicalDeviceImageFormatProperties returns VK_ERROR_FORMAT_NOT_SUPPORTED.
The limitations on an image format that are reported by vkGetPhysicalDeviceImageFormatProperties have the following property: if usage1 and usage2 of type VkImageUsageFlags are such that the bits set in usage1 are a subset of the bits set in usage2, and flags1 and flags2 of type VkImageCreateFlags are such that the bits set in flags1 are a subset of the bits set in flags2, then the limitations for usage1 and flags1 must be no more strict than the limitations for usage2 and flags2, for all values of format, type, and tiling.
Valid Usage
Valid Usage (Implicit)
• VUID-vkGetPhysicalDeviceImageFormatProperties-physicalDevice-parameter
physicalDevice must be a valid VkPhysicalDevice handle
• VUID-vkGetPhysicalDeviceImageFormatProperties-format-parameter
format must be a valid VkFormat value
• VUID-vkGetPhysicalDeviceImageFormatProperties-type-parameter
type must be a valid VkImageType value
• VUID-vkGetPhysicalDeviceImageFormatProperties-tiling-parameter
tiling must be a valid VkImageTiling value
• VUID-vkGetPhysicalDeviceImageFormatProperties-usage-parameter
usage must be a valid combination of VkImageUsageFlagBits values
usage must not be 0
• VUID-vkGetPhysicalDeviceImageFormatProperties-flags-parameter
flags must be a valid combination of VkImageCreateFlagBits values
• VUID-vkGetPhysicalDeviceImageFormatProperties-pImageFormatProperties-parameter
pImageFormatProperties must be a valid pointer to a VkImageFormatProperties structure
Return Codes
On success, this command returns
• VK_SUCCESS
On failure, this command returns
• VK_ERROR_OUT_OF_HOST_MEMORY
• VK_ERROR_OUT_OF_DEVICE_MEMORY
• VK_ERROR_FORMAT_NOT_SUPPORTED
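A minimal usage sketch follows (this is not part of the specification text; device selection and error handling are simplified, and the particular format, type, tiling, and usage values are just illustrative choices):
```c
#include <vulkan/vulkan.h>
#include <stdio.h>

int main(void) {
    /* Create a bare instance and take the first physical device. */
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) return 1;

    uint32_t count = 1;
    VkPhysicalDevice phys = VK_NULL_HANDLE;
    vkEnumeratePhysicalDevices(instance, &count, &phys);  /* VK_INCOMPLETE is fine here */
    if (count == 0 || phys == VK_NULL_HANDLE) { vkDestroyInstance(instance, NULL); return 1; }

    /* Ask whether a 2D, optimally tiled, sampled + transfer-dst RGBA8 image is supported. */
    VkImageFormatProperties props;
    VkResult res = vkGetPhysicalDeviceImageFormatProperties(
        phys,
        VK_FORMAT_R8G8B8A8_UNORM,
        VK_IMAGE_TYPE_2D,
        VK_IMAGE_TILING_OPTIMAL,
        VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT,
        0,
        &props);

    if (res == VK_ERROR_FORMAT_NOT_SUPPORTED) {
        printf("format/usage combination not supported\n");
    } else if (res == VK_SUCCESS) {
        printf("maxExtent %ux%ux%u, maxMipLevels %u, maxArrayLayers %u\n",
               props.maxExtent.width, props.maxExtent.height, props.maxExtent.depth,
               props.maxMipLevels, props.maxArrayLayers);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```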
|
2020-12-04 21:13:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29324573278427124, "perplexity": 9846.31785408631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141743438.76/warc/CC-MAIN-20201204193220-20201204223220-00005.warc.gz"}
|
https://stacks.math.columbia.edu/tag/05P0
|
Flatness is the same for modules and sheaves.
Lemma 27.19.1. Let $X = \mathop{\mathrm{Spec}}(R)$ be an affine scheme. Let $\mathcal{F} = \widetilde{M}$ for some $R$-module $M$. The quasi-coherent sheaf $\mathcal{F}$ is a flat $\mathcal{O}_ X$-module if and only if $M$ is a flat $R$-module.
Proof. Flatness of $\mathcal{F}$ may be checked on the stalks, see Modules, Lemma 17.16.2. The same is true in the case of modules over a ring, see Algebra, Lemma 10.38.19. And since $\mathcal{F}_ x = M_{\mathfrak p}$ if $x$ corresponds to $\mathfrak p$ the lemma is true. $\square$
Comment #1224 by David Corwin on
Suggested slogan: Flatness is the same for modules and sheaves
|
2019-05-20 07:11:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.965690016746521, "perplexity": 583.2150778393961}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00124.warc.gz"}
|
http://mathoverflow.net/questions/6880/does-magma-have-a-function-to-decide-if-two-indefinite-integral-quadratic-forms?sort=votes
|
# Does MAGMA have a function to decide if two indefinite, integral quadratic forms are isometric?
Let's say we have two $n$-dimensional lattices $(V,b)$ and $(W,b_1)$ equipped with integral bilinear forms $b$ and $b_1$ respectively. Is there an implemented function in MAGMA that decides whether $(V,b)$ and $(W,b_1)$ are isometric? Equivalently given two symmetric $n \times n$ integer matrices $M$ and $N$, is there any function that decides if $T^{t}MT=N$ for some $T \in GL_{n}(Z)$. For positive definite $M$ and $N$ one can do it by defining LM:=LatticeWithGram(M) and LN:=LatticeWithGram(N) and then asking IsIsometric(LM,LN). Since the input of LatticeWithGram must be positive definite, the above does not work for indefinite matrices.
-
You should probably ask this is some MAGMA forum... – Mariano Suárez-Alvarez Nov 26 '09 at 12:16
Or e-mail Harris Nover...he knows all this stuff. – Ben Weiss Mar 20 '10 at 2:35
not an answer to the question, but: Checking isometry is much easier for indefinite forms; it's purely local, by strong approximation for the spin group. If interested search for "spinor genus."
-
Yes, by strong approximation the spinor genus and the isometry class are the same for indefinite forms (at least in dimension bigger than 2). The problem is that I don't know how to check whether the spinor genus of M and N is the same. If M is indefinite, MAGMA does not accept LatticeWithGram(M). – Guillermo Mantilla Nov 27 '09 at 4:42
OK, if you really want to use MAGMA, you can use some trick like trying to replace M,N by p-adically close but definite forms M', N'; then use Magma's implemented functions to check if these are equivalent at p. (You do this at all primes p dividing the discriminant.) I'm not sure if Magma's implemented functions cover the "spinor" version of this, however. – moonface Nov 27 '09 at 7:17
|
2014-07-10 19:37:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8038974404335022, "perplexity": 752.6172579404762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776421879.69/warc/CC-MAIN-20140707234021-00066-ip-10-180-212-248.ec2.internal.warc.gz"}
|
https://socratic.org/questions/80-of-a-number-is-122-what-if-40-of-the-number
|
# 80% of a number is 122, what is 40% of the number?
Because 40% is $\frac{1}{2}$ of 80%, 40% of the number is $\frac{1}{2}$ of 122, which is 61.
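For completeness (a quick check, not part of the original solution), the same answer follows from solving for the number itself:
$$0.80N = 122 \implies N = \frac{122}{0.80} = 152.5 \implies 0.40 \times 152.5 = 61.$$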
|
2019-08-17 16:35:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5182468891143799, "perplexity": 624.2355123822659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313428.28/warc/CC-MAIN-20190817143039-20190817165039-00108.warc.gz"}
|
https://nrich.maths.org/public/leg.php?code=-99&cl=3&cldcmpid=6152
|
Search by Topic
Resources tagged with Working systematically similar to Time to Evolve:
There are 130 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
Spot the Card
Stage: 4 Challenge Level:
It is possible to identify a particular card out of a pack of 15 with the use of some mathematical reasoning. What is this reasoning and can it be applied to other numbers of cards?
Problem Solving, Using and Applying and Functional Mathematics
Stage: 1, 2, 3, 4 and 5 Challenge Level:
Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information.
Bent Out of Shape
Stage: 4 and 5 Challenge Level:
An introduction to bond angle geometry.
Counting on Letters
Stage: 3 Challenge Level:
The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?
Crossing the Town Square
Stage: 2 and 3 Challenge Level:
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
Troublesome Triangles
Stage: 2 and 3 Challenge Level:
Many natural systems appear to be in equilibrium until suddenly a critical point is reached, setting up a mudslide or an avalanche or an earthquake. In this project, students will use a simple. . . .
More on Mazes
Stage: 2 and 3
There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper.
One Out One Under
Stage: 4 Challenge Level:
Imagine a stack of numbered cards with one on top. Discard the top, put the next card to the bottom and repeat continuously. Can you predict the last card?
Pole Star Sudoku 2
Stage: 3 and 4 Challenge Level:
This Sudoku, based on differences. Using the one clue number can you find the solution?
Colour Islands Sudoku
Stage: 3 Challenge Level:
An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine.
Making Maths: Double-sided Magic Square
Stage: 2 and 3 Challenge Level:
Make your own double-sided magic square. But can you complete both sides once you've made the pieces?
Reach 100
Stage: 2 and 3 Challenge Level:
Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
Oranges and Lemons, Say the Bells of St Clement's
Stage: 3 Challenge Level:
Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own.
Twinkle Twinkle
Stage: 2 and 3 Challenge Level:
A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
Extra Challenges from Madras
Stage: 3 Challenge Level:
A few extra challenges set by some young NRICH members.
Masterclass Ideas: Working Systematically
Stage: 2 and 3 Challenge Level:
A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . .
Stage: 3 Challenge Level:
Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar".
Twin Line-swapping Sudoku
Stage: 4 Challenge Level:
A pair of Sudoku puzzles that together lead to a complete solution.
Stage: 3 and 4 Challenge Level:
Four small numbers give the clue to the contents of the four surrounding cells.
Difference Sudoku
Stage: 3 and 4 Challenge Level:
Use the differences to find the solution to this Sudoku.
A First Product Sudoku
Stage: 3 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
Tea Cups
Stage: 2 and 3 Challenge Level:
Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
Cayley
Stage: 3 Challenge Level:
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
Factors and Multiple Challenges
Stage: 3 Challenge Level:
This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . .
Magic Caterpillars
Stage: 4 and 5 Challenge Level:
Label the joints and legs of these graph theory caterpillars so that the vertex sums are all equal.
LOGO Challenge - Triangles-squares-stars
Stage: 3 and 4 Challenge Level:
Can you recreate these designs? What are the basic units? What movement is required between each unit? Some elegant use of procedures will help - variables not essential.
Creating Cubes
Stage: 2 and 3 Challenge Level:
Arrange 9 red cubes, 9 blue cubes and 9 yellow cubes into a large 3 by 3 cube. No row or column of cubes must contain two cubes of the same colour.
Medal Muddle
Stage: 3 Challenge Level:
Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished?
Crossing the Bridge
Stage: 3 Challenge Level:
Four friends must cross a bridge. How can they all cross it in just 17 minutes?
Inky Cube
Stage: 2 and 3 Challenge Level:
This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken?
9 Weights
Stage: 3 Challenge Level:
You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
The Naked Pair in Sudoku
Stage: 2, 3 and 4
A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article.
Coins
Stage: 3 Challenge Level:
A man has 5 coins in his pocket. Given the clues, can you work out what the coins are?
Football Sum
Stage: 3 Challenge Level:
Find the values of the nine letters in the sum: FOOT + BALL = GAME
Ones Only
Stage: 3 Challenge Level:
Find the smallest whole number which, when mutiplied by 7, gives a product consisting entirely of ones.
Pair Sums
Stage: 3 Challenge Level:
Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15 What are the five numbers?
First Connect Three for Two
Stage: 2 and 3 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
Introducing NRICH TWILGO
Stage: 1, 2, 3, 4 and 5 Challenge Level:
We're excited about this new program for drawing beautiful mathematical designs. Can you work out how we made our first few pictures and, even better, share your most elegant solutions with us?
How Old Are the Children?
Stage: 3 Challenge Level:
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
Olympic Logic
Stage: 3 and 4 Challenge Level:
Can you use your powers of logic and deduction to work out the missing information in these sporty situations?
I've Submitted a Solution - What Next?
Stage: 1, 2, 3, 4 and 5
In this article, the NRICH team describe the process of selecting solutions for publication on the site.
Fence It
Stage: 3 Challenge Level:
If you have only 40 metres of fencing available, what is the maximum area of land you can fence off?
Isosceles Triangles
Stage: 3 Challenge Level:
Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
More Children and Plants
Stage: 2 and 3 Challenge Level:
This challenge extends the Plants investigation so now four or more children are involved.
Gr8 Coach
Stage: 3 Challenge Level:
Can you coach your rowing eight to win?
An Introduction to Magic Squares
Stage: 1, 2, 3 and 4
Find out about Magic Squares in this article written for students. Why are they magic?!
Intersection Sudoku 1
Stage: 3 and 4 Challenge Level:
A Sudoku with a twist.
Building with Longer Rods
Stage: 2 and 3 Challenge Level:
A challenging activity focusing on finding all possible ways of stacking rods.
Magnetic Personality
Stage: 2, 3 and 4 Challenge Level:
60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra?
LOGO Challenge - the Logic of LOGO
Stage: 3 and 4 Challenge Level:
Just four procedures were used to produce a design. How was it done? Can you be systematic and elegant so that someone can follow your logic?
|
2016-05-26 03:06:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2345275580883026, "perplexity": 2223.359899331899}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049275437.19/warc/CC-MAIN-20160524002115-00159-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://ham.stackexchange.com/questions/3500/decibels-as-ratio-and-dbm-as-absolute-values
|
# decibels as ratio and dBm as absolute values
I am just trying to understand the nitty gritty of using decibels and absolute measurements in dBm. I see them referred to routinely in RF, and using reference sheets I can understand enough to make sense of gains and losses, but I would like instead to acquire a much better understanding for myself, being able to convert and calculate freely (without the need for reference sheets) between any power or dB value.
I think I understand that dB is merely a ratio of measurement. But if, for example, a component under test had 20W at its input and 15W at its output and there is an overall loss of 5W, how would that be worked out and expressed in a ratio of dB?
Trying to work it out I did:
20W (in) = 10log10(20W/1W) = 13dB(W)
15W (out) = 10log10(15W/1W) = 11.76dB(W)
so, component power loss = 13dB(W) – 11.76dB(W) = 1.24dB(W)
However, I can see that I am merely subtracting absolute measurements still and not determining the dB ratio of power transfer efficiency or loss. If so, then what would be the equivalent dB ratio for this example?
In addition, I am able to convert 13dB(W) back to watts with:
10 to the power of (13/10) or 10^1.3 = 19.95W
and so too for 11.76dB(W):
10^1.176 = 14.997W
but for 1.24dB(W):
10^0.124 = 1.33W
which does not appear correct, as I expected instead a value of ~ 5W, given that 1.24dB(W) represents the difference between 13dB(W) and 11.76dB(W), so I am unsure where I am going wrong. If someone could let me know, I would sure appreciate the insight.
Finally, what would it mean to say a receiver has a receive level of -45dBm?
Is it right to conclude that such a receiver is capable of extracting information from signals received at -45dBm less power than what they (signals) were originally transmitted at? Or actually it just occurred to me that since 45dBm is an absolute value that it perhaps means that the receiver could receive signals as low as -45dBm or:
10^(-45/10) = 0.0000316mW
and still be able to extract the information transmitted so long as the S/N ratio is adequate? Which if that was the case then, if one transmits a signal with 20W or 43dBm of power, then the signal could be attenuated by up to 88dBm:
43dBm - -45dBm = 88dBm
and still be detected adequately enough by the receiver in question so that the original information sent is received?
I hope I have explained my questions well enough to be easily understood.
## 3 Answers
Trying to work it out I did:
20W (in) = 10log10(20W/1W) = 13dB(W)
15W (out) = 10log10(15W/1W) = 11.76dB(W)
so, component power loss = 13dB(W) – 11.76dB(W) = 1.24dB(W)
Your error is merely in the units, not in the calculation. Taking the difference of two dBW (or any two absolute dB values using the same reference level) gets you dB.
If you want to think of this algebraically, you could use the definition
$$x\,\mathrm{dBW} \equiv x\,\mathrm{dB} + (1\,\mathrm{dBW})$$
where $(1\,\mathrm{dBW})$ can be treated as a funny-named constant we don't need to assign any numerical value to, much like any other unit symbol — except that we're adding, not multiplying, it, because logarithms transform multiplications to additions (basic property: $\log(a\cdot b) = \log a + \log b$). If we transform the above to non-logarithmic form, we get the trivial and obvious equation
$$y\,\mathrm{W} = y \cdot (1\,\mathrm{W})$$
Using the above definition for dBW, your calculation becomes:
\begin{align} &\phantom{{}={}} 13\,\mathrm{dBW} - 11.76\,\mathrm{dBW} \\ &= (13\,\mathrm{dB} + (1\,\mathrm{dBW}) - (11.76\,\mathrm{dB} + (1\,\mathrm{dBW})) \tag{change notation}\\ &= 13\,\mathrm{dB} - 11.76\,\mathrm{dB} + (1\,\mathrm{dBW}) - (1\,\mathrm{dBW}) \tag{reorder terms}\\ &= 13\,\mathrm{dB} - 11.76\,\mathrm{dB} \tag{x - x = 0} \\ &= 1.24\,\mathrm{dB} \tag{compute} \end{align}
However, I can see that I am merely subtracting absolute measurements still and not determining the dB ratio of power transfer efficiency or loss. If so, then what would be the equivalent dB ratio for this example?
The result of the subtraction is a ratio expressed in dB; you've just mislabeled it as being absolute.
In addition, I am able to convert 13dB(W) back to watts … but for 1.24dB(W):
10^0.124 = 1.33W
which does not appear correct, as I expected instead a value of ~ 5W
The 1.24 value is dB, not dBW, so treating it as an absolute value does not get you the answer relative to the original power level you measured. (However, there is a meaning to the figure 1.33 W: it's the amount of power in that would be required to get 1 W of power out. You can check this against simply computing ratios with no logarithms.)
Remember, logarithms are simply a computational convenience. If you have a calculation starting in watts and ending in watts, then you can do the intermediate work in dBW and dB (adding and subtracting) or in power and ratios-of-power (multiplying and dividing) and get the same answer. If you don't get the same answer, you made an error in setting up the formulas.
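As a quick sanity check of that bookkeeping, here is a small Python sketch (my addition, not part of the original answer; the helper names are mine) that converts both power levels to dBW, subtracts them, and confirms the result equals the ratio computed directly in watts:
import math

def watts_to_dbw(p_watts):
    # absolute level: dB relative to 1 W
    return 10 * math.log10(p_watts / 1.0)

def dbw_to_watts(level_dbw):
    return 10 ** (level_dbw / 10)

p_in, p_out = 20.0, 15.0                              # watts
loss_db = watts_to_dbw(p_in) - watts_to_dbw(p_out)    # dBW - dBW gives plain dB
print(round(loss_db, 2))                              # 1.25
print(round(10 * math.log10(p_in / p_out), 2))        # 1.25, same ratio without the detour
print(round(dbw_to_watts(watts_to_dbw(p_out)), 2))    # 15.0, round-trips cleanly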
Finally, what would it mean to say a receiver has a receive level of -45dBm? … Or actually it just occurred to me that since 45dBm is an absolute value that it perhaps means that the receiver could receive signals as low as -45dBm … then, if one transmits a signal with 20W or 43dBm of power, then the signal could be attenuated by up to 88dBm:
Yes, you have this right, except that it's “attenuated by up to 88 dB”, not dBm.
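To put numbers on that last point, a short sketch (again my addition, using the 20 W transmitter and the assumed -45 dBm receive level from the question):
import math

tx_power_dbm = 10 * math.log10(20.0 / 0.001)    # 20 W relative to 1 mW, about 43 dBm
sensitivity_dbm = -45.0                          # assumed minimum receive level
sensitivity_mw = 10 ** (sensitivity_dbm / 10)    # about 3.16e-5 mW
margin_db = tx_power_dbm - sensitivity_dbm       # dBm minus dBm gives dB, about 88 dB

print(round(tx_power_dbm), f"{sensitivity_mw:.2e}", round(margin_db))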
• It might be good to point out why the 1dBW part can be factored out: $\log a + \log b = \log(a \cdot b)$, so $13\:\mathrm{dBW} \to 13\:\mathrm{dB} + 1\:\mathrm{dBW}$ is analogous to $19.95\:\mathrm W \to 19.95 \cdot 1\:\mathrm W$. – Phil Frost - W8II Feb 4 '15 at 14:31
• @PhilFrost Added a bit about that. – Kevin Reid AG6YO Feb 4 '15 at 16:44
Trying to work it out I did:
20W (in) = 10log10(20W/1W) = 13dB(W)
15W (out) = 10log10(15W/1W) = 11.76dB(W)
so, component power loss = 13dB(W) – 11.76dB(W) = 1.24dB(W)
Here's your problem: the answer is 1.24 dB, not 1.24 dB(W).
Why? Subtraction of logarithms corresponds to division. You end up with watts in the numerator and denominator which cancel, leaving you with a unitless ratio.
Formally:
$$\log(a) - \log(b) = \log\left({ a \over b }\right)$$
When you do $13\:\mathrm{dB(W)} - 11.76\:\mathrm{dB(W)}$, you are actually doing:
$$\require{cancel} { \left(10^{13\over 10}\right)\cancel{\mathrm W} \over \left(10^{11.76 \over 10}\right)\cancel{\mathrm W} } = {19.95 \over 14.10 } = 1.33$$
We can convert 1.33 to decibels:
$$10 \cdot \log(1.33) = 1.24\:\mathrm{dB}$$
This also raises a point of common practice: since you are calculating a loss, it's conventional to arrange the calculation so that the result expressed in decibels is negative. A negative number in decibels corresponds to a fraction less than 1. Example:
$$11.76\:\mathrm{dB(W)} - 13\:\mathrm{dB(W)} = -1.24\:\mathrm{dB}$$
More generally:
$$\text{power out} \ -\ \text{power in} = \text{loss or gain in decibels}$$
Which, if you think about the identity above, is equivalent to
$${\text{power out} \over \text{power in}} = \text{loss or gain as a ratio}$$
This makes the ratio not 1.33 as calculated above, but 1/1.33 or 0.752. This allows us to calculate losses in watts by multiplication. If we were to put 100W into this system, for example...
$$100\:\mathrm W \cdot 0.752 = 75.2 \:\mathrm W$$
...we'd get 75.2W out.
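The same arithmetic in a few lines of Python (my sketch of the calculation above, not code from the answer):
gain_db = 11.76 - 13.0            # power out (dBW) minus power in (dBW), about -1.24 dB
ratio = 10 ** (gain_db / 10)      # about 0.752, less than 1 because it is a loss
print(round(ratio, 3))            # 0.752
print(round(100 * ratio, 1))      # 75.2 -> 75.2 W out for 100 W in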
• very helpful answer. I wish I had enough rep to bump up as "useful." Although I accepted the other answer, since it was posted first and does answer all my questions, I really found your description about dB's being a "unitless ratio" due to the division required to get there (logarithmic subtraction). I didn't see that before but now it makes so much sense. Cheers for that. Worth the price of admission alone. – nanker Jan 20 '15 at 20:35
When you use this stuff on the job, it becomes second nature...
I hope these other ways to look at it in general help.
The thing to remember is that dB is only used to represent a ratio of two powers. That is the definition. A bel is a ratio of power of 10 to 1, named after Alexander Graham Bell. That is why the B is capital and the d is not. (see note below) A decibel is one tenth of a bel.
A ratio means that you divide one by the other (when expressing power in watts). The reference is on the bottom (denominator) of the ratio.
That 5 W difference between input and output is not a ratio of 5 W to anything, so converting it to dB makes no sense, it is a difference, not a ratio. I can't think of a time when the actual amount lost is expressed that way. We usually use -1.25 dB or express it as an efficiency of 75%.
We only say that there is 1.24 dB of loss (actually 1.2499, or 1.25); we (engineers) can also say that it has a "gain" of -1.25 dB. These represent ratios of 1.33 and 0.75 respectively.
You subtracted in the reverse order you should have. When calculating the difference of two things you subtract the reference from your number. Since we usually reference the output to the input, the difference is minus 5 watts.
Please note the following:
• 20 to 15 watts is a loss of 5 W, but a ratio of 0.75 and -1.25 dB.
• 1 W to 0.75 W is a loss of 0.25 W, but a ratio of 0.75 and -1.25 dB.
Going to dB allows you to add or subtract numbers when looking at gains and losses.
The "absolute" versions are still ratios referenced to a known power that is shown by the last letter:
• dBm references a mW
• dBW reference is a watt
TV engineers use dBk = reference to a kW.
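Since dBm, dBW and dBk differ only in the reference power, one generic helper covers all of them; a small illustrative sketch (the function name is mine):
import math

def power_to_db(p_watts, ref_watts):
    # express a power as dB relative to an arbitrary reference power
    return 10 * math.log10(p_watts / ref_watts)

p = 20.0                              # watts
print(round(power_to_db(p, 1e-3)))    # 43  -> dBm (reference 1 mW)
print(round(power_to_db(p, 1.0)))     # 13  -> dBW (reference 1 W, i.e. dBm - 30)
print(round(power_to_db(p, 1e3)))     # -17 -> dBk (reference 1 kW, i.e. dBm - 60)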
Note: For the pedants, we do use dB to represent voltages (and sometimes currents) in places like OP-Amp designs, where the powers are not being calculated. The powers and impedances are totally ignored in many cases.
However, this is improper use of dB.
Double however, those of us who do that know what we mean and that makes it OK. This is because there are places where only the voltages matter and powers are unimportant, and using dB is still useful in the grand scheme.
|
2019-09-21 05:39:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113819599151611, "perplexity": 1169.886271502406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00050.warc.gz"}
|
https://dsp.stackexchange.com/questions/58509/find-the-noise-settings-to-reproduce-it
|
Find the noise settings to reproduce it
I work on images that contain shot noise (Poisson) acquired from a microscope. On a few images, I have a "flat" zone that is supposed to have the exact same intensity, but it is not, because of the noise. I was thinking that I could use this zone to estimate the quantity of noise, and then estimate the parameters K and Lambda of the Poisson equation.
Is there a method to do it?
What you'd often be looking for would be the variance – the expectation of the squared difference from the mean value in that region. If the noise you add to the actual image is zero-mean, then the mean of the flat region is (in expectation) the actual intensity, and subtracting it from all pixels and squaring the result would give you the noise power.
However, shot noise is not zero-mean:
each flat-zone image pixel $$v_i$$ follows an actual-intensity-"offset" poisson distribution
$$v_i = m + n_i,\quad n_i\sim\text{Pois}(\lambda)$$
with $$m$$ being the actual intensity of the object, $$n_i$$ being shot noise samples, and $$\lambda$$ the intensity of the Poisson variable.
Now, since shot noise and actual image are independent,
$$\mathbb E(v_i) = \mathbb E(m + n_i) = \mathbb E(m) + \mathbb E(n_i) = m + \lambda\text.$$
Sadly, since we don't know the actual $$m$$ a priori, we'll have to dig one moment deeper:
\begin{align} \DeclareMathOperator{\Var}{Var} \Var (v_i) &= \mathbb E\left((m + n_i- \mathbb E(v_i))^2\right)\\ & = \mathbb E\left((m + n_i- m - \lambda)^2\right) \\ & = \mathbb E\left(( n_i - \lambda)^2\right) \\ &=: \Var(n_i)\\ &=\lambda \end{align}
So, with the Variance of your observation sample as estimator for the $$\lambda$$ of your Poisson variable, you get the sole statistical property of your shot noise for free. If you want to know the actual flat color, you'd also subtract that from the average of the flat region.
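As a rough illustration of this estimator (my sketch with made-up numbers, not part of the original answer), you can simulate a flat region as an offset plus Poisson shot noise and check that the sample variance recovers $$\lambda$$ and that the mean minus the variance recovers the offset:
import numpy as np

rng = np.random.default_rng(0)
m_true = 100.0          # hypothetical flat-zone intensity
lam_true = 25.0         # hypothetical shot-noise intensity (Poisson lambda)

flat = m_true + rng.poisson(lam_true, size=100_000)   # v_i = m + n_i

lam_est = flat.var()              # Var(v_i) = Var(n_i) = lambda
m_est = flat.mean() - lam_est     # E(v_i) = m + lambda

print(round(lam_est, 2), round(m_est, 2))   # close to 25 and 100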
|
2020-04-04 22:22:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9783972501754761, "perplexity": 761.9214377956928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370525223.55/warc/CC-MAIN-20200404200523-20200404230523-00546.warc.gz"}
|
https://www.gamedev.net/forums/topic/695775-combining-deferred-rendering-batching-model-matrices-skeletal-animations-and-shadow-maps/
|
# 3D Combining Deferred rendering, Batching, Model Matrices, Skeletal animations, and shadow maps
## Recommended Posts
Hello all,
I am currently working on a game engine for use with my game development that I would like to be as flexible as possible. As such the exact requirements for how things should work can't be nailed down to a specific implementation and I am looking for, at least now, a default good average case scenario design.
Here is what I have implemented:
• Deferred rendering using OpenGL
• Arbitrary number of lights and shadow mapping
• Each rendered object, as defined by a set of geometry, textures, animation data, and a model matrix is rendered with its own draw call
• Skeletal animations implemented on the GPU.
• Model matrix transformation implemented on the GPU
• Frustum and octree culling for optimization
Here are my questions and concerns:
• Doing the skeletal animation on the GPU, currently, requires doing the skinning for each object multiple times per frame: once for the initial geometry rendering and once for the shadow map rendering for each light for which it is not culled. This seems very inefficient. Is there a way to do skeletal animation on the GPU only once across these render calls?
• Without doing the model matrix transformation on the CPU, I fail to see how I can easily batch objects with the same textures and shaders in a single draw call without passing a ton of matrix data to the GPU (an array of model matrices then an index for each vertex into that array for transformation purposes?)
• If I do the matrix transformations on the CPU, it seems I can't really do the skinning on the GPU, as the pre-transformed vertices will wreak havoc with the calculations, so this seems not viable unless I am missing something
Overall it seems like the simplest solution is to just do all of the vertex manipulation on the CPU and pass the pre-transformed data to the GPU, using vertex shaders that do basically nothing. This doesn't seem the most efficient use of the graphics hardware, but could potentially reduce the number of draw calls needed.
Really, I am looking for some advice on how to proceed with this, how something like this is typically handled. Are the multiple draw calls and skinning calculations not a huge deal? I would LIKE to save as much of the CPU's time per frame so it can be tasked with other things, as to keep CPU resources open to the implementation of the engine. However, that becomes a moot point if the GPU becomes a bottleneck.
##### Share on other sites
Posted (edited)
Quote
• Doing the skeletal animation on the GPU, currently, requires doing the skinning for each object multiple times per frame: once for the initial geometry rendering and once for the shadow map rendering for each light for which it is not culled. This seems very inefficient. Is there a way to do skeletal animation on the GPU only once across these render calls?
If you really want to save results, you could store the resultant transforms in an SSBO (or a texel storage unit or something) on your first pass, via vertex index, and grab them on your second. However, I get the feeling that the memory writes and reads will be slower than a few matrix multiplications, and that's not to mention you would need to have one of these objects per instance of your animated mesh.
This approach also introduces a dependency between shadow map passes and your general pipeline. If you don't do this, both shaders can be executing at the same time.
Typically, the majority of objects in a scene are not undergoing skeletal animation. For your general use case, I wouldn't worry about recalculating animations. Vertices are processed pretty fast.
Quote
• Without doing the model matrix transformation on the CPU, I fail to see how I can easily batch objects with the same textures and shaders in a single draw call without passing a ton of matrix data to the GPU
Don't worry about it. pcie x16 transfers at a rate of 4 GB/s, so ~67 mb/frame for a 60 fps target. A matrix is 64 bytes, and you're passing bone transforms. if we go ham and say you have 500 bones per model (YEESH!), you could still pass 10,000 full skeletons and have more than half of your PCIE bandwidth left over for the frame.
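For a concrete feel of that arithmetic, here is a tiny back-of-the-envelope calculation (my own, with hypothetical rig and instance counts, keeping the 4 GB/s and 60 fps figures from this post):
BYTES_PER_MATRIX = 64            # one 4x4 float matrix
bones_per_skeleton = 100         # hypothetical rig size
animated_instances = 1_000       # hypothetical number of skinned instances per frame

upload_bytes = BYTES_PER_MATRIX * bones_per_skeleton * animated_instances
frame_budget_bytes = 4e9 / 60    # roughly 67 MB per frame at 4 GB/s and 60 fps

print(f"{upload_bytes / 1e6:.1f} MB per frame "
      f"({100 * upload_bytes / frame_budget_bytes:.1f}% of the budget)")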
I'm also a bit confused here. If you do the model transforms on the cpu, you have to pass not only a bunch of transformed verts, but now you have to pass every instance of a transformed mesh as a -separate mesh-, meaning you can't do instancing for that mesh now.
Quote
(an array of model matrices then an index for each vertex into that array for transformation purposes?)
yes.
Edit: usually you have some float weights and integer bone indices (corresponding to the weights) per vert. You can store these as attributes (vec4+ivec4) or put them in a buffer and get them from a vert index attribute.
I personally have used the second to keep my mesh format consistent and to make attaching arbitrary vertex data less of a horror for future development, obviously at the cost of a bit of performance
Edited by Ugly
##### Share on other sites
17 hours ago, Ugly said:
a bit confused here. If you do the model transforms on the cpu, you have to pass not only a bunch of transformed verts, but now you have to pass every instance of a transformed mesh as a -separate mesh-, meaning you can't do instancing for that mesh now.
Indeed, another reason I would not like to go that route.
Thanks for all the insights, they will be an immense help moving forward. I am currently passing in the bones as a uniform (actually as dual quaternions which I then convert to matrices in the shader) and the weights and indices as vert attributes for my skeletal animations, but I will look into trying it indexed to see if it fits my needs.
One related question I had just cropped up as I was doing some more reading. Something I came across mentioned not to reuse buffers for write calls (e.g. a single fixed size VAO reused for batches) due to implicit synchronization killing performance, though some of what I have seen on batching does just that.
How would you perform view frustum culling of objects each frame if modifying the data in buffers can be a killer on performance? I can't imagine you would want to submit/maintain a bunch of data to/on the GPU that isn't needed for rendering
##### Share on other sites
I think I figured part of this out myself. Between frames shouldn't be an issue since all of those draw calls will need to be completed for the frame anyway.
Should you not reuse VAO for batching, if you need more space than a batch can handle, create a new VAO? This seems like the number of buffers can grow significantly though if you are sending a lot of data
##### Share on other sites
5 minutes ago, kanageddaamen said:
if you need more space than a batch can handle, create a new VAO?
I'm assuming you mean VBO, rather than VAO?
##### Share on other sites
Posted (edited)
24 minutes ago, swiftcoder said:
I'm assuming you mean VBO, rather than VAO?
Wouldn't you need to create an entire new VAO, otherwise the other VBOs in the VAO you are rendering will be passed by the draw call, thereby increasing the batch size which you are trying to keep constant? I must admit I am no expert on the various draw call options and their capabilities
EDIT: I suppose you would just bind different VBOs and make some glVertexAttribPointer calls for the next batch call
Edited by kanageddaamen
##### Share on other sites
10 minutes ago, kanageddaamen said:
Wouldn't you need to create an entire new VAO, otherwise the other VBOs in the VAO you are rendering will be passed by the draw call, thereby increasing the batch size which you are trying to keep constant?
VAOs are purely client-side state. They work exactly the same as making the individual glBindBuffer/glEnableVertexAttribArray/glVertexAttribPointer calls yourself.
As such they don't affect batching at all. You still have one batch per glDraw* call, regardless of how you bound the vertex buffers.
##### Share on other sites
Just now, swiftcoder said:
VAOs are purely client-side state. They work exactly the same as making the individual glBindBuffer/glEnableVertexAttribArray/glVertexAttribPointer calls yourself.
As such they don't affect batching at all. You still have one batch per glDraw* call, regardless of how you bound the vertex buffers.
Gotcha
##### Share on other sites
In my engine I am doing skinning in compute shaders before the rendering starts. This is very nice from a shader management point of view, because I have a single skinning shader, and every model can use a regular vertex shader, regular input layout in rendering, so the amount of vertex shader permutations is minimized. From a performance point of view, this is a trickier question and maybe not always results in the same answer. For example, I spawned a little conversation on twitter one day regarding performance implications on tile based architectures. And I wrote a small blog on the subject as well, take a look if interested.
##### Share on other sites
Actually, rather than spinning up a new VBO for each batch of the same state in a frame if one gets filled, would the following be a better approach:
For batch size N MB:
• Use a VBO allocated with N MB
• For each chunk of N MB of data with the same state:
  • Fill the VBO with the chunk of data using glMapBufferRange with GL_MAP_INVALIDATE_BUFFER_BIT
  • Make the draw call
From what I have read this should safely mitigate implicit synchronization while allowing for a single VBO handle to be used
|
2018-03-23 17:22:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20134085416793823, "perplexity": 2333.359881793687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648404.94/warc/CC-MAIN-20180323161421-20180323181421-00578.warc.gz"}
|
https://www.physicsforums.com/threads/population-half-life-question.316177/
|
# Population half life question
## Homework Statement
A parent isotope has $$\tau_\frac{1}{2}=\delta$$. It decays through a series of daughters to a final stable isotope. One of the daughter particles has the greatest half life, $$\tau_\frac{1}{2}=\alpha$$ -- the others are less than a year. At t=0 the parent has $$N_0$$ nuclei; no daughters are present.
How long does it take for the population with the greatest half life to reach 97% of its equilibrium value?
At some t, how many nuclei of the isotope with the greatest half life are present? Assume no branching.
## Homework Equations
$$\frac{dN}{dt}=e^{-\lambda t}$$
## The Attempt at a Solution
So for the first one:
It's just solving the diff eq above, right? Is the daughter at its equilibrium value, or do we have to worry about decay from the other daughters?
the second one:
Basically plugging t into the solved diff eq with the initial number of nuclei, right?
Just checking, I feel like I'm missing something.
Hi there,
You have the right equation: $$\frac{dN}{dt} = e^{-\lambda t}$$ But don't forget that the daughter nuclei also decay at a certain rate. Therefore, you need to consider the same equation for the long life daughter nucleus.
By the way, just a further comment: typically, what half-life are you talking about here? Daughter nuclei with a half-life of more than a few split seconds are normally considered part of the decay chain.
Cheers
The half-life (longest) for the daughter is 20 yr. The parent is 10^4 yr.
So for the daughter nuclei(20 yr):
$$\frac{dN}{dt} = e^{-\lambda_1 t}- e^{-\lambda_2 t}$$
Where 2 is the daughter. Should 1 be the half life of the 1yr daughter?
Hi there,
When equilibrium is reached, the decay rate of the parent nuclei is the same as the decay rate of the daughter nuclei, and it is independent of the daughters formed in the process. Therefore, you would have: $$\frac{dN_1}{dt} = \frac{dN_2}{dt}$$
If you solve this simple equation, you have the time needed to reach equilibrium.
Cheers
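For anyone who wants to see the numbers, here is a minimal numerical sketch (mine, not from the thread) of one standard way to set this up, assuming the Bateman parent-daughter solution with the 10^4-year parent and 20-year daughter half-lives mentioned above; the exact time you get depends on how the equilibrium value is defined:
import math

half_parent, half_daughter = 1.0e4, 20.0   # years
lam1 = math.log(2) / half_parent
lam2 = math.log(2) / half_daughter

# Bateman solution with N2(0) = 0:
#   N2(t) = N0 * lam1/(lam2 - lam1) * (exp(-lam1*t) - exp(-lam2*t))
# Relative to its slowly decaying equilibrium envelope, the build-up factor is
#   N2(t) / N2_eq(t) = 1 - exp(-(lam2 - lam1)*t)
t_97 = -math.log(1 - 0.97) / (lam2 - lam1)

print(f"about {t_97:.0f} years to reach 97% of the equilibrium value")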
Hi there,
Your question really caught my attention, and with the half lives you gave me, I find that the system will reach equilibrium after 138.2 years.
Cheers
|
2020-02-28 01:42:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6981492042541504, "perplexity": 1019.9586885575524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146940.95/warc/CC-MAIN-20200228012313-20200228042313-00101.warc.gz"}
|
https://www.physicsforums.com/threads/why-is-thes-true-i-i-1-i-1.136226/
|
# Why is this true: i=(i-1)/(i+1)
1. Oct 11, 2006
### Hacky
Of course I can see that i*(i+1)=(i-1) but is there some way (long division?) to show this in general? To show for example that i = {(2+i)(3+i)/(2-i)(3-i)}. Or to come up with these equivalencies, does one just multiply i by whatever you desire in the later expansion. I am reading about Schellbach's formulae to calculate Pi from i.
Thanks, Howard
2. Oct 11, 2006
### d_leet
The easiest way to at least see this would probably be to factor i out of the numerator.
3. Oct 11, 2006
### robphy
Note that $$z=a+bi = re^{i\theta}$$ and $$\bar z=a-bi = re^{-i\theta}$$.
So, $$\frac{z}{\bar z}=e^{i(2\theta)}$$.
In your two examples, (i-1) and (2+i)(3+i) make an angle of pi/2 with their complex conjugates.
4. Oct 15, 2006
### HallsofIvy
Or, the standard way to represent a fraction as a complex number: multiply both numerator and denominator by the complex conjugate of the denominator:
$$\frac{i- 1}{i+ 1}= \frac{i-1}{i+ 1}\,\frac{-i+1}{-i+1}$$
$$= \frac{(i-1)(-i+1)}{1- i^2}= \frac{-i^2+ 2i- 1}{1+1}= \frac{2i}{2}= i$$
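A quick numerical check of both identities (my addition, using Python's built-in complex type):
import cmath

print((1j - 1) / (1j + 1))                             # 1j, up to floating-point rounding
print(((2 + 1j) * (3 + 1j)) / ((2 - 1j) * (3 - 1j)))   # also 1j
print(cmath.phase(1j - 1) - cmath.phase(1j + 1))       # pi/2, so the quotient is a unit complex number with argument pi/2, i.e. i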
|
2018-08-15 15:27:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8262600302696228, "perplexity": 1418.359651401624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210133.37/warc/CC-MAIN-20180815141842-20180815161842-00566.warc.gz"}
|
https://stats.stackexchange.com/questions/498399/test-null-hypothesis-that-the-mean-value-is-less-than-60
|
# Test null hypothesis that the mean value is less than 60
Test $$H_0$$ : $$\mu >= 60$$ using 10% significance level if $$s = 16$$, $$\bar{x} = 66$$ and $$n = 13$$. Do not forget to specify $$H_1$$.
So, $$H_1 : \mu < 60$$. With df = 12 t value for 10% significance interval is equal to 1.782. Then standard error: $$1.782 \cdot \frac{16}{\sqrt{13}} = 4.438$$. And the critical value would be $$66 - 1.1782 \cdot 4.438$$. So with 90% chance the mean value will lie below that number.I know i've done it incorrectly.
• You have a very small sample size. Do you know if you're sampling from a normal population? If you don't have this information then I wouldn't perform this hypothesis test. Nov 28 '20 at 20:32
## 2 Answers
Intuitively, the estimate $$\bar x=66$$ based on 13 observations is above $$60$$ (not below it) while we are testing for $$\mu<60$$, so there can't be a rejection.
Technically, the steps are:
1. Setup the Hypotheses $$H_0, H_1$$ and the significance level $$\alpha$$, which you have correctly done.
2. Calculate the test statistic. This is simply any random variable whose distribution is known if $$H_0$$ is true. We assume that the observations are i.i.d. and hence that $$\bar x \overset{a}{\sim}N(\mu,\frac{\sigma^2}{n})$$. Therefore, our test statistic is (approx) t-distributed: $$t=\frac{\bar x-\mu_0}{\frac{s}{\sqrt{n}}}\sim t(12)$$ where $$s$$ is the sample standard deviation. Here $$t=\frac{66-60}{\frac{16}{\sqrt{13}}}=1.352082$$
3. Calculate the critical value. Our test is left-tailed, since we want to get the value that fulfills $$\mu<60$$ in the most extreme cases, were extreme means only happens with $$\alpha$$ probability conditioned on $$H_0$$, i.e. only under the assumption that $$\mu\ge 60$$ is true. In other words, this is the corresponding quantile. The critical value is: $$c_\alpha$$ where $$P(t\le c_\alpha)=\alpha$$, here $$c_\alpha=-1.356217$$
4. Reject if $$t$$ is more extreme than $$c_\alpha$$. Here, $$t\le c_\alpha$$ is checked, since we have a left-tailed test. The condition is not true in this case. Hence, we don't reject $$H_0$$
In code (R):
## Left-tailed test
# Params
n = 13
alpha = 0.1
s = 16
mu = 60
mu_est = 66
# Plot
curve(dt(x,n-1),xlim=c(-5,5))
abline(v=qt(alpha,n-1),col='red')
t = (mu_est-mu)/(s/sqrt(n))
points(t,0,col='blue',pch=16)
# Test
t < qt(alpha,n-1)
According to your hypothesis setting, you should do a one-tailed test and read the value that corresponds to df 12 and a one-tail level of 0.1, which is $$1.356$$ in a t-distribution table.
Besides, the calculation $$66−1.1782⋅4.438$$ seems vague.
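For readers without a table at hand, the same critical value and test statistic can be reproduced in a few lines (my sketch, using scipy instead of the R code in the other answer):
import math
from scipy import stats

n, s, xbar, mu0, alpha = 13, 16.0, 66.0, 60.0, 0.10
t_stat = (xbar - mu0) / (s / math.sqrt(n))   # about 1.352
t_crit = stats.t.ppf(alpha, df=n - 1)        # about -1.356 (left tail)

print(round(t_stat, 3), round(t_crit, 3))
print("reject H0" if t_stat <= t_crit else "fail to reject H0")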
|
2022-01-22 08:51:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8900784254074097, "perplexity": 464.7673533009636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303779.65/warc/CC-MAIN-20220122073422-20220122103422-00586.warc.gz"}
|
http://kmeleonbrowser.org/forum/read.php?8,129657,page=4
|
Announcements : K-Meleon Forum
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: George Hall
Date: October 21, 2014 12:36AM
You do not need to add Speed Dial to K-Meleon 74. You can have it with the two extensions below.
There is a Speed Dial http://kmext.sourceforge.net/extensions/speeddial.7z extension at K-Meleon Extensions Central which creates a homepage with 12 websites with thumbnails.
Another extension, multipagine4 NT http://kmext.sourceforge.net/extensions/multipagine4-NT.7z | multipagine4 9x http://kmext.sourceforge.net/extensions/multipagine4-9x.7z |, is similar to Speed Dial, but the thumbnails are generated locally. It also supports up to 18 pages.
Adding Speed Dial to K-Meleon 74 is making more work than we need to when there are some extensions similar to Speed Dial.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: guenter
Date: October 21, 2014 03:38AM
Quote
Meroveus
Great work!
My wishlist.
1) Speeddial
2) Clone Tab support in tab menu
3) Firefox Sync (password sync, history etc) support
4) Fully integrated Adblock with all feautures
then this done, K-meleon will be the browser of my dreams...
2.) Work around under F2.
3.) Is this an addon? Most that do not require XUL GUI support work.
I do not care what Your nightmares and dreams are - as long as I like it.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: JamesD
Date: October 21, 2014 04:11AM
I do not understand what is meant by "clone tab support".
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: siria
Date: October 21, 2014 04:15AM
2) Clone Tab
Am not completely sure if that is what you mean, but click Edit > Configuration > Menus
then insert those lines, save and restart:
!New{
Clone Tab=macros(Go_New)
}
Note, this works only if this is the currently active tab.
Note-2: This is just a copy of the native function which is on the Go-Button:
Right-click that button and choose "Open in New Page"
Note-3: due to some old bug with the rebarmenu-plugin I'd recommend to rather add those lines at the end of the the global menus.cfg, found in folder "defaults/settings"
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: George Hall
Date: October 21, 2014 05:08AM
Quote
siria
2) Clone Tab
Am not completely sure if that is what you mean, but click Edit > Configuration > Menus
then insert those lines, save and restart:
!New{
Clone Tab=macros(Go_New)
}
Note, this works only if this is the currently active tab.
Note-2: This is just a copy of the native function which is on the Go-Button:
Right-click that button and choose "Open in New Page"
Note-3: due to some old bug with the rebarmenu-plugin I'd recommend to rather add those lines at the end of the the global menus.cfg, found in folder "defaults/settings"
K-Meleon Extensions Central has an extension macro called Dublicate Tab http://kmext.sourceforge.net/extensions/dublicatetab.7z at the bottom of the Enhancements Extensions web page.
Creating an extension or addon to clone tabs is not necessary because a method of cloning tabs for K-Meleon already exists.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: Meroveus
Date: October 22, 2014 01:06AM
siria, George Hall, JamesD
Thank you very much for your help!
Quote
guenter
Quote
Meroveus
3) Firefox Sync (password sync, history etc) support
3.) Is this an addon? Most that do not require XUL GUI support work.
https://support.mozilla.org/en-US/kb/how-do-i-set-up-firefox-sync
When you set up Firefox Sync on your computer, all of your data and preferences (such as your bookmarks, history, passwords, open tabs and installed add-ons) gets stored securely on the Mozilla servers. You can share all this information across all your devices.
maybe LastPass can work on K-meleon or something else? This is not a complete replacement for Firefox Sync, but a little better than nothing.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: George Hall
Date: October 22, 2014 01:44AM
Quote
Meroveus
siria, George Hall, JamesD
Thank you very much for your help!
Quote
guenter
Quote
Meroveus
3) Firefox Sync (password sync, history etc) support
3.) Is this an addon? Most that do not require XUL GUI support work.
https://support.mozilla.org/en-US/kb/how-do-i-set-up-firefox-sync
When you set up Firefox Sync on your computer, all of your data and preferences (such as your bookmarks, history, passwords, open tabs and installed add-ons) gets stored securely on the Mozilla servers. You can share all this information across all your devices.
maybe LastPass can work on K-meleon or something else? This is not a complete replacement for Firefox Sync, but a little better than nothing.
Firefox Sync most likely would not work with K-Meleon 74 because bookmarks are stored differently in K-Meleon, in bookmarks.html.
Firefox stores both bookmarks and history in places.sqlite.
Also, Firefox Sync must make system calls to one or both omni.ja. K-Meleon 74 does not have all the files from browser/omni.ja that Firefox may need to use for Firefox Sync.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: guenter
Date: October 22, 2014 01:57AM
Quote
Meroveus
https://support.mozilla.org/en-US/kb/how-do-i-set-up-firefox-sync
When you set up Firefox Sync on your computer, all of your data and preferences (such as your bookmarks, history, passwords, open tabs and installed add-ons) gets stored securely on the Mozilla servers. You can share all this information across all your devices.
maybe LastPass can work on K-meleon or something else? This is not a complete replacement for Firefox Sync, but a little better than nothing.
No idea if it works - I would not try to store my data online.
Last Pass lite is on K-Meleon Extension Pages page one. I have never tested it.
Last Pass for Firefox is an addon that does not work for K-Meleon74. XUL errors.
You can try to combine the old K-Meleon extension with this.
Make a new extension from it. Name, install.rdf... from the Firefox addon, only chrome files, kmm if needed from the old K-Meleon addon.
Else try older versions of lastpass for firefox from about k-m 1.6 or 1.5.4 times.
There was one after all used to create the K-Meleon version.
Edited 1 time(s). Last edit at 10/22/2014 02:05AM by guenter.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: Meroveus
Date: October 22, 2014 02:09AM
Quote
George Hall
Firefox Sync most likely would not work with K-Meleon 74 because bookmarks are stored differently in K-Meleon, in bookmarks.html.
Firefox stores both bookmarks and history in places.sqlite.
Also, Firefox Sync must make system calls to one or both omni.ja. K-Meleon 74 does not have all the files from browser/omni.ja that Firefox may need to use for Firefox Sync.
well, what is K-Meleon Sync and how does it work? I tried to find something about it at the forum but didn't find anything.
Or is this just a "stub" from big brother Firefox?
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: George Hall
Date: October 22, 2014 02:28AM
Quote
Meroveus
Quote
George Hall
Firefox Sync most likely would not work with K-Meleon 74 because bookmarks are stored differently in K-Meleon, in bookmarks.html.
Firefox stores both bookmarks and history in places.sqlite.
Also, Firefox Sync must make system calls to one or both omni.ja. K-Meleon 74 does not have all the files from browser/omni.ja that Firefox may need to use for Firefox Sync.
well, what is K-Meleon Sync and how does it work? I tried to find something about it at the forum but didn't find anything.
Or is this just a "stub" from big brother Firefox?
That is for syncing a whitelist for FlashBlock.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: George Hall
Date: October 22, 2014 02:54AM
The Firefox extension Password Exporter can work if you do the following:
4. Extract macros\pwexporter.kmm from pwexporter.7z to K-Meleon 74's macro folder.
Then exporting passwords from K-Meleon 74 works. For some reason passwordexporter.jar from the Firefox extension does not work for K-Meleon 74.
Someone should figure out why importing passwords does not work with K-Meleon 74.
If that is done, we would have a method of importing and exporting passwords for K-Meleon 74,
instead of keeping a copy of key3.db and signons.sqlite to copy into our profile to import passwords.
Attachments: {B17C1C5A-04B1-11DB-9804-B622A1EF5492}.xpi.zip (76.1 KB)
Window Diversion inefficient
Posted by: JujuLand
Date: October 22, 2014 05:13PM
Hi,
Whatever the choice for 'Window diversion' in new tab or in a new window,
'Open Help menu options in a new tab instead of a new Windows' doesn't work.
All other options always open in the current tab ...
Under XP and Linux
A+
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Edited 3 time(s). Last edit at 10/22/2014 05:15PM by JujuLand.
Re: Window Diversion inefficient
Posted by: siria
Date: October 22, 2014 05:26PM
Ah yes, now I remember having reported that earlier.
This option was introduced by desga years ago for 1.6, but the new KM seems to be mostly or completely based on 1.5.4. So this option, while surprisingly showing up in the pref sheets, wasn't copied over into the current main.kmm too. I remember it's very easy to fix, just change a few lines in main.kmm, search for "openWindow" and compare with the 1.6-main.kmm, but not enough time to look up details now myself.
Posted by: guenter
Date: October 22, 2014 07:01PM
Quote
George Hall
Someone should figure out why importing passwords does not work with K-Meleon 74.
If that is done, we would have a method of importing and exporting passwords for K-Meleon 74,
instead of keeping a copy of key3.db and signons.sqlite to copy into our profile to import passwords.
Why? Because it throws a Script error about an ID that it cannot find.
If that is the only thing You want someone to figure out.
I attached a version that works for 74.0 of Password Exporter for K-Meleon 1.6 to Your post as: {B17C1C5A-04B1-11DB-9804-B622A1EF5492}.xpi.zip. Download & then please rename to: {B17C1C5A-04B1-11DB-9804-B622A1EF5492}.xpi and try whether PWD import works for You now.
I used the back-wind You suggested and hot-wired the resulting addon with 74.0+1 because it has a functioning console2.
The styles are changed. I replaced the old default CSS with <?xml-stylesheet type="text/css" href="chrome://global/skin/"?> that K-Meleon has now.
To do: change the name of the addon to something at K-Meleon.org.
BTW. The current Firefox Password Exporter Addon works out of the box for K-Meleon 74.0+1.
It adds PWDs and checks for redundant PWDs that exist.
I have not tested more items. I will not use it.
K-Meleon 74.0+1 has more Firefox chrome support already. naruman works on it.
I find to copy key3.db and signons.sqlite convenient enough.
Edited 3 time(s). Last edit at 10/23/2014 12:50AM by guenter.
Re: Window Diversion inefficient
Posted by: JujuLand
Date: October 23, 2014 03:43AM
Hi, Siria
I've been able to modify wiki and Welcome
# ----- Special Opening (based on OpenURL)
$pref_helpmenu="kmeleon.plugins.macros.helpmenu.openintab";
Open_TextAsURL{
macroinfo=_("Open the selected text as URL");
$OpenURL=$SelectedText;
$OpenURL==""?0:&OpenURL_Selected;
}
macroinfo=_("View basic information on the K-Meleon project");
$__l=getpref(STRING,"general.useragent.locale");
#open("file://".getfolder(RootFolder).($__l=="en-US"?"/browser": ("/locales/".$__l))."/readme.html");
if (getpref(BOOL,$pref_helpmenu)) {
$OpenURL="file://".getfolder(RootFolder).($__l=="en-US"?"/browser": ("/locales/".$__l))."/readme.html";
&OpenURL_InNew;
} else {
open("file://".getfolder(RootFolder).($__l=="en-US"?"/browser": ("/locales/".$__l))."/readme.html");
}
}
Open_KMWiki{
macroinfo=_("Go to the K-Meleon Wiki");
#open("http://kmeleon.sourceforge.net/wiki/Welcome";);
if (getpref(BOOL,$pref_helpmenu)) {
$OpenURL="http://kmeleon.sourceforge.net/wiki/Welcome";;
&OpenURL_InNew;
} else {
open("http://kmeleon.sourceforge.net/wiki/Welcome";);
}
}
But for other entries, I haven't found it. Probably in K-Meleon, or in chrome ?
A+
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Re: Window Diversion inefficient
Posted by: siria
Date: October 23, 2014 07:30AM
Oops... looks like the other help links are moved inside this omni.ja thing!
At least a file search locates "faq" inside there.
But to my surprise, in 74gre31 my 7z cannot open it anymore??!
In 74.0 it still worked, and looks like there those missing help options are inside an "about.xhtml"
They are now all "<!ENTITY" elements. (incl. the link to "wiki/ReleaseNotes15" )
.....
Okay, found it....
Nothing changed fundamentally, again just the KM1.6 stuff could be copied over: the block of help-items are at the end of main.kmm. Just as syntax-Example:
hm_FAQ {
macroinfo=_("View the K-Meleon FAQ");
if (getpref(BOOL,$pref_helpmenu)) {
$OpenURL="http://kmeleon.sourceforge.net/wiki/FAQ";
&OpenURL_InNew;
} else {
id(ID_LINK_KMELEON_FAQ);
}
}
If the whole block from KM1.6 main.kmm gets copied into the new main.kmm again, and then in global "menus.cfg", in the "help" chapter, the ID-commands are replaced again with the macros from main.kmm (copy again from KM1.6 menus.cfg), it should work again.
-----
Something else related:
The 3 docinfo-prefs are missing among the default prefs.
Their names are listed in the beginning of "docinfo.kmm"
Adding them in about:config makes the grayed out checkboxes at the end of F2>Window Diversion accessible.
----
Edit: Oh great, the forum now auto-inserts ; after links? Huh??
Could disable bbcode, but then there could be no formatting either. Have removed the ; after the link, so now there is only 1 ; again
Edited 1 time(s). Last edit at 10/23/2014 07:36AM by siria.
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: JamesD
Date: October 23, 2014 09:05PM
Quote
siria
In F2>macros there are two native ones declared as "user-defined": places.kmm + tuxhelper.kmm
Quote
JujuLand
tuxhelper is a macro which allows using some features not possible with wine under Linux (due to some protocols not implemented in wine, like mailto, for example...)
As it's included in the setup, I think it should be shown like the other macros, and not user-defined. But it's a detail ...
It is easy to fix if you know all the languages required. I know only English.
For languages other than English the definitions are found in Root\locales[language code] \ [language code].jar\kmprefs\kplugins\ in the file named macros.properties.
For English the file is located in Root\browser\omni.ja\chrome\en-US\locale\kmprefs\kplugins\in the file named macros.properties.
Re: Window Diversion inefficient
Posted by: JujuLand
Date: October 24, 2014 02:22AM
Here are the two modified files to correct the problem:
main.kmm to put in macros
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Attachments: Windows_Diversion.zip (11.2 KB)
Re: K-Meleon 74.0 Final Release (gre 24.7)
Posted by: siria
Date: October 24, 2014 02:46AM
Quote
JamesD
Quote
siria
In F2>macros there are two native ones declared as "user-defined": places.kmm + tuxhelper.kmm
Quote
JujuLand
tuxhelper is a macro which allows to use some features not possible with wine under Linux (due to some protocols not implemented in wine, like mailto, for exemple...)
It is easy to fix if you know all the languages required. I know only English.
It would already be a huge difference to have at least the english fallback description instead of nothing, only "user-defined". Perhaps we should first start there? Suggestions...?
tuxhelper: Provides more features on Linux systems running KM in wine?
places: History function, allows to remember location of visited pages?
Macro description in kmpref
Posted by: JujuLand
Date: October 24, 2014 03:35AM
Hi, siria,
I have give shorten titles:
Linux helper (mailto and more)
Assistant Linux (mailto et plus)
Linux gehilfe (mailto und mehr)
Linux asistente (mailto y mas)
Browsing history
Browsing verlauf
Historial de navigación
Tell me if I'm wrong (I never speak German, which I thought too difficult)
Here are the modified files:
browser/omni.ja
locales/de
locales/es-ES
locales/fr
sorry for russian and chinese, but I'm not able to do it ...
A+
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Edited 1 time(s). Last edit at 10/24/2014 03:41AM by JujuLand.
Re: Macro description in kmpref
Posted by: guenter
Date: October 24, 2014 12:56PM
Quote
siria
Oops... looks like the other help links are moved inside this omni.ja thing!
At least a file search locates "faq" inside there.
But to my surprise, in 74gre31 my 7z cannot open it anymore??!
In 74.0 it still worked, and looks like there those missing help options are inside an "about.xhtml"
...snip...
That means omni.ja was packed with the Mozilla packing method/tools. The table of contents is at the beginning, not at the end of the archive, for faster use. It improves FF's startup.
7.z 9.26 or higher support it. But even then You cannot just click on it but must use 7.z menus for Open... At least on my PC.
Quote
JujuLand
Hi, siria,
I have give shorten titles:
Linux helper (mailto and more)
Assistant Linux (mailto et plus)
Linux gehilfe (mailto und mehr)
Linux asistente (mailto y mas)
Browsing history
Browsing verlauf
Historial de navigación
Tell me if I'm wrong (I never talk german that I thought too much difficult)
Here are the modified files:
browser/omni.ja
locales/de
locales/es-ES
locales/fr
sorry for russian and chinese, but I'm not able to do it ...
A+
Yes, siria is the much better German translator.
Translation correct enough for me. Only that nouns are capitalized in written German.
Linux Gehilfe (mailto und mehr) - or maybe: Linux Helfer (Mail und mehr)
Verlauf (with capital letter is sufficient for me)
BTW. helper -> Helfer is a regular sound change from
English and Low German -> Standard German
(helped me to extend the vocabulary without learning)
End of German class.
You can give me a French lesson when I ever make it to Southern France again.
Edited 1 time(s). Last edit at 10/24/2014 01:31PM by guenter.
Re: Macro description in kmpref
Posted by: JujuLand
Date: October 24, 2014 02:52PM
Quote
guenter
Yes, siria is the much better German translator.
ok, I'll wait for siria translation
Quote
guenter
7.z 9.26 or higher support it. But even then You cannot just click on it but must use 7.z menus for Open...
No problem under Linux with file-roller, but I'm not sure that it saves with the mozilla way.
Can you confirm ?
Quote
guenter
You can give me a French lesson when I ever make it to Southern France again.
No problem ... when ?
A+
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Re: Macro description in kmpref
Posted by: siria
Date: October 24, 2014 03:29PM
Tsss.... I'm lousy at translations too, somehow in german literal translations tend to sound just silly, but anyway:
--------
7z:
How exactly do you open it, step-by-step??
Edited 1 time(s). Last edit at 10/24/2014 03:31PM by siria.
Re: Macro description in kmpref
Posted by: JujuLand
Date: October 24, 2014 06:47PM
Using Ubuntu
de.jar updated
A+
Mozilla/5.0 (x11; U; Linux x86_64; fr-FR; rv:38.0) Gecko/20100101 Ubuntu/12.04 K-Meleon/76.0
Web: http://jujuland.pagesperso-orange.fr/
Mail : alain [dot] aupeix [at] wanadoo [dot] fr
Ubuntu 12.04 - Gramps 3.4.9 - Harbour 3.2.0 - Hwgui 2.20-3 - K-Meleon 76.0 rc
Edited 1 time(s). Last edit at 10/24/2014 06:55PM by JujuLand.
Re: Macro description in kmpref
Posted by: JamesD
Date: October 24, 2014 08:26PM
Quote
siria
7z:
How exactly do you open it, step-by-step??
I am using 7-Zip version 9.20 on Win 7. I just did a right-click on the omni.ja file to get a menu which included "7-Zip". When I selected 7-Zip I got a menu with two "Open Archive" items. I used the second one which is a popup with file type selections. I choose the one marked "zip". That opened the 7-Zip console where I could open folders until I found the one that had the file "macros.properties". I did a right-click on macros.properties and selected "Edit" from the menu. That opened the file in Notepad. I added the two lines and then did "File > Save" to save the file. Then I exited Notepad and I was asked if I wanted to update the archive. I chose to do that. Then I exited 7-Zip.
Re: Macro description in kmpref
Posted by: rodocop
Date: October 24, 2014 08:29PM
I work with jar and ja-files in Total Commander transparently (like with zips)
ru-locale:
Linux helper (mailto and more)
Linux-ассистент (для ссылок mailto: и т.п.)
Browsing history
История навигации
Latest Release KM75.1 Latest dev KM76RC ||| Visit The K-Meleon Place and join me there!
Old good stuff: K-Meleon-1.6db+NS // KM-16-S2014 // 1.6beta2.6 // K-Meleon Twin+
Re: Macro description in kmpref
Posted by: JohnHell
Date: October 24, 2014 09:33PM
If we are talking about zip methods. WinRAR 4.11 and 7-zip 9.20. No problems at all.
To compress them better, I recommend extracting omni.ja and recompressing it again as zip.
Is it slower? I don't know, I just prefer to save disk space. And no problems until now.
Re: Macro description in kmpref
Date: October 25, 2014 12:38AM
Guys, anyone having the problem of NOT being able to find the Edit button in Facebook messages (status messages are ok)?
Re: Macro description in kmpref
Posted by: George Hall
Date: October 25, 2014 01:47AM
Quote
JohnHell
If we are talking about zip methods. WinRAR 4.11 and 7-zip 9.20. No problems at all.
To compress them better, I recommend to extract omni.ja and recrompress again as zip.
Is it slower?, I don't know, I just prefer to save disk space. And no problems until now.
If you use 7-Zip 9.20 - 9.22 Beta you cannot extract omni.ja files. However, with 7-Zip 9.32 alpha or 9.34 alpha you can extract omni.ja files.
Otherwise you have to use a different program such as WinRAR to extract omni.ja files, or rename omni.ja to omni.zip and treat it as a compressed folder to extract with Windows' built-in compression (zipfldr.dll).
When recompressing omni.ja as a zip with 7-Zip or another program such as WinRAR, use the 'store' compression level, because I found that any other compression level can cause lag when watching some Flash videos with K-Meleon 74 and some earlier versions of K-Meleon as well.
Re: Macro description in kmpref
Posted by: siria
Date: October 25, 2014 02:34AM
omni.ja:
I still can only open the omni in 74g24, but not the one in 74g31.
Tried with 9.26 and 9.30.
9.34 doesn't run anymore on win98
|
2017-02-25 15:58:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24593274295330048, "perplexity": 14073.394299270334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00460-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://codeforces.com/problemset/problem/461/D
|
time limit per test
2 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Toastman came up with a very complicated task. He gives it to Appleman, but Appleman doesn't know how to solve it. Can you help him?
Given an n × n checkerboard. Each cell of the board has either character 'x', or character 'o', or nothing. How many ways are there to fill all the empty cells with 'x' or 'o' (each cell must contain only one character in the end), such that for each cell the number of adjacent cells with 'o' will be even? Find the number of ways modulo 1000000007 (10^9 + 7). Two cells of the board are adjacent if they share a side.
Input
The first line contains two integers n, k (1 ≤ n, k ≤ 10^5) — the size of the board, and the number of cells that have characters initially.
Then k lines follow. The i-th line contains two integers and a character: a_i, b_i, c_i (1 ≤ a_i, b_i ≤ n; c_i is either 'o' or 'x'). This line means: there is a character c_i in the cell that is located on the intersection of the a_i-th row and b_i-th column. All the given cells are distinct.
Consider that the rows are numbered from 1 to n from top to bottom. Analogically, the columns are numbered from 1 to n from left to right.
Output
Print a single integer — the answer to the problem.
Examples
Input
3 2
1 1 x
2 2 o
Output
2
Input
4 3
2 4 x
3 4 x
3 2 x
Output
2
Note
In the first example there are two ways:
xxo xoo
xox ooo
oxx oox
|
2022-08-15 00:53:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20156705379486084, "perplexity": 862.5248295362211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00767.warc.gz"}
|
https://www.physicsforums.com/threads/intuition-why-area-of-a-period-of-sinx-4-area-of-square-unit-circle.700342/
|
# Intuition why area of a period of sinx =4 = area of square unit circle
1. Jul 7, 2013
### CoolFool
1. The problem statement, all variables and given/known data
This isn't really homework, but I've been reviewing calc & trig and realized that the area of one period of sin(x) = 4. Since sin(θ) can be understood as the y-value of points along a unit circle, I noticed that the area of a unit square that bounds the unit circle is also 4. Is this a relationship about squaring a circle, or just a coincidence?
2. Relevant equations
A unit circle is a circle with a radius of one.
Area of one period of sin x is $2 \int^{\pi}_{0} sin(x) dx = 4$
For a unit circle, r=1. So the area of a square bounding the unit circle is also $(2r)^{2} = 4$.
3. The attempt at a solution
I tried drawing out what the area under the curve of sin(x) means, focusing on the first quarter of the unit circle (so, from 0 to pi/2, which is 1/4 the period of sinx and has an area of 1. The square bounding the quarter of a circle also has an area of $r^{2}=1$.)
I understand that the area under sin(x) is the infinite sum of all measurements of the y-coordinate of a point on a rotating unit circle. But why does that become a square?
In other words, what does the area under sin(x) mean and what is its relationship to the square bounding the unit circle (or the 1x1 square bounding the quarter of the circle)? Why?
I hope I have conveyed this question clearly. Thank you for your help!
Last edited: Jul 7, 2013
2. Jul 7, 2013
### Simon Bridge
Niggle: The area between the x axis and sin(x) for any integer number of periods is 0.
What you did was the area of |sin(x)| ...
The relationship is to do with the way "sine" is defined on the unit circle.
You can think of it like the way Pythagoras sometimes gets demonstrated by putting squares on each side of a right-angle triangle and showing that the two smaller squares can be cut up so they fit exactly inside the biggest one.
Note: does it make a difference if the circle has unit circumference instead of unit area?
3. Jul 7, 2013
### CoolFool
I don't get what you mean about the cut-up triangles for this application. My confusion is that it doesn't fit: the area under a period of |sin x| is 4, but a unit circle's area is only $\pi$.
A unit circle is a circle with a radius of one, not an area of one. I've now made this explicit in the question.
4. Jul 7, 2013
### Simon Bridge
It's a simile - an analogy ...
The area under the sine from x = 1 to x = pi/2 can be cut up to fill the gap between the sine curve and y = 1 from x = 0 to x = 1.
That was already clear. The sine is a specific length defined on the unit-radius circle... since the one was derived from the other, it is not surprising to find they have special relationships.
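[Added note, not part of the original posts: the quarter-period integral written out makes the connection concrete. The area under one quarter period is
$\int_0^{\pi/2} \sin x \, dx = \left[ -\cos x \right]_0^{\pi/2} = 0 - (-1) = 1$
which equals the area of the 1 x 1 square bounding a quarter of the unit circle; four such quarters give the 4 observed for a full period of |sin x|.]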
|
2018-02-21 15:36:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7999905347824097, "perplexity": 393.04087690278305}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813626.7/warc/CC-MAIN-20180221143216-20180221163216-00501.warc.gz"}
|
https://learn.careers360.com/ncert/question-solve-the-following-equations-a-2x-plus-4-equals-12/
|
# Solve the following equations: (a) 2(x + 4) = 12
2. Solve the following equations:
(a) 2(x + 4) = 12
(b) 3(n – 5) = 21
(c) 3(n – 5) = – 21
(d) – 4(2 + x) = 8
(e) 4(2 – x) = 8
(a) We have:
2(x + 4) = 12
Dividing both sides by 2, we have :
$x\ +\ 4\ =\ 6$
Transposing 4 to the RHS, we get :
$x\ =\ 6\ -\ 4\ =\ 2$
Thus $x\ =\ 2$
(b) We have:
3(n – 5) = 21
Dividing both sides by 3, we have :
$n\ -\ 5\ =\ 7$
Transposing - 5 to the RHS, we get :
$n\ =\ 7\ +\ 5\ =\ 12$
Thus $n\ =\ 12$
(c) We have :
3(n – 5) = – 21
Dividing both sides by 3, we have :
$n\ -\ 5\ =\ -\ 7$
Transposing - 5 to the RHS, we get :
$n\ =\ -\ 7\ +\ 5\ =\ -\ 2$
Thus $n\ =\ -\ 2$
(d) We have :
– 4(2 + x) = 8
Dividing both sides by - 4, we have :
$x\ +\ 2\ =\ -\ 2$
Transposing 2 to the RHS, we get :
$x\ =\ -\ 2\ -\ 2\ =\ -\ 4$
Thus $x\ =\ -\ 4$
(e) We have :
4(2 - x) = 8
Dividing both sides by 4, we have :
$2\ -\ x\ =\ 2$
Transposing x to the RHS and 2 to the LHS , we get :
$x\ =\ 2\ -\ 2\ =\ 0$
Thus $x\ =\ 0$
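As an optional cross-check (this is an added note, not part of the original solution, and assumes the sympy library is installed), the five answers can be verified in Python:

import sympy as sp

x = sp.symbols('x')
# The five equations from the exercise, in the same order as above.
equations = [sp.Eq(2*(x + 4), 12), sp.Eq(3*(x - 5), 21), sp.Eq(3*(x - 5), -21),
             sp.Eq(-4*(2 + x), 8), sp.Eq(4*(2 - x), 8)]
print([sp.solve(eq, x) for eq in equations])   # [[2], [12], [-2], [-4], [0]]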
|
2019-11-18 06:44:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9167459607124329, "perplexity": 4688.112047618203}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00525.warc.gz"}
|
https://www.scienceforums.net/topic/29473-introduction-to-calculus-differentiation/?tab=comments#comment-444140
|
# Introduction to Calculus: Differentiation
## Recommended Posts
Prerequisites
There's only so much I can do: I'm assuming that you've got a solid basis in algebra, and I will start from about the level of maths GCSE. I assume that you will understand the concept of a function (e.g. $f(x) = x^2$) and understand various concepts such as graphing techniques. For the later stages, I assume some knowledge in the area of trigonometry, mainly the sine and cosine functions. For the more advanced calculus, I will be working in radians instead of degrees for the measurement of angles.
There is one other thing: GRADIENTS - know that the definition of a gradient of a straight line between two points (x1, y1) and (x2, y2) is [imath]\frac{y_2 - y_1}{x_2 - x_1}[/imath]. (Some people know gradients as "slope." They're the same thing.)
Here's a list of the topics covered in the rest of this tutorial, with links to those posts:
Lesson 1 - The basics of limits
So what actually is a limit? It's a very hard concept to define in layman's terms (although relatively easy from a strictly analytical point of view). I think the best way to think of it is in terms of sequences.
Imagine you have a sequence of numbers that goes like this: 1, 1/2, 1/3, 1/4, 1/5, ... and so on. If we call the nth number an, then it's fairly clear to see that a1 is 1, a2 is 1/2, and so on. The mathematical definition for the nth number is obviously an = 1/n.
Now we look at what happens as we get bigger and bigger values of n. We can notice that each term in the sequence gets progressively smaller as we increase the value of n, and it doesn't take a genius to work out that as we get really big values of n, we get excruciatingly small values for an. Eventually, with incredibly huge numbers, an will be almost 0 (but it never will actually be 0). So we can say that the "limit" of an is 0 as n gets really big (i.e. as n tends to infinity).
Don't start crying just yet over how complex this all is; it's an abstract concept to understand, and it'll take some time just to understand the idea, let alone how it all works. A quick remark on this: we won't be using limits that tend to infinity much in calculus at all, I just used it as an example.
A very important idea to understand is the fact that we're not actually saying that the sequence will ever hold a value of 0 - what we are saying is that if you were to go on and extend the sequence forever, then you'd be continually getting closer and closer to zero.
Quickly, some notation. You won't be using this every day, but it's handy to know. The situation described above could be represented like this:
$\lim_{x \to \infty} \frac{1}{x} = 0$
meaning that as we put bigger and bigger numbers in to [imath]\frac{1}{x}[/imath], the answer approaches 0.
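If you want to see this numerically, here is a tiny Python sketch (an added illustration, not part of the original lesson):

# Evaluate 1/x for increasingly large x: the values shrink toward 0 but never reach it.
for x in [10, 1000, 100000, 10000000]:
    print(x, 1 / x)
# 10 0.1
# 1000 0.001
# 100000 1e-05
# 10000000 1e-07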
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
Lesson 2 - The basics of differentiation
So now onto first principles of differentiation: this is where I tell you how to actually go about differentiating something and what we actually mean by the term 'differentiation'.
A classic math problem is to sketch a curve out (like the classic y = x2) and then they say to you: "draw a tangent to the curve at the point x = 1, and hence find the gradient at that point". And you grudgingly scrawl out a quick graph, shove a quick tangent on and get an approximate value for the gradient. After all, we all know it's dead easy to find the exact gradient between two points on a straight line, but on a curve? Bah, impossible. You just have to approximate.
But this is not so.
Let's draw ourselves a graph of y= x2, and have a look at a better way of doing things. Take a look at this graph:
We have a point P at the position (1,1) and then a point Q at position (1+h, (1+h)2). I initially got confused here: basically, we're looking at a point where x = 1, and then a point a little bit further down the x-axis at x = 1+h, where h is some value (we don't really care all that much what it is).
(If you're wondering where (1+h)2 came from, remember the equation we're graphing: y=x2. If x = 1+h, y = (1+h)2.)
Let's suppose we're asked to find the gradient at P. We could draw a line between point P and point Q and find its gradient, which will give us a rough approximation of the gradient at point P. We could repeat this over and over, moving Q closer to P every time (and thus getting an answer closer to the real value of the gradient at x = 1).
Let's look at the gradient of the line PQ. This is equal to:
$\frac{y_2 - y_1}{x_2 - x_1} = \frac{(1+h)^2 - 1}{(1+h) - 1} = \frac{(1+h)^2 - 1}{h}$
Now just a second. We want to find the limit of this (the answer) as we decrease h to practically nothing. In other words, we want to find the gradient of the line when we move P and Q incredibly close together. So close together that they're actually the same point. That should give us the gradient at x = 1.
But we've got a silly h lying around all by itself on the bottom of the equation, meaning that if we just stick h=0 into here, we get something divided by zero - we can't do that. We don't want to just stick a tiny number in for h because that would just be approximating. But wait! There's a way to make h = 0 - which would mean that we're finding the gradient of PQ where P and Q are the same. That would tell us the gradient of the curve at x = 1, which is exactly what we're looking for.
So now we have to play around a bit with the fraction, and this is the key operation of this lesson. Make sure you watch very, very carefully and understand each step in the minutest of detail. First of all, notice that (1+h)2 = 1 + 2h + h2. So now we have the gradient equal to:
$\frac{(1+h)^2 - 1}{h} = \frac{1 + 2h + h^2 - 1}{h} = \frac{2h + h^2}{h} = \frac{h(2 + h)}{h} = 2 + h$
Hurrah! Now we have something that we can work with. Notice that if we shrink h to zero as we intended, the gradient of PQ will tend to 2+0 = 2. Or, in other words, the gradient at x = 1 is 2. That's not an approximation, that's the exact answer. To use the appropriate terminology, we have just differentiated the function to find its derivative.
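As a quick numerical sanity check (added here, not part of the original lesson), you can compute the gradient of the chord PQ for smaller and smaller h and watch it close in on 2:

# Gradient of the chord on y = x^2 between x = 1 and x = 1 + h.
def chord_gradient(h):
    return ((1 + h)**2 - 1) / h

for h in [0.1, 0.01, 0.001, 0.0001]:
    print(h, chord_gradient(h))
# The printed values (roughly 2.1, 2.01, 2.001, 2.0001) approach the exact gradient, 2.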
So we have a method for finding the exact value of the gradient at a certain point. All those hours of drawing tangents to curves wasted whilst your teacher can work out the answer in his head...
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
Lesson 3: The formal definition of differentiation and some basic shortcuts
So now we've figured out how to differentiate a basic function using the methods above. You'll notice how tedious and boring they were to work out (if you don't think it was tedious, wait until you try more difficult functions). Surely there are some shortcuts.
There are. But first, we'll formalize what we know about differentiation into a simple equation:
$\frac{d}{dx}f(x) = \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} \right)$
You're probably now wondering "what does that [imath]\frac{d}{dx}[/imath] thing mean and why is it up there?" In short, that's the notation to describe a derivative of a function - it means "the derivative with respect to x of the function f(x)." (The d is not a variable, it's an operator, so you can't cancel it out of that fraction.) You may also see things like f'(x) (pronounced "f prime of x"), which also indicates a derivative.
Anyway, the fancy limit above is essentially the same as the equation we used before. You'll see why we used the limit - you can't plug in 0 for h, but you certainly can evaluate the limit as h gets infinitesimally close to 0. (Re-read lesson 2 if you don't quite understand.)
Shortcuts
In Lesson 2, we plugged numbers into an equation and differentiated it. Suppose we want to find the slope at several points on the same graph. We'd have to do all that tedious factoring and simplifying every single time - or not. It turns out that the formula above can work on the equation you're differentiating even without real values in it. In other words, you can leave the x variable in and differentiate and get an equation that will give you the slope at any given point on the curve.
Let's try it for the function f(x) = x2.
We can see that this is true:
$\frac{(x+h)^2 - x^2}{h} = \frac{x^2 + 2xh + h^2 - x^2}{h} = \frac{2xh + h^2}{h} = \frac{h(2x + h)}{h} = 2x + h$
That means that the limit simplifies rather nicely:
$\frac{d}{dx}x^2 = \lim_{h \to 0} \left( \frac{(x+h)^2 - x^2}{h} \right) = \lim_{h \to 0} (2x+h) = 2x$
(If you don't get that last step, remember that we're making h approach 0. When h is no longer on the bottom of a fraction, we can safely make it zero without "breaking" the equation.)
So what's this mean? It means that at any point on the curve x2, the slope of the curve is 2x. You could say that
$\frac{d}{dx} x^2 = 2x$
But I want to do it faster!
But wait, there are yet more shortcuts!
If you try differentiating a few simple equations, you might notice a pattern. Take a look:
$\frac{d}{dx} x^2 = 2x$
$\frac{d}{dx} x^3 = 3x^2$
$\frac{d}{dx} 2x^2 = 4x$
Notice a pattern? Basically, if you have a function of the form axn, the derivative is
$\frac{d}{dx} ax^n = anx^{n - 1}$
This rule also applies for longer equations. Suppose I have the equation [imath]f(x) = x^3 - 2x^2 + 2x - 3[/imath]. The derivative of that equation is equal to the derivatives of all the parts, added together, like so:
$f'(x) = 3x^2 - 4x + 2$
You may have noticed that the term "- 3" vanished from the equation, and you're right: it has no variable in it, so we can leave it out of the derivative. The 2x became a 2 for similar reasons. Watch what happens when we take the derivative of 2x with our rule: [imath]\frac{d}{dx} 2x^1 = 1 \times 2x^0 = 2[/imath] (because x0 = 1).
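If you happen to have the sympy library installed, you can check this shortcut symbolically. This is just an added sanity check, not part of the original tutorial:

import sympy as sp

x = sp.symbols('x')
f = x**3 - 2*x**2 + 2*x - 3
print(sp.diff(f, x))   # prints 3*x**2 - 4*x + 2, matching the power-rule result above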
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
• 4 weeks later...
Lesson 4: The Product Rule
From this point forward, all you have left to learn is more sophisticated ways of finding derivatives. The first is called the Product Rule.
Let's say I give you this equation:
$f(x) = (x - 2)(x + 4)$
and I ask you for its derivative. You've got two choices: plug it in to the big limit in Lesson 3, or try to use our easy rule. The first choice would be a pain, and the rule just doesn't work -- this isn't a function of the form axn. You might also expand the equation out, but that gets to be a pain with more complicated functions.
In steps the Product Rule.
The Product Rule takes effect when you have two "chunks" multiplied with each other in the equation. In this case, our "chunks" are (x - 2) and (x + 4). Let's give each chunk a name to make things easier:
$u = (x - 2)$
$v = (x + 4)$
The product rule says the derivative of [imath](x - 2)(x + 4)[/imath], otherwise known as [imath]u \cdot v[/imath], is equal to [imath]\frac{d}{dx}u \cdot v + \frac{d}{dx}v \cdot u[/imath]. So to find the derivative of f(x), we'd do this:
$\frac{d}{dx} (x-2)(x+4) = \frac{d}{dx} (u \cdot v) = \frac{d}{dx}u \cdot v + \frac{d}{dx}v \cdot u$
and then find the derivatives of the parts:
$\frac{d}{dx}u \cdot v + \frac{d}{dx}v \cdot u = \frac{d}{dx}(x-2) \cdot (x + 4) + \frac{d}{dx}(x+4) \cdot (x - 2)$
And since we can find the derivative of things like (x - 2):
$1 \cdot (x + 4) + 1 \cdot (x - 2) = (x + 4) + (x - 2) = 2x + 2$
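Here is an optional sympy check of the Product Rule on this example (an addition, not part of the original lesson; it assumes sympy is installed):

import sympy as sp

x = sp.symbols('x')
u, v = x - 2, x + 4
direct  = sp.diff(u * v, x)                      # differentiate the product directly
by_rule = sp.diff(u, x) * v + sp.diff(v, x) * u  # u'v + v'u
print(sp.expand(direct), sp.expand(by_rule))     # both print 2*x + 2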
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
• 3 months later...
Lesson 5: The Quotient Rule
By now you should have a good grasp of basic differentiation. (If you don't, I suggest you try to work it out rather than plowing ahead.) However, there are still a few cases that you don't yet know how to handle.
For example, what's the derivative of this?
$f(x) = \frac{x-2}{x+4}$
None of the rules and shortcuts so far tells you how to do that. In steps the Quotient Rule. First, let's separate our function into two parts:
$u = x - 2$
$v = x + 4$
meaning that
$f(x) = \frac{u}{v}$
The quotient rule tells us that the derivative of that equation is this:
$f'(x) = \frac{u'\cdot v - v'\cdot u}{v^2}$
(Remember that u' is the shorthand for the derivative of u.)
So now you just need to find the derivatives of each of the parts -- the derivatives of u and v. You just apply the rules you learned before and find that they're both 1.
So that means that:
$f'(x) = \frac{1 (x + 4) - (1 (x - 2))}{(x + 4)^2}$
(You need to remember the parentheses after the minus sign. That negative distributes over everything in the parentheses, so remember to change the signs when you're working out the subtraction.)
There's some more math you can do to simplify that out, but it's not really necessary. You get the idea.
And that's all there is to the Quotient Rule.
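If you want to verify the result with sympy (an optional, added check under the assumption that sympy is available):

import sympy as sp

x = sp.symbols('x')
u, v = x - 2, x + 4
by_rule = (sp.diff(u, x)*v - sp.diff(v, x)*u) / v**2   # (u'v - v'u) / v^2
direct  = sp.diff(u / v, x)
print(sp.simplify(by_rule - direct))                   # prints 0, so the two expressions agree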
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
• 3 weeks later...
Lesson 6: The Chain Rule
The Chain Rule helps you solve another important type of equation. This kind:
$g(x) = 4(x^2 - 7)^6$
You have a choice: you could expand the equation out (which would take a very long time) and apply the other rules of differentiation, or you could use the chain rule.
Let's break that above equation into two separate functions, a and b:
$a(x) = 4x^6$
$b(x) = x^2 - 7$
That means we can redefine g(x) like this:
$g(x) = a(b(x))$
For those of you who don't see how g(x) can be a(b(x)), try it. a(b(x)) is a(x) with b(x) stuck in wherever there's an x, like this:
$g(x) = a(b(x)) = 4(b(x))^6 = 4(x^2 - 7)^6$
We simply inserted x2 - 7 where there was an x.
How do you find the derivative? The rule says it's this:
$\frac{d}{dx} g(x) = a'(b(x)) \cdot b'(x)$
So it's as simple as finding the derivatives of a(x) and b(x) using all the rules we learned before and substituting them back into the problem.
$a'(x) = 24x^5$
$b'(x) = 2x$
Then we substitute those back in:
$\frac{d}{dx} g(x) = 24(b(x))^5 \cdot 2x$
$\frac{d}{dx} g(x) = 24(x^2 - 7)^5 \cdot 2x$
From there you can simplify the equation any way you'd like.
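An optional sympy check of this chain-rule example (added, not part of the original post; sympy assumed installed):

import sympy as sp

x = sp.symbols('x')
g = 4*(x**2 - 7)**6
by_rule = 24*(x**2 - 7)**5 * 2*x            # a'(b(x)) * b'(x) from above
print(sp.diff(g, x))                        # prints 48*x*(x**2 - 7)**5
print(sp.expand(by_rule - sp.diff(g, x)))   # prints 0, confirming the answer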
So the chain rule is as simple as breaking the equation down into parts. Try a few below. The answers are at the end of this post.
1. $g(x) = 2(x + 4)^3 + 7x$
2. $q(x) = ((x + 4)^4)^2$
1. You could split g(x) into [imath]a(x) = x + 4[/imath] and [imath]b(x) = 2x^3[/imath], making [imath]g(x) = b(a(x)) + 7x[/imath]. You can safely leave the 7x sitting around and derive it by itself because it's being added, not multiplied.
2. q(x) should be split into [imath]a(x) = x + 4[/imath], [imath]b(x) = x^4[/imath], and [imath]c(x) = x^2[/imath]. That makes [imath]q(x) = c(b(a(x)))[/imath]. How do you solve that? Easy. [imath]q'(x) = c'(b(a(x))) \cdot b'(a(x)) \cdot a'(x)[/imath]. Remember, it's "the derivative of the outer function, times the derivative of the inner function." You just have to apply the chain rule to the inner function to find its derivative.
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
• 5 months later...
Lesson 7: Derivatives of Trigonometric Functions
Often times you'll see something like this:
$f(x) = \sin(4x^2)$
and be asked to find the derivative. This leads to the obvious question: what's the derivative of a trig function? How do you derive sin?
There's no easy method to do so (save a lot of math you haven't learned yet), but it is easy to memorize:
$\frac{d}{dx} \sin x = \cos x$
$\frac{d}{dx} \cos x = -\sin x$
$\frac{d}{dx} \tan x = \sec^2 x$
$\frac{d}{dx} \sec x = \sec x \tan x$
$\frac{d}{dx} \csc x = - \csc x \cot x$
$\frac{d}{dx} \cot x = -\csc^2 x$
(Helpful memorization hint: The derivative of any trig function starting with a "c" is negative. The rest are positive.)
You'll have to memorize that and practice a bit to make sure you know them.
Ah, you ask, but what about [imath]f(x) = \sin(4x^2)[/imath]? Is the derivative just [imath]\cos(4x^2)[/imath]?
No.
It's a chain rule question again. The derivative of [imath]\sin x[/imath] is certainly [imath]\cos x[/imath], but when you put in the [imath]4x^2[/imath] it becomes a chain rule question. Think of it this way:
$f(x) = 4x^2$
$\frac{d}{dx} \sin(f(x)) = \cos(f(x)) \cdot f'(x)$
That looks a lot like the chain rule stuff from above, right? You split it into two functions, sin and x, and apply the chain rule as I explained in the previous lesson. So the answer would be:
$\frac{d}{dx} \sin(4x^2) = \cos(4x^2) \cdot 8x$
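And once more, a small optional check with sympy (not part of the original lesson):

import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(4*x**2), x))   # prints 8*x*cos(4*x**2), i.e. cos(4x^2) * 8x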
Remember, if you need help understanding any of this, you can just ask in our calculus forum.
Edited by Cap'n Refsmmat
• 2 years later...
Lesson 8: Basic Applications: Finding the Maximum and Minimum Values of a Function:
This lesson was contributed by SFN member Daedalus, who did an excellent job putting it together. If you find this helpful, be sure to thank Daedalus!
The derivative of a function defines the slope of a line tangent to the curve. Normally, derivatives are used in calculations where the rate of change is needed. However, the derivative can also be used to find the extremum (singular) or extrema (plural) of a function.
There are two types of extrema called the global extrema and local extrema. The global extrema defines the overall maximum and / or minimum value(s) in the range of a function. The local extrema, sometimes referred to as the relative extrema, defines the maximum and / or minimum value(s) in the range of a function within a given region.
To find the extrema of a function, we must find the stationary points of a function where a horizontal line is tangent to the curve. This involves finding the roots of the first derivative which is the same as setting the first derivative equal to zero and solving for $x$.
$f'(x)=0$
A stationary point may be the minimum, maximum, or an inflection point on the curve. We can see that in this example:
The labeled points have slopes of zero (as shown by the tangent lines) and show where the function "flattens out." This happens at minima and maxima, but also at inflection points, where the function increases, stops, then increases again (or the reverse).
To make sure that we have found the extrema, we can use the first derivative test which states:
For any stationary point $x_{s}$ on a continuous function $f(x)$ we can determine if the stationary point is a minimum, maximum, or inflection point of the function according to the following rules:
1. Local Minimum (Possibly Global): $f'(x)<0$ to the left of $x_{s}$ and $f'(x)>0$ to the right.
2. Local Maximum (Possibly Global): $f'(x)>0$ to the left of $x_{s}$ and $f'(x)<0$ to the right.
3. Inflection Point: The sign of $f'(x)$ is the same on both sides of $x_{s}$.
Applying The Derivative:
Now that we have discussed how to find the extrema of a function, we will apply this knowledge to solve a problem that anyone who grows a garden will appreciate.
A farmer has decided to plant a garden and would like to build a fence around it to keep the animals from eating the crops. Being an experienced farmer, he decided to place the garden up against his barn because he only has 100 ft of fence and he wants to enclose the largest possible area. The following image illustrates the problem:
We can see that the area of the garden is defined by:
$A=x \times y$
We can also see that the perimeter of the fence is equal to:
$P=2\, x + y$
However, the equation for the area of the garden is not in a form that is useful to us. We'd like the equation in a form that only uses one variable, to make it easier to work with. Let's use the variable $x$. If we look at the equation for the perimeter of the fence, we can solve for $y$ and substitute that result into the equation for the area of the garden:
$y=P - 2\, x$
Replacing the variable $y$ with $P-2\, x$ in the equation for the area of the garden yields:
$A=x \, (P - 2\, x) = x\, P - 2\, x^2$
Now, as stated in the problem, we know the farmer has 100 feet of fence to work with, so $P=100$. We can see there are many possible areas for the garden, depending on the length of the sides -- there could be a very long, skinny garden with almost no space, or a broad garden with plenty of space. If we plot $A$, we see that it forms a parabola:
However, we are only interested in the largest possible area that can be enclosed by the fence. This means we must find the number $x$ which gives the maximum value of $A$ in our function. To do this we will locate our stationary point(s) by taking the first derivative of our function, setting it equal to zero, and solving for $x$:
The derivative of our area function as provided by the power rule:
$A'(x)=P - 4\, x$
We search for horizontal tangents by setting $A'(x)=0$:
$P - 4\, x=0$
Solving for $x$:
$x=\frac{P}{4}$
We can check this point to see if it is the maximum by using the first derivative test:
$P- 4\, \left (\frac{P}{4} - 0.01\right) = 0.04$
$P- 4\, \left (\frac{P}{4} + 0.01\right) = -0.04$
The stationary point is indeed the maximum because the slope to the left of the stationary point is positive and the slope to the right is negative. (That is, the function goes up, reaches this maximum point, and then goes down -- it doesn't go up again.) Furthermore, by looking at the graph we can conclude that this stationary point is the global maximum of the function, since it only goes down on each side.
We know that the farmer only has 100 ft. of fence which is equal to the perimeter that we have defined in the equations. With $P=100 \mbox{ft}$, we can let the farmer know that the width of his garden needs to be $25 \mbox{ft}$ :
$\frac{P}{4}=\frac{100 \mbox{ft}}{4}=25 \mbox{ft}$
The length of his garden needs to be $50\mbox{ft}$ :
$y=P - 2\, x=100 \mbox{ft} - 2\, (25 \mbox{ft})= 50 \mbox{ft}$
This gives the farmer a maximum $1250 \mbox{ft}^2$ of area to plant his garden :
$x \times y=25 \mbox{ft} \times 50 \mbox{ft}=1250 \mbox{ft}^2$
or
$x \times y=\frac{P}{4} \times \left(P-2\, \left(\frac{P}{4}\right)\right)=\frac{P^2}{8}=\frac{10000 \mbox{ft}^2}{8}=1250 \mbox{ft}^2$
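For readers who like to check such optimisation problems by computer, here is a short sympy sketch of the garden-against-the-barn case (an addition, not from the original post; the symbol names are mine and sympy is assumed to be installed):

import sympy as sp

x, P = sp.symbols('x P', positive=True)
A = x*P - 2*x**2                        # area with the barn forming one side
x_star = sp.solve(sp.diff(A, x), x)[0]  # stationary point of A
print(x_star)                           # prints P/4
print(A.subs(x, x_star).subs(P, 100))   # prints 1250, the maximum area for 100 ft of fence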
If the farmer decided to place his garden away from the barn such that he had to use the entire length of fence to enclose the garden (rather than letting one wall of the barn fence in the garden), what would be the maximum area he could enclose?
Would this area be a rectangle or would it be a square?
The area enclosed by the fence:
$A=x \times y$
The perimeter of the fence:
$P=2\, x + 2\, y$
Solve for $y$ in the perimeter function:
$y=\frac{P-2\, x}{2}=\frac{P}{2} - x$
Substitute this result into the equation for the area:
$A=x \left(\frac{P}{2} - x\right)=x \left(\frac{P}{2}\right) - x^2$
Set the first derivative equal to zero and solve for $x$
$A'(x)=\frac{P}{2} - 2\, x$
$\frac{P}{2} - 2\, x = 0$
$x=\frac{P}{4}$
If we had 100 ft. of fence, the width of the garden would be:
$\frac{P}{4}=\frac{100 \mbox{ft}}{4}=25 \mbox{ft}$
The length of the garden would be:
$y=\frac{P}{2} - x=\frac{100 \mbox{ft}}{2} - 25 \mbox{ft}=25 \mbox{ft}$
The largest area possible is:
$x \times y=25 \mbox{ft} \times 25 \mbox{ft}=625 \mbox{ft}^2$
or
$x \times y=\frac{P}{4} \times \left(\frac{P}{2} - \left(\frac{P}{4}\right)\right)=\frac{P^2}{16}=\frac{10000 \mbox{ft}^2}{16}=625 \mbox{ft}^2$
The shape of the garden would be a square:
$25 \mbox{ft} \times 25 \mbox{ft}$
• 1 year later...
Hi Cap'n Refsmmat,
At high school we studied 2 years of calculus and here are some things that I found made differentiation/integration a bit easier to understand.
In its simplest form, differentiation is the application of the rule x^n -> n * x^(n - 1) to every term in x of an equation of the form a * x^2 + b * x + c = 0.
The roots of the basic form are - b +/- the square root of (b^2 - 4 * a * c)/2a:
$x=\frac{-b \pm \sqrt{b^2 - 4 ac}}{2a}$
The first derivative of this basic form is 2 * a * x + b (set equal to 0 to find the stationary point), and the integral of this first derivative is the original basic form (up to the constant term), because integration is the exact opposite of differentiation: each x^n becomes x^(n+1) / (n+1), plus a constant of integration (which may be 0).
The units of acceleration (m/second^2), speed (m/second) and distance travelled (m) all have a pure integral/differential relationship (metres with respect to time) that gets right to the heart of Newton's calculus and his mechanics. These derivations from first principles are good examples of pure applied calculus.
Finally, repeatedly differentiating the basic form eventually reaches 0, and the constant term is lost at each step, so when you integrate back up remember to restore the constants of integration; make sure you don't overshoot on the way down or the way up.
Edited by Cap'n Refsmmat
fix typo
This topic is now closed to further replies.
|
2021-10-22 20:30:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.79964280128479, "perplexity": 340.87795046461895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585518.54/warc/CC-MAIN-20211022181017-20211022211017-00648.warc.gz"}
|
https://www.tutorialspoint.com/unique-paths-in-python
|
# Unique Paths in Python
Suppose a robot is located at the top-left corner of an n x m grid (n rows and m columns). The robot can only move either down or right at any point in time. The robot wants to reach the bottom-right corner of the grid (marked 'END' in the diagram below). So we have to find how many possible unique paths there are. For example, if m = 3 and n = 2, then the grid will be like below −
[Diagram: a 2 x 3 grid with the robot ('Robo') in the top-left cell and 'END' in the bottom-right cell]
The output will be 3, So there are total 3 different ways to reach from start position to the end position. These paths are −
1. Right → Right → Down
2. Right → Down → Right
3. Down → Right → Right
Let us see the steps −
• We will use the dynamic programming approach to solve this
• let row := n and col := m, create a table DP of order n x m and fill this with -1
• DP[row – 2, col - 1] := 1
• for i in range 0 to col
• DP[n – 1, i] := 1
• for i in range 0 to row
• DP[i, col – 1] := 1
• for i in range row -2 down to -1
• for j in range col -2 down to -1
• DP[i, j] := DP[i + 1, j] + DP[i, j + 1]
• return DP[0, 0]
Let us see the following implementation to get better understanding −
## Example
class Solution(object):
    def uniquePaths(self, m, n):
        # m is the number of columns, n the number of rows.
        # dp[i][j] holds the number of paths from cell (i, j) to the bottom-right corner.
        row = n
        column = m
        dp = [[-1 for i in range(m)] for j in range(n)]
        dp[row-2][column-1] = 1   # covered again by the border loops below
        # Every cell in the last row or last column has exactly one path (keep going right or down).
        for i in range(column):
            dp[n-1][i] = 1
        for i in range(row):
            dp[i][column-1] = 1
        # Fill the rest of the table from the bottom-right towards the top-left.
        for i in range(row-2, -1, -1):
            for j in range(column-2, -1, -1):
                dp[i][j] = dp[i+1][j] + dp[i][j+1]
        return dp[0][0]

ob1 = Solution()
print(ob1.uniquePaths(10,3))
## Input
10
3
## Output
55
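As a cross-check (an added note, not from the original article), the same count is given by the binomial coefficient C(m+n-2, n-1), because every path is some arrangement of (m-1) right moves and (n-1) down moves:

from math import comb   # Python 3.8+

m, n = 10, 3
print(comb(m + n - 2, n - 1))   # prints 55, matching the DP result above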
Published on 03-Feb-2020 14:00:58
|
2021-05-15 13:32:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2924076020717621, "perplexity": 2493.604322278723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991370.50/warc/CC-MAIN-20210515131024-20210515161024-00451.warc.gz"}
|
https://fenicsproject.discourse.group/t/accuracy-on-subdomain-area/7053
|
# Accuracy on subdomain area
Hello,
I am working with a level set formulation, in which the domain of interest is the zero level set of a function Phi. I need to calculate the area of this domain. However, since I’m using a binary strategy for marking the inside and outside of the domain, the area value is not accurate unless I use really refined meshes. I wonder if there is a simple way of improving this result, for example assigning intermediate density values to each element instead of a binary approach.
Here I have a simple example in which I calculate the area of a circle, increasing the mesh size and comparing to the expected value.
from dolfin import *
import numpy as np
from matplotlib import pyplot as plt
from tabulate import tabulate

r = 0.2

def phi(x):
    return (x[0] - 0.5)**2 + (x[1]-0.5)**2 - r**2  # Zero level set is a circle centered in (0.5,0.5) with radius 0.2

class Omega(SubDomain):
    def inside(self, x, on_boundary):
        return phi(x) > 0

# Total domain area: 1
# Interior expected area: pi*r^2
# Exterior expected area: 1 - pi*r^2
a = 10; q = 2; length = 6
size = [a * q**(n - 1) for n in range(1, length + 1)]
a_int = []; a_ext = []; e_int = []; e_ext = []; k = 0

for N in size:
    print('Mesh size:', N)
    mesh = UnitSquareMesh(N, N, 'crossed')
    domains = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
    omega = Omega()
    omega.mark(domains, 1)
    dx_int = Measure("dx", domain=mesh, subdomain_data=domains)
    a_int.append(assemble(Constant(1)*dx_int(0)))
    a_ext.append(assemble(Constant(1)*dx_int(1)))
    e_int.append(pi*r**2 - a_int[k])
    e_ext.append(1 - pi*r**2 - a_ext[k])
    print('Interior area:', a_int[k])
    print('Error on interior area:', e_int[k])
    print('Exterior area:', a_ext[k])
    print('Error on exterior area:', e_ext[k])
    k += 1

e_size = np.ones(6)/size
headers = ['mesh size', 'element size', 'interior area', 'exterior area', 'interior error', 'exterior error']
table = [size, e_size, a_int, a_ext, e_int, e_ext]
Thank you!
1 Like
I’m far from an expert on level set methods, but I thought the idea was to use the function itself as a definition of the geometry of the interface \vec{x}_\text{interface} = \{\vec{x} : \phi(\vec{x}) = 0\}.
To this end I just compute (please forgive the haphazard notation)
A_\text{interior} = \int_\Omega \begin{cases} 1 & \phi < 0 \\ 0 & \text{otherwise} \end{cases} \mathrm{d}x
and
A_\text{exterior} = \int_\Omega \begin{cases} 1 & \phi \geq 0 \\ 0 & \text{otherwise} \end{cases} \mathrm{d}x.
See the following modifications:
from dolfin import *
import numpy as np
from matplotlib import pyplot as plt
from tabulate import tabulate

r = 0.2

def phi(x):
    return (x[0] - 0.5)**2 + (x[1]-0.5)**2 - r**2  # Zero level set is a circle centered in (0.5,0.5) with radius 0.2

class Omega(SubDomain):
    def inside(self, x, on_boundary):
        return phi(x) > 0

class PhiExpr(UserExpression):
    def eval(self, value, x):
        value[0] = phi(x)
    def value_shape(self):
        return ()

# Total domain area: 1
# Interior expected area: pi*r^2
# Exterior expected area: 1 - pi*r^2
a = 10; q = 2; length = 6
size = [a * q**(n - 1) for n in range(1, length + 1)]
a_int = []; a_ext = []; e_int = []; e_ext = []; k = 0
algebraic_int, algebraic_ext, e_algebraic_int, e_algebraic_ext = [], [], [], []

for N in size:
    print('Mesh size:', N)
    mesh = UnitSquareMesh(N, N, 'crossed')

    V = FunctionSpace(mesh, "CG", 1)
    phi_function = Function(V)
    phi_function.interpolate(PhiExpr())
    algebraic_int.append(assemble(conditional(lt(phi_function, 0.0), 1, 0)*dx))
    algebraic_ext.append(assemble(conditional(gt(phi_function, 0.0), 1, 0)*dx))
    e_algebraic_int.append(pi*r**2 - algebraic_int[k])
    e_algebraic_ext.append(1 - pi*r**2 - algebraic_ext[k])

    domains = MeshFunction("size_t", mesh, mesh.topology().dim(), 0)
    omega = Omega()
    omega.mark(domains, 1)
    dx_int = Measure("dx", domain=mesh, subdomain_data=domains)
    a_int.append(assemble(Constant(1)*dx_int(0)))
    a_ext.append(assemble(Constant(1)*dx_int(1)))
    e_int.append(pi*r**2 - a_int[k])
    e_ext.append(1 - pi*r**2 - a_ext[k])
    print('Interior area:', a_int[k])
    print('Error on interior area:', e_int[k])
    print('Exterior area:', a_ext[k])
    print('Error on exterior area:', e_ext[k])
    print('Interior algebraic area:', algebraic_int[k])
    print('Error on algebraic interior area:', e_algebraic_int[k])
    print('Exterior algebraic area:', algebraic_ext[k])
    print('Error on algebraic exterior area:', e_algebraic_ext[k])
    k += 1

e_size = np.ones(length)/size
headers = ['mesh size', 'element size', 'interior area', 'exterior area', 'interior error', 'exterior error']
table = [size, e_size, a_int, a_ext, e_int, e_ext]
headers2 = ['mesh size', 'element size', 'algebraic interior area', 'algebraic exterior area', 'algebraic interior error', 'algebraic exterior error']
table2 = [size, e_size, algebraic_int, algebraic_ext, e_algebraic_int, e_algebraic_ext]
where table2 gives a slightly better approximation
mesh size element size interior area exterior area interior error exterior error
----------- -------------- --------------- --------------- ---------------- ----------------
1.000e+01 1.000e-01 1.800e-01 8.200e-01 -5.434e-02 5.434e-02
2.000e+01 5.000e-02 1.500e-01 8.500e-01 -2.434e-02 2.434e-02
4.000e+01 2.500e-02 1.375e-01 8.625e-01 -1.184e-02 1.184e-02
8.000e+01 1.250e-02 1.313e-01 8.687e-01 -5.586e-03 5.586e-03
1.600e+02 6.250e-03 1.286e-01 8.714e-01 -2.930e-03 2.930e-03
3.200e+02 3.125e-03 1.271e-01 8.729e-01 -1.465e-03 1.465e-03
mesh size element size algebraic interior area algebraic exterior area algebraic interior error algebraic exterior error
----------- -------------- ------------------------- ------------------------- -------------------------- --------------------------
1.000e+01 1.000e-01 1.400e-01 8.600e-01 -1.434e-02 1.434e-02
2.000e+01 5.000e-02 1.250e-01 8.750e-01 6.637e-04 -6.637e-04
4.000e+01 2.500e-02 1.250e-01 8.750e-01 6.637e-04 -6.637e-04
8.000e+01 1.250e-02 1.253e-01 8.747e-01 3.512e-04 -3.512e-04
1.600e+02 6.250e-03 1.256e-01 8.744e-01 3.871e-05 -3.871e-05
3.200e+02 3.125e-03 1.257e-01 8.743e-01 -3.564e-07 3.563e-07
You can probably get even better results by modifying the space in which \phi lives or some fancy quadrature scheme.
I’m pretty sure you can do wonderful things with splines to get even more precise and smooth representations of level sets using splines. See for example @kamensky’s tIGAr.
1 Like
Hi Nate,
Thank you so much for your reply. This computation you mention is exactly what I was trying to do with the class Omega. But I was getting a binary choice for each element: 1 for the interior and 0 for the exterior. Your approach seems to be much wiser, although I’m not sure if I totally get it (I’m sorry, I am still a bit new to FEniCS).
Is this line phi_function.interpolate(PhiExpr()) actually interpolating Phi function inside each element as if I was getting a density value for each element? Or does it only consider nodal values of Phi on the mesh?
Thank you much!
|
2021-12-07 21:07:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8671071529388428, "perplexity": 8113.012140343127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363418.83/warc/CC-MAIN-20211207201422-20211207231422-00279.warc.gz"}
|