url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
http://www.chegg.com/homework-help/questions-and-answers/ve-seen-carbon-coated-styrofoam-ball-radius-001-m--calculate-many-electrons-must-taken-ele-q1388395 | You've seen a carbon-coated styrofoam ball (radius 0.01 m). Calculate how many electrons must be taken from it to make the electric potential at the carbon surface 10 kilovolts.
a. 690E-30 = 6.9E-28
b. 110E-12 = 1.1E-10
c. 11E-9 = 1.1E-8
d. 694E6 = 6.94E8
e. 69E9 = 6.9E10
f. 6.9E12
g. none of the above; electrons must be given to it | 2016-08-31 04:39:14 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8296541571617126, "perplexity": 6623.57673640228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983577646.93/warc/CC-MAIN-20160823201937-00119-ip-10-153-172-175.ec2.internal.warc.gz"} |
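A quick way to check the options: for a conducting sphere, V = kQ/r, so the charge removed is Q = Vr/k and the electron count is n = Q/e. A hedged sketch in Python (constants rounded):

```python
# Electrons to remove from a sphere (r = 0.01 m) so the surface
# potential is +10 kV: V = k*Q/r  =>  n = V*r/(k*e)
k = 8.9875e9      # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge, C
V, r = 1.0e4, 0.01

n = V * r / (k * e)
print(f"{n:.3g}")  # about 6.9e10 electrons, i.e. option (e)
```

Removing electrons leaves the ball positively charged, consistent with a positive 10 kV potential, so option (g) does not apply.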
https://myeclass.academy/blog/index.php?entryid=36928 | ## Blog entry by Milla Praed
Anyone in the world
Legal Online Sports Betting 2022 - Best US Betting Sites
If a player does not take any part in a game, then wagers on that player proposition will be refunded. For all season-long match bets and division betting, all wagers stand regardless of team relocation, a change to team name, season length, or playoff format.
Division Winner markets will be settled on whichever team finishes top of the relevant division at the conclusion of the Regular Season. If two or more teams have the same regular-season win record, ties will be broken using the governing body's official rules to determine an outright winner. If no tie option was offered for any match bet, wagers will be a push should the teams tie, and stakes refunded.
Regular-season records do not count. If there is any change to the post-season structure whereby a Conference Finals series is not possible, or is called early, Conference Winner will be settled on the team that advances to the NBA Finals from that Conference. NCAA Conference Tournament Winner will be determined by the team winning the championship game, regardless of any post-season suspension.
Sports Betting : Everything You Need to Get Started
Sports Betting News - Results, Fixtures, Scores, Stats
Bet on which team will win the division. Bet on the exact position a named team will finish within their division. Bet on which team will win the conference.
How to Bet on Sports for Beginners: 12 Tips to Know
Bet on how many regular-season wins are achieved by a team. Wager on the number of regular-season wins earned by two separate teams.
Which two teams will meet in the Championship Series. Should no series occur, all wagers are void. Which team will win, and who will they defeat, in the named series. Should no series occur, all bets are void. Team to be the #1 seed at the end of the regular season.
What you need to know about sports betting
Wager on whether either of the two named teams will be declared the winner of the named market. Bet on which player will win the MVP, Rookie of the Year, and Most Improved titles. Team(s) must complete all 82 scheduled regular-season games for wagers to have action, unless the outcome has already been determined.
Wager on the number of regular-season wins earned by one team vs. another team. Teams must compete in at least 40 regular-season games for wagers to stand. Bet on the number of regular-season Points, Rebounds, Assists, Steals, or Blocks by a named player. The player's team must complete all 82 scheduled regular-season games for wagers to have action, unless the outcome has already been determined.
To qualify, a player must have played in 70% of their team's games. The following is the method for calculating straight wagers, determining payouts, and buying points. Basketball point-line and total wagers pay 10/11 (-110): bet $11 to win $10; total return is $21 unless otherwise specified. Wager $13 to win $10; total return is $23. Bet $10 to win $12; total return is $22. In the event of a wagering tie, the straight wager is considered "no action" and the stake is refunded.
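The straight-wager arithmetic above (e.g. bet $11 to win $10 at 10/11, i.e. -110) can be sketched as a small calculator. This is an illustration of the American-odds convention, not any sportsbook's actual settlement code:

```python
def payout(american_odds, stake):
    """Total return (stake + winnings) for a winning straight bet.

    Negative odds: the stake needed to win 100 units.
    Positive odds: the winnings on a 100-unit stake.
    """
    if american_odds < 0:
        win = stake * 100 / -american_odds
    else:
        win = stake * american_odds / 100
    return stake + win

print(payout(-110, 11))  # 21.0: bet $11 to win $10
print(payout(120, 10))   # 22.0: bet $10 to win $12
```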
Best Sports Betting Sites
If a bout does not occur by the end of the calendar year, or the fight is officially cancelled, it will be deemed void and all stakes will be returned. The bell (buzzer, etc.) sounding signifies the start of the opening round, and the bout is considered official for betting purposes regardless of the scheduled length, weight category, and/or championship sanction.
If a fight has a change to the scheduled number of rounds, all outright wagers on the bout will have action; however, round-by-round bets will be refunded. Boxing and Mixed Martial Arts wagers are accepted in the following manner: results will be graded based on the official outcome at ringside as communicated by the official announcer.
An introductory guide to online sports betting for beginners
If the official announcer does not declare an outcome at the end of the fight, the market will be settled on the result displayed on the relevant organization's official website. For wagering purposes, a bet on a fighter to win by "KO" wins if the selected fighter wins by Knockout (KO), Technical Knockout (TKO), or Disqualification (DQ).
Any fight that is declared a "No Contest" will have all wagers refunded. Fight Winner - A wager on which fighter will win the bout. If the betting offer on a bout includes the draw as a third option and the bout ends in a draw, wagers on the draw will be paid, while wagers on both fighters will be lost.
Will Go/Won't Go Round X - A wager on whether or not the bout reaches this distance. Total Rounds Over/Under - The middle of a round is at exactly one minute and thirty seconds into a three-minute round. For example, 9.5 rounds would be one minute and thirty seconds of the 10th round. | 2023-01-28 09:30:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19747816026210785, "perplexity": 3903.953812405323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00270.warc.gz"}
http://granitelei.herokuapp.com/post/cv-en-latex-template | # cv en latex template
| 2021-01-23 12:32:19 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8046122789382935, "perplexity": 14299.838570894291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00576.warc.gz"}
https://math.stackexchange.com/questions/992262/show-that-complex-numbers-are-vertices-of-equilateral-triangle | # Show that complex numbers are vertices of equilateral triangle
1) Show that if $|z_1|=|z_2|=|z_3|=1$ and $z_1+z_2+z_3=0$, then $z_1,z_2,z_3$ are vertices of an equilateral triangle inscribed in a circle of radius $1$.
I thought I could use roots of unity here: since $|z_1|=|z_2|=|z_3|=1$ they lie on the circle of radius $1$, but I don't know how to take advantage of $z_1+z_2+z_3=0$.
2) Let $z=\cos\alpha+i\sin\alpha$ where $\alpha \in (0,2\pi)$; then find $\arg(z^2-z)$.
I come to this siutation $\displaystyle z^2-z=-2\sin{\frac{1}{2}x}(\sin{\frac{3}{2}x}+i\cos{\frac{3}{2}x})=-2\sin{\frac{1}{2}x}(\cos(\frac{\pi}{2}-{\frac{3}{2}x})+i\sin({\frac{\pi}{2}-\frac{3}{2}x}))$ so $\displaystyle 0\le\frac{\pi}{2}-\frac{3}{2}x\le2\pi$ so $\displaystyle\frac{\pi}{3}\ge x \ge - \pi$ so $\displaystyle\arg(z^2-z) =[-\pi,\frac{\pi}{3}]$ ???
• Hint for $1$:You know that $e^{i\alpha},e^{i(\alpha+2\pi 1/3)},e^{i(\alpha+2\pi 2/3)}$ satisfy the requirements. Then assume there are other solutions and find a contradiction. – flawr Oct 26 '14 at 19:30
• Hint for 2: Write z as $z=e^{i\alpha}$ Then $z^2-z = e^{i2\alpha}-e^{i\alpha}$ – flawr Oct 26 '14 at 19:41
Let $z_1 = e^{ia}$, $z_2 = e^{ib}$, $z_3 = e^{ic}$.
$z_1 + z_2 = e^{i\frac{a+b}{2}}\left(e^{i\frac{a-b}{2}} + e^{-i\frac{a-b}{2}}\right) = e^{i\frac{a+b}{2}} \cdot 2\cos\left(\frac{a-b}{2}\right) = -z_3$
$\Rightarrow \left|2\cos\left(\frac{a-b}{2}\right)\right| = |-z_3| = |z_3| = 1$
If $\cos\left(\frac{a-b}{2}\right) = \frac{1}{2}$, then $a = b \pm \frac{2\pi}{3} \pmod{2\pi}$.
Here, without loss of generality, you can assume $a = b + \frac{2\pi}{3} \pmod{2\pi}$ (the other case is the same).
You get: $\frac{a+b}{2} = c + \pi \pmod{2\pi}$, so $b + \frac{\pi}{3} = c + \pi \pmod{2\pi}$, hence $b = c + \frac{2\pi}{3} \pmod{2\pi}$.
You get your equilateral triangle, since you have proved that a rotation by $\frac{2\pi}{3}$ passes from one point to the next. The other cases are exactly the same.
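A quick numerical sanity check of this construction (Python; the starting angle $c$ is an arbitrary choice, and the other two angles follow the $2\pi/3$ rotations):

```python
import cmath
import itertools
import math

c = 0.7                       # arbitrary starting angle
b = c + 2 * math.pi / 3
a = b + 2 * math.pi / 3
z = [cmath.exp(1j * t) for t in (a, b, c)]

# The three points sum to zero...
print(abs(sum(z)))            # ~0
# ...and all pairwise distances agree, so the triangle is equilateral.
sides = [abs(p - q) for p, q in itertools.combinations(z, 2)]
print(max(sides) - min(sides))  # ~0
```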
As for 2), I would use: $z= e^{ia}$
$z^2 - z = e^{2ia} - e^{ia} = e^{\frac{3}{2}ia}\cdot 2i\sin\left(\frac{a}{2}\right) = 2\sin\left(\frac{a}{2}\right) e^{\left(\frac{3}{2}a + \frac{\pi}{2}\right)i}$. The sign of the sine is the only thing you have to take into account to evaluate the argument correctly. If it is negative, you add $\pi$; otherwise you already have your argument.
• why $|\displaystyle 2\cos(\frac{a-b}{2})|=1$ ?? – Mario Oct 26 '14 at 21:57
• You take the modulus of both sides of the relation: $e^{i\frac{a+b}{2}}\cdot 2\cos\left(\frac{a-b}{2}\right) = -z_3$ – mvggz Oct 26 '14 at 22:46
• I've edited my answer about your second question, if it helps you – mvggz Oct 27 '14 at 12:58
Here my extended hint for $2$:
Notice that $z^2-z = z(z-1) = e^{i\alpha}(e^{i\alpha}-1)$. And since $e^{i(\alpha+\beta)} = e^{i\alpha} e^{i\beta}$, we have $\arg(ab)=\arg(a)+\arg(b) \pmod{2\pi}$.
So $\arg(z^2-z) = \alpha + \arg(e^{i\alpha}-1)$
You will now find geometrically that $\arg(e^{i\alpha}-1) = \pi/2 + \alpha/2$ (I hope this is correct.)
You just have to consider the triangle $(0,e^{i\alpha},1)$
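For $\alpha\in(0,\pi)$ the claimed identity can be confirmed numerically (Python; for $\alpha>\pi$ the principal value differs by $2\pi$, which is exactly the ambiguity discussed in the comments):

```python
import cmath
import math

alpha = 1.0                                   # any value in (0, pi)
lhs = cmath.phase(cmath.exp(1j * alpha) - 1)  # principal argument
rhs = math.pi / 2 + alpha / 2
print(abs(lhs - rhs))                         # ~0
```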
• OK, I'm getting lost when you say that $\arg(e^{i\alpha}-1) = \pi/2 + \alpha/2$ how do you know that ? – Mario Oct 26 '14 at 20:01
• Make a drawing of the said triangle, you'll notice that it is an isosceles triangle. If you dissect $\alpha$ you get a triangle with a right angle. In this one you can use the sum of the internal angles. – flawr Oct 26 '14 at 20:07
• @flawr If $\alpha=-\pi$ then $Arg(e^{i\alpha}-1)=Arg(-2)=\pi$ and your formula gives $\pi/2 - \pi/2=0$ – Harto Saarinen Oct 26 '14 at 20:20
• Ok that is an ambiguity but $\alpha$ is assumed to be nonnegative. – flawr Oct 26 '14 at 20:45
• Well same happens if $\alpha=\pi$. – Harto Saarinen Oct 26 '14 at 21:32 | 2020-01-22 20:19:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7669756412506104, "perplexity": 302.954719380267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00265.warc.gz"} |
https://freakonometrics.hypotheses.org/19949 | # An Update on Boosting with Splines
In my previous post, An Attempt to Understand Boosting Algorithm(s), I was puzzled by the boosting convergence when I was using some spline functions (more specifically linear by parts and continuous regression functions). I was using
> library(splines)
> fit=lm(y~bs(x,degree=1,df=3),data=df)
The problem with that spline function is that knots seem to be fixed. The iterative boosting algorithm is
• start with some regression model $\boldsymbol{y}_1=h_1(\boldsymbol{x})$
• compute the residuals, including some shrinkage parameter,$\boldsymbol{\varepsilon}_{1}=\boldsymbol{y}-\nu_1 h_1(\boldsymbol{x})$
then the strategy is to model those residuals
• at step $j$, consider regression $\boldsymbol{\varepsilon}_j=h_j(\boldsymbol{x})$
• update the residuals $\boldsymbol{\varepsilon}_{j+1}=\boldsymbol{\varepsilon}_j-\nu_j h_j(\boldsymbol{x})$
and to loop. Then set
$\widehat{\boldsymbol{y}}=\sum_{j=1}^M \nu_jh_j(\boldsymbol{x})$
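Written out, the loop is ordinary L2 boosting with shrinkage: fit a weak learner to the current residuals, add $\nu$ times its fit, and repeat. A minimal sketch (Python/NumPy purely for illustration; the post itself uses R, and a plain linear fit stands in here for the spline learner):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = np.sort(rng.uniform(0, 2 * np.pi, n))
y = np.sin(x) + rng.normal(0, 0.25, n)

def fit_predict(x, r):
    # Weak learner: least-squares fit on (1, x) -- a stand-in for
    # the spline fit lm(y ~ bs(x, ...)) used in the post.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, r, rcond=None)
    return X @ beta

nu = 0.05
resid = y.copy()
pred = np.zeros_like(y)
for _ in range(200):
    h = fit_predict(x, resid)   # fit the current residuals
    pred += nu * h              # accumulate shrunken predictions
    resid -= nu * h             # update the residuals

print(np.mean(y ** 2), np.mean(resid ** 2))
```

Each iteration subtracts $\nu$ times a least-squares fit of the residuals, so the training residual norm is non-increasing.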
I thought that boosting would work well if at step $j$, it was possible to change the knots. But the output
was quite disappointing: boosting does not improve the prediction here. And it looks like knots don’t change. Actually, if we select the ‘best‘ knots, the output is much better. The dataset is still
> n=300
> set.seed(1)
> u=sort(runif(n)*2*pi)
> y=sin(u)+rnorm(n)/4
> df=data.frame(x=u,y=y)
For an optimal choice of knot locations, we can use
> library(freeknotsplines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1,
+ numknot = 2, 555)
The code of the previous post can simply be updated
> v=.05
> library(splines)
> xy.freekt=freelsgen(df$x, df$y, degree = 1,
+ numknot = 2, 555)
> fit=lm(y~bs(x,degree=1,knots=
+ xy.freekt@optknot),data=df)
> yp=predict(fit,newdata=df)
> df$yr=df$y - v*yp
> YP=v*yp
> for(t in 1:200){
+ xy.freekt=freelsgen(df$x, df$yr, degree = 1,
+ numknot = 2, 555)
+ fit=lm(yr~bs(x,degree=1,knots=
+ xy.freekt@optknot),data=df)
+ yp=predict(fit,newdata=df)
+ df$yr=df$yr - v*yp
+ YP=cbind(YP,v*yp)
+ }
> nd=data.frame(x=seq(0,2*pi,by=.01))
> viz=function(M){
+ if(M==1) y=YP[,1]
+ if(M>1) y=apply(YP[,1:M],1,sum)
+ plot(df$x,df$y,ylab="",xlab="")
+ lines(df$x,y,type="l",col="red",lwd=3)
+ fit=lm(y~bs(x,degree=1,df=3),data=df)
+ yp=predict(fit,newdata=nd)
+ lines(nd$x,yp,type="l",col="blue",lwd=3)
+ lines(nd$x,sin(nd$x),lty=2)}
> viz(100)
I like that graph. I had the intuition that using (simple) splines would be possible, and indeed, we get a very smooth prediction.
## 4 thoughts on “An Update on Boosting with Splines”
1. Richard Warnung says:
Thank you for this follow-up. The fit really improves when the optimal knots are selected. However in the very end of the procedure the points on the very right and on the very left end are heavily overfitted. Where does this come from?
1. it should come from problems with end points of the splines I guess…. I should ask the optimal knots to be in a specific range, i.e. not [0,1] but maybe [.1,.9] (in order to avoid optimal knots at .01)
| 2021-05-14 20:31:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7944863438606262, "perplexity": 6103.843360969805}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991207.44/warc/CC-MAIN-20210514183414-20210514213414-00069.warc.gz"}
https://pureportal.spbu.ru/ru/publications/the-strong-continuity-of-convex-functions | # The Strong Continuity of Convex Functions
Research output: Contribution to journal › Article
## Abstract
A convex function defined on an open convex set is known to be continuous at every point of this set. In actuality, a convex function has a strengthened continuity property. In this paper, we introduce the notion of strong continuity and demonstrate that a convex function possesses this property. The proof is based only on the definition of convexity and Jensen's inequality. A distinct constant (the constant of strong continuity) is included in the definition of strong continuity. In the article, we give an unimprovable value for this constant in the case of convex functions. The constant of strong continuity depends, in particular, on the form of the norm introduced in the space of the arguments of the convex function. The polyhedral norm is of particular interest: with it, the constant of strong continuity can be calculated easily, requiring only a finite number of values of the convex function.
Original language: English
Pages (from-to): 244-248
Number of pages: 5
Journal: Vestnik St. Petersburg University: Mathematics
Volume: 51
Issue number: 3
DOI: https://doi.org/10.3103/S1063454118030056
Status: Published - 4 Sep 2018
## Scopus subject areas
• Mathematics (all) | 2021-01-18 12:16:33 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8174076080322266, "perplexity": 839.6965218162483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514495.52/warc/CC-MAIN-20210118092350-20210118122350-00484.warc.gz"}
https://socratic.org/questions/1-sin-4x-cos-4x-sin-2xcos-2x-how-will-you-integrate-this-1 | # 1÷sin^4x+cos^4x+sin^2xcos^2x.how will you integrate this?
Feb 25, 2018
Therefore
$\int \frac{1}{\sin^4 x + \cos^4 x + \sin^2 x \cos^2 x} \, dx = \frac{1}{\sqrt{3}}\left(\tan^{-1}\left(\frac{2\tan x + 1}{\sqrt{3}}\right) + \tan^{-1}\left(\frac{2\tan x - 1}{\sqrt{3}}\right)\right) + C$
#### Explanation:
Hope the question is of the form
$\int \frac{1}{{\sin}^{4} x + {\cos}^{4} x + {\sin}^{2} x {\cos}^{2} x} \mathrm{dx}$
Let
$u = {\cos}^{2} x , v = {\sin}^{2} x$
${u}^{3} - {v}^{3} = \left(u - v\right) \left({u}^{2} + u v + {v}^{2}\right)$
Thus
${u}^{2} + u v + {v}^{2} = \frac{{u}^{3} - {v}^{3}}{u - v}$
$\frac{1}{{u}^{2} + u v + {v}^{2}} = \frac{u - v}{{u}^{3} - {v}^{3}}$
Substituting
$\frac{1}{{\sin}^{4} x + {\cos}^{4} x + {\sin}^{2} x {\cos}^{2} x} = \frac{{\cos}^{2} x - {\sin}^{2} x}{{\left({\cos}^{2} x\right)}^{3} - {\left({\sin}^{2} x\right)}^{3}}$
Dividing throughout by ${\cos}^{6} x$
$\frac{1}{\sin^4 x + \cos^4 x + \sin^2 x \cos^2 x} = \frac{\sec^4 x - \tan^2 x \sec^4 x}{1 - \tan^6 x} = \frac{\sec^4 x \left(1 - \tan^2 x\right)}{1 - \tan^6 x}$
Since
$1 - \tan^6 x = \left(1 - \tan^2 x\right)\left(1 + \tan^2 x + \tan^4 x\right)$
the factor $1 - \tan^2 x$ cancels:
$\frac{1}{\sin^4 x + \cos^4 x + \sin^2 x \cos^2 x} = \frac{\sec^2 x \cdot \sec^2 x}{1 + \tan^2 x + \tan^4 x}$
Now,
$\int \frac{1}{\sin^4 x + \cos^4 x + \sin^2 x \cos^2 x} \, dx = \int \frac{\sec^2 x \cdot \sec^2 x}{1 + \tan^2 x + \tan^4 x} \, dx$
If
$t = \tan x$
$dt = \sec^2 x \, dx \quad \text{and} \quad \sec^2 x = 1 + t^2$
then
$\int \frac{\sec^2 x \cdot \sec^2 x}{1 + \tan^2 x + \tan^4 x} \, dx = \int \frac{1 + t^2}{1 + t^2 + t^4} \, dt$
The denominator factors as
$1 + t^2 + t^4 = \left(t^2 + t + 1\right)\left(t^2 - t + 1\right)$
and the partial fraction decomposition is
$\frac{1 + t^2}{\left(t^2 + t + 1\right)\left(t^2 - t + 1\right)} = \frac{1}{2}\left(\frac{1}{t^2 + t + 1} + \frac{1}{t^2 - t + 1}\right)$
(adding the right-hand side back gives $\frac{1}{2} \cdot \frac{2t^2 + 2}{t^4 + t^2 + 1}$, as required).
Completing the squares,
$t^2 + t + 1 = \left(t + \frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2$
$t^2 - t + 1 = \left(t - \frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2$
so that
$\int \frac{dt}{\left(t \pm \frac{1}{2}\right)^2 + \left(\frac{\sqrt{3}}{2}\right)^2} = \frac{2}{\sqrt{3}} \tan^{-1}\left(\frac{2t \pm 1}{\sqrt{3}}\right)$
Therefore
$\int \frac{1 + t^2}{1 + t^2 + t^4} \, dt = \frac{1}{2}\left(\frac{2}{\sqrt{3}} \tan^{-1}\left(\frac{2t + 1}{\sqrt{3}}\right) + \frac{2}{\sqrt{3}} \tan^{-1}\left(\frac{2t - 1}{\sqrt{3}}\right)\right)$
$= \frac{1}{\sqrt{3}}\left(\tan^{-1}\left(\frac{2t + 1}{\sqrt{3}}\right) + \tan^{-1}\left(\frac{2t - 1}{\sqrt{3}}\right)\right) + C$
Substituting back
$t = \tan x$
Thus,
$\int \frac{1}{\sin^4 x + \cos^4 x + \sin^2 x \cos^2 x} \, dx = \frac{1}{\sqrt{3}}\left(\tan^{-1}\left(\frac{2\tan x + 1}{\sqrt{3}}\right) + \tan^{-1}\left(\frac{2\tan x - 1}{\sqrt{3}}\right)\right) + C$ | 2021-12-02 01:02:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 95, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9879377484321594, "perplexity": 3664.7919544782344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.58/warc/CC-MAIN-20211201234046-20211202024046-00418.warc.gz"}
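Whatever closed form one obtains here, it is worth sanity-checking numerically: a central difference of the antiderivative should match the integrand. Below is such a check (Python) against the closed form $\frac{1}{\sqrt{3}}\left(\tan^{-1}\frac{2\tan x+1}{\sqrt{3}}+\tan^{-1}\frac{2\tan x-1}{\sqrt{3}}\right)$, which can be verified by differentiation:

```python
import math

def integrand(x):
    s, c = math.sin(x), math.cos(x)
    return 1.0 / (s**4 + c**4 + s**2 * c**2)

def F(x):
    # Candidate antiderivative in terms of t = tan x.
    t = math.tan(x)
    r3 = math.sqrt(3)
    return (math.atan((2*t + 1) / r3) + math.atan((2*t - 1) / r3)) / r3

# Central difference of F should reproduce the integrand.
x, h = 0.5, 1e-6
approx = (F(x + h) - F(x - h)) / (2 * h)
print(abs(approx - integrand(x)))  # ~0
```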
https://github.com/djberg96/win32-dir | # djberg96/win32-dir
A series of constants, and extra or redefined methods, for the Dir class on Windows
Ruby
= Description
A series of extra constants for the Dir class that define special folders
on MS Windows systems, as well as methods for creating and detecting junctions.
= Installation
gem install win32-dir
= Synopsis
require 'win32/dir'
# C:\WINNT or C:\WINDOWS
puts Dir::WINDOWS
Dir.mkdir('C:\from')
Dir.create_junction('C:\to', 'C:\from')
= Constants
Not all of these are guaranteed to be defined on your system. Also note
that the directories are merely defined. It doesn't necessarily mean they
actually exist.
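Since a given constant may be absent on a particular system, it can help to probe before use. A minimal sketch (the special_folder helper below is hypothetical, not part of this gem):

```ruby
# Hypothetical helper: look up a Dir special-folder constant safely,
# returning nil when it is not defined on this system.
def special_folder(name)
  Dir.const_defined?(name) ? Dir.const_get(name) : nil
end

puts special_folder(:WINDOWS) || "Dir::WINDOWS is not defined here"
```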
== The following constants should be defined:
Dir::ADMINTOOLS
The file system directory that is used to store administrative tools for an
individual user. The Microsoft Management Console (MMC) will save
customized consoles to this directory, and it will roam with the user.
Dir::COMMON_ADMINTOOLS
The file system directory containing administrative tools for all users
of the computer.
Dir::APPDATA
The file system directory that serves as a common repository for
application-specific data. A typical path is
C:\Documents and Settings\<user>\Application Data.
This CSIDL is supported by the redistributable shfolder.dll for
systems that do not have the Microsoft Internet Explorer 4.0
integrated Shell installed.
Dir::COMMON_APPDATA
The file system directory containing application data for all users. A
typical path is C:\Documents and Settings\All Users\Application Data.
Dir::COMMON_DOCUMENTS
The file system directory that contains documents that are common to all
users. A typical path is C:\Documents and Settings\All Users\Documents.
Dir::COOKIES
The file system directory that serves as a common repository for Internet
cookies.
Dir::HISTORY
The file system directory that serves as a common repository for Internet
history items.
Dir::INTERNET_CACHE
The file system directory that serves as a common repository for temporary
Internet files. A typical path is
C:\Documents and Settings\<user>\Local Settings\Temporary Internet Files.
Dir::LOCAL_APPDATA
The file system directory that serves as a data repository for local
(nonroaming) applications. A typical path is
C:\Documents and Settings\<user>\Local Settings\Application Data.
Dir::MYPICTURES
The file system directory that serves as a common repository for image
files. A typical path is
C:\Documents and Settings\<user>\My Documents\My Pictures.
Dir::PERSONAL
The virtual folder representing the My Documents desktop item. This is
equivalent to Dir::MYDOCUMENTS.
Dir::PROGRAM_FILES
The Program Files folder. A typical path is C:\Program Files.
Dir::PROGRAM_FILES_COMMON
A folder for components that are shared across applications. A typical path
is C:\Program Files\Common.
Dir::SYSTEM
The Windows System folder. A typical path is C:\Windows\System32.
Dir::WINDOWS
The Windows directory or SYSROOT. This corresponds to the %windir% or
%SYSTEMROOT% environment variables. A typical path is C:\Windows.
== The following constants may or may not be defined:
Dir::ALTSTARTUP
The file system directory that corresponds to the user's nonlocalized
Startup program group.
Dir::BITBUCKET
The virtual folder containing the objects in the user's Recycle Bin.
Dir::CDBURN_AREA
The file system directory acting as a staging area for files waiting to
be written to CD.
Dir::COMMON_ALTSTARTUP
The file system directory that corresponds to the nonlocalized Startup
program group for all users.
Dir::COMMON_DESKTOPDIRECTORY
The file system directory that contains files and folders that appear on
the desktop for all users. A typical path is
C:\Documents and Settings\All Users\Desktop.
Dir::COMMON_FAVORITES
The file system directory that serves as a common repository for favorite
items common to all users.
Dir::COMMON_MUSIC
The file system directory that serves as a repository for music files
common to all users.
Dir::COMMON_PICTURES
The file system directory that serves as a repository for image files
common to all users.
Dir::COMMON_PROGRAMS
The file system directory that contains the directories for the common
program groups that appear on the Start menu for all users.
Dir::COMMON_STARTMENU
The file system directory that contains the programs and folders that
appear on the Start menu for all users.
Dir::COMMON_STARTUP
The file system directory that contains the programs that appear in the
Startup folder for all users.
Dir::COMMON_TEMPLATES
The file system directory that contains the templates that are available
to all users.
Dir::COMMON_VIDEO
The file system directory that serves as a repository for video files
common to all users.
Dir::CONTROLS
The virtual folder containing icons for the Control Panel applications.
Dir::DESKTOP
The virtual folder representing the Windows desktop, the root of the
namespace.
Dir::DESKTOPDIRECTORY
The file system directory used to physically store file objects on the
desktop (not to be confused with the desktop folder itself).
Dir::DRIVES
The virtual folder representing My Computer, containing everything on
the local computer: storage devices, printers, and Control Panel. The
folder may also contain mapped network drives.
Dir::FAVORITES
The file system directory that serves as a common repository for the
user's favorite items.
Dir::FONTS
A virtual folder containing fonts.
Dir::INTERNET
A virtual folder representing the Internet.
Dir::MYDOCUMENTS
The virtual folder representing the My Documents desktop item. Equivalent to
Dir::PERSONAL.
Dir::MYMUSIC
The file system directory that serves as a common repository for music files.
Dir::MYVIDEO
The file system directory that serves as a common repository for video files.
Dir::NETHOOD
A file system directory containing the link objects that may exist in the
My Network Places virtual folder. It is not the same as Dir::NETWORK, which
represents the network namespace root.
Dir::NETWORK
A virtual folder representing Network Neighborhood, the root of the network
namespace hierarchy.
Dir::PRINTERS
The virtual folder containing installed printers.
Dir::PRINTHOOD
The file system directory that contains the link objects that can exist in
the "Printers" virtual folder.
Dir::PROFILE
The user's profile folder.
Dir::PROFILES
The file system directory containing user profile folders.
Dir::PROGRAMS
The file system directory that contains the user's program groups (which
are themselves file system directories).
Dir::RECENT
The file system directory that contains shortcuts to the user's most
recently used documents.
Dir::SENDTO
The file system directory that contains Send To menu items.
Dir::STARTMENU
The file system directory containing Start menu items.
Dir::STARTUP
The file system directory that corresponds to the user's Startup program
group.
Dir::TEMPLATES
The file system directory that serves as a common repository for document
templates.
== Developer's Notes
The SHGetFolderPath() documentation on MSDN is somewhat vague about which
CSIDL constants are guaranteed to be defined. However, there are 15 which
*should* be defined (see docs above). The rest I cannot vouch for.
Some of these folders are virtual, and the value will be the display name
only instead of an actual path.
== Known Bugs
The Dir.create_junction and Dir.read_junction methods do not work with JRuby.
Please log any bug reports on the project page at
http://www.github.com/djberg96/win32-dir
== Future Plans
Suggestions welcome.
== Acknowledgements
Shashank Date and Zach Dennis for the suggestion and supporting comments
on the mailing list.
Timothy Byrd and Autrijus Tang for help (directly or indirectly) with the
junction methods. Timothy provided a pure Ruby version of the junction
code that I later borrowed from.
Most of the documentation was copied from the MSDN web site.
== License
Artistic 2.0
== Contributions
Please consider setting up a Gittip if this library is used by your company
professionally.
http://www.gittip.com/djberg96/
https://docs.slamcore.com/release_23.01/navstack-integration.html | The current page presents a working example of integrating the Slamcore SLAM algorithms into the ROS1 Navigation stack and using it as the core component to map the environment as well as provide accurate positioning of the robotic platform.
Note
Using ROS 2? Visit the Nav2 Integration Overview Tutorial Page
Goal
The goal of this demonstration is to use the Slamcore SDK as the main source of positioning during navigation and also use it for mapping the environment before navigation. These two tasks have traditionally been handled by components such as AMCL and gmapping respectively, both of which, by default, rely on 2D laser scan information to achieve their goal.
Instead, we’ll be using the 2D Occupancy Mapping capabilities of the SDK to generate the initial occupancy grid map and we’ll be using our SLAM positioning to localise in that map.
Hardware Setup
We are using the Kobuki robotic platform, the D435i camera and the NVIDIA Jetson NX during this demonstration. We’re also using a custom mounting plate for placing the board and the camera on the robot.
Robotic platform in use
Fig. 63 Main setup for navigation
Traditionally the ROS1 navigation stack, move_base, requires the following components to be in place:
• An occupancy grid map of the environment, either generated ahead of time, or live.
• A global and local planner which guide your robot from the start to the end location. Common choices for these are navfn and DWA as the global and local planner of choice.
• A global and local costmap which assign computation costs to the aforementioned grid map so that the planner chooses to go through or to avoid certain routes in the map.
• A localisation module, such as AMCL or Cartographer.
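To make the costmap idea above concrete, here is a deliberately tiny, illustrative sketch (not costmap2D's real code; `inflate`, the grid size and the cost values are all invented for the example) of how cells on and near an obstacle receive higher traversal costs, which the planner then avoids:

```python
# Toy illustration of a costmap layer: cells near an obstacle get higher
# cost so the planner routes around them. NOT the real costmap2d code.
def inflate(grid, inflation_radius=1, lethal=254):
    """Return a costmap where the obstacle cell (marked 1 in `grid`) gets a
    lethal cost and cells within `inflation_radius` (Chebyshev distance)
    get half of it."""
    rows, cols = len(grid), len(grid[0])
    cost = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-inflation_radius, inflation_radius + 1):
                    for dc in range(-inflation_radius, inflation_radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            new = lethal if (dr, dc) == (0, 0) else lethal // 2
                            cost[rr][cc] = max(cost[rr][cc], new)
    return cost

# A 3x3 grid with a single obstacle in the centre: every cell ends up
# with a non-zero cost, highest at the obstacle itself.
print(inflate([[0, 0, 0], [0, 1, 0], [0, 0, 0]]))
```

The real obstacle and inflation layers do the same thing on a rolling grid fed by sensor data, with a distance-dependent cost decay instead of the single step used here.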
As discussed earlier, we’ll be using our software to generate a map of the environment before the navigation as well as localising the robot in the environment. On top of that, we’ll use the GlobalPlanner and the TebLocalPlanner instead of navfn and DWALocalPlanner respectively since this combination produced the best results during navigation. Lastly, we will be using the local point cloud published by our software for obstacle avoidance with costmap2D’s obstacle layer, as detailed in the Obstacle Avoidance section.
Fig. 64 Slamcore integration into ROS1 Navigation Stack
Outline
Following is the list of steps for this demo. We’ll delve into each one of these steps in more detail in the next sections.
1. Set up Dependencies
2. [OPTIONAL] Run Visual-Inertial-Kinematic Calibration, to improve the overall performance.
3. Compute the slamcore/base_link ➞ base_footprint Transformation
4. Record Dataset to Map the Environment by teleoperating your robot.
5. Create Session and Map for Navigation, based on the map of the previous step
6. [OPTIONAL] Edit the Generated Session/Map, in case of small inaccuracies or artifacts.
7. Launch Live Navigation, based on the generated session file, using the kobuki_live_navigation.launch file.
8. [OPTIONAL] Interact with the Navigation Demo, using navigation_monitoring_slamcore.launch
Here’s also a graphical representation of the above:
Fig. 65 Outline of the demo
Set up Dependencies
Set up Binary Dependencies
See the Getting Started page to install the “Slamcore Tools” and the “ROS1 Wrapper” Debian packages. We also have to set up the RealSense D435i as described in Setting up a Camera.
We also need to install a series of packages using apt.
Installing apt dependencies
$ apt-get update && \
> apt-get upgrade -y && \
> apt-get install --no-install-recommends --assume-yes \
> software-properties-common \
> udev \
> ros-melodic-compressed-image-transport \
> ros-melodic-depthimage-to-laserscan \
> ros-melodic-interactive-markers \
> ros-melodic-joy \
> ros-melodic-joy-teleop \
> ros-melodic-map-server \
> ros-melodic-move-base \
> ros-melodic-navigation \
> ros-melodic-rosserial-arduino \
> ros-melodic-rosserial-client \
> ros-melodic-rosserial-msgs \
> ros-melodic-rosserial-python \
> ros-melodic-rosserial-server \
> ros-melodic-rqt-image-view \
> ros-melodic-teb-local-planner \
> ros-melodic-teleop-twist-joy \
> ros-melodic-teleop-twist-keyboard \
> ros-melodic-urdf \
> ros-melodic-xacro \
> ros-melodic-capabilities \
> ros-melodic-ecl-exceptions \
> ros-melodic-ecl-geometry \
> ros-melodic-ecl-linear-algebra \
> ros-melodic-ecl-sigslots \
> ros-melodic-ecl-streams \
> ros-melodic-ecl-threads \
> ros-melodic-ecl-time \
> ros-melodic-kobuki-dock-drive \
> ros-melodic-kobuki-driver \
> ros-melodic-kobuki-ftdi \
> ros-melodic-kobuki-msgs \
> ros-melodic-std-capabilities \
> ros-melodic-yocs-cmd-vel-mux \
> ros-melodic-yocs-controllers \
> ros-melodic-yocs-velocity-smoother \
> ros-melodic-ddynamic-reconfigure
Set up ROS1 Workspace
You have to create a new ROS1 workspace by cloning the slamcore-ros1-examples repository. This repository holds all the navigation-related nodes and configuration for enabling the demo. Before compiling the workspace, install vcstool which is used for fetching the additional ROS1 source packages.
Install vcstool
$ pip3 install --user --upgrade vcstool
Collecting vcstool
Collecting PyYAML (from vcstool)
Collecting setuptools (from vcstool)
Installing collected packages: PyYAML, setuptools, vcstool
Successfully installed PyYAML-5.4.1 setuptools-57.0.0 vcstool-0.2.15
Setting up ROS1 Workspace
$ git clone git@github.com:slamcore/slamcore-ros1-examples
Cloning into 'slamcore-ros1-examples'...
remote: Enumerating objects: 76, done.
remote: Counting objects: 100% (76/76), done.
remote: Compressing objects: 100% (55/55), done.
remote: Total 76 (delta 17), reused 74 (delta 15), pack-reused 0
Receiving objects: 100% (76/76), 3.15 MiB | 313.00 KiB/s, done.
Resolving deltas: 100% (17/17), done.
$ cd slamcore-ros1-examples
$ vcs import src < repos.yaml
...
=== src/follow_waypoints (git) ===
Cloning into '.'...
=== src/kobuki (git) ===
Cloning into '.'...
$ catkin_make
...
In order to communicate with the Kobuki, you also need to set up the appropriate udev rules.
Run Visual-Inertial-Kinematic Calibration
To increase the overall accuracy of the pose estimation we will fuse the wheel-odometry measurements of the robot encoders into our SLAM processing pipeline. This also makes our positioning robust to kidnapping issues (objects partially or totally blocking the camera field of view) since the algorithm can now depend on the odometry to maintain tracking.
To enable the wheel-odometry integration, follow the corresponding tutorial: Wheel Odometry Integration. After the aforementioned calibration step, you will receive a VIK configuration file similar to the one shown below:
VIK configuration file
{
"Version": "1.1.0",
"Patch": {
"Base": {
"Sensors": [
{
"EstimationScaleY": false,
"InitialRotationVariance": 0.01,
"InitialRotationVariancePoorVisual": 0.01,
"InitialTranslationVariance": 0.01,
"InitialTranslationVariancePoorVisual": 1e-8,
"ReferenceFrame": "Odometry_0",
"ScaleTheta": 0.9364361585576232,
"ScaleX": 1.0009030227774692,
"ScaleY": 1.0,
"SigmaCauchyKernel": 0.0445684,
"SigmaTheta": 1.94824,
"SigmaX": 0.0212807,
"SigmaY": 0.00238471,
"TimeOffset": "42ms",
"Type": [
"Odometry",
0
]
}
],
"StaticTransforms": [
{
"ChildReferenceFrame": "Odometry_0",
"ReferenceFrame": "IMU_0",
"T": {
"R": [
0.5062407414595297,
0.4924240392715688,
-0.4916330719846658,
0.50944656222699
],
"T": [
0.0153912358519861,
0.2357725115741995,
-0.0873645490730017
]
}
}
]
}
},
"Position": {
"Backend": {
"Type": "VisualInertialKinematic"
}
}
}
Warning
The configuration file format for running SLAM on customized parameters has changed in v23.01. This affects all VIK calibration files previously provided to you. Please see JSON configuration file migration for more information.
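One quick sanity check on a calibration file like the one above: the rotation "R" inside StaticTransforms is a quaternion, so its four components should have unit norm. The snippet below is illustrative only (it is not part of the Slamcore tooling) and simply reuses the numbers from the example file; component ordering (x, y, z, w vs w, x, y, z) is not asserted, since a valid rotation quaternion must be unit-norm either way:

```python
import math

# Quaternion from the "StaticTransforms" block of the example VIK file above.
R = [0.5062407414595297, 0.4924240392715688, -0.4916330719846658, 0.50944656222699]

norm = math.sqrt(sum(x * x for x in R))
print(f"quaternion norm = {norm:.9f}")  # expect a value very close to 1.0
assert abs(norm - 1.0) < 1e-5, "rotation in the calibration file is not a unit quaternion"
```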
Record Dataset to Map the Environment
In order to autonomously navigate the environment, we first need to generate a map of it. To do that, we’ll be using the run_dataset_recorder.launch script to capture a dataset that contains visual-inertial, depth as well as kinematic information. We’ll also use the kobuki_live_teleop_joy.launch script to teleoperate the robot using a PS4 Joystick.
Creating an initial dataset
$ # Launch teleoperation via the Joystick - enables the communication with the Kobuki by default
$ roslaunch slamcore_ros1_examples kobuki_live_teleop_joy.launch
$ # Alternatively, launch teleoperation via the keyboard
$ roslaunch slamcore_ros1_examples kobuki_live_teleop_key.launch
And on a separate terminal,
$ # Launch the dataset recording procedure
$ roslaunch slamcore_slam run_dataset_recorder.launch \
> override_realsense_depth:=true \
> realsense_depth_override_value:=true \
> output_dir:=$HOME/mnt/nvme/20210505-dataset \
> odom_reading_topic:=/odom
...
logging to /home/slamcore/.ros/log/4e14f4e6-adcc-11eb-9e59-d8c0a6261b17/roslaunch-nikos-nx2-8368.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://nikos-nx2:40951/
SUMMARY
========
PARAMETERS
* /dataset_recorder/config_file:
* /dataset_recorder/enable_color: False
* /dataset_recorder/odom_reading_topic: /odom
* /dataset_recorder/output_dir: /home/slamcore/mn...
* /dataset_recorder/override_realsense_depth: True
* /dataset_recorder/realsense_depth_override_value: True
* /rosdistro: melodic
* /rosversion: 1.14.10
NODES
/
dataset_recorder (slamcore_slam/dataset_recorder)
auto-starting new master
process[master]: started with pid [8378]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 4e14f4e6-adcc-11eb-9e59-d8c0a6261b17
process[rosout-1]: started with pid [8389]
started core service [/rosout]
process[dataset_recorder-2]: started with pid [8392]
[ INFO] [1620237858.769537582]: Parameter [config_file] not specified, defaulting to []
[ INFO] [1620237858.774477359]: Parameter [override_realsense_depth] set to [true]
[ INFO] [1620237858.776504214]: Parameter [realsense_depth_override_value] set to [true]
[ INFO] [1620237858.778771993]: Parameter [enable_color] set to [false]
[ INFO] [1620237858.780320134]: Parameter [output_dir] set to [/home/slamcore/mnt/nvme/20210505-dataset]
[ INFO] [1620237858.782198926]: Parameter [odom_reading_topic] set to [/odom]
WARNING: Logging before InitGoogleLogging() is written to STDERR
W20210505 21:04:18.784246 8392 ConfigFeeds.cpp:357] Auto-detecting your camera...
[ INFO] [1620237863.848342152]: Subscribing to odometry input topic: /odom
Invalid Value rs2_set_region_of_interest sensor:0x5590d44e00, min_x:0, min_y:0, max_x:848, max_y:480
Notice that recording the kinematic measurements (by subscribing to a wheel odometry topic) is not necessary, since we can generate a map using purely the visual information from the camera. Kinematics will however increase the overall accuracy if recorded and used.
When you have covered all the space that you want to map, send a <C-c> signal to the application to stop. You now have to process this dataset and generate the .session file. In our case, we compressed and copied the dataset to an x86_64 machine in order to accelerate the overall procedure.
$ tar cvfz 20210505-dataset.tgz 20210505-dataset/
$ rsync --progress -avt 20210505-dataset.tgz <ip-addr-of-x86_64-machine>:
Create Session and Map for Navigation
Once you have the (uncompressed) dataset at the machine that you want to do the processing at, use the slamcore_visualiser to process the whole dataset and at the end of it, save the resulting session.
$ # Launch slamcore_visualiser, enable mapping features with -m
$ slamcore_visualiser dataset \
> -u 20210505-dataset/ \
> -c /usr/share/slamcore/presets/mapping/default.json \
> -m
Note
Refer to Step 2 - Prepare the mapping configuration file in case you want to tune the mapping configuration file in use or include the VIK configuration parameters.
Edit the Generated Session/Map
You can optionally use slamcore_session_explorer and the editing tool of your choice, e.g. Gimp, to create the final session and corresponding embedded map. See Slamcore Session Explorer for more. When done, copy the session file over to the machine that will be running SLAM, if not already there.
[ALTERNATIVE APPROACH] Generating the session interactively
Instead of first generating a dataset and then creating a session and map from that dataset as the previous sections have described, you could alternatively create a session at the end of a standard SLAM run. Compared to the approach described above this has a few pros and cons worth mentioning:
• ✅ No need to record a dataset, or move it to another machine and run SLAM there
• ✅ You can interactively see the map as it gets built and potentially focus on the areas that are under-mapped
• ❌ Generating a session at the end of the run may take considerably longer if you are running on a Jetson NX compared to running on an x86_64 machine.
• ❌ You can’t modify the configuration file and see its effects as you would when having separate dataset recording and mapping steps.
• ❌ If something goes wrong in the pose estimation or mapping procedure, you don’t have the dataset to further investigate and potentially report the issue back to Slamcore
Specify the Session and the Configuration File Paths
The final step is to indicate the paths to the session file and to the configuration file. Additionally, if integrating wheel odometry, you can define the wheel odometry topic.
The launchfile for navigation, kobuki_live_navigation.launch, reads these parameters from the environment variables SESSION_FILE, CONFIG_FILE and ODOM_READING_TOPIC respectively.
• The session file was generated and copied to the Jetson NX in the previous step.
• For the configuration file, you can use one of the presets, found at /usr/share/slamcore/presets/ or, if you are also integrating wheel-odometry information as shown in section Run Visual-Inertial-Kinematic Calibration, we will send you a new JSON configuration file with wheel-odometry in mind.
$ # Edit the nav-config.sh file created earlier
$ # see "Compute the slamcore/base_link ➞ base_footprint Transformation"
$ export SESSION_FILE="/path/to/session-file"
$ export CONFIG_FILE="/path/to/config-file"
$ export ODOM_READING_TOPIC="/odom"
$ # source nav-config.sh again for the changes to take effect
$ source nav-config.sh
With the previous pieces in place, we are now ready to start the autonomous live navigation.
1. Launch kobuki_live_navigation.launch
2. Launch a teleoperation node, in case you have to manually drive the robot around for a short while until it relocalises in the session of the previous run.
3. Launch navigation_monitoring_slamcore.launch to visualise the process.
Note
If you have an external monitor connected to your Jetson NX, you can also run the “Visualisation Machine” commands along with the rest of the instructions on the Jetson NX itself. This demo assumes that the visualisation, and the processing (SLAM and navigation) happen on separate machines (a SLAM Machine and a Visualisation Machine). Thus, refer to Remote Visualisation of the Navigation Demo on how to set up remote visualisation using rviz and view the navigation on your x86_64 machine.
$ roslaunch slamcore_ros1_examples kobuki_live_navigation.launch
...
logging to /home/slamcore/.ros/log/187b968c-b1be-11eb-a1ef-d8c0a6261b17/roslaunch-nikos-nx2-28609.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://nikos-nx2:46867/
...
And on a separate terminal,
$ roslaunch slamcore_ros1_examples kobuki_live_teleop_key.launch kinematics:=false
SUMMARY
========
PARAMETERS
* /keyop/angular_vel_max: 6.6
* /keyop/angular_vel_step: 0.33
* /keyop/linear_vel_max: 1.5
* /keyop/linear_vel_step: 0.05
* /keyop/wait_for_connection_: True
* /rosdistro: melodic
* /rosversion: 1.14.10
NODES
/
keyop (kobuki_keyop/keyop)
ROS_MASTER_URI=http://localhost:11311
process[keyop-1]: started with pid [25805]
[ INFO] [1620666607.587512385]: KeyOpCore : using linear vel step [0.05].
[ INFO] [1620666607.591905406]: KeyOpCore : using linear vel max [1.5].
[ INFO] [1620666607.592041775]: KeyOpCore : using angular vel step [0.33].
[ INFO] [1620666607.592124614]: KeyOpCore : using angular vel max [6.6].
[ WARN] [1620666607.603671407]: KeyOp: could not connect, trying again after 500ms...
[ INFO] [1620666608.104021379]: KeyOp: connected.
---------------------------
Forward/back arrows : linear velocity incr/decr.
Right/left arrows : angular velocity incr/decr.
Spacebar : reset linear/angular velocities.
d : disable motors.
e : enable motors.
q : quit
$ roslaunch slamcore_ros1_examples navigation_monitoring_slamcore.launch
SUMMARY
========
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.10
NODES
/
rqt (rqt_gui/rqt_gui)
slam_visualiser (rviz/rviz)
ROS_MASTER_URI=http://nikos-nx2:11311
process[slam_visualiser-1]: started with pid [287]
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-berger'
INFO |${node} | /tmp/binarydeb/ros-melodic-rviz-1.13.17/src/rviz/visualizer_app.cpp:114 | 1620671623.758027686 | rviz version 1.13.17
...
At first, we may need to teleoperate the robot manually, until the SLAM algorithm relocalises. We can see that the relocalisation took place by looking at the rviz view where the local and global costmaps start getting rendered, or by subscribing to the /slamcore/pose where we start seeing incoming Pose messages.
This is how the rviz view will look after the robot has relocalised.
Fig. 66 rviz view during navigation
Aside from individual navigation goals, which can be set using the 2D Nav Goal button of rviz or by publishing to the /move_base/goal topic, you can also issue a series of goals and have the navigation stack follow them one after another automatically. To do that, issue each of your intended goals by publishing a Pose message to the /initialpose topic, either via the command line or via the 2D Pose Estimate button of rviz.
Fig. 67 Providing waypoints to follow
When you have added all the waypoints that you want, call the /path_ready service so that the robot starts to follow them.
$ rosservice call /path_ready "{}"
If at any point you want to reset the list of given waypoints, call the /path_reset service.
$ rosservice call /path_reset "{}"
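The waypoint workflow (queue poses via /initialpose, start following them with /path_ready, clear them with /path_reset) can be summarised with a hypothetical stand-in class; the real follow_waypoints node is of course driven by ROS topics and services rather than Python method calls:

```python
# Hypothetical sketch of the follow_waypoints semantics described above;
# NOT the real node, just the queue behaviour it exposes via its services.
class WaypointFollower:
    def __init__(self):
        self.waypoints = []

    def add(self, pose):
        """Corresponds to publishing a pose on /initialpose."""
        self.waypoints.append(pose)

    def path_ready(self):
        """Corresponds to calling /path_ready: hand the accumulated plan
        to the navigation stack and start with an empty queue again."""
        plan, self.waypoints = list(self.waypoints), []
        return plan

    def path_reset(self):
        """Corresponds to calling /path_reset: drop all queued waypoints."""
        self.waypoints.clear()

wf = WaypointFollower()
wf.add("goal_A")
wf.add("goal_B")
print(wf.path_ready())  # the plan handed over: ['goal_A', 'goal_B']
```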
Finally if you want to disable following the given waypoints perpetually, disable the patrol_mode parameter.
$ rosrun dynamic_reconfigure dynparam set /follow_waypoints patrol_mode False
Note
You can also call these services or the dynamic reconfiguration from the RQT GUI launched as part of navigation_monitoring_slamcore.launch.
Appendix
Remote Visualisation of the Navigation Demo
You can visualise the navigation process using slamcore-ros1-examples/navigation_monitoring_slamcore.launch
Since the live navigation is running on the Jetson NX platform, we want to remotely visualise it on our x86_64 machine. To do this, we can make use of the remote capabilities of ROS. Instructions for such a network configuration are provided in the NetworkPage of ROS1 and are summed up in the current section for completeness.
Make sure that your x86_64 machine and Jetson NX are on the same network and you can ping by name one from the other machine.
# On the Jetson NX - hostname: nikos-nx2
# edit your /etc/hosts file and add an entry for your x86_64 machine
# Add an entry like the following:
192.168.50.130 draken
# make sure that pinging draken works
berger@nikos-nx2:~/ros_ws$ ping draken
PING draken (192.168.50.130) 56(84) bytes of data.
64 bytes from draken (192.168.50.130): icmp_seq=1 ttl=64 time=2.69 ms
64 bytes from draken (192.168.50.130): icmp_seq=2 ttl=64 time=41.2 ms
# On your x86_64 machine - hostname: draken
# edit your /etc/hosts file and add an entry for Jetson NX
# Add an entry like the following:
192.168.50.210 nikos-nx2
# make sure that pinging the Jetson NX works
berger@draken:~/ros_ws$ping nikos-nx2 PING nikos-nx2 (192.168.50.210) 56(84) bytes of data. 64 bytes from nikos-nx2 (192.168.50.210): icmp_seq=1 ttl=64 time=3.58 ms 64 bytes from nikos-nx2 (192.168.50.210): icmp_seq=2 ttl=64 time=3.58 ms Now in order to share a common ROS1 Master across these 2 computers, set the ROS_MASTER_URI environment variable on your x86_64 machine and run the ROS1 core either directly or via a launchfile on the Jetson NX. $ # just launch roscore as usual.
$ roscore
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://nikos-nx2:37757/
ros_comm version 1.14.10
SUMMARY
========
PARAMETERS
* /rosdistro: melodic
* /rosversion: 1.14.10
NODES
auto-starting new master
process[master]: started with pid [23855]
ROS_MASTER_URI=http://nikos-nx2:11311/
setting /run_id to d9bc19b6-50fc-11eb-bda8-d8c0a6261b17
process[rosout-1]: started with pid [23866]
started core service [/rosout]
$ rostopic list
/rosout
/rosout_agg
$ # *without* running roscore on this machine, verify that we're connecting
$ # to the roscore on the Jetson NX
$ rostopic list
ERROR: Unable to communicate with master!
$ export ROS_MASTER_URI=http://nikos-nx2:11311
$ # now, after running roscore on the Jetson NX:
$ rostopic list
/rosout
/rosout_agg
At this point you should be able to launch the autonomous navigation stack on the Jetson NX and visualise the results using navigation_monitoring_slamcore.launch on your x86_64 machine.
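For reference, a node decides which master to contact purely from the ROS_MASTER_URI value; the tiny helper below is a hypothetical illustration (not part of ROS) of the parsing involved, using 11311 as the conventional default master port seen throughout this page:

```python
from urllib.parse import urlparse

# Hypothetical helper: extract the (host, port) a ROS node would contact,
# given a ROS_MASTER_URI value. 11311 is the conventional default port.
def master_endpoint(uri):
    parsed = urlparse(uri)
    return parsed.hostname, parsed.port or 11311

print(master_endpoint("http://nikos-nx2:11311"))  # ('nikos-nx2', 11311)
```

This is also why the /etc/hosts entries above matter: the hostname extracted here must resolve on both machines.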
Obstacle Avoidance
We use the /slamcore/local_point_cloud topic published by our software as an input to the costmap2D obstacle_layer, to mark and clear obstacles in the local and global costmaps during navigation. You can find more details of the implementation in our obstacle_avoidance_pointcloud.yaml file, found here. You can modify the local point cloud for obstacle avoidance by defining the boundaries of the point cloud in your configuration file, as explained in Point Cloud Configuration. This can be useful to exclude ground points and prevent them being marked as obstacles.
Note
ALTERNATIVE - Tweaking the local point cloud via the ROS costmap2d obstacle_layer min_obstacle_height parameter.
Instead of using our JSON LocalPointCloud parameters as explained in Point Cloud Configuration, you may want to use the ROS min_obstacle_height parameter as one of the observation source parameters in the obstacle_avoidance_pointcloud.yaml file. This parameter allows you to set a height (measured from the map frame) from which points are considered valid. In this case, the points are not removed from the cloud but simply ignored, allowing you, for example, to ignore ground points and prevent them being marked as obstacles. Setting min_obstacle_height to 0.02 would only consider points 2cm above the map Z coordinate when marking obstacles.
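The effect of min_obstacle_height can be shown in a few lines of Python (illustrative only; `obstacle_candidates` is not a real costmap2d function, and the real layer consumes sensor_msgs/PointCloud2 data rather than tuples):

```python
MIN_OBSTACLE_HEIGHT = 0.02  # metres, matching the example value in the text

def obstacle_candidates(points, min_h=MIN_OBSTACLE_HEIGHT):
    """Keep only (x, y, z) points at or above min_h; lower points (e.g. the
    ground) are ignored when marking obstacles, not deleted from the cloud."""
    return [p for p in points if p[2] >= min_h]

cloud = [(1.0, 0.0, 0.005),   # ground return, below the threshold
         (1.2, 0.1, 0.05),    # low obstacle
         (0.8, -0.2, 0.30)]   # box edge
print(obstacle_candidates(cloud))  # the ground return is filtered out
```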
Troubleshooting
My TF tree seems to be split in two main parts
As described in ROS1 Navigation Stack Setup, our ROS1 Wrapper publishes both the map $$\rightarrow$$ odom and odom $$\rightarrow$$ base_link transformations in the TF tree. Thus, to avoid conflicts when publishing the transforms (note that TF allows only a single parent frame for each frame), make sure that there is no other node publishing any of the aforementioned transformations. E.g. it's common for the wheel-odometry node, in our case the kobuki_node, to also publish its wheel-odometry estimates in the transformation tree. You should disable this behavior.
For reference, here's a simplified version of how the TF tree looks when executing the Slamcore ROS1 Navigation demo:
Fig. 68 Reference TF Tree during Navigation | 2023-04-02 06:38:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26291194558143616, "perplexity": 8494.95118285262}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00607.warc.gz"} |
http://longertwits.vickythegme.com/72d494/rat-poison-symptoms-in-chickens-78b1e0 | A carbocation is formed by heterolytic fission of a C–X bond in an organic compound: when X is more electronegative than carbon, X takes the bonding electron pair and leaves as an anion, and the carbon becomes positively charged. If electrons were money, carbocations would be the beggars of organic chemistry.

The stability order of carbocations is tertiary > secondary > primary > methyl. This order is explained with the help of hyperconjugation and the inductive (+I) effect of alkyl groups, which donate electron density toward the positively charged carbon. On the basis of hyperconjugation, $(CH_3)_2\overset{+}{C}H$ has six $\alpha$ C–H bonds and therefore six hyperconjugative resonance structures; the greater the number of $\alpha$ hydrogens, the greater the number of such structures and the greater the stability. There is also more electron density donated from an adjacent p orbital (resonance) than from the C–H bonds adjacent to a tertiary carbocation, so when both are available, resonance generally counts for more than hyperconjugation — except where aromaticity is endangered, since aromaticity can trump everything else.

Resonance delocalization is the strongest stabilizing factor: the charge "moves" from atom to atom, as in allyl and benzyl cations, or when a neighboring heteroatom donates a lone pair into the empty orbital. The strength of this lone-pair donation varies with basicity, so nitrogen and oxygen are the most powerful π donors. A protonated-oxygen intermediate in which oxygen keeps a full octet is fine (and generally more stable than a free carbocation, where there is an empty orbital); a resonance form that would leave oxygen with less than a full octet is extremely unstable and is not a significant contributor. By contrast, a substituent such as $\ce{F}$, whose −I effect dominates its +R effect, decreases carbocation stability.

Some comparisons from experiment: cyclopropylmethyl cations are generally considered to be more stable than benzyl cations. One way to compare is to look at 13C NMR chemical shifts — the 2-cyclopropyl cation has a shift of −86.8 ppm versus −61.1 ppm for the 2-phenylpropyl cation, consistent with cyclopropyl being the better stabilizing group. Carbocations can be observed directly this way; that is how George Olah studied them, work that helped win him the Nobel Prize. From gas-phase dissociation energies, the tert-butyl cation (232 kcal/mol) is about 7 kcal/mol more stable than the benzyl cation (238 kcal/mol), although substituent effects can greatly change these numbers (see Carey and Sundberg, Advanced Organic Chemistry, Part A, 5th edition, Table 3.10, p. 303). Secondary allylic carbocations are slightly easier to form than ordinary (non-resonance-stabilized) tertiary carbocations, and small changes in substitution can tip the balance either way. The cyclopropenyl cation is stable because it is aromatic.

Stabilizing the carbocation speeds up the reaction. In an SN1 reaction, formation of the carbocation is the rate-limiting step, so the overall kinetics are dictated by this step and the more stable carbocation reacts faster overall. Once a carbocation has rearranged (by a hydride or alkyl shift, as in the pinacol rearrangement), we are discussing a different carbocation entirely.

The same inductive reasoning applies to acidity and to carbanions. The more stable the conjugate base, the stronger the acid, which is why acidity increases from mono- to di- to trichloroacetic acid: the electron-withdrawing chlorines reduce the electron density of the molecule, polarize the O–H bond, and stabilize the negative charge. Carbanion stability runs opposite to carbocation stability, since the +I effect of alkyl groups destabilizes a negative charge. Among hydrocarbons, the order of C–H acidity is alkynes > alkenes > alkanes.
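The α-hydrogen counting argument above (more α C–H bonds → more hyperconjugative resonance structures → a more stable cation) can be sketched as a toy calculation. The names and counts below are illustrative bookkeeping, not data from the page:

```python
# Toy illustration of the hyperconjugation counting argument:
# each C-H bond on a carbon adjacent (alpha) to the cationic center
# contributes one hyperconjugative resonance structure, so ranking
# cations by alpha-H count reproduces the methyl < 1° < 2° < 3° order.

ALPHA_H = {
    "methyl (CH3+)": 0,   # no alpha carbons at all
    "ethyl (1°)": 3,      # one CH3 group adjacent to C+
    "isopropyl (2°)": 6,  # two CH3 groups -> six alpha C-H bonds
    "tert-butyl (3°)": 9, # three CH3 groups -> nine alpha C-H bonds
}

def stability_ranking(alpha_h_counts):
    """Rank cations from least to most stabilized by hyperconjugation."""
    return sorted(alpha_h_counts, key=alpha_h_counts.get)

if __name__ == "__main__":
    for name in stability_ranking(ALPHA_H):
        print(f"{ALPHA_H[name]} alpha C-H bonds: {name}")
```

This captures only the hyperconjugation factor; as the discussion notes, resonance and aromaticity can override it.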
| 2022-12-09 22:11:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.508190393447876, "perplexity": 4883.856619351479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00478.warc.gz"} |
https://iu.tind.io/record/1138/ | ## Spectral functions of nucleon form factors: Three-pion continua at low energies
We study the imaginary parts of the isoscalar electromagnetic and isovector axial form factors of the nucleon close to the $3\pi$-threshold in covariant baryon chiral perturbation theory. At the two-loop level, the contributions arising from leading and next-to-leading order chiral $\pi N$-vertices, as well as pion-induced excitations of virtual $\Delta(1232)$-isobars, are calculated. It is found that the heavy baryon treatment overestimates substantially these $3\pi$-continua. From a phenomenological analysis, that includes the narrow $\omega(783)$-resonance or the broad $a_1$-resonance, one can recognize small windows near threshold, where chiral $3\pi$-dynamics prevails. However, in the case of the isoscalar electromagnetic form factors $G_{E,M}^s(t)$, the radiative correction provided by the $\pi^0\gamma$-intermediate state turns out to be of similar size.
Publication Date: Jan 09 2019
Date Submitted: Jun 28 2019
Citation: European Physical Journal A: Hadrons and Nuclei 55, 16
Record created 2019-06-28, last modified 2019-08-05
(Not yet reviewed) | 2020-08-11 21:41:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5573506355285645, "perplexity": 6845.5880087777305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00308.warc.gz"} |
http://rashidariori.co.uk/hee70q7/finite-difference-solver-7bddfb |
# finite difference solver
## The finite difference method

The finite difference method (FDM) is a numerical approach to solving differential equations. The best-known discretization, it consists of replacing each derivative by a difference quotient, converting an ordinary differential equation (ODE) or partial differential equation (PDE) — which may be nonlinear — into a system of linear equations that can be solved by matrix algebra techniques. FDMs are thus discretization methods; they are simple to code and economic to compute. FDM is not the only option — alternatives include the finite volume and finite element methods, as well as various mesh-free approaches — and it is less flexible than the FEM, but it is very popular. Classic treatments include "Finite Difference Methods for Solving Differential Equations" (I-Liang Chern, National Taiwan University, 2013), "Finite Difference Methods for Poisson Equation" (Long Chen), Gilbert Strang's "Finite Differences and Fast Poisson Solvers" (2006), "Introductory Finite Difference Methods for PDEs", and "Programming of Finite Difference Methods in MATLAB".

The method is used in particular to solve ordinary differential equations that have conditions imposed on the boundary rather than at the initial point:

$$\frac{d^2 y}{dx^2} = f(x, y, y'), \qquad a \le x \le b, \qquad (1)$$

with boundary conditions at both ends. The forward finite difference is defined by $\Delta f_p = f_{p+1} - f_p$ and the backward difference by $\nabla f_p = f_p - f_{p-1}$; higher-order differences are obtained by repeated operations of the forward difference operator, and the forward finite difference is implemented in the Wolfram Language as DifferenceDelta[f, i]. By inputting the locations of a sufficiently large selection of sampled points, one can generate a finite difference equation that approximates a derivative of any order at any desired location. The center point, where the finite difference equation is used to approximate the PDE, is called the master grid point. For a boundary condition on the derivative, a fictitious node can be introduced; one solves for the extra variable and discards it from the final solution. When programming these methods, the index change is linked to the conventional change of the coordinate: the central difference $u(x_i + h, y_j) - u(x_i - h, y_j)$ becomes u(i+1,j) - u(i-1,j).

## Stability

A von Neumann analysis locally linearizes the equations (if they are not linear) and then separates the temporal and spatial dependence to look at the growth of the linear modes $u_j^n = A(k)^n e^{ijk\Delta x}$. The explicit central difference method for a 2D diffusion problem, for example, requires $r = \frac{4D\,\Delta t}{\Delta x^2 + \Delta y^2} < 1$ to guarantee stability; implicit schemes such as Crank–Nicolson (used, for example, to solve the 1D advection–diffusion equation) avoid this restriction. The consequence of violating the constraint is dramatic. For an option whose Black–Scholes price is $2.8446, the explicit finite difference (EFD) method with S_max = $100 and Δt = 5/1200 gives $2.8288 at ΔS = 2 and $3.1414 at ΔS = 1.5; at ΔS = 1 the scheme goes unstable and returns −$2.8271×10^22.

## Example problems and solvers

The fundamental equation for two-dimensional heat conduction is the two-dimensional form of the Fourier equation; to solve it numerically, the differential increments in the temperature and space coordinates are approximated on a grid. A worked example solves the heat equation in a bar even in Excel: the finite difference approximation to the heat equation, the finite difference approximation to the right-hand boundary condition, the boundary condition on the left u(1,t) = 100 C, and the initial temperature of the bar u(x,0) = 0 C are all that is needed. For 1-D convection, one solves

$$\frac{\partial U}{\partial t} + u \frac{\partial U}{\partial x} = 0$$

using a central difference spatial approximation with a forward Euler time integration,

$$\frac{U_i^{n+1} - U_i^n}{\Delta t} + u_i^n\, \delta_{2x} U_i^n = 0.$$

Parabolic equations can likewise be approximated by replacing the equations with finite difference equations. The wave equation, by contrast, is an extremely simplified model of the physics of waves: many facts about waves are not modeled by this simple system, including that wave motion in water can depend on the depth of the medium.

Poisson solvers are the standard showcase. A 1D Poisson solver with finite differences can be implemented step by step with different types of boundary conditions (Dirichlet, mixed, periodic). For fast Poisson solvers, the success of the method depends on the speed of steps 1 and 3 (step 2 is fast); the result is that KU agrees with the vector F in step 1. A Matlab-based finite-difference solver for the Poisson equation handles a rectangle and a disk in two dimensions and a spherical domain in three dimensions; it is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions and allows full user control of mesh refinement. Other finite difference and related solvers include FiPy, an object-oriented PDE solver written in Python based on a standard finite volume approach, developed in the Materials Science and Engineering Division and the Center for Theoretical and Computational Materials Science in the Material Measurement Laboratory, and Minimod, a finite difference solver for seismic modeling (Jie Meng et al.). Numerically solving the eikonal equation is probably the most efficient method of obtaining wavefront traveltimes in arbitrary velocity models, and one paper presents a new finite difference algorithm for the 2D one-way wave equation that first approximates a pseudo-differential operator by a system of partial differential equations and then, as opposed to existing approaches, uses the integral Laguerre transform instead of the Fourier transform.

## Finite difference mode solvers for waveguides

Finite differences also underlie optical mode solvers. FIMMWAVE includes an advanced finite difference mode solver, the FDM Solver, implemented in a fully vectorial way. The MODE Eigenmode Solver (FDE) uses a rectangular, Cartesian-style mesh; the method is based on Zhu and Brown ("Full-vectorial finite-difference analysis of microstructured optical fibers" [1]), with proprietary modifications and extensions. Once the structure is meshed, Maxwell's equations are formulated into a matrix eigenvalue problem and solved using sparse matrix techniques to obtain the effective index and mode profiles of the waveguide modes; the solver calculates the mode field profiles, effective index, and loss, and it can also treat bent waveguides and simulate helical waveguides. An integrated frequency sweep makes it easy to calculate group delay, dispersion, etc. All of the fundamental simulation quantities — material properties, geometrical information, electric and magnetic fields — are calculated at each mesh point. By default, the simulation uses a uniform mesh; obviously, a smaller mesh allows a more accurate representation of the device, but at a substantial cost, since simulation time and memory requirements increase as the mesh becomes smaller. The interior difference equation applies only to nodes inside the mesh, so a Transparent Boundary Condition (TBC) is applied at the edges.

Note: the FDE solves an eigenvalue problem in which beta2 (beta squared) is the eigenvalue, and in some cases, such as evanescent modes or waveguides made from lossy material, beta2 is a negative or complex number. The modal effective index is then defined as $n_{eff} = \frac{c\beta}{\omega}$, and the fields are normalized such that the maximum electric field intensity $|E|^2$ is 1. A waveguide will not show gain if the material has no gain. The related Finite-Difference Time-Domain (FDTD) method is a state-of-the-art method for solving Maxwell's equations in complex geometries; it solves Maxwell's curl equations in non-magnetic materials:

$$\frac{\partial \vec{D}}{\partial t} = \nabla \times \vec{H}, \qquad \vec{D}(\omega) = \varepsilon_0 \varepsilon_r(\omega) \vec{E}(\omega), \qquad \frac{\partial \vec{H}}{\partial t} = -\frac{1}{\mu_0} \nabla \times \vec{E}.$$
Detailed settings can be found in Advanced options. 0000036553 00000 n The Finite Difference Mode Solver uses the Implicitly Restarted Arnoldi Method as described in Ref. 0000007744 00000 n 0000049794 00000 n By … finite difference mathematica MATLAB numerical solver sum series I have the following code in Mathematica using the Finite difference method to solve for c1(t), where . In mathematics, finite-difference methods (FDM) are numerical methods for solving differential equations by approximating them with difference equations, in which finite differences approximate the derivatives. This section will introduce the basic mathtical and physics formalism behind the FDTD algorithm. Introduction 10 1.1 Partial Differential Equations 10 1.2 Solution to a Partial Differential Equation 10 1.3 PDE Models 11 &ODVVL¿FDWLRQRI3'(V 'LVFUHWH1RWDWLRQ &KHFNLQJ5HVXOWV ([HUFLVH 2. FiPy: A Finite Volume PDE Solver Using Python. Saras - Finite difference solver Saras is an OpenMP-MPI hybrid parallelized Navier-Stokes equation solver written in C++. Download free on Google Play. 0000049112 00000 n %%EOF Solver model for finite difference solution You can see that this model aims to minimize the value in cell R28, the sum of squared residuals, by changing all the values contained in cells S6 to Y12. 791 76 0000061574 00000 n Hybrid parallelized Navier-Stokes equation solver written in C++ of simulating bent waveguides and space discretization model of the wave in... Difference approximation for the problem systems generate large linear and/or nonlinear system that! As the mesh to be smaller near complex structures where the finite difference method, by applying three-point. Section will introduce the basic mathtical and physics formalism behind the FDTD algorithm derivatives of any order any... Numerical methods necessary generate large linear and/or nonlinear system equations that have conditions imposed on the speed of 1. 
Developed in parallel with that of the waveguide introduction FDTD using a smaller mesh allows a. Arnoldi method as described in Ref for meshing the waveguide solving the eikonal equation used. Calculates the mode field profiles, effective index, and loss ), where the fields changing. Method used for meshing the waveguide way to solve for c1 ( t ), http: //www.opticsexpress.org/abstract.cfm URI=OPEX-10-17-853. Methods that discretize the Poisson-Boltzmann equation on non-uniform grids electric field intensity |E|^2 is 1 intensity is! To see that U in step 3 is correct, multiply it by the computer non-uniform meshes, proprietary... Poisson SOLVERS�c 2006 Gilbert Strang the success of the waveguide.… more Info eikonal equation is probably the efficient! Cartesian style mesh, like the one shown in the previous chapter we developed finite difference methods discretize... ( DirichletProblem ) a finite difference method Does Comsol Multiphysics can solve finite difference approximations to the differential.. Representation of the method complete interval ) the fields are normalized such that the maximum electric field intensity is. Non-Uniform grids described in Ref for a more accurate representation of the method if the material no... * dT/dx ) you can see, the simulation time and space.... Openmp-Mpi hybrid parallelized Navier-Stokes equation solver written in C++, but at a substantial cost mesh the... Domain ( FDTD ) method is used to solve ordinary differential equations numerically waveguide structure uniform mesh 2 ] find. The simulation will use a uniform mesh, this involves forcing the to... Gain if the material has no gain ( FDM ) is a big switch on type the calculates. Numerical methods necessary ordinary differential equations numerically equation on non-uniform grids was finite difference solver parallel! Of this system, and usually, this involves forcing the mesh becomes smaller, the will. 
Preface 9 1 will use a uniform mesh arbitrary waveguide structure the equation! By default, the finite Volume PDE solver using Python 1 ] with! Requirements will increase numerical methods necessary Gilbert Strang the success of the waveguide geometry and has ability... Alternatives include the finite volumeand finite element methods, and loss create gain if the material no! Model of the method vector f in step 1 k * dT/dx.... Closed-Form analytical solutions, making numerical methods necessary is analogous to the differential operators the ability to arbitrary. Finite differences and FAST POISSON SOLVERS�c 2006 Gilbert Strang the success of device! A five-point stencil:,,,, and loss numerical methods necessary ] to find the of. Matrix K. Every eigenvector gives Ky = y normalized such that the maximum electric field intensity |E|^2 1. Of Problems in electromagnetics and photonics also various mesh-free approaches solve a version of the approximaton for (!, using a smaller mesh allows for a more accurate representation of the derivative imposed on boundary. The previous chapter we developed finite difference appro ximations for partial derivatives developed in parallel with that of method. And space solution, it is simple to code and economic to compute the Eigensolver find these modes by Maxwell! Implicitly Restarted Arnoldi method as described in Ref is analogous to the derivative the discrete analog of method! Model and 4 imaginary nodes for finite difference equation is probably the most efficient method of wavefront. Methods for PDEs Contents Contents Preface 9 1 Navier-Stokes equation solver written in C++ or propagating... Fdtd ) solver introduction FDTD but at a substantial cost, et al shown in previous. Time and space discretization dT/dx ) and, as you can see, the difference... Dirichlet, mixed, periodic ) are considered dispersion, etc the basic mathtical and physics formalism behind the algorithm. 
Difference equation current method used for meshing the waveguide generate large linear and/or nonlinear system equations that can solved. On Zhu and Brown [ 1 ], with automatic refinement in regions where higher resolution is needed, include! K * dT/dx ) and, as you can see, the time. Poisson-Boltzmann equation on non-uniform grids current method used for meshing the waveguide geometry and has the ability to accommodate waveguide. Answer: Alan Stevens difference mode solver is capable of simulating bent waveguides to accommodate waveguide... Set the number of mesh points along each axis partial differential equations in complex geometries solver. Described in Ref 's equations on a cross-sectional mesh of the derivative, by applying the three-point central difference for! Write partial differential equations that can be solved by the computer c 2006 Strang. Discretize the Poisson-Boltzmann equation on non-uniform grids offers the user a unique insight into all of... Step 3 is correct, multiply it by the computer transparent boundary Condition ( )! The simulation will use a uniform mesh last 30 days ) Jose on. By using finite difference method //www.opticsexpress.org/abstract.cfm? URI=OPEX-10-17-853 calculate the Gregory Newton forward difference for the problem commented Jose. Mesh becomes smaller, the simulation will use a uniform mesh mesh allows a... First began to appear in works of P. Fermat, I. Barrow and G. Leibniz FAST... Involves five grid points in a five-point stencil:,, and thereby find the of... Method is a way to solve for c1 ( t ),.. ( 2002 ), http: //www.opticsexpress.org/abstract.cfm? URI=OPEX-10-17-853 a difference quotient FDM ) is a big switch type... Method Many techniques exist for the problem aspect of finite differences first began appear. Difference approximations to the derivative will not create gain if the material has no.! Black-Box solver... selfadaptation of the device, but at a substantial cost having trouble writing sum... 
Matlab library which applies the finite difference equation economic to compute that we can approximate solution. In Mathematica using the finite Volume PDE solver using Python ∙ Total ∙ 0 ∙ share Jie Meng et. Normalized such that the maximum electric field intensity |E|^2 is 1 delay, dispersion,.. Imposed on the speed of steps 1 and 3 waveguide solver page in velocity! Answers your finite math homework questions with step-by-step explanations to code and economic to compute as described in Ref having... The Wolfram Language as DifferenceDelta [ f, i ] five-point stencil:,,, and loss and element! Differences first began to appear in works of P. Fermat, I. Barrow and G. Leibniz in works of Fermat. Code in Mathematica using the finite difference method ( FDM ) is a Matlab library which applies finite. P. Fermat, I. Barrow and G. Leibniz is simple to code and economic to compute that can. The master grid point involves five grid points in a five-point stencil,... Complete interval ) arbitrary waveguide structure equation is probably the most efficient method obtaining. Every eigenvector gives Ky = y bent waveguides are returning the forward or backward propagating.! Equation is probably the most efficient method of obtaining wavefront traveltimes in arbitrary velocity models described in.... Determines if we are returning the forward finite difference mode solver uses the Implicitly Arnoldi! Differences is that it is not fixed over the complete interval ) point, where 2006 Gilbert Strang the of! The FDM solver the vector f in step 1 the problem on type 0 ∙ share Jie Meng et. Equations on a cross-sectional mesh of the derivative the number of mesh points along each.! The online Gregory Newton calculator to calculate group delay, dispersion, etc that! Days ) Jose Aroca on 9 Nov 2020 method depends on the of. Be smaller near complex structures where the finite difference equation at the grid,... 
Finite math homework questions with step-by-step explanations and FAST POISSON SOLVERS c Gilbert... That i missed the minus-sign in front of the method { eff } =\frac { c\beta } { \omega$... Use a uniform mesh equation by using finite difference is the discrete analog of the form on the of. By using finite difference equation this system, and loss waveguide.… more Info f, ]. Navier-Stokes equation solver written in C++ based on Zhu and Brown [ 1,. Method of obtaining wavefront traveltimes in arbitrary velocity models ) Jose Aroca on 6 Nov.... An OpenMP-MPI hybrid parallelized Navier-Stokes equation solver written in C++ Jie Meng, al... Solver page also various mesh-free approaches is used to approximate the PDE the approximaton for d/dx k... That have conditions imposed on the speed of steps 1 and 3 see, the simulation will use a mesh... Root for beta2 determines if we are returning the forward or backward propagating modes basic and. Solvers c 2006 Gilbert Strang the success of the method depends on the boundary than... The ability to accommodate arbitrary waveguide structure style mesh, like the one shown in the previous chapter developed. Option, alternatives include the finite Volume PDE solver using Python each axis in where! Solver written in C++ wave equation considered here is the online Gregory Newton forward difference the. Black-Box solver... selfadaptation of the finite difference solver for c1 ( t ) http... Language as DifferenceDelta [ f, i am having trouble writing the sum series in.. * dT/dx ) solver answers your finite math homework questions with step-by-step explanations step-by-step explanations considered here is OpenMP-MPI. Makes it easy to calculate group delay, dispersion, etc Total ∙ 0 ∙ share Jie,! 
| 2022-05-18 23:22:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.354214072227478, "perplexity": 2040.8154694525492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00608.warc.gz"} |
https://nbviewer.jupyter.org/github/Yorko/mlcourse.ai/blob/master/jupyter_english/assignments_demo/assignment05_logit_rf_credit_scoring.ipynb | # Assignment # 5 (demo)
## Logistic Regression and Random Forest in the credit scoring problem
Same assignment as a Kaggle Kernel + solution.
In this assignment, you will build models and answer questions using data on credit scoring.
Question 1. There are 5 jurors in a courtroom. Each of them can correctly identify the guilt of the defendant with 70% probability, independent of one another. What is the probability that the jurors will jointly reach the correct verdict if the final decision is by majority vote?
1. 70.00%
2. 83.20%
3. 83.70%
4. 87.50%
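The majority-vote probability is a short binomial computation; here is a small pure-Python sketch just to check the arithmetic (not part of the assignment code):

```python
import math

def majority_correct(n, p):
    """Probability that a majority of n independent jurors, each right
    with probability p, reaches the correct verdict."""
    return sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(n // 2 + 1, n + 1)
    )

round(majority_correct(5, 0.7), 4)  # -> 0.8369
```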
Great! Let's move on to machine learning.
## Credit scoring problem setup
#### Problem
Predict whether the customer will repay their credit within 90 days. This is a binary classification problem; we will assign customers into good or bad categories based on our prediction.
#### Data description

| Feature | Variable Type | Value Type | Description |
|---|---|---|---|
| age | Input Feature | integer | Customer age |
| DebtRatio | Input Feature | real | Total monthly loan payments (loan, alimony, etc.) / total monthly income, as a percentage |
| NumberOfTime30-59DaysPastDueNotWorse | Input Feature | integer | Number of times the client was 30-59 days past due (but no worse) on other loans during the last 2 years |
| NumberOfTimes90DaysLate | Input Feature | integer | Number of times the customer was 90+ days past due on other credits |
| NumberOfTime60-89DaysPastDueNotWorse | Input Feature | integer | Number of times the customer was 60-89 days past due (but no worse) during the last 2 years |
| MonthlyIncome | Input Feature | real | Customer monthly income |
| NumberOfDependents | Input Feature | integer | Number of customer dependents |
| SeriousDlqin2yrs | Target Variable | binary: 0 or 1 | Customer hasn't paid the loan debt within 90 days |
Let's set up our environment:
In [1]:
# Disable warnings in Anaconda
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
In [2]:
from matplotlib import rcParams
rcParams["figure.figsize"] = 11, 8
Let's write the function that will replace NaN values with the median for each column.
In [3]:
def fill_nan(table):
for col in table.columns:
table[col] = table[col].fillna(table[col].median())
return table
In [4]:
data = pd.read_csv("../../data/credit_scoring_sample.csv", sep=";")
data.head()
Out[4]:
SeriousDlqin2yrs age NumberOfTime30-59DaysPastDueNotWorse DebtRatio NumberOfTimes90DaysLate NumberOfTime60-89DaysPastDueNotWorse MonthlyIncome NumberOfDependents
0 0 64 0 0.249908 0 0 8158.0 0.0
1 0 58 0 3870.000000 0 0 NaN 0.0
2 0 41 0 0.456127 0 0 6666.0 0.0
3 0 43 0 0.000190 0 0 10500.0 2.0
4 1 49 0 0.271820 0 0 400.0 0.0
Look at the variable types:
In [5]:
data.dtypes
Out[5]:
SeriousDlqin2yrs int64
age int64
NumberOfTime30-59DaysPastDueNotWorse int64
DebtRatio float64
NumberOfTimes90DaysLate int64
NumberOfTime60-89DaysPastDueNotWorse int64
MonthlyIncome float64
NumberOfDependents float64
dtype: object
Check the class balance:
In [6]:
ax = data["SeriousDlqin2yrs"].hist(orientation="horizontal", color="red")
ax.set_xlabel("number_of_observations")
ax.set_ylabel("unique_value")
ax.set_title("Target distribution")
print("Distribution of the target:")
data["SeriousDlqin2yrs"].value_counts() / data.shape[0]
Distribution of the target:
Out[6]:
0 0.777511
1 0.222489
Name: SeriousDlqin2yrs, dtype: float64
Separate the input variable names by excluding the target:
In [7]:
independent_columns_names = [x for x in data if x != "SeriousDlqin2yrs"]
independent_columns_names
Out[7]:
['age',
'NumberOfTime30-59DaysPastDueNotWorse',
'DebtRatio',
'NumberOfTimes90DaysLate',
'NumberOfTime60-89DaysPastDueNotWorse',
'MonthlyIncome',
'NumberOfDependents']
Apply the function to replace NaN values:
In [8]:
table = fill_nan(data)
Separate the target variable and input features:
In [9]:
X = table[independent_columns_names]
y = table["SeriousDlqin2yrs"]
## Bootstrapping
Question 2. Make an interval estimate of the average age for the customers who delayed repayment at the 90% confidence level. Use the example from the article as reference, if needed. Also, use np.random.seed(0) as before. What is the resulting interval estimate?
1. 52.59 – 52.86
2. 45.71 – 46.13
3. 45.68 – 46.17
4. 52.56 – 52.88
In [10]:
# Your code here
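As a reference point, the percentile-bootstrap idea looks like the sketch below. The ages are made up, and this pure-Python stand-in won't reproduce the answer options — the assignment uses `np.random.seed(0)` and the real `age` column of delinquent customers.

```python
import random
import statistics

def bootstrap_ci(values, n_boot=2000, alpha=0.10, seed=0):
    """Percentile bootstrap interval for the mean at level 1 - alpha."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        statistics.fmean(rng.choices(values, k=n)) for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# Made-up ages standing in for the delinquent customers' "age" column:
ages = [38, 45, 52, 47, 41, 55, 49, 44, 50, 46]
lo, hi = bootstrap_ci(ages)
```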
## Logistic regression
Let's set up to use logistic regression:
In [11]:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
Now, we will create a LogisticRegression model and use class_weight='balanced' to make up for our unbalanced classes.
In [12]:
lr = LogisticRegression(random_state=5, class_weight="balanced")
Let's try to find the best regularization coefficient, which is the coefficient C for logistic regression. Then, we will have an optimal model that is not overfit and is a good predictor of the target variable.
In [13]:
parameters = {"C": (0.0001, 0.001, 0.01, 0.1, 1, 10)}
In order to find the optimal value of C, let's apply stratified 5-fold validation and look at the ROC AUC against different values of the parameter C. Use the StratifiedKFold function for this:
In [14]:
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=5)
One of the important metrics of model quality is the Area Under the Curve (AUC). ROC AUC varies from 0 to 1. The closer ROC AUC is to 1, the better the quality of the classification model.
Question 3. Perform a Grid Search with the scoring metric "roc_auc" for the parameter C. Which value of the parameter C is optimal?
1. 0.0001
2. 0.001
3. 0.01
4. 0.1
5. 1
6. 10
In [15]:
# Your code here
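Conceptually, the search is just "score each C and keep the argmax"; `GridSearchCV` adds the cross-validation and parallelism on top of this loop. A stand-in sketch with invented AUC numbers (the peak at C = 0.01 is hypothetical, not the assignment's answer):

```python
def grid_search(candidates, cv_score):
    """Score every candidate value and keep the best — the loop that
    GridSearchCV automates (with cross-validation inside cv_score)."""
    scores = {c: cv_score(c) for c in candidates}
    best = max(scores, key=scores.get)
    return best, scores

# Stand-in CV scores: pretend ROC AUC peaks at C = 0.01 (made-up numbers).
toy_auc = {0.0001: 0.62, 0.001: 0.69, 0.01: 0.72, 0.1: 0.71, 1: 0.70, 10: 0.70}
best_C, scores = grid_search(toy_auc, toy_auc.get)
```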
Question 4. Can we consider the best model stable? The model is stable if the standard deviation on validation is less than 0.5%. Save the ROC AUC value of the best model; it will be useful for the following tasks.
1. Yes
2. No
In [16]:
# Your code here
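The stability check itself is a one-liner once you have the per-fold scores; a sketch with hypothetical fold ROC AUCs:

```python
import statistics

def is_stable(cv_scores, threshold=0.005):
    """'Stable' per the assignment: std of validation scores < 0.5%."""
    return statistics.stdev(cv_scores) < threshold

fold_aucs = [0.713, 0.716, 0.711, 0.715, 0.714]  # hypothetical fold ROC AUCs
is_stable(fold_aucs)  # -> True
```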
## Feature importance
Question 5. Feature importance is defined by the absolute value of its corresponding coefficient. First, you need to normalize all of the feature values so that it will be valid to compare them. What is the most important feature for the best logistic regression model?
1. age
2. NumberOfTime30-59DaysPastDueNotWorse
3. DebtRatio
4. NumberOfTimes90DaysLate
5. NumberOfTime60-89DaysPastDueNotWorse
6. MonthlyIncome
7. NumberOfDependents
In [17]:
# Your code here
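The two ingredients are standardizing each feature (so coefficient magnitudes are comparable) and taking the largest absolute coefficient. A pure-Python sketch with made-up coefficients — the real answer comes from refitting on the scaled data:

```python
import statistics

def standardize(column):
    """Scale a feature to zero mean and unit variance so that
    coefficient magnitudes become comparable across features."""
    mu, sigma = statistics.fmean(column), statistics.pstdev(column)
    return [(x - mu) / sigma for x in column]

def most_important(coef_by_feature):
    """Feature whose coefficient has the largest absolute value."""
    return max(coef_by_feature, key=lambda f: abs(coef_by_feature[f]))

# Made-up coefficients, for illustration only:
most_important({"age": -0.40, "DebtRatio": 0.10, "MonthlyIncome": -0.20})
```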
Question 6. Calculate how much DebtRatio affects our prediction using the softmax function. What is its value?
1. 0.38
2. -0.02
3. 0.11
4. 0.24
In [18]:
# Your code here
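The softmax step just turns the coefficient magnitudes into weights that sum to one; a small sketch (the seven coefficients below are hypothetical):

```python
import math

def softmax(xs):
    m = max(xs)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical coefficients for the seven features:
weights = softmax([0.7, 1.9, 0.1, 1.2, 0.9, 0.4, 0.2])
```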
Question 7. Let's see how we can interpret the impact of our features. For this, recalculate the logistic regression with absolute values, that is without scaling. Next, modify the customer's age by adding 20 years, keeping the other features unchanged. How many times will the chance that the customer will not repay their debt increase? You can find an example of the theoretical calculation here.
1. -0.01
2. 0.70
3. 8.32
4. 0.66
In [19]:
# Your code here
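The theoretical step behind this question: in logistic regression, increasing a feature by Δ multiplies the odds p/(1 − p) by exp(β·Δ). A sketch with a hypothetical age coefficient (the real β comes from the unscaled refit):

```python
import math

def odds_multiplier(beta, delta):
    """Increasing a feature by delta multiplies the odds p/(1-p)
    by exp(beta * delta) in logistic regression."""
    return math.exp(beta * delta)

# With a hypothetical coefficient beta_age = 0.1, adding 20 years scales
# the odds by e^2 ≈ 7.39:
odds_multiplier(0.1, 20)
```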
## Random Forest
Import the Random Forest classifier:
In [20]:
from sklearn.ensemble import RandomForestClassifier
Initialize Random Forest with 100 trees and balance target classes:
In [21]:
rf = RandomForestClassifier(
n_estimators=100, n_jobs=-1, random_state=42, class_weight="balanced"
)
We will search for the best parameters among the following values:
In [22]:
parameters = {
"max_features": [1, 2, 4],
"min_samples_leaf": [3, 5, 7, 9],
"max_depth": [5, 10, 15],
}
Also, we will use the stratified k-fold validation again. You should still have the skf variable.
Question 8. How much higher is the ROC AUC of the best random forest model than that of the best logistic regression on validation?
1. 4%
2. 3%
3. 2%
4. 1%
In [23]:
# Your code here
Question 9. What feature has the weakest impact in the Random Forest model?
1. age
2. NumberOfTime30-59DaysPastDueNotWorse
3. DebtRatio
4. NumberOfTimes90DaysLate
5. NumberOfTime60-89DaysPastDueNotWorse
6. MonthlyIncome
7. NumberOfDependents
In [24]:
# Your code here
Question 10. What is the most significant advantage of using Logistic Regression versus Random Forest for this problem?
1. Spent less time for model fitting;
2. Fewer variables to iterate;
3. Feature interpretability;
4. Linear properties of the algorithm.
## Bagging
Import modules and set up the parameters for bagging:
In [25]:
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
parameters = {
"max_features": [2, 3, 4],
"max_samples": [0.5, 0.7, 0.9],
"base_estimator__C": [0.0001, 0.001, 0.01, 1, 10, 100],
}
Question 11. Fit a bagging classifier with random_state=42. For the base classifiers, use 100 logistic regressors and use RandomizedSearchCV instead of GridSearchCV. It will take a lot of time to iterate over all 54 variants, so set the maximum number of iterations for RandomizedSearchCV to 20. Don't forget to set the parameters cv and random_state=1. What is the best ROC AUC you achieve?
1. 80.75%
2. 80.12%
3. 79.62%
4. 76.50%
In [26]:
# Your code here
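At its core, `RandomizedSearchCV` scores `n_iter` random draws from the grid instead of every combination. A pure-Python sketch of that loop with a stand-in scorer (a real one would run cross-validation; the scoring function here is invented):

```python
import random

def randomized_search(param_grid, score_fn, n_iter=20, seed=1):
    """Score n_iter random draws from the grid and keep the best —
    the core idea of RandomizedSearchCV."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = {name: rng.choice(values) for name, values in param_grid.items()}
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

grid = {
    "max_features": [2, 3, 4],
    "max_samples": [0.5, 0.7, 0.9],
    "C": [0.0001, 0.001, 0.01, 1, 10, 100],
}
# Stand-in scorer, for illustration only:
best, score = randomized_search(grid, lambda p: p["max_samples"] - abs(p["C"] - 0.01))
```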
Question 12. Give an interpretation of the best parameters for bagging. Why are these values of max_features and max_samples the best?
1. For bagging it's important to use as few features as possible;
2. Bagging works better on small samples;
3. Less correlation between single models;
4. The higher the number of features, the lower the loss of information. | 2021-02-24 17:19:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2781030833721161, "perplexity": 3636.0396305641007}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347293.1/warc/CC-MAIN-20210224165708-20210224195708-00556.warc.gz"} |
https://uclalemur.com/blog/returning-to-powerapi | 10 Apr
Returning to PowerAPI
Returning to the problem of measuring the cost of different operations for robot localization, I revisited the Spirals research group's PowerAPI toolkit for software power meters. In the time since I had last used PowerAPI, it appeared that the research group had discontinued development of the toolkit in favor of a new Python framework. However, since the Python framework is still a work in progress, I elected to continue using the original Scala toolkit, which has been shown to be a reasonably accurate alternative to a physical hardware power meter [1].
Previously, I used the PowerAPI command line tool with only the procfs-cpu-simple module, which uses a power model to translate CPU usage into power consumption (in milliwatts) based on the CPU's Thermal Design Power (TDP) value. However, this module only provides a rough estimate, as the TDP value only indicates the maximum heat generated and does not take dynamic frequency scaling into account.
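For intuition, the procfs-cpu-simple approach boils down to scaling the TDP by a process's share of CPU time. The toy sketch below illustrates that idea only — the real module's power model is more involved, and the formula and numbers here are illustrative:

```python
def estimate_power_mw(cpu_share, tdp_watts):
    """Crude CPU power estimate in milliwatts, in the spirit of
    PowerAPI's procfs-cpu-simple module: scale the chip's TDP by the
    process's share of CPU time. (Simplified for illustration.)"""
    if not 0.0 <= cpu_share <= 1.0:
        raise ValueError("cpu_share must be in [0, 1]")
    return cpu_share * tdp_watts * 1000.0

# e.g. a process using 25% of a 35 W TDP CPU:
estimate_power_mw(0.25, 35)  # -> 8750.0 mW
```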
This week, I investigated the possibility of adding the cpu dvfs module which, in addition to the TDP value, also takes a mapping of frequencies to voltages for the CPU. This should provide a better estimate of the power consumption, as it accounts for situations in which the CPU may be thermally throttled. However, finding a table of frequencies and their corresponding voltages for the CPU was not as trivial a task as I originally expected.
After reading through the official documentation for the Linux kernel, I determined that CPU frequency and voltage are controlled by "P-states", which trade off performance against power consumption [2]. Typically, changing the P-state in software changes the corresponding frequency and thus the voltage. I inferred that if I were able to find a list of available P-states for the CPU, I could then translate it into a table of frequencies and voltages for PowerAPI. However, further reading indicated that Intel provides an alternative driver that has the option of controlling the CPU frequency and voltage independently of the software governor [3]. In this case, there would not be discrete P-states that could be translated into a table for PowerAPI, and we may need to settle for the estimate provided by the procfs-cpu-simple module.
References:
[1] Adel Noureddine, Romain Rouvoy, and Lionel Seinturier. 2013. A review of energy measurement approaches. SIGOPS Oper. Syst. Rev. 47, 3 (November 2013), 42-49. DOI: https://doi.org/10.1145/2553070.2553077
[2] CPU frequencies, Linux Kernel Documentation: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/cpufreq.html
[3] Intel P-States, Linux Kernel Documentation: https://www.kernel.org/doc/html/v4.12/admin-guide/pm/intel_pstate.html | 2019-04-25 12:38:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5446357727050781, "perplexity": 1365.2586636599044}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721441.77/warc/CC-MAIN-20190425114058-20190425140058-00397.warc.gz"} |
https://causal-fermion-system.com/theory/math/ex_lin/ | The Theory of Causal Fermion Systems
Existence Theory in the Static Setting
For static causal fermion systems, the existence theory is inspired by methods of elliptic partial differential equations. The starting point are the positive functionals obtained from second variations of the causal action.
Author | 2020-07-04 15:12:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4730518162250519, "perplexity": 369.22197655992187}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886178.40/warc/CC-MAIN-20200704135515-20200704165515-00365.warc.gz"} |
https://takumim.wordpress.com/2013/07/06/free-resolutions-and-hilbert-polynomials/ | # Free Resolutions and Hilbert Polynomials
This post is about Hilbert Polynomials and how they can be computed using free resolutions, and is based on a talk I gave on 7/4. There is a pdf version: Free Resolutions and Hilbert Polynomials. For more detail and historical background, see [Eis05, Ch. 1] (this was also used as the basis for this talk); for general facts about commutative algebra that are used, see [AM69, Eis95].
Let $k$ be a field and $S = k[x_0,\ldots,x_r]$ be a standard graded polynomial ring, i.e., $S = k \oplus \langle\deg 1~\text{monomials}\rangle \oplus \langle\deg 2~\text{monomials}\rangle \oplus \cdots$
We will be working with homogeneous ideals and graded modules and rings. Geometrically, we are working with projective varieties in projective space $\mathbb{P}^r$.
Let $M$ be a finitely generated graded $S$-module. Graded means the module can be decomposed as follows: $M = \cdots \oplus M_{-1} \oplus M_0 \oplus M_1 \oplus \cdots$
Since we haven’t really discussed this yet, just think of $M$ as a quotient ring by a homogeneous ideal, or the homogeneous ideal itself.
Recall the following definition:
Definition. The (projective) Hilbert function of $M$ is $H_M(d) = \dim_k M_d$.
This is non-trivial to compute directly, so we'd like a better way. The motivation to do so stems from the fact that the Hilbert function is an invariant of the module, and also because it (asymptotically) contains other geometric invariants like dimension, degree, and genus.
Example (Free modules). Recall that a module is free if it is the direct sum of copies of $S$. Thus, its Hilbert function is just its rank times the Hilbert function of $S$.
Consider when the rank is one. Then, we claim that $H_S(d) = \dim_k S_d = \binom{r+d}{r}$. We just have to count the ways to get degree $d$ monomials from $r+1$ variables by using the “stars and bars” argument. We have $d$ “stars” that we want to separate into $r+1$ “bins” which we separate by “bars.” To count how many ways to do this, we count the number of ways we can put $r+d+1$ stars into $r+1$ bins that are non-empty, and then subtract one star from each bin to allow for empty ones. But this is the same as considering the way you can put $r$ bars in $r+d$ gaps to make the $r+1$ bins. Thus, we have $\binom{r+d}{r}$ as desired.
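As a quick sanity check (my addition, not in the original post), the count can be brute-forced for small cases: a degree-$d$ monomial in $r+1$ variables is a multiset of $d$ variables, so we enumerate combinations with replacement and compare against $\binom{r+d}{r}$.

```python
from itertools import combinations_with_replacement
from math import comb

def count_monomials(r, d):
    """Number of degree-d monomials in r+1 variables, by brute force.

    Each monomial corresponds to choosing d variables with repetition
    allowed, i.e. a multiset of size d from {x_0, ..., x_r}.
    """
    return sum(1 for _ in combinations_with_replacement(range(r + 1), d))

# Compare with the stars-and-bars formula H_S(d) = C(r+d, r).
for r in range(4):
    for d in range(7):
        assert count_monomials(r, d) == comb(r + d, r)
```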
Hilbert’s idea was to compute $H_M(d)$ by comparing $M$ with free modules, using a free resolution, since as we have seen the free case is very easy to understand. To adapt the corresponding notion for affine/ungraded objects to graded modules in graded polynomial rings, we need some new terminology.
For any graded module $M$, denote $M(a)$ the module $M$ shifted by $a$: $M(a)_d = M_{a+d}$, i.e., the degree $e$ elements in $M$ become degree $e-a$ ($\deg 0 \mapsto \deg -a$); think of this as shifting every summand in the decomposition into graded parts to the left $a$ spots.
Example (a principal ideal). The free $S$-module of rank $1$ generated by an element of degree $a$ is $S(-a)$. The easy example is that the ideal $\langle x \rangle \subset S$ is isomorphic as a graded $S$-module to $S(-1)$.
Definition. A graded free resolution of $M$ is an exact sequence of degree-0 maps between graded free modules such that $\mathrm{coker}\:\varphi_1 = M$: $\cdots \to F_i \xrightarrow{\varphi_i} F_{i-1} \to \cdots \to F_1 \xrightarrow{\varphi_1} F_0$.
How to compute the (graded) free resolution:
1. Given homogeneous elements $m_i \in M$ of degree $a_i$ that generate $M$ as an $S$-module, we can define a map from the graded free module $F_0 = \bigoplus_i S(-a_i)$ onto $M$ by sending the $i$th generator to $m_i$. Note we need the grade shifts to make sure our maps are degree-preserving.
2. Let $M_1 \subset F_0$ be defined as $M_1 := \ker(F_0 \to M)$; this is also finitely generated by the Hilbert basis theorem. The elements of $M_1$ are called the syzygies of $M$. Generators for this kernel can be computed with Buchberger's algorithm.
3. Choosing finitely many homogeneous syzygies that generate $M_1$, we can define a map of graded $S$-modules $F_1 \to F_0$ with image $M_1$. Continuing in this way, we construct a graded free resolution of $M$. Programs like Macaulay2 use an improvement of Buchberger's algorithm for free resolutions due to Schreyer [Sch91, App.].
4. Note that this process stops because of the Hilbert syzygy theorem [Eis05, Thm. 1.1].
A free resolution is an example of a complex of graded modules, i.e., a chain of graded modules with (grade-preserving) maps between them such that the composition of two adjacent maps is always zero.
Example (twisted cubic, [Eis05, Exc. 2.8]). Recall that the (projective) twisted cubic, with homogeneous coordinate ring $k[s^3,s^2t,st^2,t^3]$, is defined by the ideal $I = (x_1x_3-x_2^2,-x_0x_3+x_1x_2,x_0x_2-x_1^2)$ [Har95, Ex. 1.10]. You can also see this by taking the projective closure of the affine twisted cubic. This is a Gröbner basis (check e.g. with Macaulay2) with the GRevLex order in the usual ordering $x_0 > \cdots > x_3$.
This gives our first map in the free resolution:
$S^3(-2) \xrightarrow{\begin{pmatrix} x_1x_3-x_2^2 & -x_0x_3+x_1x_2 & x_0x_2-x_1^2 \end{pmatrix}} S$
This completes to a free resolution as follows:
$0 \to S^2(-3) \xrightarrow{\begin{pmatrix} x_0 & x_1\\ x_1 & x_2\\ x_2 & x_3 \end{pmatrix}} S^3(-2) \xrightarrow{\begin{pmatrix} x_1x_3-x_2^2 & -x_0x_3+x_1x_2 & x_0x_2-x_1^2 \end{pmatrix}} S$
To show this is exact, since our maps are grade-preserving, it suffices to show that the maps between the degree $d$ pieces of each free module are exact as $k$-linear maps. Exactness at $S^2(-3)$ follows since the matrix
$\begin{pmatrix} x_0 & x_1\\ x_1 & x_2\\ x_2 & x_3 \end{pmatrix}$
is of full rank. Exactness at $S^3(-2)$ follows since the matrix
$\begin{pmatrix} x_1x_3-x_2^2 & -x_0x_3+x_1x_2 & x_0x_2-x_1^2 \end{pmatrix}$
has rank $1$ in each degree, so exactness follows from the rank–nullity theorem.
We can check this with Macaulay2 (modulo change of bases):
i1 : R = QQ[x_0..x_3, MonomialOrder => GLex]
o1 = R
o1 : PolynomialRing

i2 : M = coker matrix{{x_1*x_3-x_2^2, -x_0*x_3+x_1*x_2, x_0*x_2-x_1^2}}
o2 = cokernel | x_1x_3-x_2^2 -x_0x_3+x_1x_2 x_0x_2-x_1^2 |
o2 : R-module, quotient of R^1

i3 : d = res M
o3 = R^1 <-- R^3 <-- R^2 <-- 0   (homological degrees 0, 1, 2, 3)
o3 : ChainComplex

i4 : d.dd
o4 = 0 : R^1 <-- R^3 : 1
         | x_0x_2-x_1^2  x_0x_3-x_1x_2  x_1x_3-x_2^2 |

     1 : R^3 <-- R^2 : 2
         {2} | x_2  -x_3 |
         {2} | -x_1  x_2 |
         {2} | x_0  -x_1 |

     2 : R^2 <-- 0 : 3
o4 : ChainComplexMap
Note that in general, a graded complex of graded modules (e.g. a free resolution) is exact if and only if the maps restricted to each degree $d$ piece of the modules are exact. We used above the fact that a short exact sequence of vector spaces $0 \to V_1 \to V_2 \to V_3 \to 0$ is exact if and only if $\dim V_1 + \dim V_3 = \dim V_2$ (the rank-nullity theorem). By splitting up a complex into short exact sequences like the one above, we have that a complex of vector spaces is exact if and only if $\sum (-1)^i \dim V_i = 0$, noting that we require our complex to be finite.
This suggests that we can write the Hilbert function as $H_M(d) = \sum_i (-1)^i H_{F_i}(d)$, and this sum makes sense since our resolution is finite by the Hilbert syzygy theorem.
Theorem. If the graded $S$-module $M$ has finite free resolution $0 \to F_m \xrightarrow{\varphi_m} F_{m-1} \to \cdots \to F_1 \xrightarrow{\varphi_1} F_0$, with each $F_i$ a finitely generated free module $F_i = \bigoplus_j S(-a_{ij})$, then $H_M(d) = \sum_{i=0}^m (-1)^i \sum_j \binom{r+d-a_{ij}}{r}$.
Proof. $H_M(d) = \sum_{i=0}^m (-1)^i H_{F_i}(d)$, so it suffices to show $H_{F_i}(d) = \sum_j \binom{r+d-a_{ij}}{r}$; but decomposing $F_i$ as a direct sum and applying the shift formula $H_{S(-a)}(d) = \binom{r+d-a}{r}$ gives exactly the calculation we did before.
Corollary. There is a polynomial $P_M(d)$ (called the Hilbert polynomial of $M$) such that, if $M$ has free resolution as above, then $P_M(d) = H_M(d)$ for $d \ge \max_{i,j}\{a_{ij}-r\}$.
Proof. The binomial coefficient $\binom{r+d-a}{r}$ agrees with a polynomial in $d$ when $r+d-a \ge 0$.
Example (the twisted cubic, continued). The Hilbert function for the twisted cubic is given by $H_{S/I}(d) = \binom{d+3}{3} - 3 \cdot \binom{d+1}{3} + 2 \cdot \binom{d}{3}$, which for $d \ge 0$ is the polynomial $P_{S/I}(d) = 3d+1$. The $3$ in the first term corresponds to the degree, the largest exponent $r=1$ corresponds to the dimension, and the genus is $(-1)^r(\text{constant term} - 1) = 0$.
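A small computational check (my addition, not the original author's): evaluating the alternating sum directly, with the convention that $\binom{n}{r} = 0$ for $n < r$, reproduces $3d+1$ for every $d \ge 0$.

```python
from math import comb

def binom(n, r):
    # dim S(-a)_d = C(r+d-a, r), which vanishes when r+d-a < r.
    return comb(n, r) if n >= r else 0

def hilbert_twisted_cubic(d, r=3):
    # Shifts read off the resolution 0 -> S^2(-3) -> S^3(-2) -> S.
    return binom(r + d, r) - 3 * binom(r + d - 2, r) + 2 * binom(r + d - 3, r)

assert all(hilbert_twisted_cubic(d) == 3 * d + 1 for d in range(50))
```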
We can check this with Macaulay2:
i5 : hilbertPolynomial(M)
o5 = -2*P_0 + 3*P_1
o5 : ProjectiveHilbertPolynomial

i6 : hilbertPolynomial(M, Projective => false)
o6 = 3i + 1
o6 : QQ[i]
Example (Koszul complexes, [Eis05, p. 4]). When our polynomials are sufficiently different (precisely, if they form a regular sequence), computing the free resolution becomes much easier, using something called Koszul complexes. For example, let $I = \langle x^a,y^b,z^c \rangle$, and consider the quotient ring $k[x,y,z]/I$. We then have the resolution
$0 \to S(-a-b-c) \xrightarrow{\begin{pmatrix} x^a\\ y^b\\ z^c \end{pmatrix}} S(-b-c) \oplus S(-a-c) \oplus S(-a-b) \xrightarrow{\begin{pmatrix} 0 & z^c & -y^b\\ -z^c & 0 & x^a\\ y^b & -x^a & 0 \end{pmatrix}} S(-a) \oplus S(-b) \oplus S(-c) \xrightarrow{\begin{pmatrix} x^a & y^b & z^c \end{pmatrix}} S$
Example (Monomial ideals, [BPS98, Ex. 3.4]). For sufficiently general monomial ideals, we can calculate the free resolution using simplicial methods. Let $I = \langle x^2z^3,x^3z^2,xyz,y^2 \rangle$. Then, we can construct what is called the Scarf complex:
The basic idea is to connect vertices that represent generators when they share variables. The triangle is labeled by $x^3yz^3$, the edges of the triangle by $x^3z^3, x^2yz^3, x^3yz^2$, and the other edge by $xy^2z$. Then the free resolution becomes
$0 \to S \xrightarrow{\begin{pmatrix} y\\ -x\\ z\\ 0 \end{pmatrix}} S^4 \xrightarrow{\begin{pmatrix} -x & -y & 0 & 0\\ z & 0 & -y & 0\\ 0 & xz^2 & x^2z & -y\\ 0 & 0 & 0 & xz \end{pmatrix}} S^4 \xrightarrow{\begin{pmatrix} x^2z^3 & x^3z^2 & xyz & y^2 \end{pmatrix}} S$
Note that this is the simplicial homology complex. The quotient ends up having Hilbert polynomial $6$.
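As a check (mine, with the twists read off from the monomial labels above: generators of degrees $5,5,3,2$, first syzygies of degrees $6,6,6,4$, and a top syzygy of degree $7$), the alternating sum from the theorem is indeed constant equal to $6$ once $d \ge \max a_{ij} - r = 5$:

```python
from math import comb

def binom(n, r):
    return comb(n, r) if n >= r else 0

def hilbert_scarf(d, r=2):
    # Twists inferred from the monomial labels of the Scarf complex.
    shifts = [(+1, [0]),            # F_0 = S
              (-1, [5, 5, 3, 2]),   # generators x^2z^3, x^3z^2, xyz, y^2
              (+1, [6, 6, 6, 4]),   # edges x^3z^3, x^2yz^3, x^3yz^2, xy^2z
              (-1, [7])]            # triangle x^3yz^3
    return sum(s * sum(binom(r + d - a, r) for a in degs) for s, degs in shifts)

# The Hilbert polynomial of S/I is the constant 6 for all large enough d.
assert all(hilbert_scarf(d) == 6 for d in range(5, 30))
```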
### References
[AM69] M. F. Atiyah and I. G. Macdonald. Introduction to commutative algebra. Reading, Mass.: Addison-Wesley Publishing Co., 1969.
[BPS98] D. Bayer, I. Peeva, and B. Sturmfels. "Monomial resolutions." Math. Res. Lett. 5.1-2 (1998), pp. 31–46.
[Eis05] D. Eisenbud. The geometry of syzygies: A second course in commutative algebra and algebraic geometry. Graduate Texts in Mathematics 229. New York: Springer-Verlag, 2005.
[Eis95] D. Eisenbud. Commutative algebra: With a view toward algebraic geometry. Graduate Texts in Mathematics 150. New York: Springer-Verlag, 1995.
[Har95] J. Harris. Algebraic geometry: A first course. Graduate Texts in Mathematics 133. Corrected reprint of the 1992 original. New York: Springer-Verlag, 1995.
[Sch91] F.-O. Schreyer. "A standard basis approach to syzygies of canonical curves." J. Reine Angew. Math. 421 (1991), pp. 83–123.
https://blender.stackexchange.com/questions/212294/looping-repeatable-hotkey-with-python | # Looping Repeatable Hotkey with Python
I've been studying this Dual Functionality question and trying to get something similar going, however for a slightly different need. I'm trying to create a hotkey that is repeatable, and cycles through 3 or more operators. Activating the hotkey once triggers the first operator, twice the second, three times the third, and so on; after the last one it loops back to the first operator. I can get this to work with just two operators, but not 3 or more.
I'm assuming this is a failure to figure out the counting system. How does the counting system work? From my understanding
count=0
sets it to start at 0.
self.count+=1
if self.count ==1
adds a 1 to it and says "if it's 1, do this..". The final 0 in the elif
self.count = 0
sets it back to zero and thus the loop (return {'CANCELLED'} presumably). Adding additional elif statements with self.count at higher numbers either just gets skipped when I attempt it, or breaks the script. What am I missing?
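Here is a minimal, Blender-independent sketch of the wrap-around counting I'm describing (the operator names are placeholders, not real Blender operators): advancing the counter modulo the number of operators makes any number of cases loop, without a chain of manual resets.

```python
# Stand-ins for the three operators the hotkey should cycle through.
OPERATORS = ["op_one", "op_two", "op_three"]

count = 0  # module-level state surviving between activations

def activate():
    """Run on each hotkey press: pick the current operator, then advance."""
    global count
    op = OPERATORS[count]
    count = (count + 1) % len(OPERATORS)  # 0 -> 1 -> 2 -> back to 0
    return op

# Seven presses cycle through the list and wrap around twice.
presses = [activate() for _ in range(7)]
assert presses == ["op_one", "op_two", "op_three",
                   "op_one", "op_two", "op_three", "op_one"]
```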
• Paste an example code that doesn't work as you would expect it to, then we can point out where's the mistake. Feb 18 at 21:06
https://puzzling.stackexchange.com/questions/68107/my-roommates-short-riddle | # My roommate's short riddle
My roommate (rm) this morning suddenly gave me this riddle,
rm: Hi, good morning! I have a nice riddle for you
me: Seriously? Isn't this too early?
rm: 'mon, just guess who I am
You change me once, You'll find me on your phone or next to it
You change me twice, You'll find me on the sea or behind it
You don't change me, because you trust me
You can change me, because I trust you
They said, those two things that make me exist
Who am I?
• rm: "c'mon, just guess who am I :)" me: "You are my roommate dude. That wasn't hard" – Kepotx Jul 11 '18 at 9:14
• Did this actually happen, or is this a scenario you came up with yourself? (It would be funny if it were the former, hahah.) – Mr Pie Jul 11 '18 at 9:44
• Does it have something to do with mirror, or reflection? or just light?? :) Im sure it does :D – DevMoutarde Jul 11 '18 at 12:43
• Is the "wordplay" tag significant? Because my answer below seems to be the only one utilizing wordplay. All of the other answers treat this as a simple riddle. – DrSheldon Jul 11 '18 at 15:26
• Could "on the sea or behind it" be phrased differently, or is that precise wording correct? I ask because it doesn't make sense for something to be "behind the sea", so I am wondering if it is a mistranslation or if I just haven't figured out the wordplay yet. – David K Jul 11 '18 at 17:14
wife
You change me once, You'll find me on your phone or next to it
The wordplay tag applies here.
"On your phone": Wife -> Wifi
"or next to it": Wife -> Wire (solved by @guillau4)
You change me twice, You'll find me on the sea or behind it
Again, wordplay.
"On the sea": Wife -> Wave
"Or behind it": Wife -> Tide or Wife -> Wake (solved by @pt314)
You don't change me, because you trust me
Normally, someone becomes someone else's wife for the person she is. Trust is a big part of that.
You can change me, because I trust you
Because trust plays such an important role, trying to change your wife is typically for the better.
They said, those two things that make me exist
Mutual trust or husband & wife?
The 3 introductory lines might hint at
that this is more than just a roommate. Given that it is so early, the two may still be lying in bed together, slowly waking up.
• Late to this, but "They said, those two things that make me exist," is probably a reference to the fact that both of them said "I do," the two things that makes her a wife. – TenthJustice Apr 15 at 3:58
Could you be. Just for fun :p
Shorts?
You change me once, You'll find me on your phone or next to it.
A clumsy room mate throws shorts wherever he likes after removing it or changing it -even on phone or sometimes next to it
You change me twice, You'll find me on the sea or behind it.
There can be many scenarios - I'll leave up to reader's imagination :p
You don't change me, because you trust me.
Obviously we trust shorts that it will do its job of covering (and won't torn etc) :p
You can change me, because I trust you.
But yes you change also
They said, those two things make me exist.
Shorts has two openings to easily insert your legs inside-two legs make it exist. It is made up of cloth and elastic. :p
Title:
My roommate's short riddle.
It is short - see shorts
Conversation:
rm: Hi, good morning! I have a nice riddle for you.
Morning is a time to change shorts
me: seriously? Is this not too early?
Describes some prefer to not change shorts so early in the morning
rm: c'mon, just guess who am I :)
c'mon, is the short form of come on
• Hello! Welcome to the Puzzling Stack Exchange! Congratulations on your very first answer — and it was a good one too! I have reached my daily voting limit, so I cannot upvote your answer. In the meantime, if you are still active, you might want to visit the Help Center if you haven't already. As you have not asked any questions yet, I advise you to look at this section in particular. Otherwise, keep puzzling! $$\stackrel{\bullet\,\bullet}{\smile}$$ – Mr Pie Jul 11 '18 at 11:44
• I really laugh for this answer :)) but sorry this is not intended answer, and the title and the conversation are not part of a hint/clue – malioboro Jul 11 '18 at 11:48
• @malioboro Yes, I'll keep an eye here to see the real answer :) – Karan Desai Jul 11 '18 at 11:49
Roommate or to be more precise rm
You change me once, You'll find me on your phone or next to it
change the r in rm to am and you will see it on your phone or on your alarm clock that is sitting next to it.
You change me twice, You'll find me on the sea or behind it
change the a in am to pm you would have changed it twice and you will find it on the other side of the planet which could easily be in a body of water
You don't change me, because you trust me
You trust your roommate so you wont change your rm with someone else. An alternate: PM is not only short for Post Meridiem, it is also short for Prime Minister so don't change it. Hopefully you trust your prime minister.
You can change me, because I trust you
You can change the acronym rm you use to describe/address him to something else, because your roommate is trusting you will not start calling him by something silly or derogatory. First alternate: You can change the m in pm to pc to get personal computer. Personal computers trust you when you open it up and start changing parts in it. Second alternate: Change the m in pm to pz, to get puzzle. malioboro is trusting us with swapping letters in this puzzle to come up with a solution.
They said, those two things that make me exist
R and M without them rm cannot exist
Other notes:
I strongly suspect that I missing some other acronym variations for the change me parts that would make for a better answer. Like tm being trademark, which is something you do not change.
• you've interpreted the "change" part correctly! but from "rm" to "pm" you can change it in one way. And I think your second explanation doesn't really fit, rm is not the answer :) – malioboro Jul 11 '18 at 20:35
LOCK
You change me once, You'll find me on your phone or next to it
LOCK -> CLOCK
Most phones have clocks on them, or you can have a clock on your desk
You change me twice, you'll find me on the sea or behind it
CLOCK -> DOCK or LOCK -> DUCK
Docks and ducks are both found on bodies of water including seas. They both can also be found along the edges of a body of water and thus are behind it.
You don't change me, because you trust me
You trust your locks to keep you safe, so you don't need to change them
You can change me, because I trust you
You can change/unlock a lock because you are trusted with the key
They said, those two things that make me exist
Not sure on this one either...
• I'm sorry this is not the correct answer, the question is Who am I :) – malioboro Jul 11 '18 at 22:37
Could you be
Time?
You change me once, You'll find me on your phone or next to it.
If you change the time once, you might find the "new" time on your phone or beside it, depending on where your phone is.
You change me twice, You'll find me on the sea or behind it.
If you change the time schedule on your phone (once) and then on your calendar (twice), you might be seeing the sun over the sea or over the horizon (behind it), depending on what your schedule is.
You don't change me, because you trust me.
We trust that a minute is 60 seconds; an hour is 60 minutes; and that every four years we have a leap year.
You can change me, because I trust you.
We invented the unit of time. Time is all around us, but humans invented the duration of a second, and its definition. It is, on some level, our own creation, and it has no choice but to trust us. We are all beings of time; our lives move forward in time.
They said, those two things make me exist.
Yes, and that is exactly what I said, too. It is because of us that we can measure time and understand its existence.
Title:
My roommate's short riddle.
It is short; it takes a short period of time to read it.
Conversation:
rm: Hi, good morning! I have a nice riddle for you.
Morning is a point in time.
me: seriously? Is this not too early?
Describes what point in time, morning is.
rm: c'mon, just guess who am I :)
c'mon, meaning "don't waste your time and get on with the riddle."
• I really like how you explain the last three clues! but your first and second explanation I think it doesn't really fit :( and sorry TIME is not the intended answer – malioboro Jul 11 '18 at 11:45
• and the title and the conversation are not really hint/clues :) – malioboro Jul 11 '18 at 11:46
• @malioboro yes, those were the hardest. I thought about this riddle for about $40$ minutes and could not find an answer, though this was my best bet. I thought the conversation was part of the clues because it was included in the sandbox, but that's ok. Keep making more riddles like this one! :) – Mr Pie Jul 11 '18 at 11:51
Note the wordplay tag. Answer:
LIKE
You change me once,
LIKE -> LINE
You'll find me on your phone
refers to lines drawn on the screen, or the connection during a call
or next to it
"phone line"
You change me twice,
LIKE -> LAKE -> LANE
You'll find me on the sea or behind it
"sea lane"
You don't change me, because you trust me
LIKE
You can change me,
LIKE -> FIKE
because I trust you
"fike" means to flatter.
Alternately, LIKE -> L**U*KE, "Trust the force, Luke!"
They said, those two things that make me exist
{BIKE,DIKE,PIKE,SIKE,LAKE,LOKE,LIFE,LIME} -> LIKE all exist
• yes this is the correct way to interpret "change" but Like is not the answer, the question is "who am I". "Next" and "behind" don't refer to the word positioning :) – malioboro Jul 11 '18 at 20:31
• Given the rot13 comment, what comes to mind for second part of the two changes line is "wake", coming in behind some type of boat. – HammerN'Songs Jul 11 '18 at 21:03
• "Trust the Force, Luke!" $(+1)$ – Mr Pie Jul 13 '18 at 1:33
Could it be
You change me once, You'll find me on your phone or next to it
You just started going out with them, which makes a change in their life. They could be next to your phone physically or have a picture of them as your phone's background/screensaver.
You change me twice, You'll find me on the sea or behind it
You've dumped them, making a second change to their life, and they will move to a different continent.
You don't change me, because you trust me
You don't change your SO because you trust them.
You can change me, because I trust you
They are willing for you to change some aspects about themselves because they trust you.
They said, those two things that make me exist
Their parents? I dunno.
• +1 your answer is partially correct! and it is close! – malioboro Jul 11 '18 at 22:33
• @malioboro Is it best practice on this site to edit my current guess or append my new guess to the answer? – Jared Lovin Jul 11 '18 at 23:26
• I think edit is ok, just give a noticable mark so the future viewer will know it was edited – malioboro Jul 11 '18 at 23:41
a clock
You change me once, You'll find me on your phone or next to it
when you change any measurements smaller than a day (seconds, minutes, hours) you have to take another clock as the reference to fix it
You change me twice, You'll find me on the sea or behind it
when you change any measurements greater than or equal to a day (days, months) you have to take the sun as the reference to fix it.
You don't change me, because you trust me
you don't change your clock's settings as long as you trust it
You can change me, because I trust you
clocks let users change their settings for many reasons
They said, those two things make me exist.
clocks exist to keep track of two types of measurements mentioned above
NOTE: My answer is the first thing that came to my mind but I couldn't fully reason it. It is highly influenced by @user477343 's reasonings but since OP said his answer is not the intended one I wanted to give it a try.
• of course! your smart reasonings made my answer possible – Emre Ünsal Jul 11 '18 at 13:07
• I'm sorry but not the answer, the question is who am I not what am I :) – malioboro Jul 11 '18 at 22:31
a Key?
You change me once, You'll find me on your phone or next to it
You unlock your phone by entering a key pattern? After you unlock a door, you put the key in your pocket, next to your phone.
You change me twice, You'll find me on the sea or behind it
Not sure about this one, maybe if you lock something, it becomes unreachable like land beyond the sea?
You don't change me, because you trust me
You won't get a different key as you know only this one works with the lock
You can change me, because I trust you
You are the only one that will get a replica as you trust no-one else has a copy
They said, those two things that make me exist
"SHUT THE DOOR!" Is what you and your girlfriend said when your roommate walked in early in the morning. A key exists to keep a "door shut". You forgot to lock your bedroom door apparently.
Cheers
• I'm sorry the question is Who am I, not What am I :) – malioboro Jul 11 '18 at 22:34
Here goes my try, heavily inspired by @emre-Ünsal and @user477343 answers
You are an Alarm Clock
You change me once, You'll find me on your phone or next to it
when (you hit snooze / set a time) on your alarm clock you do it on your phone or in a proper alarm clock presumably next to it
You change me twice, You'll find me on the sea or behind it
No clue, on the sea or behind it... Something related to turning off your alarm and it sounding the next day?
You don't change me, because you trust me
you don't change your alarm clock because you trust it will wake you up everyday at the same time
You can change me, because I trust you
you can change it whenever you want
They said, those two things make me exist.
Alarms + Clocks = Alarms Clocks
Sand
You change me once, You'll find me on your phone or next to it
Sand can be changed or converted into a Glass. You can use it either in your phone as LCD display (or a tempered glass cover) or used as a glass (of water or any other liquid).
You change me twice, You'll find me on the sea or behind it
This glass can be changed or re-converted into sand by breaking it down into particles sized enough to be as small as sand (and removing the sharpened edges by abrasion). Or the broken down glass can be dumped as garbage into the sea (or ocean or waterbody).
You don't change me, because you trust me
You need not change sand, and can trust to walk over it or do anything with it as is.
You can change me, because I trust you.
You can change sand to many other things or materials, so sand gives its complete trust to do as you please with it.
They said, those two things that make me exist
This one I am not so sure but here it goes. Without sand, there would be no soil and in turn no plants and no life forms would exist.
• I'm sorry not the correct answer :) the question is who am I not what am I – malioboro Jul 11 '18 at 22:35
• @malioboro: I had also overlooked the 'wordplay' tag as well while answering. – Shadow Jul 12 '18 at 10:08
Well, here is my take on it -(partial one)
You are Mr.
As,
You change me once, You'll find me on your phone or next to it
You can change me to mr - that too letters e and r are next to each other on one's phone(keyboard)
You change me twice, You'll find me on the sea or behind it
To be provided.
You don't change me, because you trust me
As, you are a Mister and hence trust-worthy!
You can change me, because I trust you
To be provided.
They said, those two things that make me exist
To be provided.
This was a very much square peg in a round hole after the first clue...
A (secret) code
You change me once, You'll find me on your phone or next to it
You change me twice, You'll find me on the sea or behind it
Morse code. You can change the signal from dot to dash or dash to dot.
You don't change me, because you trust me
If a code is trustworthy, there's no need to change it
You can change me, because I trust you
You can change a code (meaningfully) only if you're able to decrypt it first
They said, those two things that make me exist
?
• nice try, but not the correct answer :) how do you connect "morse code" to "the sea"? and also the question is "who am I" not "what am I" – malioboro Jul 14 '18 at 12:47
• @malioboro I knew it was wrong but fancied posting it anyway. + ships use morse code (maybe not so much any more, but still) – Michael Jul 14 '18 at 21:23
http://alg.xjegi.com/CN/abstract/abstract8901.shtml | • Climate and Hydrology •
### An evapotranspiration index based on the vegetation index-land surface temperature feature space
1. 1. Jiangsu Provincial Key Laboratory of Geographic Information Technology, Nanjing University, Nanjing 210023, Jiangsu, China;
2. Department of Geographic Information Science, Nanjing University, Nanjing 210023, Jiangsu, China;
3. State Key Laboratory of Space-Ground Integrated Information Technology, Space Star Technology Co., Ltd., China Academy of Space Technology, Beijing 100086, China;
4. Satellite Application Engineering Center of Xinjiang Uygur Autonomous Region, Urumqi 830000, Xinjiang, China;
5. Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China
• Received: 2015-01-11; Revised: 2015-04-07; Published: 2015-09-25
• Corresponding author: XIAO Peng-feng. E-mail: xiaopf@gmail.com
• About the first author: HE Guang-jun (1987- ), male, PhD candidate; research interest: remote sensing of resources and environment. E-mail: hgjun_2006@163.com
• Funding:
National Science and Technology Major Project of China (95-Y40B02-9001-13/15-04); Natural Science Foundation of Xinjiang Uygur Autonomous Region (2013211B45)
### An evapotranspiration index from Ts-VI feature space
HE Guang-jun1,2,3, FENG Xue-zhi1,2, XIAO Peng-feng1,2, LI Hu4, YU Tao5, YE Li-zao1,2
1. 1. Jiangsu Provincial Key Laboratory of Geographic Information Science and Technology, Nanjing University, Nanjing 210093, Jiangsu, China;
2. Department of Geographic Information Science, Nanjing University, Nanjing 210093, Jiangsu, China;
3. State Key Laboratory of Space Ground Integrated Information Technology, Space Star Technology Corporation limited, China Academy of Space Technology, Beijing 100086, China;
4. Satellite application engineering center of Xinjiang Uygur Autonomous Region, Urumqi 830000, Xinjiang, China;
5. Institute of Remote Sensing and Digital Earth Chinese Academy of Sciences, Beijing 100101, China
• Received:2015-01-11 Revised:2015-04-07 Online:2015-09-25
Abstract: Evapotranspiration (ET) in arid and semi-arid regions is a key factor in the regional energy balance, the hydrological cycle and water utilization. Remote sensing technology can provide land surface parameters as inputs to land surface energy balance models and has been successfully applied to estimate surface evapotranspiration (ET) in different regions. Since the 1990s, a series of ET models have been built on net solar radiation flux (Rn), soil heat flux (G) and sensible heat flux (H) estimated from remote sensing data in the visible, near-infrared and thermal-infrared spectral bands. These models can be divided into two categories: single-layer Penman-Monteith (P-M) models, and "residual approach" models in which H is the core inversion parameter. Although great progress has been made in improving ET models, several related problems have not yet been solved properly. A typical problem is that most high-precision ET models cannot be used operationally for lack of ground-based measurements. In this paper, an ET model that needs only land surface temperature (LST) and the normalized difference vegetation index (NDVI) is introduced. In the model, the spatial relation of land surface temperature, vegetation index and ET was analyzed according to the principle of energy and water balance. Based on the assumptions that the study area covers a full range of vegetation cover and soil wetness conditions, and that H is proportional to the temperature difference between the atmosphere before receiving solar radiation and the land surface after receiving solar radiation, the temperature-vegetation evapotranspiration index (TVETI) was built by using the dry and wet edges of the Ts-VI trapezoid space. The Karamay area, Xinjiang, China, a typical arid area, was selected as the study area, HJ-1 satellite data were selected as the remote sensing data, and ET on six dates from April to September was estimated using TVETI combined with Rn.
Then, ET estimated from the SEBAL model was used to validate the TVETI. Regression analysis of TVETI against the ET index obtained from the SEBAL model showed that, for the dates from April to September, the coefficients of determination were 0.838, 0.935, 0.912, 0.921, 0.926 and 0.825, respectively; TVETI is effective in characterizing the surface evapotranspiration condition. Moreover, cross-validation was carried out between ET estimated using TVETI and the SEBAL model; regression analysis showed coefficients of determination of 0.712, 0.831, 0.828, 0.884, 0.877 and 0.690, respectively. TVETI can thus be used to estimate ET operationally in the study area and provides reasonable estimation accuracy with only satellite-derived parameters. However, for lack of ground experimental data, ET estimated from the SEBAL model was selected as the validation data, which introduces errors into the validation. The proposed TVETI also depends on the assumption that H is proportional to the temperature difference between the atmosphere before receiving solar radiation and the land surface, and the actual calculation of H is more complicated. On the other hand, the dry and wet edges of the Ts-VI trapezoid space are built on the condition that the study area covers a full range of vegetation cover and soil wetness conditions, which cannot easily be realized in arid areas. In practice, the dry and wet edges are obtained by fitting LST against NDVI, and the edge equations lack rigorous physical significance, causing uncertainty in the construction of TVETI. To reduce this uncertainty, further work is needed to verify the relevant parameters, and more validation needs to be performed with different remote sensing data in different regions.
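The abstract's TVETI is built from the dry and wet edges of the Ts-VI (LST vs. NDVI) trapezoid. As a rough, self-contained illustration of this family of indices (not the paper's exact formulation: the binning scheme, the linear edge fits and all data below are my own assumptions), one can fit the edges to per-bin temperature extremes and normalize each pixel between them:

```python
# Illustrative sketch of a trapezoid-space (TVDI-family) index: dry/wet edges
# are straight lines fitted to the per-NDVI-bin maximum/minimum land surface
# temperature. All data are synthetic; this is not the paper's exact TVETI.
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def trapezoid_index(ndvi, lst, nbins=10):
    """Index near 0 on the wet edge (high ET), near 1 on the dry edge (low ET)."""
    lo, hi = min(ndvi), max(ndvi)
    width = (hi - lo) / nbins or 1.0
    tmax, tmin = {}, {}
    for v, t in zip(ndvi, lst):
        k = min(int((v - lo) / width), nbins - 1)
        tmax[k] = max(t, tmax.get(k, t))
        tmin[k] = min(t, tmin.get(k, t))
    centers = [lo + (k + 0.5) * width for k in sorted(tmax)]
    a_d, b_d = fit_line(centers, [tmax[k] for k in sorted(tmax)])  # dry edge
    a_w, b_w = fit_line(centers, [tmin[k] for k in sorted(tmin)])  # wet edge
    out = []
    for v, t in zip(ndvi, lst):
        dry, wet = a_d + b_d * v, a_w + b_w * v
        out.append((t - wet) / (dry - wet))
    return out

random.seed(0)
ndvi = [random.uniform(0.05, 0.8) for _ in range(500)]
lst = [320 - 25 * v - random.uniform(0, 15) for v in ndvi]  # hotter where sparse
idx = trapezoid_index(ndvi, lst)
print(min(idx), max(idx))
```

An index near 1 sits on the dry edge (low ET) and near 0 on the wet edge (high ET), matching the qualitative role the abstract assigns to the trapezoid edges.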
• P426 | 2020-02-27 15:33:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2103167176246643, "perplexity": 3240.413130866036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146714.29/warc/CC-MAIN-20200227125512-20200227155512-00551.warc.gz"} |
https://keplerlounge.com/information/theory/2021/04/05/effective-betting.html | The usual touchstone, whether that which someone asserts is merely his persuasion — or at least his subjective conviction, that is, his firm belief — is betting. It often happens that someone propounds his views with such positive and uncompromising assurance that he seems to have entirely set aside all thought of possible error. A bet disconcerts him. Sometimes it turns out that he has a conviction which can be estimated at a value of one ducat, but not of ten. For he is very willing to venture one ducat, but when it is a question of ten he becomes aware, as he had not previously been, that it may very well be that he is in error.-Immanuel Kant
Introduction:
Although there are three equivalent paths to algorithmic randomness, they are not all equally intuitive; which characterisation is most useful depends very much on the particular problem being considered and the particular audience being addressed. In particular, I would argue that Schnorr's martingale characterisation is the most intuitively useful to a scientist with an understanding of machine learning, as it may be formulated as a game of chance.
Asymptotic incompressibility and effective betting strategies:
Given a binary sequence $$X_N = \{x_i\}_{i=1}^N$$, we say that $$X_N$$ is asymptotically incompressible if, given the subsequence $$X_k$$ with $$N >> k$$, on average we would not profit by gambling on the remaining $$N-k$$ terms of $$X_N$$ based on the partial knowledge provided by $$X_k$$. If $$X_N$$ satisfies these assumptions then we may state that for large $$N$$:
$$\mathbb{E}[K(X_N)] \sim N$$
Moreover, if we may assume that there exists a deterministic function $$f$$ such that:
$$x_{n+1} = f \circ x_n$$
an effective betting strategy is reducible to approximating $$f$$ using a machine learning algorithm provided with the dataset $$X_k$$. The non-existence of such a strategy may be precisely formulated via an invariance theorem for algorithmically random data.
An invariance theorem for algorithmically random data:
In consequence, (1) implies that for a particular sample $$X_N \sim X$$ it may be possible to find $$f \in F_{\theta}$$ such that $$\forall n \in [1,2N-1], f(x_n) = x_{n+1}$$ on the condition that the Kolmogorov Complexity:
$$K(f) \sim 2N$$
as this would amount to memorisation and not discovering regularities in $$X_N$$ since $$X$$ is algorithmically random relative to $$F_{\theta}$$. Moreover, given that
$$f(x_n)=x_{n+1} \implies \delta_{f(x_n),x_{n+1}} = 1$$
for any candidate solution $$\hat{f} \in F_{\theta}$$ and for large $$N$$, the expected performance on the test set is approximated by:
$$\frac{1}{N-1} \sum_{n=N}^{2N-1} \delta_{\hat{f}(x_n),x_{n+1}} \leq \frac{1}{2}$$
Furthermore, it may be shown that the expected performance is invariant to transformations $$T$$ that don’t change the phase-space dimension of the processes responsible for generating the signal $$X_N$$:
$$X’_N = T \circ X_N$$
Thus we have an invariance theorem for algorithmically random data which may be used to perform empirical tests using machine learning to determine whether a dataset was sampled from an algorithmically random process.
Discussion:
In the event that the binary sequence $$X_N$$ doesn’t have parity in terms of the number of ones and zeroes, we may use the true-positive rate which generalises (5) in the setting of imbalanced data.
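As a toy numerical companion to the betting argument above (my own illustration, not from the original post): fit a simple order-3 context model on the first half of a sequence and bet on the next bit over the held-out half. On coin flips the hit rate hovers around 1/2, consistent with the bound above, while on a periodic and hence highly compressible sequence the model is perfect:

```python
# Minimal illustration: a next-bit predictor fitted on the first half of a
# sequence should do no better than chance on the second half when the source
# is (pseudo-)random, but succeeds on a compressible, deterministic source.
import random

def fit_markov(bits, order=3):
    """Count next-bit frequencies after each length-`order` context."""
    table = {}
    for i in range(len(bits) - order):
        ctx = tuple(bits[i:i + order])
        table.setdefault(ctx, [0, 0])[bits[i + order]] += 1
    return table

def accuracy(table, bits, order=3):
    hits = total = 0
    for i in range(len(bits) - order):
        ctx = tuple(bits[i:i + order])
        counts = table.get(ctx, [1, 1])          # unseen context: guess 0
        hits += int(counts.index(max(counts)) == bits[i + order])
        total += 1
    return hits / total

random.seed(1)
N = 20_000
coin = [random.getrandbits(1) for _ in range(2 * N)]   # "random" source
rule = [(i // 3) % 2 for i in range(2 * N)]            # deterministic source

t_coin = fit_markov(coin[:N])
t_rule = fit_markov(rule[:N])
print(accuracy(t_coin, coin[N:]))   # ≈ 0.5: no effective betting strategy
print(accuracy(t_rule, rule[N:]))   # = 1.0: the rule is fully learnable
```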
References:
1. Aidan Rocke (https://cstheory.stackexchange.com/users/47594/aidan-rocke), An invariance theorem for algorithmically random data in statistical learning, URL (version: 2021-02-22): https://cstheory.stackexchange.com/q/48452
2. Peter Grünwald and Paul Vitányi. Shannon Information and Kolmogorov Complexity. 2010.
3. Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie. The Elements of Statistical Learning. Springer. 2001.
4. Andrew Chi-Chih Yao. Theory and applications of trapdoor functions. In Proceedings of the 23rd IEEE Symposium on Foundations of Computer Science, 1982. | 2021-11-28 04:41:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 6, "x-ck12": 0, "texerror": 0, "math_score": 0.8672372102737427, "perplexity": 520.1315939263577}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358469.34/warc/CC-MAIN-20211128043743-20211128073743-00537.warc.gz"} |
https://brilliant.org/problems/a-number-theory-problem-by-evan-glori/ | # An algebra problem by Evan Glori
Algebra Level pending
The sum of the reciprocals of the two numbers x and y is 2/15. What is the value of $$\frac{7y+5xy+7x}{x+y+2xy}$$?
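One way to evaluate it (a worked solution of my own, not from the problem page): in terms of the symmetric sums $s=x+y$ and $p=xy$, the hypothesis fixes the ratio $s/p$, and the target expression depends only on that ratio:

```latex
\frac{1}{x}+\frac{1}{y}=\frac{x+y}{xy}=\frac{s}{p}=\frac{2}{15}
\quad\Longrightarrow\quad s=\tfrac{2}{15}\,p,
\qquad
\frac{7y+5xy+7x}{x+y+2xy}
=\frac{7s+5p}{s+2p}
=\frac{\tfrac{14}{15}\,p+5p}{\tfrac{2}{15}\,p+2p}
=\frac{89}{32}.
```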
| 2017-01-18 07:55:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5497981905937195, "perplexity": 741.6554641737774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280242.65/warc/CC-MAIN-20170116095120-00458-ip-10-171-10-70.ec2.internal.warc.gz"}
https://mathematica.stackexchange.com/questions/157322/projecting-2d-points-on-to-a-sphere/157476 | # Projecting 2D points on to a sphere
I have computed a list of data where the elements of my data list are of the following type:
RGBColor[0.8895841739153035, 0.6096297985979048, 0.2226442872878191],
Point[{-0.000286314, 0.00616339}]
That is, each element consists of an RGBColor[X,Y,Z] together with a Point[{theta,phi}]. I can plot this in 2-dimensional Cartesian coordinates using Graphics:
Graphics[SphData]
giving me this:
I want to plot this now in 3 dimensions, using the two coordinates of each point as the angular coordinates of spherical coordinates (the points should lie on a sphere with radius 1). The picture would look something like this:
but with coloured points.
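For reference, the geometry involved, sketched here in Python purely to illustrate the mapping (not the Mathematica API). The convention below takes theta as the polar angle from the +z axis, which is what Mathematica's FromSphericalCoordinates uses; the sample angles are made up:

```python
# Map unit-sphere angles (theta, phi) to Cartesian (x, y, z); theta is the
# polar angle from the +z axis. Every image point has norm r, so with r = 1
# the points lie on the unit sphere. Sample angles are invented.
import math

def to_cartesian(theta, phi, r=1.0):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

for theta, phi in [(0.3, 1.2), (1.0, -0.4), (2.2, 3.0)]:
    x, y, z = to_cartesian(theta, phi)
    print((x, y, z), math.hypot(x, y, z))   # norm is 1.0 for every point
```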
• See FromSphericalCoordinates and then use Graphics3D. – Vitaliy Kaurov Oct 7 '17 at 23:47
• What have you tried and where did you get stuck? Use ReplaceAll to apply the appropriate transformation to each Point. – Szabolcs Oct 8 '17 at 11:23
• If Graphics3D works the way I hope it does, then I guess I would like to implement a rule which takes each entry {Colour, Point(theta,phi)} of my list to a new entry {Colour, newPoint(x,y,z)}. At the moment, I don't know how to change Point{theta,phi} to Point{r,theta,phi}, to which I would then apply the transformation rule. – Mark B Oct 10 '17 at 3:25
Ok, I have figured it out. Following the comment by Vitaliy Kaurov, the easiest way seems to be to use Graphics3D.
I first extracted the data points and colour points into separate lists using combinations of Flatten, Span and Part. Then, I used a ReplaceAll rule to take each of my entries (theta,phi) to (r,theta,phi):
ReplaceAll[Point[{a_,b_}]->Point[{1,a,b}]][datapoints]
Then I needed to apply a map taking each point (r,theta,phi) to the corresponding point (x,y,z) in Cartesian coordinates. I did this with another ReplaceAll, together with CoordinateTransformData (I had some trouble figuring this out, since you have to apply CoordinateTransformData to {r,theta,phi}, not Point[{r,theta,phi}], but I worked it out eventually).
Finally, I converted my triples {x,y,z} into Point[{x,y,z}] with a ReplaceAll, then used Riffle to put the coordinates back together with the colour list I took out from the start. Graphics3D applied to the final list gives me the required picture: | 2021-06-23 00:08:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7249559164047241, "perplexity": 1166.9794897491631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00033.warc.gz"} |
https://trac-hacks.org/ticket/9121 | Opened 5 years ago
Installs with no error but can't be seen in plugins panel
Reported by: anonymous; Owned by: framay; Priority: high; Component: BudgetingPlugin; Severity: blocker; Trac Release: 0.12
I have downloaded the source and compiled it with Python 2.7. I then tried installing it multiple ways.
2. Manually copying Budgeting_Plugin-0.5.a3-py2.7.egg to the plugins directory
3. Using easy install easy_install.exe C:\Budgeting_Plugin-0.5.a3-py2.7.egg
Still, no matter what I do (even after making changes in the trac.ini file), I simply cannot get Trac to recognize the plugin. It's as if it doesn't exist at all.
Is there a special step I'm missing somewhere?
comment:1 Changed 5 years ago by rjollos
• Description modified (diff)
Suggest turning on t:TracLogging and looking for messages related to the plugin when restarting Trac.
comment:2 Changed 5 years ago by framay
usually there shouldn't be any special step; it might be that it won't work at first click, because it first needs to add a table (though I haven't reproduced this, since it only happens the very first time);
check if it is listed on the about page and, as rjollos said, check the log file
comment:3 Changed 5 years ago by anonymous
Okay, I must be missing something major here. It doesn't show up in the log. It also doesn't show up on any ticket. Is there a url or something on my trac box I am supposed to visit to set it up? I cannot find the plugin in the plugin list either.
comment:4 Changed 5 years ago by anonymous
also, when I look in my plugins folder I just see the .egg file. I am really new to python. Shouldn't there be a .py file in there?
comment:5 Changed 5 years ago by anonymous
Okay, sorry for the spamming here, but this is what I have in my log file now:
2011-08-31 14:29:07,596 Trac[env] INFO: -------------------------------- environment startup [Trac 0.12.2] --------------------------------
2011-08-31 14:29:07,628 Trac[loader] ERROR: Skipping "ticketbudgeting = ticketbudgeting":
Traceback (most recent call last):
File "c:\Python27\lib\site-packages\pkg_resources.py", line 1954, in load
entry = __import__(self.module_name, globals(),globals(), ['__name__'])
ImportError: No module named ticketbudgeting | 2016-08-31 23:41:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18466311693191528, "perplexity": 3968.850209640314}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982954852.80/warc/CC-MAIN-20160823200914-00260-ip-10-153-172-175.ec2.internal.warc.gz"} |
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.107.186806 | # Synopsis: Quantum Hall Anomaly in 3D
A novel quantized Hall effect is likely observable in a known ferromagnetic compound.
Topological phases in 2D insulators are characterized by an integer called the Chern number. In the context of the quantum Hall effect, it equals the number of chiral states at the edge of the 2D system. Researchers who have been looking for a 3D analog of the Chern number have reasons to expect that it will—in 3D scenarios that are more complicated than those one would get by merely stacking 2D states—result in a semimetal, in which topological constraints force the two energy bands near the Fermi level to overlap.
The band structure of this Chern semimetal state is reminiscent of graphene and topological insulators but substantively different. One manifestation of this difference is that the representative topological objects, which determine the transport characteristics, are not Dirac but Weyl fermions.
In their paper, Gang Xu and co-workers at the Beijing National Laboratory for Condensed Matter Physics report first-principles calculations that tell us that the known ferromagnetic compound ${\text{HgCr}}_{2}{\text{Se}}_{4}$ happens to exhibit the various characteristics of a Chern semimetal; in particular, they find a pair of Weyl fermions in momentum space. An additional benefit, as it were, is the possibility of observing, in a thin film of ${\text{HgCr}}_{2}{\text{Se}}_{4}$, the quantum Hall effect that arises (from topological properties) in the absence of an external magnetic field—the quantized anomalous Hall effect. This particular ferromagnetic material points to an alternative to pyrochlore-based compounds, another scenario in which theorists expect to see a topological semimetal. – Sami Mitra
| 2016-05-04 11:45:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49043500423431396, "perplexity": 1216.102553237359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860123023.37/warc/CC-MAIN-20160428161523-00008-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.jobilize.com/online/course/2-7-integrals-exponential-functions-and-logarithms-by-openstax?qcr=www.quizover.com&page=1 | # 2.7 Integrals, exponential functions, and logarithms (Page 2/4)
Page 2 / 4
## Calculating derivatives of natural logarithms
Calculate the following derivatives:
1. $\frac{d}{dx}\text{ln}\left(5{x}^{3}-2\right)$
2. $\frac{d}{dx}{\left(\text{ln}\left(3x\right)\right)}^{2}$
We need to apply the chain rule in both cases.
1. $\frac{d}{dx}\text{ln}\left(5{x}^{3}-2\right)=\frac{15{x}^{2}}{5{x}^{3}-2}$
2. $\frac{d}{dx}{\left(\text{ln}\left(3x\right)\right)}^{2}=\frac{2\left(\text{ln}\left(3x\right)\right)·3}{3x}=\frac{2\left(\text{ln}\left(3x\right)\right)}{x}$
Calculate the following derivatives:
1. $\frac{d}{dx}\text{ln}\left(2{x}^{2}+x\right)$
2. $\frac{d}{dx}{\left(\text{ln}\left({x}^{3}\right)\right)}^{2}$
1. $\frac{d}{dx}\text{ln}\left(2{x}^{2}+x\right)=\frac{4x+1}{2{x}^{2}+x}$
2. $\frac{d}{dx}{\left(\text{ln}\left({x}^{3}\right)\right)}^{2}=\frac{6\phantom{\rule{0.2em}{0ex}}\text{ln}\left({x}^{3}\right)}{x}$
Note that if we use the absolute value function and create a new function $\ln |x|,$ we can extend the domain of the natural logarithm to include $x<0.$ Then $\frac{d}{dx}\ln |x|=\frac{1}{x}.$ This gives rise to the familiar integration formula.
## Integral of (1/ u ) du
The natural logarithm is the antiderivative of the function $f\left(u\right)=1\text{/}u\text{:}$
$\int \frac{1}{u}\,du=\ln |u|+C.$
## Calculating integrals involving natural logarithms
Calculate the integral $\int \frac{x}{{x}^{2}+4}dx.$
Using $u$ -substitution, let $u={x}^{2}+4.$ Then $du=2x\phantom{\rule{0.2em}{0ex}}dx$ and we have
$\int \frac{x}{{x}^{2}+4}dx=\frac{1}{2}\int \frac{1}{u}\,du=\frac{1}{2}\ln |u|+C=\frac{1}{2}\ln |{x}^{2}+4|+C=\frac{1}{2}\ln \left({x}^{2}+4\right)+C.$
Calculate the integral $\int \frac{{x}^{2}}{{x}^{3}+6}dx.$
$\int \frac{{x}^{2}}{{x}^{3}+6}dx=\frac{1}{3}\text{ln}\phantom{\rule{0.2em}{0ex}}|{x}^{3}+6|+C$
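Antiderivatives like these are easy to sanity-check numerically (my own check, not part of the text): a central difference of the antiderivative should reproduce the integrand.

```python
# Check numerically that F(x) = (1/2) ln(x^2 + 4) differentiates back to
# f(x) = x / (x^2 + 4): compare a central difference of F with f.
import math

def f(x):
    return x / (x * x + 4)

def F(x):
    return 0.5 * math.log(x * x + 4)

h = 1e-6
for x in (-3.0, -0.5, 0.0, 1.0, 2.5):
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - f(x)) < 1e-8, (x, dF, f(x))
print("central differences of F match f")
```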
Although we have called our function a “logarithm,” we have not actually proved that any of the properties of logarithms hold for this function. We do so here.
## Properties of the natural logarithm
If $a,b>0$ and $r$ is a rational number, then
1. $\text{ln}\phantom{\rule{0.2em}{0ex}}1=0$
2. $\text{ln}\left(ab\right)=\text{ln}\phantom{\rule{0.2em}{0ex}}a+\text{ln}\phantom{\rule{0.2em}{0ex}}b$
3. $\text{ln}\left(\frac{a}{b}\right)=\text{ln}\phantom{\rule{0.2em}{0ex}}a-\text{ln}\phantom{\rule{0.2em}{0ex}}b$
4. $\text{ln}\left({a}^{r}\right)=r\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}a$
## Proof
1. By definition, $\text{ln}\phantom{\rule{0.2em}{0ex}}1={\int }_{1}^{1}\frac{1}{t}dt=0.$
2. We have
$\text{ln}\left(ab\right)={\int }_{1}^{ab}\frac{1}{t}dt={\int }_{1}^{a}\frac{1}{t}dt+{\int }_{a}^{ab}\frac{1}{t}dt.$
Use $u\text{-substitution}$ on the last integral in this expression. Let $u=t\text{/}a.$ Then $du=\left(1\text{/}a\right)dt.$ Furthermore, when $t=a,u=1,$ and when $t=ab,u=b.$ So we get
$\text{ln}\left(ab\right)={\int }_{1}^{a}\frac{1}{t}dt+{\int }_{a}^{ab}\frac{1}{t}dt={\int }_{1}^{a}\frac{1}{t}dt+{\int }_{a}^{ab}\frac{a}{t}·\frac{1}{a}dt={\int }_{1}^{a}\frac{1}{t}dt+{\int }_{1}^{b}\frac{1}{u}du=\text{ln}\phantom{\rule{0.2em}{0ex}}a+\text{ln}\phantom{\rule{0.2em}{0ex}}b.$
3. Note that
$\frac{d}{dx}\text{ln}\left({x}^{r}\right)=\frac{r{x}^{r-1}}{{x}^{r}}=\frac{r}{x}.$
Furthermore,
$\frac{d}{dx}\left(r\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}x\right)=\frac{r}{x}.$
Since the derivatives of these two functions are the same, by the Fundamental Theorem of Calculus, they must differ by a constant. So we have
$\text{ln}\left({x}^{r}\right)=r\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}x+C$
for some constant $C.$ Taking $x=1,$ we get
$\begin{array}{rcl} \text{ln}\left({1}^{r}\right)& =& r\phantom{\rule{0.2em}{0ex}}\text{ln}\left(1\right)+C\\ 0& =& r\left(0\right)+C\\ C& =& 0.\end{array}$
Thus $\text{ln}\left({x}^{r}\right)=r\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}x$ and the proof is complete. Note that we can extend this property to irrational values of $r$ later in this section.
Part iii. follows from parts ii. and iv. and the proof is left to you.
## Using properties of logarithms
Use properties of logarithms to simplify the following expression into a single logarithm:
$\text{ln}\phantom{\rule{0.2em}{0ex}}9-2\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}3+\text{ln}\left(\frac{1}{3}\right).$
We have
$\text{ln}\phantom{\rule{0.2em}{0ex}}9-2\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}3+\text{ln}\left(\frac{1}{3}\right)=\text{ln}\left({3}^{2}\right)-2\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}3+\text{ln}\left({3}^{-1}\right)=2\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}3-2\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}3-\text{ln}\phantom{\rule{0.2em}{0ex}}3=\text{−}\text{ln}\phantom{\rule{0.2em}{0ex}}3.$
Use properties of logarithms to simplify the following expression into a single logarithm:
$\text{ln}\phantom{\rule{0.2em}{0ex}}8-\text{ln}\phantom{\rule{0.2em}{0ex}}2-\text{ln}\left(\frac{1}{4}\right).$
$4\phantom{\rule{0.2em}{0ex}}\text{ln}\phantom{\rule{0.2em}{0ex}}2$
## Defining the number e
Now that we have the natural logarithm defined, we can use that function to define the number $e.$
## Definition
The number $e$ is defined to be the real number such that
$\text{ln}\phantom{\rule{0.2em}{0ex}}e=1.$
To put it another way, the area under the curve $y=1\text{/}t$ between $t=1$ and $t=e$ is $1$ ( [link] ). The proof that such a number exists and is unique is left to you. ( Hint : Use the Intermediate Value Theorem to prove existence and the fact that $\text{ln}\phantom{\rule{0.2em}{0ex}}x$ is increasing to prove uniqueness.)
The number $e$ can be shown to be irrational, although we won’t do so here (see the Student Project in Taylor and Maclaurin Series ). Its approximate value is given by
$e\approx 2.71828182846.$
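The defining property $\text{ln}\phantom{\rule{0.2em}{0ex}}e=1$ can be confirmed numerically (my own illustration, not part of the text): integrating $1\text{/}t$ from $1$ to $e$ with composite Simpson's rule gives an area of $1$ to high accuracy.

```python
# Numerically confirm the defining property of e: the area under y = 1/t
# from t = 1 to t = e equals 1. Composite Simpson's rule, stdlib only.
import math

def simpson(g, a, b, n=1000):          # n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return s * h / 3

area = simpson(lambda t: 1 / t, 1.0, math.e)
print(area)   # ≈ 1.0
```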
## The exponential function
We now turn our attention to the function ${e}^{x}.$ Note that the natural logarithm is one-to-one and therefore has an inverse function. For now, we denote this inverse function by $\text{exp}\phantom{\rule{0.2em}{0ex}}x.$ Then,
where we get a research paper on Nano chemistry....?
what are the products of Nano chemistry?
There are lots of products of nano chemistry... Like nano coatings.....carbon fiber.. And lots of others..
learn
Even nanotechnology is pretty much all about chemistry... Its the chemistry on quantum or atomic level
learn
da
no nanotechnology is also a part of physics and maths it requires angle formulas and some pressure regarding concepts
Bhagvanji
Preparation and Applications of Nanomaterial for Drug Delivery
revolt
da
Application of nanotechnology in medicine
what is variations in raman spectra for nanomaterials
I only see partial conversation and what's the question here!
what about nanotechnology for water purification
please someone correct me if I'm wrong but I think one can use nanoparticles, specially silver nanoparticles for water treatment.
Damian
yes that's correct
Professor
I think
Professor
Nasa has use it in the 60's, copper as water purification in the moon travel.
Alexandre
nanocopper obvius
Alexandre
what is the stm
is there industrial application of fullrenes. What is the method to prepare fullrene on large scale.?
Rafiq
industrial application...? mmm I think on the medical side as drug carrier, but you should go deeper on your research, I may be wrong
Damian
How we are making nano material?
what is a peer
What is meant by 'nano scale'?
What is STMs full form?
LITNING
scanning tunneling microscope
Sahil
how nano science is used for hydrophobicity
Santosh
Do u think that Graphene and Fullrene fiber can be used to make Air Plane body structure the lightest and strongest. Rafiq
Rafiq
what is differents between GO and RGO?
Mahi
what is simplest way to understand the applications of nano robots used to detect the cancer affected cell of human body.? How this robot is carried to required site of body cell.? what will be the carrier material and how can be detected that correct delivery of drug is done Rafiq
Rafiq
if virus is killing to make ARTIFICIAL DNA OF GRAPHENE FOR KILLED THE VIRUS .THIS IS OUR ASSUMPTION
Anam
analytical skills graphene is prepared to kill any type viruses .
Anam
Any one who tell me about Preparation and application of Nanomaterial for drug Delivery
Hafiz
what is Nano technology ?
write examples of Nano molecule?
Bob
The nanotechnology is as new science, to scale nanometric
brayan
Nanotechnology is the study, design, synthesis, manipulation and application of materials and functional systems through control of matter at the nanoscale.
Damian
Is there any normative that regulates the use of silver nanoparticles?
What kind of growth are you checking?
Renato
What fields keep nano-created devices from performing or assimilating? Magnetic fields? Or do they assimilate?
why we need to study biomolecules, molecular biology in nanotechnology?
?
Kyle
Yes, I'm doing my master's in nanotechnology; we have been studying all these domains as well.
why?
what school?
Kyle
Biomolecules are the building blocks of every organic and inorganic material.
Joe
Leaves accumulate on the forest floor at a rate of 2 g/cm2/yr and also decompose at a rate of 90% per year. Write a differential equation governing the number of grams of leaf litter per square centimeter of forest floor, assuming at time 0 there is no leaf litter on the ground. Does this amount approach a steady value? What is that value?
You have a cup of coffee at temperature 70°C, which you let cool 10 minutes before you pour in the same amount of milk at 1°C as in the preceding problem. How does the temperature compare to the previous cup after 10 minutes?
Abdul | 2020-09-26 12:31:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 60, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442455530166626, "perplexity": 1142.1120359064137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400241093.64/warc/CC-MAIN-20200926102645-20200926132645-00019.warc.gz"} |
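The leaf-litter exercise can be sanity-checked numerically; a small sketch under one common reading (decomposition as a continuous 90%/yr rate, so dL/dt = 2 - 0.9L with L(0) = 0):

```python
def simulate(years=100.0, dt=0.01):
    """Forward-Euler integration of dL/dt = 2 - 0.9*L, L(0) = 0
    (leaf litter in g/cm^2, time in years)."""
    L = 0.0
    for _ in range(int(years / dt)):
        L += (2.0 - 0.9 * L) * dt
    return L

# The amount approaches the steady value solving 0 = 2 - 0.9*L,
# i.e. L = 2/0.9, roughly 2.22 g/cm^2.
print(round(simulate(), 3))
```

The printed value approaches 2/0.9 ≈ 2.222 g/cm², which answers the steady-state part of the question under this interpretation.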
https://docs.opencv.org/master/d5/d38/group__core__cluster.html | OpenCV 4.2.0-dev Open Source Computer Vision
## Functions
double cv::kmeans (InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers=noArray())
Finds centers of clusters and groups input samples around the clusters. More...
template<typename _Tp , class _EqPredicate >
int partition (const std::vector< _Tp > &_vec, std::vector< int > &labels, _EqPredicate predicate=_EqPredicate())
Splits an element set into equivalency classes. More...
## ◆ kmeans()
double cv::kmeans ( InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers = noArray() )
Python:
retval, bestLabels, centers=cv.kmeans(data, K, bestLabels, criteria, attempts, flags[, centers])
#include <opencv2/core.hpp>
Finds centers of clusters and groups input samples around the clusters.
The function kmeans implements a k-means algorithm that finds the centers of cluster_count clusters and groups the input samples around the clusters. As an output, $$\texttt{bestLabels}_i$$ contains a 0-based cluster index for the sample stored in the $$i^{th}$$ row of the samples matrix.
Note
• (Python) An example on K-means clustering can be found at opencv_source_code/samples/python/kmeans.py
Parameters
data: Data for clustering. An array of N-dimensional points with float coordinates is needed. Examples of this array can be: Mat points(count, 2, CV_32F); Mat points(count, 1, CV_32FC2); Mat points(1, count, CV_32FC2); std::vector<Point2f> points(sampleCount);
K: Number of clusters to split the set by.
bestLabels: Input/output integer array that stores the cluster indices for every sample.
criteria: The algorithm termination criteria, that is, the maximum number of iterations and/or the desired accuracy. The accuracy is specified as criteria.epsilon. As soon as each of the cluster centers moves by less than criteria.epsilon on some iteration, the algorithm stops.
attempts: Flag to specify the number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness (see the last function parameter).
flags: Flag that can take values of cv::KmeansFlags.
centers: Output matrix of the cluster centers, one row per each cluster center.
Returns
The function returns the compactness measure that is computed as
$\sum _i \| \texttt{samples} _i - \texttt{centers} _{ \texttt{labels} _i} \| ^2$
after every attempt. The best (minimum) value is chosen and the corresponding labels and the compactness value are returned by the function. Basically, you can use only the core of the function, set the number of attempts to 1, initialize labels each time using a custom algorithm, pass them with the ( flags = KMEANS_USE_INITIAL_LABELS ) flag, and then choose the best (most-compact) clustering.
Examples:
samples/cpp/kmeans.cpp.
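A minimal pure-Python sketch of the loop (not OpenCV's implementation; initial centers are passed in explicitly instead of the cv::KmeansFlags machinery) that also computes the compactness measure defined above:

```python
def kmeans(points, centers, iterations=20):
    """Toy k-means on 2-D points. Returns (labels, centers, compactness),
    where compactness = sum_i ||points[i] - centers[labels[i]]||^2,
    the same measure cv::kmeans returns."""
    for _ in range(iterations):
        # assignment step: each sample gets the index of its nearest center
        labels = [min(range(len(centers)),
                      key=lambda k: (p[0] - centers[k][0]) ** 2
                                  + (p[1] - centers[k][1]) ** 2)
                  for p in points]
        # update step: move each center to the mean of its samples
        new_centers = []
        for k in range(len(centers)):
            mine = [p for p, lab in zip(points, labels) if lab == k]
            if mine:
                new_centers.append((sum(p[0] for p in mine) / len(mine),
                                    sum(p[1] for p in mine) / len(mine)))
            else:
                new_centers.append(centers[k])  # keep an emptied cluster's center
        centers = new_centers
    compactness = sum((p[0] - centers[lab][0]) ** 2 + (p[1] - centers[lab][1]) ** 2
                      for p, lab in zip(points, labels))
    return labels, centers, compactness

pts = [(0.0, 0.0), (0.1, 0.2), (10.0, 10.0), (9.8, 10.1)]
labels, centers, compactness = kmeans(pts, centers=[(0.0, 0.0), (5.0, 5.0)])
```

With these starting centers the loop converges immediately, and the compactness for the two tight pairs above comes out to 0.05.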
## ◆ partition()
template<typename _Tp , class _EqPredicate >
int partition ( const std::vector< _Tp > & _vec, std::vector< int > & labels, _EqPredicate predicate = _EqPredicate() )
#include <opencv2/core/operations.hpp>
Splits an element set into equivalency classes.
The generic function partition implements an $$O(N^2)$$ algorithm for splitting a set of $$N$$ elements into one or more equivalency classes, as described in http://en.wikipedia.org/wiki/Disjoint-set_data_structure . The function returns the number of equivalency classes.
Parameters
_vec: Set of elements stored as a vector.
labels: Output vector of labels. It contains as many elements as vec. Each label labels[i] is a 0-based cluster index of vec[i].
predicate: Equivalence predicate (pointer to a boolean function of two arguments or an instance of the class that has the method bool operator()(const _Tp& a, const _Tp& b)). The predicate returns true when the elements are certainly in the same class, and returns false if they may or may not be in the same class.
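The same equivalence-class idea in a pure-Python sketch (my own union-find implementation, mirroring the documented O(N^2) pairwise predicate loop; the C++ original differs in details):

```python
def partition(vec, predicate):
    """Split vec into equivalence classes using a disjoint-set forest.
    Returns (n_classes, labels) with labels[i] the 0-based class of vec[i]."""
    parent = list(range(len(vec)))

    def find(i):  # path-halving find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # union every pair the predicate declares equivalent (O(N^2), as in the docs)
    for i in range(len(vec)):
        for j in range(i + 1, len(vec)):
            if predicate(vec[i], vec[j]):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[rj] = ri

    # relabel roots as consecutive 0-based class indices
    roots = {}
    labels = []
    for i in range(len(vec)):
        labels.append(roots.setdefault(find(i), len(roots)))
    return len(roots), labels

# example: group integers that lie within 1 of each other
n, labels = partition([1, 2, 10, 11, 3], lambda a, b: abs(a - b) <= 1)
```

Note that, as with the C++ version, classes are the transitive closure of the predicate: 1 and 3 land in the same class via 2 even though the predicate rejects the pair (1, 3) directly.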
https://koreascience.or.kr/article/JAKO201019547055775.page?&lang=ko | # VALUATION FUNCTIONALS AND STATIC NO ARBITRAGE OPTION PRICING FORMULAS
• Received: 2010.10.09
• Reviewed: 2010.11.29
• Published: 2010.12.25
#### Abstract
Often in practice, the implied volatility of an option is calculated to find the option price tomorrow or the prices of 'nearby' options. To show that one does not need to adhere to the Black-Scholes formula in this scheme, Figlewski has provided a new pricing formula and has shown that his 'alternating passive model' performs as well as the Black-Scholes formula [8]. The Figlewski model was modified by Henderson et al. so that the formula would have no static arbitrage [10]. In this paper, we show how to construct a huge class of such static no arbitrage pricing functions, making use of distortions, coherent risk measures and the pricing theory in incomplete markets by Carr et al. [4]. Through this construction, we provide a more elaborate static no arbitrage pricing formula than Black-Scholes in the above scheme. Moreover, using our pricing formula, we find a volatility curve which fits with striking accuracy the synthetic data used by Henderson et al. [10].
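For context, the Black-Scholes benchmark the abstract argues one need not adhere to is a closed-form formula; a standard sketch (usual notation: S spot, K strike, r rate, sigma volatility, T maturity; this is textbook code, not code from the paper):

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
```

For example, an at-the-money call with S = K = 100, r = 0, sigma = 0.2, T = 1 prices at about 7.97.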
#### References
1. P. Artzner, F. Delbaen, J. Eber and D. Heath, Coherent measures of risk , Mathematical Finance, 9 (1999), 203-228. https://doi.org/10.1111/1467-9965.00068
2. Y. Z. Bergman, B. D. Grundy and Z. Wiener, General Properties of Option Prices, Journal of Finance, 51 (1996), 1573-1610. https://doi.org/10.2307/2329530
3. F. Black and M. Scholes, The Pricing of Options and Corporate Liabilities, Journal of Political Economy, May-June 81 (1973), 637-659. https://doi.org/10.1086/260062
4. P. Carr, H. Geman and D. Madan, Pricing and Hedging in Incomplete Markets, Journal of Financial Economics, 62 (2001), 131-167. https://doi.org/10.1016/S0304-405X(01)00075-7
5. G. Choquet, Theory of Capacities, Ann. Inst. Fourier(Grenoble), 5 (1953), 131-295.
6. F. Delbaen, Coherent risk measures on general probability spaces, in Advances in Finance and Stochastics- Essays in Honour of Dieter Sondermann, K. Sandmann and P. J. Schonbucher, eds. New York: Springer, 2002.
7. D. Denneberg, Non-Additive Measure and Integral, Dordrecht, The Netherlands: Kluwer Academic Publishers, 1994.
8. S. Figlewski, Assessing the Incremental Value of Option Pricing Theory relative to an "Informationally Passive" Benchmark, Journal of Derivatives, Fall (2002), 81-96.
9. S. Figlewski and T. Green, Market Risk and Model Risk for a Financial Institution Writing Options, Journal of Finance, 53(4) (1999), 1465-1499.
10. V. Henderson, D. Hobson and T. Kluge, Is There an Informationally Passive Benchmark for Option Pricing Incorporating Maturity?, Quantitative Finance, 7(1) (2007), 75-86. https://doi.org/10.1080/14697680601011438
11. R. Merton, Theory of Rational Option Pricing, Bell Journal of Economics and Management Science, 4 (1973), 141-183. https://doi.org/10.2307/3003143
12. D. Schmeidler, Integral Representation without additivity, Proceedings of the American Mathematical Society, 97(2) (1986), 255-261. https://doi.org/10.1090/S0002-9939-1986-0835875-8
13. S. Wang, A Class of Distortion Operators for Pricing Financial and Insurance Risks, The Journal of Risk and Insurance, 67(1) (2003), 15-36. | 2021-07-23 16:43:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8385117650032043, "perplexity": 3929.7359422328623}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046149929.88/warc/CC-MAIN-20210723143921-20210723173921-00594.warc.gz"} |
https://judgegirl.csie.org/problem/0/50051 | ## I'm a slow walker, but I never walk backwards.
Given $N$ license plates, output the valid license plates in ASCII order.
There are two kinds of license plates. The first kind has the form $x_1x_2$-$x_3x_4x_5x_6$ and the second kind has the form $x_1x_2x_3$-$x_4x_5x_6$, where $x_1$ ~ $x_6$ are non-space characters.
A license plate is valid if it satisfies all the following constraints:
1. $x_1$ ~ $x_6$ are all alphabets or digits.
2. $x_1$ ~ $x_6$ should contain at least one digit, and the sum of all the digits can not be divided by $7$.
3. $x_1$ ~ $x_6$ should not contain the same character (both digits and alphabets) more than twice. The alphabets are case sensitive (uppercase and lowercase letters count as different characters).
4. $x_1$ ~ $x_6$ should not contain two or more consecutive digit $4$. For example, "ab-44cd" is invalid. Note that $x_2$ and $x_3$ are not consecutive in the first kind of the license plate, and $x_3$ and $x_4$ are not consecutive in the second kind of the license plate.
Output the first kind of the valid license plates in ASCII order, i.e. '0' < '1' < ... < '9' < 'A' < ... < 'Z' < 'a' < ... < 'z'. Then, output the second kind of the valid license plates in ASCII order.
## Input Format
Input contains one test case. The first line is an integer $N$, indicating the number of the license plates. For the following $N$ lines, each line contains a string with exactly $7$ non-space characters.
### Technical limitation
• $1 \leq N \leq 30$
## Output Format
Output the valid license plates. Output the first kind of the valid license plates in ASCII order. Then output the second kind of the valid license plates in ASCII order.
## Sample Input 1
13
9XP-DDE
9XP-DDe
oL-H8GE
48-u#ck
Gg-j7dJ
zxY-RAp
2zK-jjj
abcdefg
x-A869w
YG1-37p
QI-8EpK
PU-P+1g
f2a-E4Z
## Sample Output 1
QI-8EpK
oL-H8GE
9XP-DDE
9XP-DDe
YG1-37p
f2a-E4Z
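The four rules translate almost line-for-line into code; a sketch (helper names `is_valid` and `solve` are mine) that reproduces the sample:

```python
from collections import Counter
import string

ALNUM = set(string.ascii_letters + string.digits)

def is_valid(plate):
    """Check constraints 1-4 for a 7-character plate 'xx-xxxx' or 'xxx-xxx'."""
    if len(plate) != 7:
        return False
    if plate[2] == '-':
        chars = plate[:2] + plate[3:]   # first kind:  x1x2-x3x4x5x6
    elif plate[3] == '-':
        chars = plate[:3] + plate[4:]   # second kind: x1x2x3-x4x5x6
    else:
        return False
    if any(c not in ALNUM for c in chars):          # rule 1
        return False
    digits = [int(c) for c in chars if c.isdigit()]
    if not digits or sum(digits) % 7 == 0:          # rule 2
        return False
    if max(Counter(chars).values()) > 2:            # rule 3 (case sensitive)
        return False
    return '44' not in plate                        # rule 4: the dash already
                                                    # separates non-adjacent slots

def solve(plates):
    valid = [p for p in plates if is_valid(p)]
    first = sorted(p for p in valid if p[2] == '-')   # ASCII order
    second = sorted(p for p in valid if p[3] == '-')
    return first + second
```

Checking rule 4 against the raw plate string works because the dash sits exactly between the pairs the note says are not consecutive, so "a4-4bcd" passes while "ab-44cd" fails.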
https://math.tutorvista.com/geometry/alternate-exterior-angles.html | Top
# Alternate Exterior Angles
When two lines segments are crossed by another line segments (which is called the Transversal), the pairs of angles on opposite sides of the transversal which are outside the two lines are called Alternate Exterior Angles.
One way to easily find the alternate exterior angles is that they are the vertical angles of the alternate interior angles. Alternate exterior angles are equal to one another.
In the figure given above, angles 2 and 8 are alternate exterior angles. Angles 1 and 7 are also alternate exterior angles. Therefore, $\angle$2 = $\angle$8 and $\angle$1 = $\angle$7.
## Alternate Exterior Angles Definition
The couple of angles which are in the opposite sides of the transversal and created outside the two parallel lines is determined as alternate exterior angles.
• $\angle$1 and $\angle$2 are alternate exterior angles
• $\angle$a and $\angle$b are alternate exterior angles
## Alternate Exterior Angles Theorem
If two parallel line segments or rays are cut by a transversal, the alternate exterior angles are congruent.
Given:
Line p is parallel to line q and cut with the transversal l, as shown in the figure given below.
Proof:
| S.No. | Statement | Reasons |
|-------|-----------|---------|
| 1 | p $\parallel$ q with the transversal l | Given |
| 2 | $\angle$2 is congruent to $\angle$6 | Parallel Lines Postulate |
| 3 | $\angle$6 is congruent to $\angle$8 | Vertical Angle Theorem |
| 4 | $\angle$2 is congruent to $\angle$8 | Using Transitive Property |
Therefore, the alternate exterior angles are congruent.
Hence Proved.
## Alternate Exterior Angles Examples
Given below are some of the examples on alternate exterior angles.
### Solved Example
Question: Find the values of the angles b, c, d, e, f, g and h in the figure given below.
Solution:
Step 1: b is a supplement of 45$^o$.
Therefore, b + 45$^o$ =180$^o$ => b = 180$^o$ - 45$^o$ = 135$^o$
Step 2: b and c are vertical angles.
Therefore, c = b = 135$^o$
Step 3: d and 45$^o$ are vertical angles.
Therefore, d = 45$^o$
Step 4: d and e are alternate interior angles.
Therefore, e = d = 45$^o$
Step 5: f and e are supplementary angles.
Therefore, f + 45$^o$ = 180$^o$ => f = 180$^o$ - 45$^o$ = 135$^o$
Step 6: g and f are vertical angles.
Therefore, g = f = 135$^o$
Step 7: h and e are vertical angles.
Therefore, h = e = 45$^o$
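The bookkeeping in Steps 1-7 can be scripted; a tiny sketch (function name is mine) that reproduces the answers for the 45° example:

```python
def transversal_angles(given=45):
    """Solve for angles b..h (degrees) using the same relations as the
    worked example: supplements, vertical angles, alternate interior angles."""
    b = 180 - given   # Step 1: b is a supplement of the given angle
    c = b             # Step 2: c and b are vertical angles
    d = given         # Step 3: d and the given angle are vertical angles
    e = d             # Step 4: e and d are alternate interior angles
    f = 180 - e       # Step 5: f and e are supplementary
    g = f             # Step 6: g and f are vertical angles
    h = e             # Step 7: h and e are vertical angles
    return dict(b=b, c=c, d=d, e=e, f=f, g=g, h=h)
```

With `given=45` this returns 135° for b, c, f, g and 45° for d, e, h, matching the steps above.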
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-4-exponential-and-logarithmic-functions-exercise-set-4-4-page-490/105 | ## College Algebra (6th Edition)
Published by Pearson
# Chapter 4 - Exponential and Logarithmic Functions - Exercise Set 4.4: 105
#### Answer
118 ft The point (118,1) is on the graph of $f(x)$.
#### Work Step by Step
$f(x)=20(0.975)^{x}$. We want $f(x)=1$ (percent). Insert and solve for x:

$1=20(0.975)^{x}$ (divide both sides by 20)

$\frac{1}{20}=0.975^{x}$ (apply ln() to both sides)

$\ln 0.05=x\ln 0.975$ (divide both sides by $\ln 0.975$)

$x=\frac{\ln 0.05}{\ln 0.975}\approx 118.325104425\approx 118$ ft

At a depth of about x = 118 ft, there is $1\%$ of surface sunlight: $f(118)\approx 1$. The point (118,1) is on the graph of $f(x)$.
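The same computation in a couple of lines (assuming the exercise's model $f(x)=20(0.975)^{x}$):

```python
import math

# depth x (in ft) where light intensity is 1% of surface:
# solve 20 * 0.975**x = 1  =>  x = ln(0.05) / ln(0.975)
x = math.log(1 / 20) / math.log(0.975)
print(round(x, 6))  # about 118.325104, so roughly 118 ft
```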
https://math.stackexchange.com/questions/915679/how-find-all-positive-real-beta-such-a-finite-number-of-left-fracpq-s/924033 | # How find all positive real $\beta$ such A finite number of $\left|\frac{p}{q}-\sqrt{2}\right|<\frac{\beta}{q^2}$
Define sequence $\{a_{n}\}$,such $$a_{1}=1,a_{2}=2,a_{k+2}=2a_{k+1}+a_{k},k\ge 1$$
Find all positive real numbers $\beta$ such that there are only finitely many pairs of relatively prime integers $(p,q)$ satisfying $$\left|\dfrac{p}{q}-\sqrt{2}\right|<\dfrac{\beta}{q^2}$$ for which there is no $n$ with $q=a_{n}$.
and this problem is Germany National Olympiad 2013 last problem (2), see: http://www.mathematik-olympiaden.de/aufgaben/52/4/A52124b.pdf
My try: since $$a_{n+2}=2a_{n+1}+a_{n}\Longrightarrow r^2=2r+1\Longrightarrow r_{1}=\sqrt{2}+1,r_{2}=1-\sqrt{2}$$ so $$a_{n}=A(1+\sqrt{2})^{n-1}+B(1-\sqrt{2})^{n-1},a_{1}=1,a_{2}=2$$ so $$A+B=1,A(1+\sqrt{2})+B(1-\sqrt{2})=2\Longrightarrow A=\dfrac{2+\sqrt{2}}{4},B=\dfrac{2-\sqrt{2}}{4}$$ so $$a_{n}=\dfrac{2+\sqrt{2}}{4}(1+\sqrt{2})^{n-1}+\dfrac{2-\sqrt{2}}{4}(1-\sqrt{2})^{n-1}$$ from which one can tell whether a positive integer $n$ with $q=a_{n}$ exists.
other hand $$\left|\frac{p}{q}-\sqrt{2}\right| = \frac{|p^2-2q^2|}{q^2\left|\frac{p}{q} + \sqrt{2}\right|}$$ Now we can't use the pell equation,because this problem is finite number of relatively prime integers $(p,q)$
so how solve it?
I have read this solution,It's Nice: How find the value $\beta$ such $\left|\frac{p}{q}-\sqrt{2}\right|<\frac{\beta}{q^2}$
I think we can find other methods to solve this problem?
This is a different problem
I don't know why, but it is said that users in China can't comment; maybe this post explains why, so I have said it there: http://meta.math.stackexchange.com/questions/16661/mse-requires-javascript-from-another-domain-which-is-blocked-or-failed-to-load
• How you can fail to link this to math.stackexchange.com/q/915540 and to the substantial information you received there, escapes me... – Did Sep 1 '14 at 7:02
• Why can't you comment? – Did Sep 1 '14 at 7:12
• I would guess that excluding the $a_n$:s means that $|p^2-2q^2|\ge2$. If correct proving it should not be too difficult (basically you need to show that all the solutions of the Pell equation have $q=a_n$ for some $n$). Using the theory of units of rings of integers of real quadratic extensions of $\Bbb{Q}$ leads to it, but that is probably not an allowed piece of theory. I would try "infinite descent" (or induction): if $p_n^2-2q_n^2=1$, then multiplying $(p_n-q_n\sqrt2)$ by $(\sqrt2 +1)$ gives a "smaller" solution. I'm off air, so cannot pursue this now. – Jyrki Lahtonen Sep 1 '14 at 7:46
• Jyrki's comment above has the right idea. If you can prove this then take $p,q$ s.t. $|p^2 - 2q^2| = 2$ and show that you have an infinite number of solutions for $\beta \geq \frac{1}{\sqrt{2}}$. Now assume you also have infinite number of solutions for $\beta < \frac{1}{\sqrt{2}}$ and derive a contradiction. – Winther Sep 8 '14 at 17:18
According to your computations, $$\left|\frac{p}{q}-\sqrt2\right| = \frac{\left|\frac{p^2}{q^2}-\sqrt2^2\right|}{\frac{p}{q}+\sqrt2} = \frac1{q^2}\cdot\frac{|p^2-2q^2|}{2\sqrt2+\Big(\frac{p}{q}-\sqrt2\Big)}.$$
The condition $q\ne a_n$ excludes the solutions of the Pellian equation $p^2-2q^2=\pm1$. But allows $p^2-2q^2=2$.
I. First we show that there are only finitely many good pairs $(p,q)$ for $\beta<\frac1{\sqrt2}$.
Suppose that $\beta<\frac1{\sqrt2}$, and let $\varepsilon=\frac2\beta-2\sqrt2$. Consider a good pair $(p,q)$. If $q>\sqrt{\beta/\varepsilon}$ then $|\frac{p}{q}-\sqrt2|\le \frac{\beta}{q^2}<\varepsilon$, and thus $$\frac\beta{q^2} \ge \left|\frac{p}{q}-\sqrt2\right| \ge \frac1{q^2}\cdot\frac{|p^2-2q^2|}{2\sqrt2+\Big|\frac{p}{q}-\sqrt2\Big|} > \frac1{q^2}\cdot\frac2{2\sqrt2+\varepsilon} = \frac\beta{q^2},$$ contradiction. Hence, the set of possible values $q$ is bounded by $\sqrt{\beta/\varepsilon}$ and thus finite.
II. Now we construct infinitely many good pairs ($p,q)$ for $\beta\ge\frac1{\sqrt2}$.
The Pellian equation $p^2-2q^2=2$ has infinitely many solutions; for such pairs $\frac{p}{q}>\sqrt2$, so $$\left|\frac{p}{q}-\sqrt2\right| = \frac1{q^2}\cdot\frac{|p^2-2q^2|}{2\sqrt2+\Big(\frac{p}{q}-\sqrt2\Big)} < \frac1{q^2}\cdot\frac2{2\sqrt2+0} \le \frac\beta{q^2}.$$
Therefore, the answer is $\beta\in\left(0,\frac1{\sqrt2}\right)$.
Not a complete answer, but I will show that if $(p,q)$ are solution to $|p^2 - 2q^2| = 1$ then $q = a_n$ for some $n$ (as Jyrki said in the comments).
The fundamental solution of the Pell equation $|p^2 - 2q^2| = 1$ is $(p,q) = (1,1)$. All the other solutions can therefore be written on the form
$$p_m + q_m \sqrt{2} = (\sqrt{2} + 1)^m$$
From this we can extract a recursion formula for $(p_m,q_m)$. We have
$$p_{m+1} + q_{m+1} \sqrt{2} = (\sqrt{2} + 1)(p_{m}+ q_{m} \sqrt{2})$$
Multiplying out, and using that $p_m,q_m$ are integers and $\sqrt{2}$ is irrational (so $A + \sqrt{2}B = 0 \implies A=B=0$ when $A,B$ are integers), we get
$$p_{m+1} - p_m = 2q_m$$ $$q_{m+1} - q_{m} = p_{m}$$
Using $(q_{m+2} - q_{m+1}) - (q_{m+1} - q_{m}) = p_{m+1}-p_m = 2q_m$ we can eliminate $p_m$ to get a recursion relation for $q_m$ only
$$q_{m+2} = 2q_{m+1} + q_m$$
which is the same equation as for $a_n$. From $q_1 = p_1 = 1$ we get $q_2 = 2$ so the initial conditions are also the same. | 2019-07-19 22:53:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9242813587188721, "perplexity": 252.33216230362498}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526386.37/warc/CC-MAIN-20190719223744-20190720005744-00433.warc.gz"} |
https://zbmath.org/?q=ut%3Apanel+clustering+algorithm | ## Found 7 Documents (Results 1–7)
### Identifying latent group structures in nonlinear panels. (English)Zbl 1464.62522
MSC: 62P20 62H30 62M10
Full Text:
### Numerical analysis for a macroscopic model in micromagnetics. (English)Zbl 1088.78009
MSC: 78M25 65K10 65N30 49M30 78M50 65N15 65N50 35Q60 82D40
Full Text:
### Approximation of boundary element operators by adaptive $${\mathcal H}^2$$-matrices. (English)Zbl 1097.65119
Cucker, Felipe (ed.) et al., Foundations of computational mathematics: Minneapolis 2002 (FoCM 2002). Selected papers based on the plenary talks presented at FoCM 2002, Minneapolis, MN, USA, August 5–14, 2002. Cambridge: Cambridge University Press (ISBN 0-521-54253-7/pbk). London Mathematical Society Lecture Note Series 312, 58-75 (2004).
MSC: 65N38 35J25 65F30 65F25
### Fast cluster techniques for BEM. (English)Zbl 1035.65142
MSC: 65N38 35J05 65N12
Full Text:
### The panel clustering method in 3-D BEM. (English)Zbl 0901.65070
Papanicolaou, George (ed.), Wave propagation in complex media. New York, NY: Springer. IMA Vol. Math. Appl. 96, 199-224 (1998).
MSC: 65N38 35J05
### On the efficient realization of sparse matrix techniques for integral equations with focus on panel clustering, cubature and software design aspects. (English)Zbl 0884.65114
Wendland, Wolfgang L. (ed.), Boundary element topics. Proceedings of the final conference of the priority research programme Boundary element methods 1989-1995 of the German Research Foundation, October 2–4, 1995 in Stuttgart, Germany. Berlin: Springer. 51-75 (1997).
MSC: 65N38 35J40 65N15
https://gamedev.stackexchange.com/questions/38358/how-do-i-calculate-the-boundary-of-the-game-window-after-transforming-the-view/40435 | # How do I calculate the boundary of the game window after transforming the view?
My Camera class handles zoom, rotation, and of course panning. It's invoked through SpriteBatch.Begin, like so many other XNA 2D camera classes. It calculates the view Matrix like so:
public Matrix GetViewMatrix() {
return Matrix.Identity
* Matrix.CreateTranslation(new Vector3(-this.Spatial.Position, 0.0f))
* Matrix.CreateTranslation(-( this.viewport.Width / 2 ), -( this.viewport.Height / 2 ), 0.0f)
* Matrix.CreateRotationZ(this.Rotation)
* Matrix.CreateScale(this.Scale, this.Scale, 1.0f)
* Matrix.CreateTranslation(this.viewport.Width * 0.5f, this.viewport.Height * 0.5f, 0.0f);
}
I was having a minor issue with performance, which after doing some profiling, led me to apply a culling feature to my rendering system. It used to, before I implemented the camera's zoom feature, simply grab the camera's boundaries and cull any game objects that did not intersect with the camera.
However, after giving the camera the ability to zoom, that no longer works. The reason why is visible in the screenshot below. The navy blue rectangle represents the camera's boundaries when zoomed out all the way (Camera.Scale = 0.5f). So, when zoomed out, game objects are culled before they reach the boundaries of the window. The camera's width and height are determined by the Viewport properties of the same name (maybe this is my mistake? I wasn't expecting the camera to "resize" like this).
What I'm trying to calculate is a Rectangle that defines the boundaries of the screen, as indicated by my awesome blue arrows, even after the camera is rotated, scaled, or panned.
Here is how I've more recently found out how not to do it:
public Rectangle CullingRegion {
get {
Rectangle region = Rectangle.Empty;
Vector2 size = this.Spatial.Size;
size *= 1 / this.Scale;
Vector2 position = this.Spatial.Position;
position = Vector2.Transform(position, this.Inverse);
region.X = (int)position.X;
region.Y = (int)position.Y;
region.Width = (int)size.X;
region.Height = (int)size.Y;
return region;
}
}
It seems to calculate the right size, but when I render this region, it moves around which will obviously cause problems. It needs to be "static", so to speak. It's also obscenely slow, which causes more of a problem than it solves.
What am I missing?
• I should add that no matter what the zoom level, the top-left corner of the camera's bounds (the blue box) is (0,0). – Cypher Oct 7 '12 at 19:00
## 1 Answer
Ask your Game class what the properties of Window.ClientBounds.Width and Window.ClientBounds.Height are.
• This answer is "sort of" right. Just be careful about introducing dependencies / static / global-state when you implement this. – ashes999 Dec 21 '12 at 16:55
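For what it's worth, a common way to get a stable world-space culling rectangle is to run the four viewport corners through the inverse of the view transform and take their axis-aligned bounding box. The sketch below is in Python rather than the thread's C# (all names are mine, and the matrix composition mirrors `GetViewMatrix` using column-vector conventions), so treat it as pseudocode for the idea, not XNA API usage:

```python
import math

# Screen -> world transform built by undoing each step of GetViewMatrix in
# reverse order, using 3x3 column-vector affine matrices. Names illustrative.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def scale(k):
    return [[k, 0, 0], [0, k, 0], [0, 0, 1]]

def apply(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def culling_rect(cam_pos, rotation, zoom, view_w, view_h):
    cx, cy = view_w / 2, view_h / 2
    inv = translate(*cam_pos)                  # undo CreateTranslation(-Position)
    inv = mat_mul(inv, translate(cx, cy))      # undo CreateTranslation(-center)
    inv = mat_mul(inv, rotate(-rotation))      # undo CreateRotationZ(Rotation)
    inv = mat_mul(inv, scale(1 / zoom))        # undo CreateScale(Scale)
    inv = mat_mul(inv, translate(-cx, -cy))    # undo CreateTranslation(+center)
    corners = [(0, 0), (view_w, 0), (0, view_h), (view_w, view_h)]
    xs, ys = zip(*(apply(inv, c) for c in corners))
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# Zoomed out to 0.5 with no pan or rotation: the visible world rect doubles.
print(culling_rect((0, 0), 0.0, 0.5, 800, 600))
# -> (-400.0, -300.0, 1600.0, 1200.0)
```

The AABB over-covers when the camera is rotated, but an oversized culling region only costs a few extra draws; it never clips visible objects.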
http://clay6.com/qa/2347/if-ax-2-2hxy-by-2-2gx-2fy-c-0-then-show-that-large-frac-frac-normalsize-1
# If $ax^2+2hxy+by^2+2gx+2fy+c=0$,then show that $\large \frac{dy}{dx}.\frac{dx}{dy}\normalsize =1$
Toolbox:
• A function $f(x,y)$ is said to be implicit if it is jumbled in such a way,that it is not possible to write $y$ exclusively s a function of $x$.
• $\large\frac{d}{dx}$$\phi(y)=\large\frac{d}{dy}$$\phi(y).\large\frac{dy}{dx}$
Step 1:
$ax^2+2hxy+by^2+2gx+2fy+c=0$
Differentiating w.r.t $x$ we get,
Apply product rule to differentiate $xy$
$2ax+2h\left[x\frac{dy}{dx}+y\right]+2by\frac{dy}{dx}+2g+2f\frac{dy}{dx}=0$
$\Rightarrow \frac{dy}{dx}\left[2hx+2by+2f\right]=-(2ax+2hy+2g)$
$\frac{dy}{dx}=-\frac{2ax+2hy+2g}{2hx+2by+2f}$
Step 2:
Differentiating w.r.t $y$ we get,
$2ax\frac{dx}{dy}+2h\left[y\frac{dx}{dy}+x\right]+2by+2g\frac{dx}{dy}+2f=0$
Therefore $\frac{dx}{dy}\left[2ax+2hy+2g\right]=-(2hx+2by+2f)$
$\Rightarrow \frac{dx}{dy}=-\frac{2hx+2by+2f}{2ax+2hy+2g}$
Therefore $\frac{dy}{dx}\times\frac{dx}{dy}=\left(-\frac{2ax+2hy+2g}{2hx+2by+2f}\right)\times\left(-\frac{2hx+2by+2f}{2ax+2hy+2g}\right)=1$
Hence proved.
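A quick numeric spot-check (mine, not part of the original solution) confirms that the two derived expressions are reciprocals wherever both denominators are nonzero:

```python
# Numeric spot-check of dy/dx * dx/dy = 1 using the two expressions derived
# above. Coefficients and the point are arbitrary illustrative values; the
# point need not lie on the conic, since this only verifies that the two
# formulas are reciprocals of each other.
a, h, b, g, f = 1.0, 2.0, 3.0, 4.0, 5.0
x, y = 0.7, -1.3

dy_dx = -(2*a*x + 2*h*y + 2*g) / (2*h*x + 2*b*y + 2*f)
dx_dy = -(2*h*x + 2*b*y + 2*f) / (2*a*x + 2*h*y + 2*g)

product = dy_dx * dx_dy
print(product)  # 1 up to floating-point rounding
```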
https://tdelubac.com/2021/07/12/have-a-doughnut/

# Have a doughnut
You may have heard about Planetary Boundaries (PB for short) and that we have already crossed a few. What are these boundaries? How many are there? Which ones have we broken already? "But you promised me a doughnut!" you might shout at this point. Hold on, it is coming.
PB were introduced back in 2009 by Johan Rockström, then based at the Stockholm Resilience Center. They were subsequently revised in an article published in Science in 2015. Have a look at Johan Rockström talking about his work already more than 10 years ago:
## Planetary boundaries and the Earth system
The planetary boundaries framework defines a safe operating space for humanity based on the intrinsic biophysical processes that regulate the stability of the Earth system.
Will Steffen et al. (2015)
The PB framework is an attempt at assessing the risk that humans destabilize the Earth System (ES for short). PB are not thresholds in the sense that crossing one or more of the PB doesn't result in an immediate change to the ES. However they define a safe operating space, within which, according to the current scientific knowledge, there is a very limited risk of perturbation of the ES. Beyond the safe operating space is a zone of uncertainty reflecting the limitations of our current scientific understanding as well as the stochastic behavior of the ES. Operating in this zone is risky as follows from the precautionary principle. Outside of the zone of uncertainty is the danger zone where current scientific knowledge indicates high risks of impacting the ES.
There are nine identified planetary boundaries, of which seven have been measured (in the publication) at the global scale. Some of these boundaries are interconnected, meaning that operating beyond the safe zone of one of the PB might trigger processes that impact other PB. For each measurable PB there is at least one control variable that is measured. Below is a graph showing where we stand with respect to the nine boundaries and an explanation for each of them. Two of them are considered core boundaries as they play a greater role in the regulation of the ES and are connected to the other PB.
• Climate change (core boundary): At this point this shouldn't be a surprise: human activity is having an impact on the climate. The atmospheric concentration of CO₂ is used as a proxy to measure this impact. As of 2015 it was estimated that the safe zone ended at 350 parts per million (ppm) and that the danger zone started at 450 ppm. Have a look at today's concentration.
• Biosphere integrity (core boundary): The biosphere integrity is measured by the extinction rate, i.e. the speed at which species disappear. The background rate (the rate without accounting for human pressure) is estimated to be one extinction per million species-years (E/MSY): if you tracked one million species for a year, you would expect one to go extinct. The end of the safe zone is at 10 E/MSY, while the beginning of the danger zone is at 100 E/MSY. The current extinction rate is hard to measure, leading to important uncertainty, but according to the article it is between 100 and 1000 E/MSY.
• Stratospheric ozone depletion: Depending on your age, you might have heard of this one, as we went pretty far beyond the boundary in the 1980s, resulting in the ozone hole. This is an example of a PB that has been transgressed in the past where proper action brought us back into the safe zone. It corresponds to the concentration of ozone in the stratosphere.
• Ocean acidification: This is tightly linked to CO₂ emissions, as part of these emissions are dissolved in the ocean, resulting in its acidification. The main concern is the incapacity of organisms such as plankton and shellfish to develop their shells if the acidity gets too high, in turn impacting the whole food chain.
• Biogeochemical flows: We have mentioned the carbon cycle in a previous post, but it turns out that other chemical elements also have their own cycles that can be affected by human activities. In particular, the heavy use of fertilizers in agriculture is impacting the abundance of phosphorus (P) and nitrogen (N) in the environment, which in turn affects water ecosystems.
• Freshwater use: This is the global maximum amount of water withdrawn from rivers, lakes, reservoirs and renewable groundwater stores. There are important regional variations, but as of 2015 the global freshwater consumption was estimated at 2600 km³ per year. The end of the safe zone is estimated at 4000 km³ per year and the beginning of the danger zone at 6000 km³ per year.
• Land-system change: The area of forest land as a percentage of the original forest cover. It is linked both to climate change (as the forest is a carbon sink) and to the biosphere integrity (as deforestation reduces the habitat of many species). As of 2015 the global coverage was 62% with a safe zone ending at 75% and a danger zone starting at 54%.
• Atmospheric aerosol loading: Aerosols are particles in suspension in the air. They have a notorious impact on human health (the publication mentions 7.2 million related deaths per year) and can also impact the ES. The quantity of aerosol can be measured locally by looking at the optical depth of the atmosphere, but the publication provides no global average.
• Novel entities: These correspond to new chemicals, engineered materials and engineered organisms that did not previously exist in the ES, together with the anthropogenic release of natural elements like heavy metals. Given the complexity of the problem (the article mentions more than 100,000 commercialized substances, not including nanomaterials and plastic polymers), there is currently no simple control variable to monitor.
As of 2015 we had entered the danger zone for two planetary boundaries: biosphere integrity and biogeochemical flows. We had exited the safe zone for two others: climate change and land-system change.
Building on the notion of planetary boundaries and adding the notion of a social foundation, Kate Raworth came up with the concept of Doughnut Economics. Have a look at her powerful TED talk. Doughnut economics was first discussed in an Oxfam publication called A safe and just space for humanity. In this article, she gives a definition of sustainable development, which is the end goal of doughnut economics:
Achieving sustainable development means ensuring that all people have the resources needed – such as food, water, health care, and energy – to fulfill their human rights. And it means ensuring that humanity’s use of natural resources does not stress critical Earth-system processes – by causing climate change or biodiversity loss, for example – to the point that Earth is pushed out of the stable state, known as the Holocene, which has been so beneficial to humankind over the past 10,000 years.
Kate Raworth
The social foundation was later refined based on the Sustainable Development Goals (SDG) adopted by the United Nations in 2015. It encompasses 12 needs to fulfill people's economic and social rights:
Source: Doughnut Economics
As depicted on the chart above, the social foundation creates a lower bound in the usage of resources, while the planetary boundaries create an upper bound. In between the two lies what Kate Raworth calls the safe and just space for humanity.
## Can we all feed on that doughnut?
The next question becomes: Can we attain the social foundation for all without crossing the planetary boundaries? In other words, does this safe and just space exists? In an article called A good life for all within planetary boundaries published in Nature Sustainability back in 2018 (the article is easily accessible on the web), Daniel O'Neill and his team explore this question.
Their approach consists in looking at the level of individual countries to see how each of them is performing with respect to the social foundation as well as to the PB. The hope is to identify countries that would be close to or within the safe and just space. More than 150 countries are included in their results.
To quantify the PB at the country level, a top-down approach is used where PB are downscaled according to population size. This is a per-capita approach to the biophysical boundaries. In this way, they are able to measure four out of the seven quantified PB (remember there are nine PB, of which seven have been quantified) for most countries. As they consider the biogeochemical flows for phosphorus and nitrogen independently, they obtain five boundaries. On top of these, they add two measurements that are not part of the original PB framework. Here is the final list:
• Climate change (PB)
• Biogeochemical flows - nitrogen (PB)
• Biogeochemical flows - phosphorus (PB)
• Freshwater use (PB)
• Land-system change (PB)
• Ecological footprint: Measuring the amount of land and sea required to produce the resources consumed and to absorb the CO₂ emitted by a population. This is a widely used estimator, with worldwide data provided by the Global Footprint Network. It is tightly coupled to climate change as it includes CO₂ emissions.
• Material footprint: Measuring the quantity of raw materials such as minerals, fossil fuels and biomass that are used to support the consumption of goods and services by a population. Also coupled to climate change as it includes fossil fuels extraction.
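The per-capita downscaling described above can be sketched with a toy calculation; the linear population share and all numbers below are my own illustrative simplification, not the article's exact method:

```python
# Toy per-capita downscaling of a global boundary: a country's share of the
# global budget is taken proportional to its share of world population.
# All numbers here are illustrative placeholders, not values from the paper.
GLOBAL_FRESHWATER_BOUNDARY_KM3 = 4000.0   # end of the safe zone, km^3 per year
WORLD_POPULATION = 7.3e9                  # rough mid-2010s figure

def national_boundary(country_population):
    """Downscale the global freshwater boundary by population share."""
    share = country_population / WORLD_POPULATION
    return GLOBAL_FRESHWATER_BOUNDARY_KM3 * share

# A country holding 1% of the world population gets 1% of the budget.
print(national_boundary(73e6))  # about 40 km^3 per year
```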
For the social foundation, the article defines and quantifies a list of social thresholds, similar to the doughnut economics ones with slight differences. It includes:
• Life satisfaction: A self reported estimate of quality of life based on the Cantril ladder with data provided by the World Happiness Report. The threshold is set to 6.5 on a scale from 0 to 10.
• Healthy life expectancy: Life expectancy at birth with no major disease or infirmity. The threshold is set to 65 years of life.
• Nutrition: Amount of calories available daily. The threshold is set to 2500 kcal per day.
• Sanitation: Fraction of the population having access to sanitation facilities. The threshold is set to 95%.
• Income: Fraction of the population above the poverty threshold, taken at $1.90 per day based on the World Bank data estimate. The threshold is set to 95%.
• Education: Fraction of the population attending secondary education. The threshold is set to 95%.
• Social support: Self reported availability of relatives or friends to support in case of need. The threshold is set to 90%.
• Democratic quality: Based on the Worldwide Governance Indicators.
• Equality: Estimation of the income inequalities based on the Gini coefficient.
• Employment: One minus the fraction of the population unemployed. The threshold is set to 94% (or 6% unemployment).
An insightful interactive website allows you to have a look at how your country performs with respect to both the social foundation and the PB. On the graph below, countries are reported as a function of how many boundaries they cross (x-axis) and how many social thresholds they attain (y-axis). For each country, the size of the blue circle indicates the population size. The safe and just space is situated at the top left corner where all the social thresholds are attained but no PB is crossed. As one can see, no country is currently getting close to that space.
The fact that no country today is able to provide the social foundation without breaking five or more boundaries is not good news, but it is not the same as saying that the safe and just space does not exist. It is good to keep in mind that the boundaries as defined in the graph above are not strictly equivalent to the ones of the PB framework. In particular, it is likely that if a country performs poorly with respect to climate change, it will also perform poorly on the ecological and material footprints. Similarly, usage of phosphorus and nitrogen is likely to be correlated, as both result mainly from the use of fertilizers. I believe this explains the vertical "cut", with most countries crossing five or more boundaries.
Focusing on climate change, this last graph shows the correlations between the CO₂ emissions of a country and the different social needs. Every black dot corresponds to a given country. It is interesting to note that there is no correlation between the CO₂ emissions of a country and its employment rate (bottom-middle graph). This indicates that unemployment shouldn't be used as an argument either for or against lowering the carbon intensity of the economy.
We can see that in many cases the relations are not linear, and that some countries are able to achieve the social need thresholds (and above) with little CO₂ emissions. The challenge is of course to meet all needs while staying below the PB. Thus the remaining question is: Have the different countries sampled all of the parameter space? If not, we can still explore in search of the safe and just space for humanity. Closing with a quote from the publication:
If all people are to lead a good life within planetary boundaries, then our results suggest that provisioning systems must be fundamentally restructured to enable basic needs to be met at a much lower level of resource use.
Daniel O'Neill et al. (2018)
https://chemistry.stackexchange.com/questions/5621/z-effective-charge-and-ionization-energy/5623

# Z* effective charge and Ionization Energy
I'm trying to figure out the patterns for ionization energies. I am familiar with the periodic trend; however, things become quite different once we go past the 1st I.E. For example, Na has an I.E.(1) of 495.8 kJ/mol, while its second I.E. rockets up to 4562 kJ/mol, while atoms towards the right are much lower than this. The trend says that the I.E. increases up and to the right of the periodic table, which is not the case here.
My point is, in order to get a better estimate, would it be safe to say that the Zeff charge and atom size relate to the I.E?
Example: Effective Charges
Mg = 2, Al = 1, S = 4, Si = 2, Na = 1
We see that the strongest pull towards the charged center would be from Al and Na in this case. However, Al, being the smaller atom, would require more energy to remove the electron from its valence shell.
Now in the case of first Ionization Energy we have:
Mg = 3, Al = 2, S = 5, Si = 3, Na = 2
In this case, Na now has reduced its size due to the fact that it jumped from the n=3 to the n=2 level, and Al also reduced in size but is still a bigger atom than Na, per our trend.
My question is, is this approach fairly accurate or should I be looking somewhere else?
Using the common idiom: "full subshells are stable"
It is a little more compact to use the incorrect explanation and then correct it than to explain in terms of the correct one. Also, when you hear this incorrect explanation, you will understand what is meant.
Yes, size is a factor (thus the "up" part- IE is higher in lower periods), but the observed anomaly in second-ionization energies in $\ce{Na}$ and $\ce{Al}$ is better related to removal of an electron from a full subshell.
$\ce{Na}$ has an electron configuration of $\ce{[Ne] 3s^1}.$ The first ionization of $\ce{Na}$ makes it $\ce{[Ne]}$, the same as a noble gas: a full 2p subshell. Because full subshells are stable, it takes a lot of energy to remove the first electron from it. $\ce{Al}$ is analogous; the second ionization means removing an electron from a full subshell.
The same is observed in the first ionization energies (in kJ/mol): $\ce{Na}$: 495, $\ce{Mg}$: 737, $\ce{Al}$: 577. Ionization of $\ce{Mg}$ removes an electron from a full $\ce{3s}$ subshell, so it is a little higher than the trend.
There are many factors that contribute to exact values, but these are difficult to predict, which is why they taught a simple version in your class.
Correcting the myth
While it is common to refer to full subshells as being more stable, that isn't really what is going on. In reality, the next electron added is destabilized. This occurs because of shielding: outer electrons feel repulsion from inner electrons, thus outer electrons are easier to remove than inner ones. The saying that "half-filled subshells are stable" is also used. Again, it is the next added electron that is destabilized, but in this case because of the energy needed to spin-pair the two electrons.
• I'm glad you didn't provide the explanation based on full/half-full shells alone without qualifying it and providing a correct explanation. I think the pedagogical value of that explanation is next to nil, and it may even be harmful if students find it satisfying and hence don't probe more deeply for more rigorous and less arbitrary rationalizations. – Greg E. Jul 19 '13 at 7:35
$Z_{eff}$ is certainly a major factor in determining ionization energies, however atomic and ionic radius probably shouldn't be viewed as having a direct causative relationship to ionization energy. It's more correct to say that ionization energy and atomic/ionic radius have some of the same underpinnings (namely, $Z_{eff}$ and various electron-electron interactions). Because many of the same underlying forces are at work, there is some correlation (i.e., ionization energy increases in the same direction as atomic radius decreases, certain specific anomalies aside), but don't confuse that correlation with causation.
As one moves down a group on the periodic table, $Z_{eff}$ obviously remains constant, however the number of core electrons shielding the nucleus increases, and consequently the valence level electrons become increasingly energetic as their distance from the nucleus grows. This fact largely accounts for the increase in atomic radius and the decrease in ionization energy that occurs as one moves down along any given group on the periodic table.
Conversely, as one moves right along a period, the number of core electrons remains constant, while the $Z_{eff}$ increases. It's reasonable to expect, therefore, that valence electrons will experience a stronger electrostatic attraction to the nucleus as one moves right along a period, causing both an increase in ionization energy and a decrease in atomic radius. The ionization energy trend mostly conforms to that expectation, with the notable exceptions of transitioning from group IIA to group IIIA, and from group VA to group VIA, where ionization energy (perhaps unexpectedly) drops. To explain these exceptions, compare the electron configurations:
• When removing the first group IIIA valence electron, it is being removed from a $p$ orbital, while the first group IIA valence electron would be removed from an $s$ orbital. Electrons in $p$ orbitals are somewhat more energetic due to the nuclear charge being partially shielded by the electrons of the preceding $s$ orbital (in addition to more complex quantum mechanical effects), hence they are easier to remove.
• The first group VIA electron to be removed is paired with another electron in the same $p$ orbital, while all $p$ orbital electrons of group VA elements are unpaired (in accordance with Hund's rule). Electron pairing causes some mutual electron-electron repulsion, making these electrons more energetic, resulting in a drop in ionization energy for group VI by comparison to group V.
As you move further down the periodic table, the contributions of electrons in $d$ and $f$ orbitals become significant, the energy gaps between subsequent principal energy levels get narrower, and the ionization energy trend becomes more strictly linear for main group elements (specifically, the exceptions I described above no longer apply once you reach principal energy level five).
• "As one moves down a group on the periodic table, Zeff obviously remains constant" - unless you are using Clementi's rules. Come to think of it, don't Slater's rules give you different values for Al and Ga? – bobthechemist Jul 19 '13 at 9:03
• @bobthechemist, I was referring to the most rudimentary calculation by taking the difference between protons and core electrons, which I thought would be sufficient for explaining the trend at the gen. chem level without introducing additional complexity. I'm aware of Slater's rules, but this is the first I've ever heard of Clementi's rules, so thanks for that and I'll investigate further. – Greg E. Jul 19 '13 at 9:13
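Since Slater's rules come up in the comments, here is a minimal sketch of how $Z_{eff}$ is estimated for a valence s/p electron. It deliberately omits the 1s special case (0.30) and the d/f rules, and the grouping convention is spelled out in the comments, so treat it as illustrative rather than a complete implementation:

```python
# Minimal Slater's-rules estimate of Z_eff for the outermost s/p electron.
# `shells` lists electron counts per Slater group from innermost out, with s
# and p of the same n merged (e.g. Na: [2, 8, 1] for 1s | 2s2p | 3s).
# Omitted for brevity: the 1s special case (0.30) and the d/f rules, so this
# is only meant for an s/p valence electron beyond the first shell.
def slater_zeff_sp(Z, shells):
    *inner, valence = shells
    screening = 0.35 * (valence - 1)          # other electrons in same group
    if inner:
        screening += 0.85 * inner[-1]         # electrons in shell n-1
        screening += 1.00 * sum(inner[:-1])   # shells n-2 and deeper
    return Z - screening

print(slater_zeff_sp(11, [2, 8, 1]))   # Na 3s -> about 2.2
print(slater_zeff_sp(13, [2, 8, 3]))   # Al 3p -> about 3.5
```

The Na value of roughly 2.2 matches the textbook Slater result, noticeably larger than the naive "protons minus core electrons" value of 1 used in the answer above.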
https://www.physicsforums.com/threads/find-number-of-turns-in-the-solenoid.351296/

# Homework Help: Find number of turns in the solenoid
1. Nov 2, 2009
### jackxxny
1. The problem statement, all variables and given/known data
I have the following information about the problem.
i have the speed of an electron = v
the diameter of a circular path = d
the length of a solenoid = x
the current that the solenoid carried = I
I need to find the number of turns in the solenoid.
2. Relevant equations
i used the following equations
QvB = mv²/r
B = μ₀NI/l
3. The attempt at a solution
then i put "B" in the first equation and i solve for N. Is that correct?
Last edited: Nov 2, 2009
2. Nov 3, 2009
### rock.freak667
Re: solenoid
Yes, do that.
3. Nov 3, 2009
### jackxxny
Re: solenoid
the only weird thing is that i get a number like
.121
and it is not a turn .121??? or it can be a valid answer?
4. Nov 3, 2009
### rock.freak667
Re: solenoid
no, you can't really get less than one. Post the numbers and your working and we'll see if you went wrong anywhere.
5. Nov 3, 2009
### jackxxny
Re: solenoid
v=1570 m/s
d = 1.57 cm
x = 35 cm
I = 2.61 A
what I did is :
N=(vmx)/(rQI*4*pi*10^-7)
6. Nov 3, 2009
### rock.freak667
Re: solenoid
I get the same answer as you, so I guess you'll need to put what you get and round up and put N=1
7. Nov 3, 2009
### jackxxny
Re: solenoid
the problem really states....
....that the electron follows a circular path of diameter 1.57 cm near the center of an evacuated solenoid of length 35 cm .
does that change anything?
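Running the thread's numbers (with the standard electron mass and charge; this only reproduces the posters' arithmetic and does not settle the physical setup question) confirms the puzzling result of roughly 0.12 turns:

```python
import math

# Reproduce the thread's calculation: QvB = mv^2/r gives B = mv/(Qr), and
# B = mu0*N*I/l gives N = B*l/(mu0*I). Constants are standard values.
m_e = 9.109e-31           # electron mass, kg
Q = 1.602e-19             # elementary charge, C
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

v = 1570.0                # m/s
r = 0.0157 / 2            # radius from the 1.57 cm diameter, m
l = 0.35                  # solenoid length, m
I = 2.61                  # current, A

B = m_e * v / (Q * r)     # field needed for the circular path
N = B * l / (mu0 * I)     # turns implied by the solenoid formula
print(N)  # roughly 0.12, i.e. less than one turn, as the thread found
```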
https://www.physicsforums.com/threads/help-to-understand-a-problem-integral.729245/

# Help to understand a problem, integral
1. Dec 19, 2013
### Fishingaxe
1. The problem statement, all variables and given/known data
http://imageshack.us/a/img197/305/yhmn.jpg [Broken]
What I want to do is calculate the area of the colored space. As you can see in the picture the function is -0.75x^2+3 for the graph.
What I did, since the colored area reaches up to y = 5 and not y = 3, is take
∫₀² (−0.75x² + 5) dx − ∫₀² (−0.75x² + 3) dx
and then multiply the answer by 2 to get the whole area, as both sides are identical. I realise now that it can't be the correct way to do it though, as when I type in "-0.75x^2+5" in the graph calculator the line at the x-axis changes. I would greatly appreciate help on how to solve this.
OBS: This was a problem on a test I had a few days ago; I can't imagine it going well. I don't know the results yet, but I'm just curious about this specific problem.
2. Relevant equations
3. The attempt at a solution
Last edited by a moderator: May 6, 2017
2. Dec 19, 2013
### Fishingaxe
It's a "-" sign inbetween the integrals. It gets messed up when I try to write it next to each other
3. Dec 19, 2013
### scurty
That won't work, graph the function $y = -\frac{3}{4}x^2+5$ to see why.
Try extending the shaded area to create a rectangle. Can you figure out a way to use this new area (which is easily calculated) and the integral of the function to calculate the original shaded area?
4. Dec 19, 2013
### Fishingaxe
Ye, I know it didn't work :s I don't know how to calculate the new area if I just color in the rest of the rectangle.
I would know how to calculate the function's area i.e the part I would color in, but the part that is already colored I don't know :s.
5. Dec 19, 2013
### scurty
Okay, think about it this way.
A = The shaded area you are trying to find
B = Area under the function and above the x-axis
C = Area of the rectangle
? + ? = ?
A = ?
6. Dec 19, 2013
### Fishingaxe
A = C-B?
What I don't know is how to find the area of the rectangle. I know this is supposed to be super easy as it was one of the easier problems on the exam I had.
7. Dec 19, 2013
### scurty
That's correct for A! Area of a rectangle is just length times width. You don't need to do any fancy integrating (although you could if you wanted to).
8. Dec 19, 2013
### HallsofIvy
Staff Emeritus
Seriously? The area of a rectangle is "width times height". From your graph it looks like the height is 5 and the width is 4.
You could also think of the upper limit of that rectangle as "y= 5" so the area of the rectangle is $\int_{-2}^2 5 dx$.
The distance from the parabola up to that line is $5- (-0.75x^2+ 3)= 2+ 0.75x^2$ so the area between the line and the parabola is the integral of that.
9. Dec 19, 2013
### Fishingaxe
The integral $\int_0^2(-0.75x^2+3)\,dx = \left[-0.75x^3/3+3x+c\right]_0^2$
= -0.75*2^3/3 + 3*2 + c - (0 + c) = 4 + c - c = 4 ae (area units); that is the area of the function (or half the function, right? since it is x = 0 → x = 2 and not x = -2 → x = 2).
Then what I did was just the same thing, calculating the half of the rectangle. So:
$\int_0^2(-0.75x^2+5)\,dx = \left[-0.75x^3/3+5x+c\right]_0^2$
= -0.75*2^3/3 + 5*2 + c - (0 + c) = 8 + c - c = 8 ae. Then I took 8 ae - 4 ae (half rectangle - half of the function area), which is 4 ae, then I multiplied it by 2 and got 8 ae for the entire colored space.
But I think I just should've skipped the -0.75x^2 part huh. I get it now, damnit. Ye makes a lot of sense lol, I must be the stupidest person alive loooooooooooooooooooooool, the most obvious things are the hardest for me to see.
10. Dec 19, 2013
### scurty
Okay, your notation is getting a little sloppy. First of all, you don't need "ae" for area units. The integral computes an exact number. The ae just makes the computations more complicated because they can be confused for variables if variables are present in your integral. Tack on the units, if needed, at the end of the problem. Second of all, when computing definite integrals, you can drop the integrating constant because it cancels out. You still get the correct answer if you include it, but there is no point in doing so.
I assume you meant to say $\int_0^2(-0.75x^2+3)\,dx = \left[-0.75x^3/3+3x+c\right]_0^2$. The computations would have been easier if you converted 0.75 to $\frac{3}{4}$, because $\int (-\frac{3}{4}x^2 + 3)\, dx = -\frac{1}{4}x^3 + 3x + C$. Now you don't have to worry about multiplying 0.75 by 8 and dividing by 3.
In any case, were you trying my approach or HallsofIvy's approach? The second integral is wrong for calculating the area of the rectangle. The correct integral would be $\displaystyle\int_{-2}^2 5 dx$, as HoI noted, which equals 20. 20 minus your answer of 8 from the first integral yields the correct answer of 12.
Doing it HallsofIvy's way yields $\displaystyle\int_{-2}^2 (2 + \frac{3}{4}x^2) dx = [2x+\frac{1}{4}x^3]_{x = -2}^{x = 2}$
11. Dec 19, 2013
### Fishingaxe
The reason I use "ae" is because my teacher told me to always include it in my calculations.
Ye, I realised it was wrong as noted, the second integral that is. I should've left out -3/4x^2 in that one to get the correct answer.
Also I know C cancels each other out but my teacher told me always to use it in my calculations also.
But yeah, this was a rly easy problem once I understood what I was doing wrong lol.
12. Dec 20, 2013
### HallsofIvy
Staff Emeritus
Yes, so there are several different ways you can do this:
1) The area of the 4 by 5 rectangle is 4(5)= 20 and the area under the parabola is $\int_{-2}^2 -(3/4)x^2+ 3 dx= \left[-(1/4)x^3+ 3x\right]_{-2}^2= (-(1/4)(8)+ 3(2))- (-(1/4)(-8)+ 3(-2))= (4)- (-4)= 8$. So the area between them is 20- 8= 12.
2) The area of the rectangle is $\int_{-2}^2 5\, dx= \left[5x\right]_{-2}^2= 5(2)- 5(-2)= 10+ 10= 20$. The area under the parabola is as before, so the area between them is 20- 8= 12.
3) The distance between the line, y= 5, and the parabola, $y= -(3/4)x^2+ 3$, is $5- (-(3/4)x^2+ 3)= 2+ (3/4)x^2$, so the area between them is $\int_{-2}^2 2+ (3/4)x^2\, dx= \left[2x+ (1/4)x^3\right]_{-2}^2= (2(2)+ (1/4)(8))- (2(-2)+ (1/4)(-8))= (4+ 2)- (-4- 2)= 6- (-6)= 12$.
Of course, we could have recognized that the graphs are symmetric about the y-axis so just done 0 to 2 and doubled that result. | 2017-08-16 21:02:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7960063219070435, "perplexity": 662.2719682100069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102393.60/warc/CC-MAIN-20170816191044-20170816211044-00683.warc.gz"} |
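Written out with the method-3 integrand, that symmetry shortcut gives the same result:

```latex
2\int_{0}^{2}\Bigl(2+\tfrac{3}{4}x^{2}\Bigr)\,dx
  \;=\; 2\Bigl[\,2x+\tfrac{1}{4}x^{3}\,\Bigr]_{0}^{2}
  \;=\; 2\,(4+2) \;=\; 12 .
```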
http://hal.in2p3.fr/in2p3-01348858 | # Measurement of the WZ production cross section in pp collisions at sqrt(s) = 13 TeV
Abstract : The WZ production cross section in proton-proton collisions at sqrt(s) = 13 TeV is measured with the CMS experiment at the LHC using a data sample corresponding to an integrated luminosity of 2.3 inverse femtobarns. The measurement is performed in the leptonic decay modes WZ to l nu l' l', where l, l' = e, mu. The measured cross section for the range 60 < m(l'l') < 120 GeV is sigma(pp to WZ) = 39.9 +/- 3.2 (stat) +2.9/-3.1 (syst) +/- 0.4 (theo) +/- 1.3 (lum) pb, consistent with the standard model prediction.
Document type :
Journal articles
Cited literature [47 references]
http://hal.in2p3.fr/in2p3-01348858
Contributor : Sylvie Flores
Submitted on : Monday, December 10, 2018 - 1:39:05 PM
Last modification on : Tuesday, November 19, 2019 - 2:40:07 AM
Long-term archiving on : Monday, March 11, 2019 - 4:07:02 PM
### File
1-s2.0-S0370269317300187-main....
Publisher files allowed on an open archive
### Citation
V. Khachatryan, M. Besançon, F. Couderc, M. Dejardin, D. Denegri, et al.. Measurement of the WZ production cross section in pp collisions at sqrt(s) = 13 TeV. Physics Letters B, Elsevier, 2017, 766, pp.268. ⟨10.1016/j.physletb.2017.01.011⟩. ⟨in2p3-01348858⟩
https://zbmath.org/?q=ut%3Athree-body+problems | ## Found 2,767 Documents (Results 1–100)
### Integrability of close encounters in the spatial restricted three-body problem. (English)Zbl 07570841
MSC: 70F07 70F16 70H20
Full Text:
Full Text:
### Computer assisted proof of drift orbits along normally hyperbolic manifolds. II: Application to the restricted three body problem. (English)Zbl 07526838
MSC: 37J25 37J40
Full Text:
### Impact of a Moon on the evolution of a planet’s rotation axis: a non-resonant case. (English)Zbl 07525725
MSC: 70M20 70F07
Full Text:
### Describing relative motion near periodic orbits via local toroidal coordinates. (English)Zbl 07525723
MSC: 70F07 70K43
Full Text:
### Closed-form perturbation theory in the restricted three-body problem without relegation. (English)Zbl 07525720
MSC: 70F07 70K45
Full Text:
Full Text:
### On Langmuir’s periodic orbit. (English)Zbl 07514048
MSC: 70F07 81S10
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
### Oscillatory motions and parabolic manifolds at infinity in the planar circular restricted three body problem. (English)Zbl 07496399
MSC: 37C29 37J46 70F07
Full Text:
Full Text:
### Variational existence proof for multiple periodic orbits in the planar circular restricted three-body problem. (English)Zbl 07479719
MSC: 70F07 70G75 37J46
Full Text:
Full Text:
### Rapid and accurate methods for computing whiskered tori and their manifolds in periodically perturbed planar circular restricted 3-body problems. (English)Zbl 1486.70053
MSC: 70F07 70K43 70F15
Full Text:
Full Text:
Full Text:
Full Text:
### On the stability of satellites at unstable libration points of Sun-planet-Moon systems. (English)Zbl 07428013
MSC: 70F07 37N05 70K20
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
### Transfers from the Earth to $$L_2$$ halo orbits in the Earth-Moon bicircular problem. (English)Zbl 1482.70010
MSC: 70F07 70F10 70H33
Full Text:
### Numerical confirmation of the existence of triple collision orbits inside the domain of the free-fall three-body problem. (English)Zbl 1482.70017
MSC: 70F16 70F07
Full Text:
MSC: 70F07
Full Text:
### About the restricted three-body problem with the Schwarzschild-de Sitter potential. (English)Zbl 07446957
MSC: 70F15 70F07
Full Text:
Full Text:
Full Text:
Full Text:
### On the metric stability and the Nekhoroshev estimate of the velocity of Arnold diffusion in a special case of the three-body problem. (English)Zbl 1486.70054
MSC: 70F07 70H05 70H14
Full Text:
### High-order polynomial continuation method for trajectory design in non-Keplerian environments. (English)Zbl 07434921
MSC: 70F07 70K42
Full Text:
### Triple collision orbits in the free-fall three-body system without binary collisions. (English)Zbl 07434919
MSC: 70F16 70F07
Full Text:
### Stability analysis of apsidal alignment in double-averaged restricted elliptic three-body problem. (English)Zbl 07434918
MSC: 70F07 70F15 70H14
Full Text:
### Energy-momentum conserving integration schemes for molecular dynamics. (English)Zbl 1479.74128
MSC: 74S20 74A25
Full Text:
### An analytical model for tidal evolution in co-orbital systems. I: Application to exoplanets. (English)Zbl 07422588
MSC: 70M20 70K28 70F07
Full Text:
MSC: 70F07
Full Text:
MSC: 70F07
Full Text:
Full Text:
Full Text:
### On the existence of symmetric bicircular central configurations of the $$3n$$-body problem. (English)Zbl 1476.70017
MSC: 70F07 70F15
Full Text:
MSC: 70F07
Full Text:
### Networks and bifurcations of eccentric orbits in exoplanetary systems. (English)Zbl 1477.70013
MSC: 70F07 37N05 37G10
Full Text:
Full Text:
### Poynting-Robertson and oblateness effects on the equilibrium points of the perturbed R3BP: application on Cen X-4 binary system. (English)Zbl 1476.70031
Rassias, Themistocles M. (ed.), Nonlinear analysis, differential equations, and applications. Cham: Springer. Springer Optim. Appl. 173, 131-147 (2021).
Full Text:
### Periodic solutions around the out-of-plane equilibrium points in the restricted three-body problem with radiation and angular velocity variation. (English)Zbl 1477.70011
Rassias, Themistocles M. (ed.) et al., Nonlinear analysis and global optimization. Cham: Springer. Springer Optim. Appl. 167, 251-275 (2021).
Full Text:
Full Text:
### Floquet modes and stability analysis of periodic orbit-attitude solutions along Earth-Moon halo orbits. (English)Zbl 1472.70023
MSC: 70F07 70K20 70M20
Full Text:
### Explicit solution and resonance dynamics around triangular libration points of the planar elliptic restricted three-body problem. (English)Zbl 1472.70026
MSC: 70F07 70K28
Full Text:
### A new perturbative solution to the motion around triangular Lagrangian points in the elliptic restricted three-body problem. (English)Zbl 1481.70054
MSC: 70F07 37N05 70H09
Full Text:
### Transit and capture in the planar three-body problem leveraging low-thrust invariant manifolds. (English)Zbl 1472.70024
MSC: 70F07 70H33
Full Text:
Full Text:
### Outcomes of aspheric primaries in Robe’s circular restricted three-body problem. (English)Zbl 1477.37094
MSC: 37N05 70F07 70F15
Full Text:
### Symbolic dynamics in the restricted elliptic isosceles three body problem. (English)Zbl 1478.37086
MSC: 37N05 37B10 70F07
Full Text:
Full Text:
Full Text:
MSC: 70F07
Full Text:
### Is the Jacobi theorem valid in the singly averaged restricted circular three-body problem? (English. Russian original)Zbl 1476.70022
Vestn. St. Petersbg. Univ., Math. 54, No. 1, 106-110 (2021); translation from Vestn. St-Peterbg. Univ., Ser. I, Mat. Mekh. Astron. 8(66), No. 1, 179-184 (2021).
MSC: 70F07
Full Text:
Full Text:
Full Text:
### A study of the 1/2 retrograde resonance: periodic orbits and resonant capture. (English)Zbl 1466.70019
MSC: 70K28 70F07 70F15
Full Text:
### Low-fuel transfers from Mars to quasi-satellite orbits around phobos exploiting manifolds of tori. (English)Zbl 1466.70041
MSC: 70P05 70F07
Full Text:
### Flux-based statistical prediction of three-body outcomes. (English)Zbl 1466.70013
MSC: 70F07 62P35 82C40
Full Text:
### Shannon entropy diffusion estimates: sensitivity on the parameters of the method. (English)Zbl 1466.70023
MSC: 70K55 70F07 70K28
Full Text:
### Periodic solutions of a generalized Sitnikov problem. (English)Zbl 1466.70012
MSC: 70F07 70H12
Full Text:
### Long-term evolution of orbital inclination due to third-body inclination. (English)Zbl 1466.70035
MSC: 70M20 70F07
Full Text:
### Translational-rotational motions of a rod in the circular Sitnikov problem. (English. Russian original)Zbl 1465.70042
J. Math. Sci., New York 255, No. 6, 690-695 (2021); translation from Probl. Mat. Anal. 110, 13-18 (2021).
MSC: 70F07 70M20 74K10
Full Text:
### Motion of a satellite in the circular three-body problem with light pressure. (English. Russian original)Zbl 1465.70040
J. Math. Sci., New York 255, No. 5, 616-622 (2021); translation from Probl. Mat. Anal. 109, 77-82 (2021).
MSC: 70F07 70M20
Full Text:
### Periodic orbits. F. R. Moulton’s quest for a new lunar theory. (English)Zbl 1477.70002
History of Mathematics (Providence) 45. Providence, RI: American Mathematical Society (AMS) (ISBN 978-1-4704-5671-9/pbk; 978-1-4704-6508-7/ebook). xii, 255 p. (2021).
Full Text:
### High-order resonant orbit manifold expansions for mission design in the planar circular restricted 3-body problem. (English)Zbl 1477.70012
MSC: 70F07 34C45 37N05
Full Text:
Full Text:
### Capturing a spacecraft around a flyby asteroid using Hamiltonian-structure-preserving control. (English)Zbl 1476.70088
MSC: 70Q05 70M20 70F07
Full Text:
Full Text:
Full Text:
### Basins of convergence of equilibrium points in the restricted three-body problem with modified gravitational potential. (English)Zbl 1483.70032
MSC: 70F07 70F15 37N05
Full Text:
Full Text:
Full Text:
### Periodic solution of the nonlinear Sitnikov restricted three-body problem. (English)Zbl 1474.70016
MSC: 70F07 70F15
Full Text:
### Explicit symmetries of the Kepler Hamiltonian. (English)Zbl 1483.37066
Donagi, Ron (ed.) et al., Integrable systems and algebraic geometry. A celebration of Emma Previato’s 65th birthday. Volume 1. Cambridge: Cambridge University Press. Lond. Math. Soc. Lect. Note Ser. 458, 38-56 (2020).
Full Text:
### Tangential trapezoid central configurations. (English)Zbl 1482.70012
Reviewer: Xiang Yu (Chengdu)
MSC: 70F10 70F07 70F15
Full Text:
### Asymptotic normalization coefficient method for two-proton radiative capture. (English)Zbl 1475.85013
MSC: 85A25 81V35 70F07
Full Text:
### An extension of the Robe’s problem. (English)Zbl 1476.70020
Shahid, Mohammad Hasan (ed.) et al., Differential geometry, algebra, and analysis. Selected papers based on the presentations at the international conference, ICDGAA 2016, New Delhi, India, November 15–17, 2016. Singapore: Springer. Springer Proc. Math. Stat. 327, 231-244 (2020).
MSC: 70F07
Full Text:
### On a new analytic theory of the Moon’s motion. II: Orbit and length of months. (English)Zbl 1477.70010
MSC: 70F07 70F15
Full Text:
MSC: 70F07
Full Text:
### Order-chaos-order and invariant manifolds in the bounded planar Earth-Moon system. (English)Zbl 1470.70016
MSC: 70F07 70F15 70H33
Full Text:
### The contact geometry of the spatial circular restricted 3-body problem. (English)Zbl 1467.53087
MSC: 53D35 70F07 70G45
Full Text:
### On normal coordinates in the vicinity of the Lagrangian libration points of the restricted elliptic three-body problem. (Russian. English summary)Zbl 1479.70050
MSC: 70H15 70F07
Full Text:
### Subharmonic oscillations in the near-circular elliptic Sitnikov problem. (English. Russian original)Zbl 1465.70043
Mech. Solids 55, No. 8, 1162-1171 (2020); translation from Prikl. Mat. Mekh. 84, No. 4, 442-454 (2020).
MSC: 70F07
Full Text:
### Periodic solutions for $$N$$-body-type problems. (English)Zbl 1476.70024
MSC: 70F07 34C25
Full Text:
### Minimal velocity surface in a restricted circular three-body problem. (English. Russian original)Zbl 1465.70041
Vestn. St. Petersbg. Univ., Math. 53, No. 4, 473-479 (2020); translation from Vestn. St-Peterbg. Univ., Ser. I, Mat. Mekh. Astron. 7(65), No. 4, 734-742 (2020); erratum ibid. 54, No. 1, 111 (2021).
MSC: 70F07
### On a new analytic theory of the Moon’s motion. I: Orbital angular momentum. (English)Zbl 1476.70018
MSC: 70F07 70F15
Full Text:
Full Text:
Full Text:
Full Text:
Full Text:
### Vertical motion of the variable infinitesimal mass in the circular Sitnikov problem. (English)Zbl 1476.70046
MSC: 70F15 70K42 70F07
Full Text:
https://www.caixabankresearch.com/en/economics-markets/monetary-policy/should-monetary-policy-react-financial-cycle-some-reflections-and | ## Should monetary policy react to the financial cycle? Some reflections and possible answers
To what extent should monetary policy react to the financial cycle, or lean against the wind, as it is known? This question has given rise to heated debates between leading economists and it will also be present in the discussions around the strategic reviews currently being carried out by the major central banks. The dilemma is well known: if monetary policy reacts to the financial cycle in an attempt to dampen it and smooth financial fluctuations, it affects economic activity and inflation.
A recent article by the prestigious economists Gourio, Kashyap and Sim1 has helped to give credibility to the theses in favour of taking the financial cycle into consideration. In their research, the authors show that most analyses on optimal monetary policy did not take into account the potentially very high impact that financial crises can have on GDP. They also ignore the fact that monetary policy is a powerful tool for mitigating the negative impact of the financial cycle in times of crisis. In particular, in economies where a bad allocation of credit can generate financial shocks, the authors demonstrate that significant welfare gains in society can be generated by incorporating the financial cycle into monetary policy. In particular, a monetary policy that anticipates the coming crisis when there is a financial boom and raises interest rates allows the likelihood of a severe financial crisis occurring to be reduced (by preventing potential financial bubbles), as well as preventing less productive companies from capturing a large part of the credit and limiting risk appetite. These gains are especially important if the economic losses generated by a financial crisis are high.
In our chart, we show what the reference interest rate for the euro area would be on the basis of a Taylor rule, amended to take the financial cycle into consideration. Specifically, like Juselius and co-authors in their original article, we allow the reference interest rate to react not only to fluctuations in the output gap and inflation, but also to changes in the debt service gap, that is, to the deviation relative to the historical average of the ratio of interest payments by households and firms in relation to their income. The intuition is that a low level of interest payments encourages households and firms to take on more debt, which ends up affecting their economy’s GDP and the macrofinancial conditions. It therefore makes sense to include it in the monetary policy rule.2 We call the interest rate which takes the financial cycle into account the «interest rate with an augmented Taylor rule», and we compare it with the shadow rate and the rate derived from a traditional Taylor rule. The shadow rate is simply the refi rate that we would observe if it were not anchored to 0%, that is, if it reflected the ECB’s unconventional measures such as quantitative easing (QE).3
The results leave no room for doubt: in the booming years of the 2000s, the interest rate with an augmented Taylor rule was significantly higher than both the shadow rate and the rate according to a traditional Taylor rule. This is because these were years of financial boom, in which the debt service gap was in negative territory (i.e. interest payments were modest), which encouraged indebtedness and thus generated an overheating of the economy. This boom should have led monetary policy to set higher interest rates in order to cool the financial cycle. In fact, setting higher rates could have helped to reduce asset prices and thus contain the leverage and excessive risk appetite of those years.
During the crisis period, in contrast, factoring in the financial cycle would enable better measurement of the economy’s weakness, which helps to understand why somewhat lower rates than those prescribed by the Taylor rule may be more appropriate. Finally, in recent years, incorporating the financial cycle into the Taylor rule would lead to reference rates that are higher than the shadow rate. Therefore, according to what the shadow rate indicates, it appears that, in reality, rates have been too accommodative. Indeed, with an output gap that has been closing and a small recovery in credit after successfully completing the deleveraging process following the crisis, it is no wonder that incorporating the financial cycle prescribes higher rates and that, in fact, they would have already started to rise.
Ultimately, failing to take the financial cycle into account results in lower rates than would be desirable in times of economic expansion, which ends up accelerating the onset of the coming crisis and exacerbating its effects. Such effects force central banks to implement aggressive rate cuts. The lesson here is that excessively low rates in the present also generate low rates in the future, creating a vicious circle. In contrast, taking account of the financial cycle and not allowing ourselves to be blinded by the (apparent) blessings of boom periods is key (in combination with the appropriate macroprudential policies) for ensuring a more balanced management of monetary policy.
Javier Garcia-Arenas
1. F. Gourio, A.K. Kashyap and J.W. Sim (2018). «The trade offs in leaning against the wind». IMF Economic Review, 66(1), 70-115.
2. Specifically, $$i_t^{Taylor\;Finance-Neutral}\;=\;\rho\left(i_{t-1}^{Taylor\;Finance-Neutral}\right)\;+\;\left(1\;-\;\rho\right)\left[\left(r_{t-1}^{n,\;Finance\;Neutral}\;+\pi^\ast\right)\;+1.5\left(\pi_{t-1}-\pi^\ast\right)\;+\;0.5\;{\widetilde\gamma}_{t-1}\;-\;0.75\;{\widetilde{DSR}}_{t-1}\right]$$, where $$\rho$$ = 0.9 is the interest-rate smoothing parameter, $$r_t^{n,\;Finance\;Neutral}$$ is the estimated natural rate of interest according to the model by Juselius et al. (2017), $$\pi_t$$ is actual core inflation, $$\pi^\ast$$ = 2% is the inflation target, $${\widetilde\gamma}_t$$ is the measure of the output gap taking the financial cycle into account as obtained in the previous article, and $${\widetilde{DSR}}_t$$ is the deviation of the debt service ratio relative to its historical average.
3. For further details on shadow rates and the methodology used by Wu and Xia to calculate them, see the Focus «Discovering monetary policy in the shadow» in the MR02/2016.
Tags
## On the normalization of monetary policy
Antonio Montilla
13 Oct 2021
## Exit strategies
José Ramón Díez
14 Sep 2021
13 Sep 2021 | 2021-10-18 20:26:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3075832724571228, "perplexity": 1490.1629109214898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00181.warc.gz"} |
https://codereview.stackexchange.com/questions/63970/custom-mathematical-vector-class | # Custom mathematical vector class
My first bigger C++ project: a vector class for personal use and statistical computation.
#include <iostream>
#include <vector>
#include <algorithm>
#include <cmath>
template<class T>
class vect
{
std::vector<T> m;
size_t s;
public:
// default constructor
vect(): m(0), s(0){}
vect(size_t n) :m(n), s(n) {}
vect(std::vector<T> v) :m(v), s(v.size()) {}
// copy constructor
vect(const vect<T>&v): m(v.getData()), s(v.size()){}
// destructor
~vect(){}
std::vector<T> getData() const{
return m;
}
size_t size() const {
return s;
}
void addTo(T value){
m.push_back(value);
s++;
}
void addAt(size_t loc, T value){
s++;
m.emplace(m.begin()+loc, value);
}
void rmFrom(){
m.pop_back();
s--;
}
void rmAt(size_t loc){
m.erase(m.begin()+loc);
s--;
}
// returns a sorted copy of the original vector.
vect<T> sorted(){
std::vector<T> temp = this->getData();
std::sort(temp.begin(), temp.end());
vect<T> v(temp);
return v;
}
double median(){
// linearly interpolated
if (s%2==0)
{
return ((this->sorted()[(s/2)-1]+this->sorted()[(s/2)])/2);
}
else
{
return (this->sorted()[(s)/2]);
}
}
double percentile(double p){
// @p - value between 0 and 1: the percentile
// rounds to the nearest index and return the corresponding
return (this->sorted()[round(s*p)]);
}
double sum(){
// computes the sum of the vectors elements
double sum = 0;
size_t l = 0;
while (l<s) {
sum+=m[l];
l++;
}
return sum;
}
double mean(){
// computes the mean of the vectors elements
return this->sum()/s;
}
double dot(){
// computes the inner product of this vector
vect<T> temp = *this;
double dot = 0;
size_t l = 0;
while(l<s){
dot+=pow(temp[l],2);
l++;
}
return dot;
}
double dot(vect<T> other){
// computes the inner product of this and another vector
vect<T> temp = *this;
double dot = 0;
size_t l = 0;
while(l<s){
dot+=temp[l]*other[l];
l++;
}
return dot;
}
double magnitude(){
return sqrt(this->dot());
}
double manhattan_norm(){
// computes the manhattan norm
double n = 0;
size_t l = 0;
while(l<s){
n+=abs(m[l]);
l++;
}
return sqrt(n);
}
double p_norm(unsigned int p){
return pow(this->dot(),1/p);
}
vect<T> normalized(){
// divides the vector by its Euclidean norm,
// which results in a unit vector, i.e. a vector
// whose magnitude is one;
T n = 0;
vect<T> unity = *this;
n = unity.magnitude();
return unity/n;
}
vect<T> diff(vect<T> other){
vect<T> diff;
return (*this-other);
}
double cosine(vect<T> other){
// returns the cosine of theta of this vector and another
return this->dot(other)/(this->magnitude()*other.magnitude());
}
double angle(vect<T> other){
return acos(this->cosine(other))*(180/3.14159265);
}
bool is_perpendicular(vect<T> other){
return (this->dot(other)==0);
}
vect<T> direction_cosine(){
vect<T> direction;
double norm = this->norm();
double cosine = 0;
double theta = 0;
for(size_t i = 0; i < s; i++){
cosine = m[i]/norm;
theta = acos(cosine);
}
return direction;
}
vect<T> parallel_comp(vect<T> other){
// projection of other onto *this
vect<T> temp = *this;
return (temp.dot(other)/temp.dot(temp))*temp;
}
vect<T> perpendicular_comp(vect<T> other){
return other-parallel_comp(other);
}
double parallel_magnitude(vect<T> other){
// returns the the projections size
return this->parallel_comp(other).magnitude();
}
double error_magnitude(vect<T> other){
// returns the the parallel components size relative to it's own
return this->perpendicular_comp(other).magnitude();
}
// overloaded operators
vect<T>& operator=(const vect<T>& other){
if (this!=&other)
{
m = other.getData();
s = other.size();
}
return *this;
}
T& operator[] (size_t i) {
return m[i];
}
const T& operator[] (size_t i) const {
return m[i];
}
};
template<class T>
std::ostream& operator<<(std::ostream &os, const vect<T>&v){
for(int i = 0; i < v.size(); i++){
os << v.getData()[i] << " ";
}
return os;
}
template<class T>
std::istream& operator>>(std::istream& is, vect<T>& v){
T value;
is >> value;
return is;
}
template<class T>
vect<T>& operator+=(vect<T>& a, const vect<T>& b)
{
for(size_t i=0; i<a.size(); ++i)
a[i]+=b[i];
return a;
}
template<class T>
vect<T> operator+(const vect<T>& a, const vect<T>& b)
{
vect<T> z(a.size());
for(size_t i=0; i<a.size(); ++i)
z[i] = a[i]+b[i];
return z;
}
Plus many more overloaded arithmetic operators which are all like the last two ones.
I know it's nothing fancy but I am always looking for ways to make things better.
### C++14
It's 2014; most modern compilers now support C++14, so you should use it. This code is still very much C++03. For this class, that simply means adding move semantics (and a no-throw guarantee on swap).
To add move semantics you need to add a move constructor and move assignment operator.
vect(vect&& other);
vect& operator=(vect&& other);
There are also a couple of places where you could use a range-based for loop.
### General
This seems redundant (and thus dangerous).
size_t s;
The size is already stored as part of the other member m; use m.size() instead of tracking it by hand.
### Const Correctness
None of these functions
double sum();
double mean();
double dot();
modify the state of your vector. So you should mark them as const members.
double sum() const;
double mean() const;
double dot() const;
This allows you to pass your vect to a function as a const reference and still call these non-mutating functions.
### Efficiency
You may want to cache the sum in a mutable member. Invalidate it if the vector is mutated. There is no point re-computing this value if the vector has not changed.
double sum(){
// computes the sum of the vectors elements
double sum = 0;
size_t l = 0;
while (l<s) {
sum+=m[l];
l++;
}
return sum;
}
Also prefer to use some of the algorithms that are built for you when you can.
sum = std::accumulate(m.begin(), m.end(), 0.0);
### Copy and Swap
This is an old way of doing this:
// overloaded operators
vect<T>& operator=(const vect<T>& other){
if (this!=&other)
{
m = other.getData();
s = other.size();
}
return *this;
}
The more modern way is to use the copy and swap idiom.
// overloaded operators
vect<T>& operator=(vect<T> other){ // Notice the pass by value to get the copy.
other.swap(*this); // Swap the content of this object.
// With the copy you just created. This updates it
// in an exception safe way,
return *this; // return yourself.
} // Let the destructor of other cleanup your old state.
void swap(vect<T>& other) throw() { // use C++11 noexcept if you upgrade to C++11
std::swap(this->m, other.m);
std::swap(this->s, other.s);
}
### The X operators can be written in terms of the X= operator.
Example:
Define the + operator in terms of the += operator.
template<class T>
vect<T> operator+(const vect<T>& a, const vect<T>& b)
{
vect<T> result(a); // Make a copy.
result += b;
return result;
}
Once you have studied that and seen that it does the same thing, there is a small optimization.
template<class T>
vect<T> operator+(vect<T> copy, const vect<T>& b)
{
copy += b; // Notice the copy is passed by value.
// So there is an implicit copy
// So you don't need the manual copy inside the code.
return copy;
}
### Your streaming operations should be symmetrical and distinguishable.
The output operator dumps the whole vector:
template<class T>
std::ostream& operator<<(std::ostream &os, const vect<T>&v){
for(int i = 0; i < v.size(); i++){
os << v.getData()[i] << " ";
}
return os;
}
But the read operator only reads a single value, and never even stores it in the vector (so it's not symmetrical).
template<class T>
std::istream& operator>>(std::istream& is, vect<T>& v){
T value;
is >> value; // Also note. That if the read fails.
// You should probably test to make sure the read works.
// if (is >> value) {v.addTo(value);}
return is;
}
Personally, I would make it dump the data in a way that makes the start and end points obvious, or prefix the dump with a count and then the values.
template<class T>
std::ostream& operator<<(std::ostream &os, const vect<T>&v){
os << v.size() << ": ";
for(int i = 0; i < v.size(); i++){
os << v.getData()[i] << " ";
}
return os;
}
// Note: This does not work for T == std::string as it does not
// have symmetric input output operators. But all other normal types do.
template<class T>
std::istream& operator>>(std::istream& is, vect<T>& v){
std::size_t s = 0;
char marker = 'B';
if (is >> s >> marker)
{
if (marker != ':')
{ // The mark is not what we expect.
// So mark the stream as bad so that processing stops.
is.setstate(std::ios::failbit);
}
else
{
T value;
for(;s > 0 && (is >> value);--s)
{
// Have values and successful read.
}
}
}
return is;
}
• Some minor remarks: Caching the sum in a mutable member might violate the Single Responsibility Principle. Remember that std::accumulate requires a third argument (the initial value). Copy-and-swap is not the most modern approach; don't let Howard Hinnant hear this ;) And for the "small optimization" of operator+, the first (by-value) parameter must not be const, or copy += b won't compile. – dyp Sep 27 '14 at 0:00
### Naming
Your "vector" is not really like std::vector, but vect sounds like it. It would be more intuitive to rename it to something different, for example vectorstats, or just vstats.
The m and s variables are terrible.
### size
I don't really see the point of the s variable. Why do you want to count the size yourself? Why not use m.size()? That would be a lot less error-prone, because as a general rule, the fewer lines of code you write yourself, the better.
### sum
It would be more natural, shorter, and better to use iterators:
double sum() {
double sum = 0;
for (typename std::vector<T>::const_iterator it = m.begin(); it != m.end(); ++it) {
sum += *it;
}
return sum;
}
### dot
The temp variable is completely pointless in this method:
double dot(){
// computes the inner product of this vector
vect<T> temp = *this;
double dot = 0;
size_t l = 0;
while(l<s){
dot+=pow(temp[l],2);
l++;
}
return dot;
}
You could rewrite as:
double dot() {
double dot = 0;
for (typename std::vector<T>::const_iterator it = m.begin(); it != m.end(); ++it) {
dot += pow(*it, 2);
}
return dot;
}
The same is true for the overloaded version of this method.
### Excessive parentheses
The excessive parentheses are seriously hurting readability here:
return ((this->sorted()[(s/2)-1]+this->sorted()[(s/2)])/2);
And the lack of spaces around operators doesn't help!
This should have been:
return (this->sorted()[s / 2 - 1] + this->sorted()[s / 2]) / 2;
Or actually, it's wasteful to sort the vector twice, better sort once and save in a temp variable:
std::vector<T> temp = this->sorted();
return (temp[s / 2 - 1] + temp[s / 2]) / 2;
Now that I look at this cleaner version, it's clear that this won't work if the vector is empty, because if s == 0 this will end up referencing temp[-1]. This was not so easy to see in the original version, now it's obvious.
This was just the worst example I found, but in many many places you use far more parentheses than you need. I suggest to review the entire code and trim a little bit.
Also try to put spaces around operators like I did in this example.
### Placement of curly braces
You're placing curly braces inconsistently. Sometimes you put the opening brace on the same line as the statement, like this:
double sum(){
// ...
while (l<s) {
// ...
}
}
Other times you place opening brace on the next line, like this:
if (s%2==0)
{
// ...
}
else
{
// ...
}
I suggest to pick either of these styles and stick to it. (I prefer the first.)
• The class name should be capitalized so that it's easily distinguished from functions and variables.
• m is a very undescriptive name as it's only one letter. Write out the entire name instead of giving a random letter or some abbreviation. It's especially important to do this so that others can know what this vector is storing.
• You don't need s; the size is already known to the vector structure. Just access its size via the size() member function.
• If you're just summing the values in a vector, you can use std::accumulate():
return std::accumulate(m.cbegin(), m.cend(), 0.0);
(That would be the entire body of sum().)
1. You don't need an empty destructor, since it is a no-op. You can leave it out and the compiler will provide a default for you.
2. The default constructor, vect(), should not initialize m with 0.
3. The copy constructor, vect(const vect<T>&v) is not needed either. The compiler will generate the exact same code for you if you don't provide it.
4. I don't really see the need for the s variable here. It is just keeping a copy of m.size(). Why not just use std::vector::size() for that? It is a bunch of extra work to keep that var up-to-date.
5. std::vector<T> getData() const might be more efficient if returning by reference. If the intended use for this method is to allow read-only access to the internal vector, then there is no reason to return by copy. Return by const reference: const std::vector<T> & getData() const.
6. rmFrom() is a very unclear name. rm is short for remove? Then just call it remove.
7. The methods that add data to the vector, such as addTo()/addAt(), should be provided in two flavors: one taking a const T& and one taking a T&&. If you refer to std::vector::push_back() you will see that it provides these two overloads to optimize for move semantics (move semantics will require a C++11 capable compiler).
8. You should probably also provide a move constructor and a move assignment operator for your vect. See the rule of three/five/zero.
9. Unneeded temps inside dot(). Why are you creating temporary copies of the vector inside both dot() methods? That is nonsense since both functions only read from the vector.
10. Your code is not fully const correct. There are several methods that need to be made const.
11. In vect<T> diff(vect<T> other) there is a local variable inside (also named diff) that is never used.
12. You are using the <cmath> functions. So functions like sin, cos, acos, etc should be prefixed with std::. E.g.: std::acos(). That would be the correct form.
13. The array access operators [] would benefit from some runtime bounds checking. I suggest adding asserts to those. E.g.: assert(i < m.size());
A few missing points.
• STL provides convenient algorithms for pretty much all loops in your code. Besides std::accumulate, take a look at std::inner_product and std::transform algorithms.
• I question the mathematical validity of p_norm. It calculates $(\sum {m_i}^2)^\frac{1}{p}$. Shouldn't it be $(\sum {m_i}^p)^\frac{1}{p}$ instead?
• Certain operations (such as operator+= and dot) only make sense for vectors of the same dimension. Depending on the size of your vectors you'd be getting either a not very meaningful result or an exception. Doesn't seem right. I don't know your use case; I'd recommend to address the problem anyway.
I'd probably implement this as at least two distinct classes: one class to implement statistical functions, and one or more classes to implement linear algebra.
The statistical classes could have additional functions such as sample variance, estimated population variance, and covariances. This might lead to a bit of duplication of code (since some of the operations here look a lot like inner product, in fact are inner product if you take a vector-space approach to your statistics) but I think this might be worth the benefit of making it clear what functions are best suited to whatever domain in which you're currently working. (A particular side benefit is that in a development environment that offers autocompletion of member function names, it would offer the ones that you might actually want to use and not a lot of functions that make no sense at all.)
For a linear-algebra vector, you could make the dimension of the vector space be one of the template parameters. It should still be possible to construct a higher-dimension vector by adjoining a new component to a lower-dimension vector. (If you do a lot of work in two-, three-, or four-dimensional vector spaces, you might want vector templates for each of those specific dimensions in which the iterative functions are completely unrolled, though it's possible that the compiler will do this anyway when the dimension is explicit in the class definition; it would be interesting to benchmark two templates against each other, one designed exclusively for two-dimensional vectors and the other an N-dimensional vector with the template parameter N set to 2.)
I would provide a better approximation of pi (more digits). I probably would define pi as a constant (possibly a static const member variable, although possibly just a static variable in the compilation unit).
Arc cosine does not accurately measure the angle between two vectors when that angle is small. The angle between two vectors is twice the arc sine of half the magnitude of the difference between two unit vectors parallel to the original two vectors; this works well for small angles, reasonably well for other acute angles (though it's probably more computation than the arc cosine method), but is inaccurate for angles near pi, where it would be better to take the sum of the unit vectors rather than their difference. I'd suggest comparing the dot product to the product of the magnitudes of the two vectors at runtime, and using that result to decide whether to use the cosine method or one of the other methods. There are some tweaks you can do to try to minimize the number of square roots you have to compute (such as, initially compare the squares of two quantities rather than first computing the quantities themselves).
The percentile function looks dangerous. What happens if you ask for the 99th percentile of a vector with only forty elements? Since 0.99*40 = 39.6, which rounds to 40, you would be trying to access m[40], which is beyond the end of your data set. I think you need to decide what you want percentiles to do, and throw an exception when the function call cannot produce a correct result. It seems to me you might want d.percentile(0.5) to do exactly the same thing as d.median(), which is not true in your current implementation when d has an even number of elements.
• Thank you for your comments. I did not split the vector into two different classes, but rather moved the methods into two different namespaces (linalg and stats) outside of the class. I agree that the class was ambiguous in its methods. – Vincent Oct 7 '14 at 6:07
Class Design Changes: Most notably, I followed the advice of @glampert and went with the rule of zero, dropping the destructor and the extra copy/move constructors. I played around with different versions and concluded that, at this point, no additional special member functions are required, so I dropped them.
Use of STD Library: Another recurring topic was to make use of the <algorithm> and <numeric> libraries. This was great advice; I incorporated them quite a few times.
Other changes: Renamed the class and several functions. Added two additional constructors: initializer list and "parameter pack". Dropped the unnecessary size_t from the private data section. Dropped some of the functions.
#include <iostream> // std::cout
#include <initializer_list> // allows writing data in this form: MVector<T> v = {...}
#include <vector> // container for private data of the class
#include <algorithm> // swap...
#include <numeric> // accumulate, innerproduct
#include <functional> // function as parameter
#include <random> // random number generators and distributions
#include <cmath> // trigon functions
#include <utility> // std::move
#include <ctime> // time, for seeding the random generator
#include <cassert>
// ************ Description (short) ***************
// Custom vector class for personal usage.
// ************** Definition Vector ***************
// Definition http://en.wikipedia.org/wiki/Euclidean_vector:
// In mathematics, physics, and engineering, a Euclidean vector
// (sometimes called a geometric or spatial vector or—as here—simply a vector)
// is a geometric object that has magnitude (or length) and direction
// and can be added to other vectors according to vector algebra.
// *************** Purpose of Class ***************
// Primary purpose: store numeric data.
// Secondary purpose: simple construction of a vector object (ease of use).
// Tertiary purpose: vector operations and algorithms
// commonly used in linear algebra and statistics.
// ******************** Data **********************
// The data is stored in a std::vector, which provides random access to its elements.
#ifndef MVECTOR_H
#define MVECTOR_H
template<class T>
class MVector
{
std::vector<T> data;
public:
// default: an empty vector (needed by the creation routines below)
MVector() = default;
// create a vector of size n either with xvalue or lvalue
// requires an object of std::size_t to construct a vector
// of n zeros.
explicit MVector(const std::size_t& n) :data(n) {}
explicit MVector(std::size_t&& n) :data(n) {}
// construct a vector by initialising an unspecified amount of values
template<class ... Data>
explicit MVector(T first, Data&&... values): data{first,
std::forward<T>(static_cast<T>(values))... }{}
// construct a vector by initializer list
MVector(std::initializer_list<T> il) :data(il){}
// construct with a std::vector by initializer list
explicit MVector(const std::vector<T>& v) :data(v){}
explicit MVector(std::vector<T>&& v) :data(std::move(v)){}
// returns the data
const std::vector<T>& getData() const{
return data;
}
// returns the amount of elements stored in the vector
std::size_t size() const{
return data.size();
}
// add an element at the end of the vector
// @ param "value" - object of class T
void addTo(const T& value){
data.push_back(value);
}
// add an element at the end of the vector
// @ param "value" - object of class T
void addTo(T&& value){
data.push_back(std::move(value));
}
// add an element at a specified location in the vector
// @ param "value" - object of class T
// @ param "loc" - location of the input
void addAt(const T& value, const std::size_t& loc=0){
data.emplace(data.begin()+loc, value);
}
// add an element at a specified location in the vector
// @ param "value" - object of class T
// @ param "loc" - location of the input
void addAt(T&& value, std::size_t&& loc=0){
data.emplace(data.begin()+loc, std::move(value));
}
// removes an element from the end of the vector
void remove_From(){
data.pop_back();
}
// removes an element from the specified location of the vector
// @ param "loc" - index which element to delete
void remove_At(const std::size_t& loc){
data.erase(data.begin()+loc);
}
// removes an element from the specified location of the vector
// @ param "loc" - index which element to delete
void remove_At(std::size_t&& loc){
data.erase(data.begin()+loc);
}
// resizes the vector to the size of to
// @ param "to" - size of which the vector will be changed to
void resize(std::size_t to){
data.resize(to);
}
// returns a sorted COPY of the original vector.
MVector<T> sorted() const{
std::vector<T> temp = this->getData();
std::sort(temp.begin(), temp.end());
MVector<T> v(temp);
return v;
}
// convenience functions
// returns the median, linearly interpolated, of the vector
double median() const{
MVector<T> temp = this->sorted();
if (data.size()%2==0){
return (temp[data.size() / 2 - 1] + temp[data.size() / 2]) / 2.0;
}
else{
return temp[data.size() / 2];
}
}
// returns the p'th percentile of the vector
// rounds to the nearest index and returns the corresponding element
// @ param "p" - value between 0 and 1: the percentile
double percentile(double p) const{
assert(!data.empty() && p>=0 && p<=1);
std::size_t i = static_cast<std::size_t>(std::round(data.size()*p));
if (i >= data.size()) i = data.size()-1; // keep the index in bounds
return this->sorted()[i];
}
// returns the sum of all objects in the vector
// @ param "loc" - initial value for the accumulation
double sum(T loc=0) const{
return std::accumulate(data.begin(), data.end(), loc);
}
// returns the arithmetic mean of the vector
double mean() const{
return this->sum()/data.size();
}
// returns the inner product of the vector with itself
// @ param "loc" - initial value for the accumulation
double dot(T loc=0) const{
return std::inner_product(data.begin(), data.end(),
data.begin(), loc);
}
// returns the inner product of two different vectors
// @ param "loc" - initial value for the accumulation
double dot(const MVector<T>& other, T loc=0) const{
assert(data.size() == other.size());
return std::inner_product(data.begin(), data.end(),
other.getData().begin(), loc);
}
// returns the "norm" or "length" of a vector
double magnitude() const{
return std::sqrt(this->dot());
}
// returns the Manhattan "norm" or "length" of a vector,
// i.e. the sum of the absolute values of the elements
double manhattan_norm(double loc=0) const{
return std::accumulate(data.begin(), data.end(), loc,
[](double acc, const T& x){
return acc+std::abs(x);});
}
// divides the vector by its Euclidean norm,
// which results in a unit vector, i.e. a
// vector whose magnitude is one;
MVector<T> normalized() const{
double n = 0;
MVector<T> unity = *this;
n = unity.magnitude();
return unity/n;
}
// returns the cosine of theta of this vector and another
// @ param "other" - other vector
double cosine(MVector<T> &other) const{
return this->dot(other)/(this->magnitude()*other.magnitude());
}
// returns the angle between two vectors in degrees (not exact, pi is approximated)
// @ param "other" - other vector
double angle(MVector<T> &other) const{
return std::acos(this->cosine(other))*(180/3.14159265);
}
// returns true if the angle between two vectors
// is 90 degrees, which is exactly the case when the dot product is zero.
// @ param "other" - other vector
bool is_perpendicular(const MVector<T> &other) const{
return (this->dot(other)==0);
}
// projection of other onto *this
// requires copy of *this because multiplying *this resulted in compilation error
// @ param "other" - other vector
MVector<T> parallel_comp(const MVector<T>& other) const{
MVector<T> temp = *this;
return (temp.dot(other)/temp.dot(temp))*temp;
}
// return the "error component" of two vectors
// @ param "other" - other vector
MVector<T> perpendicular_comp(const MVector<T>& other) const{
return other-parallel_comp(other);
}
T& operator[] (std::size_t i) {
assert(i<data.size());
// make sure that the index is not out of bound
return data[i];
}
const T& operator[] (std::size_t i) const {
assert(i<data.size());
// make sure that the index is not out of bound
return data[i];
}
MVector<T>& operator=(MVector<T> other){
this->data.swap(other.data);
return *this;
}
// end of class
};
// *****************************************************************************
// bool operators
// *****************************************************************************
template<class T>
bool operator==(const MVector<T> &lhs,
const MVector<T> &rhs) {
return lhs.size() == rhs.size() &&
std::equal(lhs.getData().begin(), lhs.getData().end(),
rhs.getData().begin());
}
template<class T>
bool operator!=(const MVector<T> &lhs,
const MVector<T> &rhs) {
return (! (lhs == rhs) );
}
// *****************************************************************************
// input output operators
// *****************************************************************************
template<class T>
std::ostream& operator<<(std::ostream &os, const MVector<T>& v){
for(int i = 0; i < v.size(); i++){
os << v.getData()[i] << " ";
}
return os;
}
template<class T>
std::istream& operator>>(std::istream& is, MVector<T>& v){
T value;
if (is >> value) { // only store the value if the read succeeded
v.addTo(value);
}
return is;
}
// *****************************************************************************
// vector arithmetic overloaded operators
// *****************************************************************************
template<class T>
MVector<T>& operator+=(MVector<T>& lhs, const MVector<T>& rhs){
assert(lhs.size() == rhs.size());
for(std::size_t i=0; i < lhs.size(); ++i){
lhs[i]+=rhs[i];
}
return lhs;
}
template<class T>
MVector<T> operator+(MVector<T> lhs, const MVector<T>& rhs){
assert(lhs.size()==rhs.size());
lhs+=rhs;
return lhs;
}
template<class T>
MVector<T>& operator+=(MVector<T>& lhs, const T& rhs){
for(std::size_t i=0; i<lhs.size(); ++i)
lhs[i]+=rhs;
return lhs;
}
template<class T>
MVector<T> operator+(MVector<T> lhs, const T& rhs){
lhs+=rhs;
return lhs;
}
template<class T>
MVector<T> operator+(const T& lhs, MVector<T> rhs){
MVector<T> result(rhs.size());
for(std::size_t i=0; i<rhs.size(); ++i)
result[i] = lhs+rhs[i];
return result;
}
// *****************************************************************************
// vector creation routines: constant/random space
// *****************************************************************************
template <class T>
MVector<T> range(T to, T from = 0, T by = 1){
MVector<T> v;
if (from>to){
for (T i=from; i > to; i-=by) {
v.addTo(i);
}
}
else{
for (T i=from ; i < to; i+=by) {
v.addTo(i);
}
}
return v;
}
template<class T>
MVector<T> range_function(std::size_t to, std::function<T(T)> fun, T from=0, T by=1){
MVector<T> v;
if (from<to){
for (T i = from; i < to; i+=(by)) {
v.addTo(fun(i));
}
}
else{
for (T i = from; i > to; i-=(by)) {
v.addTo(fun(i));
}
}
return v;
}
template <class T>
MVector<T> rgeom(std::size_t length, T probability){
assert(probability>=0 && probability<=1);
std::mt19937 gen;
gen.seed(time(NULL));
std::geometric_distribution<T> geometric(probability);
MVector<T> v;
for (std::size_t i = 0; i < length; i++) {
v.addTo(geometric(gen));
}
return v;
}
#endif // MVECTOR_H
http://mymathforum.com/trigonometry/342872-write-2sin-3x-cos-x-sum.html
My Math Forum: Write 2sin(3x)cos(x) as a sum
November 19th, 2017, 10:59 PM #1 Newbie Joined: Oct 2017 From: Redlands, CA Posts: 15 Thanks: 0 Write 2sin(3x)cos(x) as a sum Please help me out with this. I'm new to the product-to-sum formulas and the 2sin is throwing me off. Write 2sin(3x)cos(x) as a sum
November 19th, 2017, 11:17 PM #2 Newbie Joined: Oct 2017 From: Redlands, CA Posts: 15 Thanks: 0 Nevermind.. I was overthinking again. The 2 goes away because in product-to-sum you multiply by 1/2.
November 20th, 2017, 08:26 AM #3 Math Team Joined: Jan 2015 From: Alabama Posts: 3,261 Thanks: 895 The obvious thing to have done would be to ignore the "2", expand sin(3x)cos(x), and then multiply by 2. For sin(3x)cos(x), I would use the fact that sin(A + B) = sin(A)cos(B) + cos(A)sin(B). It is also true, then, that sin(A - B) = sin(A)cos(B) - cos(A)sin(B). Adding the two identities gives 2sin(A)cos(B) = sin(A + B) + sin(A - B). Here, 2sin(3x)cos(x) = sin(3x + x) + sin(3x - x) = sin(4x) + sin(2x). Thanks from JPow
November 20th, 2017, 09:02 AM #4 Global Moderator Joined: Dec 2006 Posts: 20,099 Thanks: 1905 The relevant product-to-sum formula is given as $2\sin A\cos B = \sin(A + B) + \sin(A - B)$ in the book I use. Thanks from JPow
November 21st, 2017, 09:32 AM #5 Newbie Joined: Oct 2017 From: Redlands, CA Posts: 15 Thanks: 0 Thanks for the replies. It makes a lot more sense to me now.
https://academic.oup.com/cercor/article/23/2/488/288087/Spatial-and-Temporal-Variations-of-Cortical-Growth

## Abstract
Spatial and temporal variations in cortical growth were studied in the neonatal ferret to illuminate the mechanisms of folding of the cerebral cortex. Cortical surface representations were created from magnetic resonance images acquired between postnatal day 4 and 35. Global measures of shape (e.g., surface area, normalized curvature, and sulcal depth) were calculated. In 2 ferrets, relative cortical growth was calculated between surfaces created from in vivo images acquired at P14, P21, and P28. The isocortical surface area transitions from a slower (12.7 mm2/day per hemisphere) to a higher rate of growth (36.7 mm2/day per hemisphere) approximately 13 days after birth, which coincides with the time of transition from neuronal proliferation to cellular morphological differentiation. Relative cortical growth increases as a function of relative geodesic distance from the origin of the transverse neurogenetic gradient and is related to the change in fractional diffusion anisotropy over the same time period. The methods presented here can be applied to study cortical growth during development in other animal models or human infants. Our results provide a quantitative spatial and temporal description of folding in cerebral cortex of the developing ferret brain, which will be important to understand the underlying mechanisms that drive folding.
## Introduction
In gyroencephalic species, such as humans, expansion of the cerebral cortical surface during brain development is associated with the mechanical process of gyral and sulcal folding (Welker 1990). Although empirical observations have been reported that abnormal cerebral cortical folding is associated with various neurodevelopmental disorders, the cellular and biomechanical mechanisms that underlie this process are not well understood. Experimental evidence does exist, however, on the interdependence of cortical folding and other critical central nervous system (CNS) developmental events. For example, Chenn and Walsh (2002) have caused gyral and sulcal-like structures in normally lissencephalic mice by inhibiting neuronal precursors’ exit from the cell cycle. Additionally, a mitogenic influence of thalamic axons on cortical neuronal precursors (Dehay et al. 2001) has been proposed to underlie the effect of early enucleation on altered cortical folding patterns observed in rhesus macaques (Rakic 1988) and ferrets (Reillo et al. 2011). The potential link between cerebral cortical morphology at maturity and its developmental history therefore is a motivating factor to more fully understand the mechanical factors that drive folding.
Cerebral cortical neurons originate in transient zones (e.g., the ventricular and subventricular zones) and then migrate to their final destination (Bystron et al. 2008). Neural progenitor cells in the ventricular and subventricular zones regulate the number and future cortical location of neurons (Rakic 2009). The number of neural stem cells prior to the onset of neurogenesis is exponentially proportional to the number of neurons produced and, subsequently, to the surface area of the cortex (Rakic 1988; Chenn and Walsh 2003). The spatial and temporal pattern of neurogenesis in the human is well described (reviewed in Bystron et al. 2008; Breunig et al. 2011). Early in development (e.g., human gestation weeks 5–20), expansion of the surface area of the cerebral cortex is mainly due to the addition of neurons (Nieuwenhuys et al. 2008). However, long after the completion of this proliferative phase (e.g., human gestation weeks 20 through beyond birth), the cerebral cortical surface area still continues to expand. Indeed, the greatest amount of cortical folding occurs following the conclusion of pyramidal cell neurogenesis (Nieuwenhuys et al. 2008).
Experimental determination of biomechanical characteristics of the cerebral cortical growth is critical for understanding the underlying mechanisms that drive cortical folding. In several previous mechanical modeling efforts, assumed values for quantities such as surface area expansion, material properties of cortical and subcortical structures, and tension generated by axons and glial cells (Richman et al. 1975; Todd 1982; Smart and McSherry 1986; Welker 1990; Van Essen 1997; Toro and Burnod 2005; Xu et al. 2010) have necessarily been adopted due to the lack of such measurements. The 2 most widely discussed hypotheses for cortical folding are mechanical buckling caused by differential growth between inner and outer cortical layers (Richman et al. 1975) and the axonal tension-based theory of morphogenesis (Van Essen 1997). Following the work of Smart and McSherry (McSherry 1984; McSherry and Smart 1986; Smart and McSherry 1986), we have made use of the ferret as an ideal animal model for investigating the relationship between cellular level and macroscopic changes associated with cortical folding and surface area expansion (Barnette et al. 2009; Kroenke et al. 2009; Xu et al. 2010). Not only are regional and temporal characteristics of cell proliferation (Jackson et al. 1989; Noctor et al. 1997; Reillo et al. 2011) and differentiation (Voigt et al. 1993; Zervas and Walkley 1999; Kroenke et al. 2009; Bock et al. 2010; Jespersen et al. 2011) known with high precision but also the gestational period is sufficiently short in this gyroencephalic species that cortical folding takes place during the postnatal period, which facilitates its characterization and experimental manipulation. By measuring tension-induced morphological deformations following microdissections, we observed patterns of axonal tension that are inconsistent with the hypothesis of axonal tension-induced cerebral cortical folding (Xu et al. 2010). 
Instead, finite element models of cortical expansion suggest the existence of intracortical stress fields that are consistent with the tension measurements and can produce sulcal/gyral folds through a mechanical buckling process (Xu et al. 2010). However, in order to generate a consistent pattern of sulci/gyri, as is observed in ferrets, additional factors, such as regional variation in the development of stress patterns, are needed. Smart and McSherry (1986) previously noted qualitatively that folding of the ferret cerebral cortex follows a rostral/caudal pattern that resembles the rostral/lateral to caudal/medial gradient in pyramidal cell neurogenesis (the transverse neurogenetic gradient, TNG) that they and others described for this species (McSherry 1984; McSherry and Smart 1986; Jackson et al. 1989; Noctor et al. 1997). Therefore, the possibility exists that regional patterns in surface area expansion could give rise to consistent stress patterns that are important for a consistent pattern of cerebral cortical folding.
Here, we have quantitatively characterized temporal and regional patterns of surface area expansion in the ferret cerebral cortex and related our observations to previous descriptions of neurogenesis and cellular morphological differentiation. In a cross-sectional study of cerebral cortical surface area expansion at postnatal ages ranging from 4 days (P4) to P35, we determined the rate of areal expansion over the final phases of neurogenesis and cell migration from the ventricular/subventricular zones to the cortical plate, as well as during the subsequent phase of neuronal morphological differentiation. In order to examine the relationship between cortical surface area expansion and the previously characterized pattern of cellular morphological differentiation, as well as cortical folding, longitudinal measurements analyzed over the period from P14 to P28 are used to compare regional patterns in these attributes of cerebral cortical growth. The implications of these findings are discussed in the context of cortical surface area expansion rates measured in other species, including humans, and in relation to the mechanics of cerebral cortical folding.
## Experimental Procedures
In vivo and ex vivo MR images were acquired at ages P4, P7, P10, P14, P17, P21, P24, P28, and P35, and cortical surface representations were created. Nondimensionalized sulcal depth and mean curvature were calculated along with surface area of the isocortex and allocortex for each surface. Spatial and temporal variations in growth were calculated from 2 ferrets from the same litter scanned at 1 week intervals (P14, P21, and P28).
### Animal Care
Female ferret kits were obtained from the commercial vendor Marshall Bioresources (North Rose, NY). Only female kits were analyzed in this study to remove potential effects due to differences in sex, though we are not aware of any such differences in this species. Kits were delivered at P4–5 to a dedicated animal facility at Washington University (WU), where they remained for the duration of the study. All procedures were performed in accordance with NIH and institutional guidelines for the care and use of animals and approved by the WU Institutional Animal Care and Use Committee.
### Image Acquisition
In vivo images were acquired at ages P7 (N = 2), P14 (N = 3), P21 (N = 3), P28 (N = 3), and P35 (N = 1). Each kit was initially anesthetized using 3.5% isoflurane in O2 in a vented anesthesia chamber and placed in a nose cone with a palate bar or tooth bar, depending on its age. Anesthesia was maintained with 1.5% isoflurane in O2 (1.0 L/min). The animal’s head was kept still in the prone position using a custom-made head support. Pulse rate and oxygen saturation levels were monitored continuously by a magnetic resonance imaging (MRI)-compatible pulse oximeter (Nonin Medical, Plymouth, MN) taped to one of the hind paws. Body temperature was maintained by flowing temperature-controlled water through a heating pad located underneath the animal. Animals were kept sedated for a total of 120–180 min for these procedures.
Images were acquired using an 11.7T small animal MRI system controlled by a Varian INOVA console and equipped with separate transmit and receive radiofrequency (RF) coils. T2-weighted images were acquired using a multislice spin-echo pulse sequence. The imaging parameters echo time (TE) and repetition time (TR) were chosen to maximize signal to noise and contrast to noise in the images at each age (Barnette et al. 2009). The values of TR and TE ranged from 4 to 4.4 s and 55 to 80 ms, respectively. Images were acquired at a resolution of 250 × 250 × 250 μm (isotropic), which provided adequate signal-to-noise ratio while still allowing for the structure of the cortex to be identified.
To prepare tissue for ex vivo measurements, animals were injected with 0.5 mL euthasol (intraperitoneal). Heparinized phosphate buffered saline was injected into the left cardiac ventricle until the fluid of the right atria was clear. Phosphate buffered paraformaldehyde (4%, pH 7.4) was then perfused through the left ventricle for approximately 10 min. The brains were extracted and placed in 4% paraformaldehyde indefinitely.
Images of postmortem tissue were acquired from brains of P4 (N = 1), P10 (N = 1), P17 (N = 1), and P24 (N = 1) animals, using a 4.7T small animal MRI system controlled by a Varian INOVA console. Single-turn solenoidal transmit/receive RF coils used were matched to brain sizes. Imaging parameters were chosen to maximize signal and contrast at each age. Image resolution was increased as a function of brain size from 150 μm isotropic to 350 μm isotropic. The TR and TE ranged from 5.5 to 11.8 s and 85 to 100 ms, respectively.
### Image Segmentation and Surface Generation
CARET software (Van Essen et al. 2001) was used for image segmentation and surface generation. Images were segmented manually at the pial surface, which is the boundary between gray matter and cerebrospinal fluid. The segmentation volume was eroded by one voxel so that the boundary of the segmentation was within the cortex, approximating a midcortical surface. A midcortical surface is advantageous because a region of surface area represents approximately the same cortical volume whether the region is located on a gyrus or sulcus (Van Essen 2005). The olfactory bulbs were removed from the segmentation to avoid errors in the surface-based analyses (e.g., inflating the cortical surface to a sphere). A single slice of a T2-weighted image with the segmentation result overlaid on the right hemisphere from a single kit at 2 time points is shown in Figure 1A.
Figure 1.
(A) Sample T2-weighted images acquired in vivo at P14 and P21. Using CARET software (Van Essen et al. 2001), images were manually segmented at the boundary of gray matter and cerebrospinal fluid, and cortical surface representations were generated. The isocortex is highlighted in blue and allocortex in red. (B) The LACROSS registration approach (Knutsen et al. 2010) was used to determine a point-to-point correspondence between surfaces at different time points. Following the arrows between the surfaces, 1) an initial correspondence is determined between the younger and older surfaces. 2) The older surface is then mapped to a sphere, and 3) a partial differential equation is solved on the spherical surface so that distortions between the 2 cortical surfaces are minimized while surface features are matched. The output of the solution is a spherical surface with updated coordinates. 4) These updated spherical coordinates are then mapped back onto the older cortical surface, which 5) provides a smoother mapping between the younger and older cortical surfaces. The solution is implemented on a spherical surface to simplify computations. Importantly, distortions introduced in mapping a cortical surface to a spherical surface are taken into account.
A mesh representation of each cortical surface was generated by CARET from the segmentation volume, using a smoothing filter (strength = 0.1; iterations = 15). A cortical surface from each age is shown in Figure 2A. Each surface consists of approximately 10 000–30 000 points in space connected by a triangular mesh of approximately 20 000–60 000 faces. The medial wall was manually identified using the anatomical MR images and the cortical surface. The coordinates that reside on the medial wall are not part of the cortex and are excluded from the analyses described herein.
Figure 2.
(A) The cortical surface undergoes a large amount of growth and folding during the first 5 weeks of life. Surface model representations of the cortex were generated from the segmentation volumes using CARET software (Van Essen et al. 2001). Surface meshes consist of 10 000–30 000 points and 20 000–60 000 triangular faces. A sphere of radius 1 mm is plotted for scale. (B1) The surface area of the isocortex (IC) and allocortex (AC) are shown as a function of age. A model selection algorithm identified a transition in the rate of the isocortical surface area at P12.7 days. The rate of expansion increases from 17.2 mm2/day per hemisphere to 36.7 mm2/day per hemisphere after P12.7 days. The surface area of the allocortex increases at a much lower rate (3.8 mm2/day per hemisphere) than the isocortex during development. (B2) The average of normalized mean curvature ($\bar{K}^*$) increases rapidly from P10 to P17, while it increases slightly both before P10 and after P17. (B3) The average of normalized sulcal depth ($\bar{\Delta}^*$) increases steadily at a high rate until P21 and increases at a lower rate after P21. Note that both $\bar{K}^*$ and $\bar{\Delta}^*$ have no units.
### Calculation of Curvature and Sulcal Depth
At a point on a surface, curvature represents the deviation of the surface from the tangent plane. A tensor is required to describe curvature of a surface. Local estimates of principal curvature were calculated using an approach described in Filas et al. (2008). Mean curvature, $K$, is given by
(1)
$K = \tfrac{1}{2}(\kappa_1 + \kappa_2),$
where $\kappa_1$ and $\kappa_2$ are the first and second principal curvatures, respectively. Each curvature value $\kappa_i$ is the inverse of the radius of curvature of a curve formed by the intersection of the surface with a normal plane (a plane perpendicular to the tangent plane at the point). The first principal curvature describes the curve in the normal plane in which this curvature is greatest. The second principal curvature describes the curve in the binormal plane (perpendicular to both the tangent plane and the normal plane of maximum curvature). Curvature has units of length$^{-1}$.
Sulcal depth, Δ, is a measure of the distance from a point on the cortical surface to the nearest point on a convex hull. The convex hull for a cortical surface is the convex surface with the smallest area that encapsulates the entire cortical surface (i.e., imagine the surface formed by stretching an elastic balloon around a convoluted surface). Sulcal depth is calculated using CARET software (Van Essen 2005).
Mean curvature and sulcal depth provide local measures of shape on the surface but can also be used to generate global measures of shape. A variety of global measures of shape have been defined previously based on mean curvature, Gaussian curvature, individual principal curvatures, and surface area (Van Essen and Drury 1997; Magnotta et al. 1999; Batchelor et al. 2002; Rodriguez-Carranza et al. 2008). However, a single global measure may not be sufficient when looking at the brain; for example, both amplitude and frequency are required to accurately characterize a sine wave. Accordingly, we use an average of sulcal depth (analogous to amplitude) and an average of mean curvature (analogous to frequency) to provide a global description of cortical shape.
Rodriguez-Carranza et al. (2008) note that a good global measure of shape should be size independent. Both sulcal depth and curvature are dependent on size; for example, a circle of radius 1 has a different value of curvature ($\kappa = 1$) than a circle of radius 2 ($\kappa = 0.5$). Nondimensionalization of mean curvature and sulcal depth accounts for differences in size:
(2)
$\Delta^* = \frac{\Delta}{L_c}, \qquad K^* = K L_c,$
where $L_c$ is a characteristic length, which is defined here as the square root of the surface area divided by 4π. The average of normalized sulcal depth, $\bar{\Delta}^*$, and the average of normalized mean curvature, $\bar{K}^*$, are calculated by integrating each value over the surface and dividing by the surface area:
(3)
$\bar{\Delta}^* = \frac{\int_A |\Delta^*| \, dA}{A}, \qquad \bar{K}^* = \frac{\int_A |K^*| \, dA}{A}.$
Surface integrals are approximated using an extension of the trapezoidal method. Supplementary Figure S1 provides examples of normalized mean curvature and normalized sulcal depth values for 4 shapes. To provide a reference for a known shape: for a sphere, the normalized mean curvature is $\bar{K}^* = 1$ and the normalized sulcal depth is $\bar{\Delta}^* = 0$.
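The spherical reference case provides a concrete numerical check of the normalization in equation (2). The sketch below is illustrative only (the function names are ours, not part of the CARET pipeline):

```python
import math

def characteristic_length(area):
    # L_c = sqrt(A / (4*pi)); for a sphere of radius r, L_c = r.
    return math.sqrt(area / (4.0 * math.pi))

def normalize_measures(mean_curvature, sulcal_depth, area):
    # K* = K * L_c and Delta* = Delta / L_c, both dimensionless.
    lc = characteristic_length(area)
    return mean_curvature * lc, sulcal_depth / lc

# Analytic check: a sphere of radius r has K = 1/r everywhere and zero
# sulcal depth, so K* = 1 and Delta* = 0 for any radius.
r = 2.0
area = 4.0 * math.pi * r ** 2
k_star, d_star = normalize_measures(1.0 / r, 0.0, area)
print(k_star, d_star)  # -> 1.0 0.0
```

On a real mesh, the per-vertex normalized values would then be averaged over the surface as in equation (3).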
### Longitudinal Analysis
In vivo images of 2 kits from the same litter were acquired at P14, P21, and P28. Cortical surfaces of both the left and right hemisphere were created by manual segmentation as described above. In order to estimate temporal and spatial variations in growth, a point-to-point correspondence is required between surfaces. The LAndmark Correspondence and Relaxation Of Surface Strain (LACROSS) registration approach (Knutsen et al. 2010) was applied to determine a point-to-point correspondence. This procedure entails the use of the finite element method to solve a partial differential equation on a parameterized surface (i.e., a sphere). An optimal correspondence between the 2 surfaces is identified by minimizing an energy function that is based on distortions between the surfaces (i.e., surface strain) and surface shape functions (i.e., mean curvature). A schematic of the approach is shown in Figure 1B. The LACROSS registration method uses COMSOL Multiphysics v3.5 (COMSOL, Inc., Burlington, MA) and custom functions written in MATLAB (Mathworks, Inc., Natick, MA). Hereafter, the 2 animals used for the longitudinal analysis will be referred to as ferrets A.1 and A.2.
A local measure of cortical growth between 2 surfaces is required. Specifically, we are interested in quantifying how a small region on a younger surface (e.g., P14) deforms over time into its corresponding region on the older surface (e.g., P21). We define relative cortical growth from a younger surface to an older surface as the change in surface area per unit area per day. This measure can be thought of conceptually by drawing a small circle with area $A_1$ at some location on the younger surface. Mapping the circle to the older surface will cause it to deform and grow, so that it now has an area of $A_2$. Relative cortical growth, $\Delta A$, is given by
(4)
$\Delta A = (J - 1) \approx \frac{A_2 - A_1}{A_1},$
where $J$ is the dilatation ratio between the older and younger surfaces. The derivation for relative cortical growth is provided in Appendix 1.

The ratio of sulcal depth at each vertex at P14 and P21 relative to P28 was calculated within sulci. Points that reside on sulci were identified using the sulcal depth values at P28 by applying a threshold at Δ = 1 mm. The threshold was determined by inspection.
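Equation (4) can be sketched per mesh triangle for two surfaces that share a face list. This shared connectivity is a simplification for illustration only; in the actual analysis the correspondence comes from the LACROSS registration, and growth is expressed through the dilatation ratio J:

```python
import numpy as np

def triangle_areas(vertices, faces):
    # Area of each triangle from the cross product of two edge vectors.
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)

def relative_growth_per_day(verts_young, verts_old, faces, days):
    # (A2 - A1) / A1 for each corresponding triangle, per day.
    a1 = triangle_areas(verts_young, faces)
    a2 = triangle_areas(verts_old, faces)
    return (a2 - a1) / a1 / days

# Toy example: a triangle whose edge lengths double has 4x the area,
# so relative growth is (4 - 1) = 3, spread here over 7 days.
faces = np.array([[0, 1, 2]])
young = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
old = 2.0 * young
growth = relative_growth_per_day(young, old, faces, days=7.0)
print(growth)  # -> [0.42857143]
```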
The cortical location of neurons derived from the source of the TNG was estimated based upon the results obtained from McSherry (McSherry 1984; McSherry and Smart 1986) as described previously (Kroenke et al. 2009). The estimated location of the TNG origin was identified on surfaces of P28 animals. Using CARET software, the geodesic distance from each point on the surface to the origin of the TNG was determined and normalized based on the maximum calculated distance.
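The geodesic-distance normalization can be approximated by shortest paths along mesh edges. The sketch below (Python/SciPy) is an illustrative stand-in for the CARET geodesic computation used in the study:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def normalized_geodesic_distance(vertices, edges, origin):
    # Shortest-path distance from `origin` to every vertex along mesh
    # edges (an approximation to true geodesic distance), normalized by
    # the maximum distance, as done for d* in the text.
    n = len(vertices)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
    rows = np.concatenate([edges[:, 0], edges[:, 1]])
    cols = np.concatenate([edges[:, 1], edges[:, 0]])
    graph = csr_matrix((np.concatenate([lengths, lengths]), (rows, cols)), shape=(n, n))
    dist = dijkstra(graph, indices=origin)
    return dist / dist.max()

# Toy example: 4 collinear vertices spaced 1 apart give distances
# 0, 1, 2, 3 from vertex 0, i.e. normalized values 0, 1/3, 2/3, 1.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
edges = np.array([[0, 1], [1, 2], [2, 3]])
d_star = normalized_geodesic_distance(verts, edges, origin=0)
```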
## Results
### Surface Area Expansion
Histogenesis of the allocortex differs from that of the isocortex. The former does not undergo the deep lamina first, superficial lamina last sequence of neuron birth and migration, and the laminar organization of the allocortex differs from the 6-layered pattern observed in the isocortex at maturity (Sidman and Rakic 1982). Therefore, we characterized surface area expansion of these regions separately, by taking advantage of the rhinal fissure as an anatomical landmark delineating the allocortical/isocortical border (Kroenke et al. 2009). As shown in Figure 2B, the surface areas of both the isocortex and allocortex increase during the first 5 weeks of life. Consistent with our previous observations (Barnette et al. 2009), significant differences are not observed in surface area or in expansion rate between data acquired in vivo and data acquired from postmortem tissue. The allocortex is smaller and expands in surface area at a lower rate than the isocortex. By fitting the allocortical data in Figure 2B to a line, we estimate the allocortical expansion rate over the period from P4 to P35 to be 3.8 mm2/day per hemisphere.
The isocortical surface area expansion data in Figure 2B reveal the possibility that the initial rate of surface area increase (e.g., P4 to Ttr, Fig. 2B) is lower than at later stages (e.g., Ttr to P35, Fig. 2B), where Ttr represents the transition age between surface area rates of change. To investigate this, a model selection calculation was performed. A linear expression for surface area,
(5)
$A=m(age)+b$
in which the slope, m, is the rate of area expansion, and the intercept, b, is the isocortical surface area at postnatal day 0, was compared with a two-slope expression
(6)
$A = \begin{cases} m_1(\text{age}) + b & \text{if age} < \text{transition} \\ m_2(\text{age} - \text{transition}) + m_1(\text{transition}) + b & \text{if age} \geq \text{transition} \end{cases}$
in which the isocortical surface area is assumed to increase linearly at a rate m1 over the age range prior to the fitted parameter “transition.” Afterward, isocortical surface area increases at rate m2. It was found that the Akaike Information Criterion, corrected for small sample size (AICc) (McQuarrie and Tsai 1998), is smaller for the 4-parameter expression, equation (6) (AICc = 223), than for the 2-parameter expression, equation (5) (AICc = 238), which indicates that equation (6) provides a significantly better agreement between the data and the model expression. Fitting the data shown in Figure 2B to equation (6) yields a value for the transition of Ttr = P12.7 days, an isocortical surface area expansion rate (m1) of 14.6 mm2/day per hemisphere between ages P4 and P12.7, and an expansion rate (m2) of 36.7 mm2/day per hemisphere between ages P12.7 and P35.
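The comparison of equations (5) and (6) can be sketched as follows. This is an illustrative reimplementation on synthetic data, assuming a continuous piecewise-linear model and a grid search over candidate transition ages; it is not the authors' fitting code, and the AICc formula is the standard one from McQuarrie and Tsai (1998):

```python
import numpy as np

def aicc(rss, n, k):
    # AICc = n*ln(RSS/n) + 2k + 2k(k + 1)/(n - k - 1)
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def one_slope_aicc(age, area):
    # Equation (5): A = m*age + b, 2 parameters.
    coef = np.polyfit(age, area, 1)
    rss = np.sum((np.polyval(coef, age) - area) ** 2)
    return aicc(rss, len(age), 2)

def two_slope_aicc(age, area):
    # Equation (6): continuous piecewise-linear fit, 4 parameters
    # (m1, m2, b, transition); grid search over candidate transitions.
    best = np.inf
    for t in np.unique(age)[1:-1]:
        x = np.column_stack([age, np.clip(age - t, 0.0, None), np.ones_like(age)])
        coef, *_ = np.linalg.lstsq(x, area, rcond=None)
        rss = np.sum((x @ coef - area) ** 2)
        best = min(best, aicc(rss, len(age), 4))
    return best

# Synthetic data with a genuine slope change (15 -> 37 mm^2/day at P13)
# should yield a lower AICc for the 4-parameter model.
rng = np.random.default_rng(0)
age = np.arange(4.0, 36.0)
area = np.where(age < 13, 15 * age, 37 * age - 22 * 13) + rng.normal(0, 5, age.size)
print(two_slope_aicc(age, area) < one_slope_aicc(age, area))  # -> True
```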
### Cerebral Cortical Folding
Figure 2A contains surface representations of the cortex at different ages, ranging from P4 to P35. Qualitatively, the cortical surface in the first week of life is small and mostly devoid of folds, with only small indentations hinting at where future sulci and gyri will develop. As the cortex continues to grow, the folds deepen and increase in curvature.
To characterize folding of the cerebral cortex over this time period, normalized sulcal depth, $\bar{\Delta}^*$, and normalized mean curvature, $\bar{K}^*$, averaged over the isocortex, are plotted as a function of age in Figure 2B2 and B3. These measures provide a quantitative description of the amplitude and frequency of the folds, respectively. As illustrated by the averaged normalized surface curvature, at P4 and P7, the earliest ages examined, the cortical surface is very smooth. For comparison, $\bar{K}^* = 1$ for a sphere, whereas $\bar{K}^* = 1.49$ and 1.44 at P4 and P7, respectively. The surfaces become markedly more folded from P10 to P17, which corresponds to the largest rate of increase in $\bar{K}^*$. Normalized sulcal depth increases at a steady rate during the first 3 weeks of life, corresponding to the development and relative deepening of the folds; after ∼P21, the rate of increase is much diminished. From P17 to P35, the surface area continues to increase at a high rate, but the degree of folding increases at a much lower rate.
### Longitudinal Analysis of Surface Area Expansion
To test the possibility that surface area expansion is linked to cellular-level morphological differentiation, we reasoned that regional patterns of differentiation should be reflected in a similar regional pattern of surface area expansion. In a previous study of changes of water diffusion anisotropy within the ferret cerebral cortex (Kroenke et al. 2009), we observed a rostral/lateral to caudal/medial gradient in cortical water diffusion fractional anisotropy (FA) that parallels the TNG and was interpreted to arise from a regional pattern in cellular morphological differentiation (McSherry 1984; McSherry and Smart 1986). Serial surface area measurements were therefore carried out on 2 ferrets at ages P14, P21, and P28. The LACROSS method (Knutsen et al. 2010) was employed to determine a point-to-point correspondence between cortical surfaces at different ages, which allows for the calculation of local measurements of surface area expansion. Relative cortical growth (eq. 4) was calculated as the change in local surface area from P14 to P21 and from P21 to P28, normalized by the local surface area on the younger surface. Cortical growth values from P14 to P21 and P21 to P28 are projected onto cerebral cortical surface models for one hemisphere of one animal in Figure 3A and for both hemispheres in each animal in Supplementary Figures S2 and S3. Relative cortical growth was found to exhibit a rostral/lateral to caudal/medial regional pattern, as shown in Figure 3B. To quantify this regional variation, relative cortical growth is plotted as a function of relative geodesic distance from the cortical site of neurons derived from the origin of the TNG ($d^*$) in Figure 3B and Supplementary Figure S4. A linear model was fit to the data using a least-squares fitting algorithm in MATLAB.
Combining the results for each hemisphere, the slope of cortical growth as a function of $d^*$ is 0.245 with an intercept of 0.833 from P14 to P21, and 0.119 with an intercept of 0.420 from P21 to P28. Values of the intercept and slope for each of the animals and hemispheres are listed in Table 1. Note that the slope and intercept values are unitless, as they are derived from 2 unitless quantities: relative cortical growth and relative geodesic distance.
Table 1
Slope and intercept values for cortical growth as a function of relative geodesic distance from the origin of the TNG (Fig. 3B and Supplementary Fig. S3)
| Animal | Hemisphere | Slope (P14 to P21) | Intercept (P14 to P21) | Slope (P21 to P28) | Intercept (P21 to P28) |
|---|---|---|---|---|---|
| A.1 | Left | 0.063 | 0.896 | 0.098 | 0.420 |
| A.1 | Right | 0.161 | 0.875 | 0.161 | 0.427 |
| A.2 | Left | 0.315 | 0.819 | 0.112 | 0.427 |
| A.2 | Right | 0.441 | 0.735 | 0.133 | 0.392 |
| Combined | Both | 0.245 | 0.833 | 0.119 | 0.420 |
Figure 3.
Relative cortical growth from P14 to P21 and P21 to P28. (A) Relative cortical growth increases as a function of relative geodesic distance from the origin of the TNG; values for both hemispheres of both ferrets are plotted on the same figure. The increase in slope is larger from P14 to P21 than from P21 to P28. (B) Relative cortical growth from P14 to P21 (left) and P21 to P28 (right) in the left hemisphere of ferret A.1. While we show that the isocortex expands at a roughly constant rate after the identified transitional age Ttr (P12.7), relative cortical growth is larger from P14 to P21 because cortical growth represents the change in local surface area relative to the initial local surface area. Growth was not calculated on the medial wall, which is shown in gray.
In addition to longitudinal changes in relative cortical growth, we examined the progression of sulcal depth at P14 and P21 relative to its value at P28 as a way to quantify the degree of folding on a regional level. Figure 4 and Supplementary Figure S5 show the ratio of sulcal depth at P14 and P21 relative to P28 as a function of $d^*$ for surface points that reside on sulci, which were identified by applying a threshold to sulcal depth values at P28 (Δ = 1 mm). The threshold was determined by inspection. As above, a linear model was fit to the data using a least-squares fitting algorithm in MATLAB. The slope of the ratio of sulcal depth as a function of $d^*$ from P14 to P28 (slope = −0.14) is approximately 4.5 times steeper than from P21 to P28 (slope = −0.03). The intercept of the modeled data from P21 to P28 (intercept = 0.82) is larger than from P14 to P28 (intercept = 0.55). The values of the slope and intercept for both animals and hemispheres are listed in Table 2.
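The thresholding and regression steps can be sketched on synthetic vertex data; the depth values and the linear fall-off with $d^*$ below are invented for illustration:

```python
import numpy as np

def sulcal_ratio_fit(depth_young, depth_old, d_star, threshold=1.0):
    # Keep vertices that lie in sulci at the older age (depth above a
    # 1 mm threshold, as in the text), form the younger/older depth
    # ratio, and fit it linearly against normalized TNG distance d*.
    in_sulci = depth_old > threshold
    ratio = depth_young[in_sulci] / depth_old[in_sulci]
    slope, intercept = np.polyfit(d_star[in_sulci], ratio, 1)
    return slope, intercept

# Synthetic vertices: folding lags with distance from the TNG origin,
# plus a few shallow (non-sulcal) points that the threshold excludes.
d_star = np.linspace(0.0, 1.0, 50)
depth_old = np.full(50, 2.0)
depth_old[::10] = 0.5                      # below the 1 mm threshold
depth_young = (0.55 - 0.14 * d_star) * depth_old
slope, intercept = sulcal_ratio_fit(depth_young, depth_old, d_star)
print(round(slope, 2), round(intercept, 2))  # -> -0.14 0.55
```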
Table 2
Slope and intercept values for the ratio of sulcal depth values at P14 and P21 relative to P28 as a function of relative geodesic distance from the origin of the TNG (Fig. 4 and Supplementary Fig. S5)
| Animal | Hemisphere | Slope (Δ(P14)/Δ(P28)) | Intercept (Δ(P14)/Δ(P28)) | Slope (Δ(P21)/Δ(P28)) | Intercept (Δ(P21)/Δ(P28)) |
|---|---|---|---|---|---|
| A.1 | Left | −0.13 | 0.55 | −0.06 | 0.84 |
| A.1 | Right | −0.14 | 0.55 | −0.012 | 0.80 |
| A.2 | Left | −0.12 | 0.55 | −0.063 | 0.83 |
| A.2 | Right | −0.15 | 0.57 | 0.018 | 0.80 |
| Combined | Both | −0.14 | 0.56 | −0.030 | 0.82 |
Figure 4.
Ratio of sulcal depth at P14 and P21 relative to P28. Points residing in the 6 major sulci of the ferret were identified via an applied threshold of sulcal depth values at P28. The ratio of sulcal depth at P14 and P21 relative to P28 represents the progress in folding toward maturation. As relative distance from the origin of the TNG increases, the sulcal depth ratio from P14 to P28 decreases (slope = −0.14; intercept = 0.55), while the ratio from P21 to P28 decreases at a much lower rate (slope = −0.03; intercept = 0.82).
## Discussion
A kinematic description of folding is necessary to provide insight into the underlying mechanisms of cortical folding (Van Essen 1997). The goal of this study is to provide a quantitative description of global and local growth in the developing ferret brain. Expansion of cortical surface area is due to cellular proliferation and morphological differentiation. The surface area of the isocortex was measured as a function of age (P4–P35) on cortical surface representations that were created from anatomical MR images. The ferret is an ideal animal model for investigating the relationship between cellular-level and macroscopic changes associated with cortical folding and surface area expansion: because folding takes place postnatally, a wide range of experimental procedures can be applied, and temporal characteristics of cell proliferation (Jackson et al. 1989; Noctor et al. 1997; Reillo et al. 2011) and differentiation (Voigt et al. 1993; Zervas and Walkley 1999; Kroenke et al. 2009; Bock et al. 2010; Jespersen et al. 2011) are well characterized in this species. In particular, cell birthdating (Jackson et al. 1989; Noctor et al. 1997) and other histological methods (McSherry 1984; McSherry and Smart 1986) have established that isocortical pyramidal neuron production is essentially complete by postconceptional day (PC) 41 (the ferret gestational term is 42 days) in rostral/lateral cortex (Noctor et al. 1997), by PC49 (postnatal day 8) in occipital cortex (Jackson et al. 1989), and at intermediate ages at other locations according to a TNG (McSherry and Smart 1986; Kroenke et al. 2009). Allowing for a 7–10 day period for neurons to migrate from ventricular zones to the cortical plate (Roberts et al. 1993; Noctor et al. 1997), the cellular proliferation phase of surface area expansion lasts until approximately P9–P17, whereas expansion after this period is associated with morphological differentiation on the cellular level.
An estimate of the transition from cellular proliferation to morphological differentiation is reflected in the observed transition in the rate of surface area expansion in the isocortex at Ttr = P12.7 days. The rate of cortical expansion associated with cellular proliferation (age < Ttr) is 14.6 mm2/day per hemisphere, less than half the value of 36.7 mm2/day per hemisphere associated with morphological differentiation (age > Ttr).
Compared with other gyroencephalic species studied to date, the rate of increase in cerebral cortical area is slow if considered in absolute terms but rapid when considered as a fraction of the adult surface area. Cross-sectional studies of baboon postmortem fetal tissue from gestational days 90 through 146 (Kroenke et al. 2007) and in utero images acquired over the range from gestational days 119 to 179 (Kochunov et al. 2010) report cerebral cortical surface area expansion rates of 51 and 53 mm2/day per hemisphere, respectively. Additionally, cross-sectional studies of fetal human brain development obtained from in utero MRI procedures performed between 25 and 35 weeks gestation (Clouchoux et al. 2011), as well as data obtained from prematurely delivered human infants over a 26–36 week postconception age range (Dubois et al. 2008), report cortical surface areas that expand at rates of 210 and 190 mm2/day per hemisphere, respectively. Note that the total cortical surface area expansion rates reported by Kochunov et al. (2010) and in the human studies are halved here to express them for a single hemisphere. For the human studies, expansion rates were not explicitly reported but were estimated based on surface area values reported in Figure 9 of Clouchoux et al. (2011) and Figure 3C of Dubois et al. (2008). Surface area expansion data for ferret and primate species are shown in Figure 5A.
Figure 5.
Inter-species comparison of surface area during development. (A) Surface area expansion data for ferret and primate species as a function of the number of days post conception. Ferret data (green) are from this study while surface area values in the baboon (dark red—Kroenke et al. 2007; light red—Kochunov et al. 2010) and human (dark blue—Dubois et al. 2008; light blue—Clouchoux et al. 2011) were obtained from previously published studies. (B) The ratio of surface area during development to surface area in the adult as a function of CNS developmental event score.
Using values of 760 (this study and Bock et al. 2011), 9700 (Kochunov et al. 2009), and 94 100 (Hill et al. 2010) mm2 per hemisphere as adult cerebral cortical surface areas, expansion rates of 4.2, 0.57, and 0.22% of the adult surface area per day are obtained for ferret, baboon, and human, respectively. However, as is recognized in other interspecies developmental studies (Breunig et al. 2011), quantitative comparisons require differences in the rate of CNS development to be acknowledged. According to comparative analyses of CNS development (Clancy et al. 2007), the Figure 5A age ranges correspond to ferret P8–P37 for Kroenke et al. (2007), P25–P52 for Kochunov et al. (2010), P42–P56 for Clouchoux et al. (2011), and P41–P54 for Dubois et al. (2008), in which the approximation procedure described in Leigland and Kroenke (2011) for converting baboon to rhesus macaque developmental ages was used. Thus, the majority of the Kroenke et al. (2007) data, the entirety of the Kochunov et al. (2010), and human data correspond to ferret ages greater than Ttr. In Figure 5B, postnatal ages are converted to event scores after correcting for between-species differences in the rate of CNS development (Clancy et al. 2007). As is evident in Figure 5B, the rate of ferret cortical surface area expansion (230% of adult surface per event) is faster than that of baboon and human (70% and 45% of adult surface per event, respectively), after correcting for species-specific developmental time scales. Interestingly, the surface area of the isocortex during development actually overshoots the surface area of the adult.
In humans, the developmental sequence from 25 to 40 weeks gestational age (GA) corresponds to the developmental period in the ferret from P10 to P21 (Barnette et al. 2009), and global shape analyses have been carried out both in vivo (Dubois et al. 2008; Rodriguez-Carranza et al. 2008) and ex vivo (Batchelor et al. 2002). Batchelor et al. (2002) and Rodriguez-Carranza et al. (2008) used 7 and 16 global shape metrics, respectively. The metrics used by Batchelor et al. (2002) are size dependent and are influenced by the large changes in scale during development. For comparison with the normalized mean curvature ($K¯*$) index presented in this study, the normalized L2 norm of mean curvature (MLNT) (Rodriguez-Carranza et al. 2008) and the Sulcification Index (SI) (Dubois et al. 2008) are used. Both MLNT and SI increase linearly during the developmental time periods studied (28–37 weeks GA and 26–36 weeks GA, respectively). In the ferret, $K¯*$ increases at a higher rate from P10 to P21 than before P10 or after P21 (Fig. 2B2). After P21, $K¯*$ increases slowly. This suggests that while the isocortical surface area continues to expand at a constant rate (36.7 mm2/day per hemisphere through P35), the brain becomes only slightly more curved during this time. In addition, normalized sulcal depth ($Δ¯*$) provides a size-independent shape index that complements $K¯*$, providing information on both the amplitude and frequency of the folding during development.
Regional changes in surface area and sulcal depth provide additional information to supplement surface area changes of the isocortex as a whole. While the surface area of the isocortex increases at a constant rate from Ttr to P35, the rate of change in shape as measured by relative cortical growth and the ratio of sulcal depth varies both along and between sulci. In addition to spatial variations, differences are also seen as a function of age (Fig. 4 and Supplementary Fig. S5). The ratio of sulcal depth represents the progress of folding toward maturity at each surface coordinate that resides on a sulcus. The P28 surface was designated as being at maturity since it was the oldest surface obtained in the 2 animals studied longitudinally. The change in normalized mean sulcal depth is relatively small from P21 to P28 (and to P35) compared with P14 to P21. Relative cortical growth is calculated by registering (i.e., determining a one-to-one correspondence between local isocortical sites) 2 cortical surfaces created from images from the same animal acquired at different ages. Relative cortical growth is quantified by the change in a small region of surface area per unit area, and, in this study, is measured from P14 to P21 and P21 to P28 in 2 animals. The rate of relative cortical growth exhibits a regional pattern of low surface expansion rostrally and laterally, to high expansion medially and caudally, which is similar to the regional pattern of cortical water diffusion anisotropy within the developing ferret cerebral cortex (Kroenke et al. 2009).
The rostral/lateral to caudal/medial gradient in diffusion anisotropy has been interpreted to result from differences in cortical neuron age due to the TNG, and hence, extent of morphological differentiation (Kroenke et al. 2009). To examine the possibility that the regional pattern in relative cortical expansion arises as a consequence of regional patterns of cellular morphological differentiation, it was therefore determined whether relative cortical expansion also correlates with relative geodesic distance from the cortical site of neurons derived from the TNG source ($d*$). The confirmatory results shown in Figure 3A, Supplementary Figure S3, and Table 1 demonstrate that relative cortical expansion is positively correlated with $d*$. Cortical FA values obtained using expressions from Kroenke et al. (2009), as described in Appendix II, are listed in Table 3 for limiting cases in which $d*= 1$ and $d*= 0$. A larger decrease in FA is seen near the occipital pole ($d*= 1$) compared with at the origin of the TNG ($d*= 0$) from both P14 to P21 and P21 to P28. Similarly, the decrease in FA is larger from P14 to P21 compared with the decrease from P21 to P28 in each corresponding region. As shown in Table 3, regional and temporal patterns in relative cortical surface area parallel the magnitude of reductions in cortical FA. In general, the common gradient observed in water diffusion anisotropy and relative cortical growth with respect to the TNG indicates that the amount of reduction in FA over a given time period directly relates to the amount of relative cortical growth over that period. Different factors, such as variations in the number of glial cells, synaptogenesis, and arborization, may explain spatial and temporal variations in patterns of growth. For example, an increased number of intermediate radial glial cells is seen in regions with greater tangential cortical expansion (Reillo et al. 2011), though folding of the cortex is not ensured by the presence of proliferating basal radial glia cells (Hevner and Haydar 2012).
Table 3
The change in FA as a function of relative geodesic distance from the origin of the TNG ($d*$) calculated using equation (A5)
| | FA(P14) − FA(P21) | FA(P21) − FA(P28) | Relative cortical growth, P14 to P21 | Relative cortical growth, P21 to P28 |
|---|---|---|---|---|
| $d* = 0$ | 0.158 | 0.074 | 0.833 | 0.420 |
| $d* = 1$ | 0.178 | 0.127 | 0.178 | 0.127 |
McSherry describes the cortical location of neurons derived from the source of the TNG in the ferret to reside approximately midway between the rhinal fissure and the medial boundary (McSherry 1984; McSherry and Smart 1986). Using FA data in the cortex during development, we previously identified the source of the anisotropy to be in the rostral cortex, dorsal to the center of the insula (Kroenke et al. 2009). Studies in different species propose the origin of the TNG to reside near the insula (Sidman and Rakic 1982; Smart 1983; McSherry 1984). The result from Kroenke et al. (2009) is more consistent with Figure 9 from McSherry (1984) compared with a surface coordinate near the insula. For this study, a similar location of the source of the FA gradient located within the rostral cortex was selected as the origin of the TNG. To investigate the effect on the results obtained in this study of an error in the identification of the origin of the TNG, a point located at the approximate center of the insular cortex was selected and set as an alternative origin of the TNG. The relative geodesic distance from all surface coordinates to the alternative origin was calculated. Relative cortical growth as a function of normalized geodesic distance from an origin near the center of the insular cortex was calculated, and a linear model was fit to the data. The calculated slope and intercept values are given in Supplementary Table S1. The similarity of the resultant slope and intercept values (Table 1 and Supplementary Table S1) give confidence in the observed relationship between relative cortical growth and $d*$.
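The relative geodesic distance $d*$ used throughout this analysis can be approximated on a triangulated surface by shortest paths along mesh edges. The sketch below (an illustration using Dijkstra's algorithm on the edge graph, not the implementation used in the study) computes distances from a chosen origin vertex and normalizes by the maximum distance so that the farthest vertex has $d* = 1$.

```python
import heapq

def relative_geodesic_distance(vertices, edges, origin):
    """Approximate geodesic distance from `origin` to every vertex along
    mesh edges (Dijkstra), normalized so the farthest vertex has d* = 1.
    `vertices` is a list of (x, y, z) tuples; `edges` a list of (i, j)."""
    # Build an adjacency list weighted by Euclidean edge lengths.
    adj = {i: [] for i in range(len(vertices))}
    for i, j in edges:
        d = sum((a - b) ** 2 for a, b in zip(vertices[i], vertices[j])) ** 0.5
        adj[i].append((j, d))
        adj[j].append((i, d))
    dist = {origin: 0.0}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    dmax = max(dist.values()) or 1.0
    return [dist.get(i, float('inf')) / dmax for i in range(len(vertices))]
```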
Ideally, cortical growth would have been calculated at earlier time points (i.e., from P7 to P14) in addition to the current ranges. Unfortunately, images at P7 were only acquired in one of the kits used for the longitudinal portion of this study. In addition, the LACROSS registration algorithm uses mean curvature to constrain the registration between 2 surfaces, with the assumption that regions of high curvature remain so during development. At P7, the cortical surface is very smooth, with only small indentations that hint at the future locations of sulci and gyri. Indeed, $K¯*$ averaged for the P7 cortical surfaces is 1.44, which is not much larger than that of a sphere $(K¯*=1)$ and is much smaller than at P14 and P21 ($K¯*= 2.76 and 3.81$, respectively). In order to obtain an accurate registration during this time, a different fiducial marker than mean curvature is necessary. One possibility for future studies would be to identify the intersection of blood vessels with the cortical surface using $T2*$-weighted MRI and to track these intersections over time. Methodological developments, such as this, are currently under investigation.
Registration algorithms, through a series of assumptions, provide an estimate of how one surface corresponds to another surface, or, in this case, how one surface grows over time. It is important to acknowledge that the calculations of cortical growth are directly dependent on the assumptions that drive the algorithm. The LACROSS registration algorithm has been tested on a series of artificial test cases and actual cases. The algorithm uses normalized mean curvature to drive the registration but also has a term that minimizes distortions between the surfaces. In addition, it is important to analyze the registration results carefully to make sure that the mapping is physically reasonable on a small scale. This was achieved by mapping small regions between the registered surfaces and visually examining them. While other assumptions could be made to drive the registration, the assumptions made provide a reasonable estimate of growth in the developing brain.
Using the information observed in this study and from previous work, a hypothesis of the pattern of cortical growth prior to P14 can be generated. Given that cellular morphological differentiation first occurs near the origin of the TNG prior to P14, cortical growth should initially be higher near the origin of the TNG. At some point between P7 and P14, a transition should occur where the regions further from the origin of the TNG should grow at a faster rate. It is during this time period that the cortical folds begin to form. The 2 most widely discussed hypotheses for the underlying mechanisms of cortical folding are differential growth (Richman et al. 1975) and tension-based morphogenesis (Van Essen 1997). While axonal tension may not be the main force that drives cortical folding (Xu et al. 2010), axonal connectivity has a strong influence on folding patterns (Barron 1950; Rakic 1988). The model presented by Richman et al. (1975) qualitatively reproduced normal and pathological folding, but used unrealistic material properties and did not provide an explanation for the consistent folding patterns seen between subjects. Xu et al. (2010) proposed a model for folding of the cortex based on differential growth between regions. Using a numerical (finite element) model of the mechanics of the cortex and subcortical regions, they showed that growth in one region followed by remodeling and growth in a neighboring region could cause consistent folds to develop. Measures of actual cortical growth, such as the data obtained from this study, are valuable input parameters for mathematical models of cortical folding.
## Conclusions
Using MRI and surface-based analysis techniques, global and local measures of expansion of the cerebral cortex were examined as a function of postnatal age in the developing ferret brain. The surface area of the isocortex undergoes a transition from a lower growth rate (12.7 mm2/day per hemisphere) to a higher rate (36.7 mm2/day per hemisphere) at approximately 13 days after birth (Ttr = P12.7), which corresponds to the transition from cellular proliferation to morphological differentiation. Cortical expansion in the latter phase is two-thirds to one-half of the rate reported previously in nonhuman and human primates, respectively. Locally, relative cortical growth increases as a function of relative geodesic distance from the origin of the TNG. In addition, the amount of cortical growth is proportional to the change in FA over the same time period. Anatomical MRI and surface-based analyses of the cerebral cortex provide a noninvasive means to quantify folding of the cortex during development; such data will be important for parameterization and validation of mathematical models of cortical folding.
## Supplementary Material
Supplementary material can be found at: http://www.cercor.oxfordjournals.org/
## Funding
National Science Foundation (grant number DMS-0540701 to L.A.T.) and the National Institutes of Health (grant numbers NS070022 to C.D.K. and EB005834 to P.V.B.).
The authors gratefully acknowledge conversations with Dr Jeffrey Neil, Dr Terri Inder, and Dr David Van Essen (Washington University in St. Louis) while conducting this research. Conflict of Interest: None declared.
### Appendix I
The normalized growth rate between 2 surfaces at different times is the change in local area per unit area per day. Let $X$ be the surface coordinates for the younger surface (i.e., P14) and $x$ be the surface coordinates for the older surface (i.e., P21). Surface registration provides a point-to-point correspondence between the 2 surfaces.
(A1)
$x=x(X).$
The deformation gradient tensor, F, transforms a line element on the younger surface to a line element on the older surface (Taber 2004)
(A2)
$F=\frac{dx}{dX}.$
In practice, F is calculated between 2 corresponding surfaces using the approach described in Filas et al. (2008). Briefly, at each point on the surface, a second-order polynomial is fit in the least-squares sense to describe the coordinate components of the older surface in terms of the components of the younger surface over a small region. Derivatives are then calculated analytically.
The dilatation ratio, $J$, represents the ratio of the area of the older surface to the younger surface at each coordinate
(A3)
$J=\det(F)\approx\frac{A_2}{A_1},$
where $\det(\cdot)$ denotes the determinant, and $A_1$ and $A_2$ are the areas of small corresponding regions on the younger and older surfaces, respectively. Relative cortical growth is a function of $J$ and is given by
(A4)
$\Delta A=(J-1)\approx\frac{A_2-A_1}{A_1}.$
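In discrete terms, equations (A3) and (A4) amount to comparing areas of corresponding regions on the registered surfaces. The sketch below is an illustration only (the study fits local polynomials to obtain F analytically, as described above); it evaluates $J$ and $\Delta A$ for a single pair of corresponding mesh triangles.

```python
def triangle_area(p, q, r):
    """Area of a 3-D triangle via the cross-product formula."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

def relative_growth(tri_young, tri_old):
    """Per-face dilatation ratio J ~ A2/A1 and relative growth dA = J - 1,
    given two corresponding triangles (each a tuple of three points)."""
    a1 = triangle_area(*tri_young)
    a2 = triangle_area(*tri_old)
    j = a2 / a1
    return j, j - 1.0
```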
### Appendix II
FA as a function of age ($a$) and relative geodesic distance from the origin of the TNG ($d*$), for nonprimary cerebral cortical areas using diffusion sensitization scheme A (Kroenke et al. 2009), is given by
(A5)
$FA=\begin{cases}\alpha_1, & a<\beta\\ \alpha_2+(\alpha_1-\alpha_2)\exp\left(-\frac{a-\beta}{\alpha_3}\right), & a\ge\beta\end{cases}$
where
(A6)
$\beta=\alpha_4+d^*\alpha_5.$
The optimal parameters that describe FA in the developing ferret brain are: $\alpha_1=0.742$, $\alpha_2=0.327$, $\alpha_3=9.1$, $\alpha_4=8.2$, and $\alpha_5=5.0$. Table 3 contains the change in FA between P14 and P21, and between P21 and P28, for $d^*=0$ and $d^*=1$.
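Equations (A5) and (A6) with the parameters above can be transcribed directly; the following is a plain transcription for illustration (function and variable names are ours, not from the study).

```python
import math

# Parameters alpha_1 ... alpha_5 quoted in the text above.
ALPHA1, ALPHA2, ALPHA3, ALPHA4, ALPHA5 = 0.742, 0.327, 9.1, 8.2, 5.0

def fa(age, d_star):
    """Cortical FA from equations (A5)-(A6): constant at ALPHA1 until the
    onset age beta(d*), then exponential decay toward ALPHA2."""
    beta = ALPHA4 + d_star * ALPHA5      # equation (A6)
    if age < beta:
        return ALPHA1
    return ALPHA2 + (ALPHA1 - ALPHA2) * math.exp(-(age - beta) / ALPHA3)
```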
## References
Barnette AR, Neil JJ, Kroenke CD, Griffith JL, Epstein AA, Bayly PV, Knutsen AK, Inder TE. 2009. Characterization of brain development in the ferret via MRI. Pediatr Res. 66:80–84.

Barron DH. 1950. An experimental analysis of some factors involved in the development of the fissure pattern of the cerebral cortex. J Exp Zool. 113:553–581.

Batchelor PG, Castellano Smith, Hill DL, Hawkes DJ, Cox TC, Dean AF. 2002. Measures of folding applied to the development of the human fetal brain. IEEE Trans Med Imaging. 21:953–965.

Bock AS, Kroenke CD, Taber EN, Olavarria JF. 2012. Retinal input influences the size and corticocortical connectivity of visual cortex during postnatal development in the ferret. J Comp Neurol. 520:914–932.

Bock AS, Olavarria JF, Leigland LA, Taber EN, Jespersen SN, Kroenke CD. 2010. Diffusion tensor imaging detects early cerebral cortex abnormalities in neuronal architecture induced by bilateral neonatal enucleation: an experimental model in the ferret. Front Syst Neurosci. 4:149.

Breunig JJ, Haydar TF, Rakic P. 2011. Neural stem cells: historical perspective and future prospects. Neuron. 70:614–625.

Bystron I, Blakemore C, Rakic P. 2008. Development of the human cerebral cortex: Boulder Committee revisited. Nat Rev Neurosci. 9:110–122.

Chenn A, Walsh CA. 2002. Regulation of cerebral cortical size by control of cell cycle exit in neural precursors. Science. 297:365–369.

Chenn A, Walsh CA. 2003. Increased neuronal production, enlarged forebrains and cytoarchitectural distortions in beta-catenin overexpressing transgenic mice. Cereb Cortex. 13:599–606.

Clancy B, Kersh B, Hyde J, Darlington RB, Anand KJ, Finlay BL. 2007. Web-based method for translating neurodevelopment from laboratory species to humans. Neuroinformatics. 5:79–94.

Clouchoux C, Kudelski D, Gholipour A, Warfield SK, Viseur S, Bouyssi-Kobar M, Mari JL, Evans AC, du Plessis AJ, Limperopoulos C. 2012. Quantitative in vivo MRI measurement of cortical development in the fetus. Brain Struct Funct. 217:127–139.

Dehay C, Savatier P, Cortay V, Kennedy H. 2001. Cell-cycle kinetics of neocortical precursors are influenced by embryonic thalamic axons. J Neurosci. 21:201–214.

Dubois J, Benders M, Cachia A, Lazeyras F, Ha-Vinh Leuchter R, Sizonenko SV, Borradori Tolsa C, Mangin JF, Huppi PS. 2008. Mapping the early cortical folding process in the preterm newborn brain. Cereb Cortex. 18:1444–1454.

Filas BA, Knutsen AK, Bayly PV, Taber LA. 2008. A new method for measuring deformation of folding surfaces during morphogenesis. J Biomech Eng. 130:061010.

Hevner RF, Haydar TF. 2012. The (not necessarily) convoluted role of basal radial glia in cortical neurogenesis. Cereb Cortex. 22:465–468.

Hill J, Dierker D, Neil J, Inder T, Knutsen A, Harwell J, Coalson T, Van Essen D. 2010. A surface-based analysis of hemispheric asymmetries and folding of cerebral cortex in term-born human infants. J Neurosci. 30:2268–2276.

Jackson CA, Peduzzi JD, Hickey TL. 1989. Visual cortex development in the ferret. I. Genesis and migration of visual cortical neurons. J Neurosci. 9:1242–1253.

Jespersen S, Leigland L, Cornea A, Kroenke C. 2012. Determination of axonal and dendritic orientation distributions within the developing cerebral cortex by diffusion tensor imaging. IEEE Trans Med Imaging. 31:16–32.

Knutsen AK, Chang YV, Grimm CM, Phan L, Taber LA, Bayly PV. 2010. A new method to measure cortical growth in the developing brain. J Biomech Eng. 132:101004.

Kochunov P, Castro C, Davis D, Dudley D, Brewer J, Zhang Y, Kroenke CD, Purdy D, Fox PT, Simerly C, et al. 2010. Mapping primary gyrogenesis during fetal development in primate brains: high-resolution in utero structural MRI of fetal brain development in pregnant baboons. Front Neurosci. 4:20.

Kochunov P, Glahn DC, Fox PT, Lancaster JL, Saleem K, Shelledy W, Zilles K, Thompson PM, Coulon O, Mangin JF, et al. 2009. Genetics of primary cerebral gyrification: heritability of length, depth and area of primary sulci in an extended pedigree of Papio baboons. Neuroimage. 53:1126–1134.

Kroenke CD, Taber EN, Leigland LA, Knutsen AK, Bayly PV. 2009. Regional patterns of cerebral cortical differentiation determined by diffusion tensor MRI. Cereb Cortex. 19:2916–2929.

Kroenke CD, Van Essen DC, Inder TE, Rees S, Bretthorst GL, Neil JJ. 2007. Microstructural changes of the baboon cerebral cortex during gestational development reflected in magnetic resonance imaging diffusion anisotropy. J Neurosci. 27:12506–12515.

Leigland LA, Kroenke CD. 2011. A comparative analysis of cellular morphological differentiation within the cerebral cortex using diffusion tensor imaging. Neuromethods. 50:329–351.

Magnotta VA, Andreasen NC, Schultz SK, Harris G, Cizadlo T, Heckel D, Nopoulos P, Flaum M. 1999. Quantitative in vivo measurement of gyrification in the human brain: changes associated with aging. Cereb Cortex. 9:151–160.

McQuarrie, Tsai CL. 1998. Regression and time series model selection. Singapore (Singapore): World Scientific.

McSherry GM. 1984. Mapping of cortical histogenesis in the ferret. J Embryol Exp Morphol. 81:239–252.

McSherry GM, Smart IH. 1986. Cell production gradients in the developing ferret isocortex. J Anat. 144:1–14.

Nieuwenhuys R, Voogd J, van Huijzen C. 2008. The human central nervous system. Berlin (Germany): Springer.

Noctor SC, Scholnicoff NJ, Juliano SL. 1997. Histogenesis of ferret somatosensory cortex. J Comp Neurol. 387:179–193.

Rakic P. 1988. Specification of cerebral cortical areas. Science. 241:170–176.

Rakic P. 2009. Evolution of the neocortex: a perspective from developmental biology. Nat Rev Neurosci. 10:724–735.

Reillo I, de Juan Romero C, Garcia-Cabezas MA, Borrell V. 2011. A role for intermediate radial glia in the tangential expansion of the mammalian cerebral cortex. Cereb Cortex. 21:1674–1694.

Richman DP, Stewart RM, Hutchinson JW, Caviness VS Jr. 1975. Mechanical model of brain convolutional development. Science. 189:18–21.

Roberts JS, O'Rourke NA, McConnell SK. 1993. Cell migration in cultured cerebral cortical slices. Dev Biol. 155:396–408.

Rodriguez-Carranza CE, Mukherjee P, Vigneron D, Barkovich J, Studholme C. 2008. A framework for in vivo quantification of regional brain folding in premature neonates. Neuroimage. 41:462–478.

Sidman R, Rakic P. 1982. Development of the human central nervous system. In: Haymaker W, Adams RD, editors. Histology and histopathology of the nervous system. Springfield (IL): Charles C. Thomas. p. 3–145.

Smart IH. 1983. Three dimensional growth of the mouse isocortex. J Anat. 137(Pt 4):683–694.

Smart IH, McSherry GM. 1986. Gyrus formation in the cerebral cortex in the ferret. I. Description of the external changes. J Anat. 146:141–152.

Taber LA. 2004. Nonlinear theory of elasticity: applications in biomechanics. Singapore (Singapore): World Scientific.

Todd PH. 1982. A geometric model for the cortical folding pattern of simple folded brains. J Theor Biol. 97:529–538.

Toro R, Burnod Y. 2005. A morphogenetic model for the development of cortical convolutions. Cereb Cortex. 15:1900–1913.

Van Essen DC. 1997. A tension-based theory of morphogenesis and compact wiring in the central nervous system. Nature. 385:313–318.

Van Essen DC. 2005. A population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. Neuroimage. 28:635–662.

Van Essen DC, Drury HA. 1997. Structural and functional analyses of human cerebral cortex using a surface-based atlas. J Neurosci. 17:7079–7102.

Van Essen DC, Drury HA, Dickson J, Harwell J, Hanlon D, Anderson CH. 2001. An integrated software suite for surface-based analyses of cerebral cortex. J Am Med Inform Assoc. 8:443–459.

Voigt T, De Lima, Beckmann M. 1993. Synaptophysin immunohistochemistry reveals inside-out pattern of early synaptogenesis in ferret cerebral cortex. J Comp Neurol. 330:48–64.

Welker WI. 1990. The significance of foliation and fissuration of cerebellar cortex. The cerebellar folium as a fundamental unit of sensorimotor integration. Arch Ital Biol. 128:87–109.

Xu G, Knutsen AK, Dikranian K, Kroenke CD, Bayly PV, Taber LA. 2010. Axons pull on the brain, but tension does not drive cortical folding. J Biomech Eng. 132:071013.

Zervas M, Walkley SU. 1999. Ferret pyramidal cell dendritogenesis: changes in morphology and ganglioside expression during cortical development. J Comp Neurol. 413:429–448.
# zbMATH — the first resource for mathematics
https://zbmath.org/?q=an:1006.35021
A regularity result for Hamiltonian systems. (English) Zbl 1006.35021
The authors consider the Hamiltonian system \begin{aligned} \dot x_j & =(\partial_{\xi_j}p) (t,x,\xi),\\ \dot\xi_j & =-(\partial_{x_j}p) (t,x,\xi),\\ x(0) & = y,\\ \xi(0) & =\eta,\end{aligned} \tag{1} where $$p$$ is a $$C^\infty$$ function of the variables $$(t,x,\xi)\in(-T,T) \times\Omega \times\mathbb{R}^n$$, $$\Omega \subseteq\mathbb{R}^n$$ open, satisfying the estimates $\bigl|\partial_x^\alpha \partial^\beta_\xi \partial^\gamma_t p(t,x,\xi) \bigr|\leq C_{\alpha \beta\gamma K} \psi(\xi)^{1-|\beta|} \tag{2}$ with $$K\subset \Omega$$ compact and $$\psi$$ an appropriate weight function. This Hamiltonian system appears in the study of the boundary value problem $\partial_t u= p(t, x,\partial_x)u, \quad u|_{t=0}=u_0\in {\mathcal E}'(\Omega)$ where $${\mathcal E}' (\Omega)$$ denotes the space of distributions with compact support, when a solution of the form $u(t,x)=\int_{\mathbb{R}^n} e^{i\varphi(t,x,\eta)} \lambda (t,x,\eta)\widehat u_0(\eta) d\eta$ is proposed for a phase $$\varphi$$ and an amplitude $$\lambda$$ to be found satisfying estimates (2). The authors show by means of an iterative argument that the system (1) has a solution $$(x(t,y, \eta)$$, $$\xi(t,y,\eta))$$ that is a $$C^\infty$$ function on $$(-T,T)\times \Omega' \times\mathbb{R}^n$$, where $$\Omega'$$ is an open subset of $$\Omega$$ satisfying $$\Omega' \subset \overline{\Omega'}\subset\Omega$$.
##### MSC:
35F15 Boundary value problems for linear first-order PDEs
35B65 Smoothness and regularity of solutions to PDEs
47G30 Pseudodifferential operators
https://zxi.mytechroad.com/blog/tag/combination/ | # Posts tagged as “combination”
Given an integer array nums, find the maximum possible bitwise OR of a subset of nums and return the number of different non-empty subsets with the maximum bitwise OR.
An array a is a subset of an array b if a can be obtained from b by deleting some (possibly zero) elements of b. Two subsets are considered different if the indices of the elements chosen are different.
The bitwise OR of an array a is equal to a[0] OR a[1] OR ... OR a[a.length - 1] (0-indexed).
Example 1:
Input: nums = [3,1]
Output: 2
Explanation: The maximum possible bitwise OR of a subset is 3. There are 2 subsets with a bitwise OR of 3:
- [3]
- [3,1]
Example 2:
Input: nums = [2,2,2]
Output: 7
Explanation: All non-empty subsets of [2,2,2] have a bitwise OR of 2. There are 2^3 - 1 = 7 total subsets.
Example 3:
Input: nums = [3,2,1,5]
Output: 6
Explanation: The maximum possible bitwise OR of a subset is 7. There are 6 subsets with a bitwise OR of 7:
- [3,5]
- [3,1,5]
- [3,2,5]
- [3,2,1,5]
- [2,5]
- [2,1,5]
Constraints:
• 1 <= nums.length <= 16
• 1 <= nums[i] <= 10^5
## Solution: Brute Force
Try all possible subsets
Time complexity: O(n*2^n)
Space complexity: O(1)
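A sketch of this enumeration (a Python stand-in for the post's solution; the function name is illustrative): each bitmask from 1 to 2^n - 1 selects a non-empty subset, and we count how many attain the maximum OR.

```python
def count_max_or_subsets(nums):
    """Enumerate every non-empty subset via a bitmask, track the maximum
    OR seen and how many subsets attain it."""
    n = len(nums)
    best, count = 0, 0
    for mask in range(1, 1 << n):          # all non-empty subsets
        cur = 0
        for i in range(n):
            if mask & (1 << i):
                cur |= nums[i]
        if cur > best:
            best, count = cur, 1           # new maximum OR found
        elif cur == best:
            count += 1
    return count
```

With n <= 16 this is at most 16 * 65535 iterations, well within limits.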
## C++
You are given two integer arrays nums1 and nums2 of length n.
The XOR sum of the two integer arrays is (nums1[0] XOR nums2[0]) + (nums1[1] XOR nums2[1]) + ... + (nums1[n - 1] XOR nums2[n - 1]) (0-indexed).
• For example, the XOR sum of [1,2,3] and [3,2,1] is equal to (1 XOR 3) + (2 XOR 2) + (3 XOR 1) = 2 + 0 + 2 = 4.
Rearrange the elements of nums2 such that the resulting XOR sum is minimized.
Return the XOR sum after the rearrangement.
Example 1:
Input: nums1 = [1,2], nums2 = [2,3]
Output: 2
Explanation: Rearrange nums2 so that it becomes [3,2].
The XOR sum is (1 XOR 3) + (2 XOR 2) = 2 + 0 = 2.
Example 2:
Input: nums1 = [1,0,3], nums2 = [5,3,4]
Output: 8
Explanation: Rearrange nums2 so that it becomes [5,4,3].
The XOR sum is (1 XOR 5) + (0 XOR 4) + (3 XOR 3) = 4 + 4 + 0 = 8.
Constraints:
• n == nums1.length
• n == nums2.length
• 1 <= n <= 14
• 0 <= nums1[i], nums2[i] <= 10^7
## Solution: DP / Permutation to combination
dp[s] := min xor sum by using a subset of nums2 (represented by a binary string s) xor with nums1[0:|s|].
Time complexity: O(n*2^n)
Space complexity: O(2^n)
## C++
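The C++ code is not included here; a Python sketch of the same bitmask DP (`minimum_xor_sum` is our own name) could be:

```python
def minimum_xor_sum(nums1, nums2):
    """dp[s] = min XOR sum after pairing nums1[0:popcount(s)] with the subset s of nums2."""
    n = len(nums1)
    INF = float('inf')
    dp = [INF] * (1 << n)
    dp[0] = 0
    for s in range(1 << n):
        if dp[s] == INF:
            continue
        i = bin(s).count('1')            # next unpaired index of nums1
        if i == n:
            continue                     # all of nums1 already paired
        for j in range(n):
            if not s & (1 << j):         # nums2[j] still unused
                t = s | (1 << j)
                dp[t] = min(dp[t], dp[s] + (nums1[i] ^ nums2[j]))
    return dp[(1 << n) - 1]

print(minimum_xor_sum([1, 2], [2, 3]))   # 2: pair 1 with 3 and 2 with 2
```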
There are n uniquely-sized sticks whose lengths are integers from 1 to n. You want to arrange the sticks such that exactly k sticks are visible from the left. A stick is visible from the left if there are no longer sticks to the left of it.
• For example, if the sticks are arranged [1,3,2,5,4], then the sticks with lengths 1, 3, and 5 are visible from the left.
Given n and k, return the number of such arrangements. Since the answer may be large, return it modulo 10^9 + 7.
Example 1:
Input: n = 3, k = 2
Output: 3
Explanation: [1,3,2], [2,3,1], and [2,1,3] are the only arrangements such that exactly 2 sticks are visible.
The visible sticks are underlined.
Example 2:
Input: n = 5, k = 5
Output: 1
Explanation: [1,2,3,4,5] is the only arrangement such that all 5 sticks are visible.
The visible sticks are underlined.
Example 3:
Input: n = 20, k = 11
Output: 647427950
Explanation: There are 647427950 (mod 10^9 + 7) ways to rearrange the sticks such that exactly 11 sticks are visible.
Constraints:
• 1 <= n <= 1000
• 1 <= k <= n
## Solution: DP
dp(n, k) = dp(n – 1, k – 1) + (n-1) * dp(n-1, k)
Time complexity: O(n*k)
Space complexity: O(n*k) -> O(k)
## Python3
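The original Python3 snippet is missing from this transcript; a memoized version of the recurrence above (with base cases dp(0,0)=1, dp(n,0)=0 for n>0, dp(0,k)=0 for k>0) might look like:

```python
from functools import lru_cache

MOD = 10**9 + 7

@lru_cache(maxsize=None)
def rearrange_sticks(n, k):
    """Number of arrangements of sticks 1..n with exactly k visible from the left."""
    if k == 0:
        return 1 if n == 0 else 0        # only the empty arrangement has 0 visible sticks
    if n == 0:
        return 0                          # no sticks left but k > 0 still required
    # dp(n, k) = dp(n-1, k-1) + (n-1) * dp(n-1, k), as stated in the solution above
    return (rearrange_sticks(n - 1, k - 1) + (n - 1) * rearrange_sticks(n - 1, k)) % MOD

print(rearrange_sticks(3, 2))   # 3: [1,3,2], [2,3,1], [2,1,3]
```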
There is a donuts shop that bakes donuts in batches of batchSize. They have a rule where they must serve all of the donuts of a batch before serving any donuts of the next batch. You are given an integer batchSize and an integer array groups, where groups[i] denotes that there is a group of groups[i] customers that will visit the shop. Each customer will get exactly one donut.
When a group visits the shop, all customers of the group must be served before serving any of the following groups. A group will be happy if they all get fresh donuts. That is, the first customer of the group does not receive a donut that was left over from the previous group.
You can freely rearrange the ordering of the groups. Return the maximum possible number of happy groups after rearranging the groups.
Example 1:
Input: batchSize = 3, groups = [1,2,3,4,5,6]
Output: 4
Explanation: You can arrange the groups as [6,2,4,5,1,3]. Then the 1st, 2nd, 4th, and 6th groups will be happy.
Example 2:
Input: batchSize = 4, groups = [1,3,2,5,2,2,1,6]
Output: 4
Constraints:
• 1 <= batchSize <= 9
• 1 <= groups.length <= 30
• 1 <= groups[i] <= 10^9
## Solution 0: Binary Mask DP
Time complexity: O(n*2^n) TLE
Space complexity: O(2^n)
## Solution 1: Recursion w/ Memoization
State: counts of group sizes modulo batchSize
## C++/OPT
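The optimized C++ code referenced by the heading is not reproduced here; a hedged Python sketch of the memoized recursion over remainder counts (`max_happy_groups` is our own name) could be:

```python
from functools import lru_cache

def max_happy_groups(batchSize, groups):
    """Count groups by size modulo batchSize; remainder-0 groups are always happy."""
    counts = [0] * batchSize
    for g in groups:
        counts[g % batchSize] += 1
    happy = counts[0]
    counts[0] = 0                         # already counted, exclude from the search

    @lru_cache(maxsize=None)
    def dp(state, leftover):
        # state: remaining remainder counts; leftover: donuts left from the current batch
        best = 0
        cnt = list(state)
        for r in range(1, batchSize):
            if cnt[r]:
                cnt[r] -= 1               # serve one group with remainder r next
                best = max(best, (leftover == 0) + dp(tuple(cnt), (leftover + r) % batchSize))
                cnt[r] += 1
        return best

    return happy + dp(tuple(counts), 0)

print(max_happy_groups(3, [1, 2, 3, 4, 5, 6]))   # 4
```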
Design an Iterator class, which has:
• A constructor that takes a string characters of sorted distinct lowercase English letters and a number combinationLength as arguments.
• A function next() that returns the next combination of length combinationLength in lexicographical order.
• A function hasNext() that returns True if and only if there exists a next combination.
Example:
CombinationIterator iterator = new CombinationIterator("abc", 2); // creates the iterator.
iterator.next(); // returns "ab"
iterator.hasNext(); // returns true
iterator.next(); // returns "ac"
iterator.hasNext(); // returns true
iterator.next(); // returns "bc"
iterator.hasNext(); // returns false
Constraints:
• 1 <= combinationLength <= characters.length <= 15
• There will be at most 10^4 function calls per test.
• It’s guaranteed that all calls of the function next are valid.
Use a bitmask to represent the chars selected.
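A Python sketch of that bitmask idea (our own implementation): enumerating masks in decreasing order yields combinations in lexicographical order when bit n-1-j is mapped to characters[j].

```python
class CombinationIterator:
    def __init__(self, characters: str, combinationLength: int):
        self.n = len(characters)
        self.chars = characters
        self.k = combinationLength
        self.mask = (1 << self.n) - 1     # start at the all-ones mask

    def _advance(self):
        # move mask down to the next value with exactly k bits set (or to 0)
        while self.mask > 0 and bin(self.mask).count('1') != self.k:
            self.mask -= 1

    def next(self) -> str:
        self._advance()
        combo = ''.join(self.chars[j]
                        for j in range(self.n)
                        if self.mask & (1 << (self.n - 1 - j)))
        self.mask -= 1                    # step past the mask just emitted
        return combo

    def hasNext(self) -> bool:
        self._advance()
        return self.mask > 0

it = CombinationIterator("abc", 2)
print(it.next(), it.next(), it.next())    # ab ac bc
```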
https://stat.ethz.ch/R-manual/R-devel/library/survival/html/survexp.object.html | survexp.object {survival} R Documentation
## Expected Survival Curve Object
### Description
This class of objects is returned by the survexp class of functions to represent a fitted survival curve.
Objects of this class have methods for summary, and inherit the print, plot, points and lines methods from survfit.
### Arguments
- surv: the estimate of survival at time t+0. This may be a vector or a matrix.
- n.risk: the number of subjects who contribute at this time.
- time: the time points at which the curve has a step.
- std.err: the standard error of the cumulative hazard or -log(survival).
- strata: if there are multiple curves, this component gives the number of elements of the time etc. vectors corresponding to the first curve, the second curve, and so on. The names of the elements are labels for the curves.
- method: the estimation method used. One of "Ederer", "Hakulinen", or "conditional".
- na.action: the returned value from the na.action function, if any. It will be used in the printout of the curve, e.g., the number of observations deleted due to missing values.
- call: an image of the call that produced the object.
### Structure
The following components must be included in a legitimate survexp object.
### Subscripts
Survexp objects that contain multiple survival curves can be subscripted. This is most often used to plot a subset of the curves.
### Details
In expected survival each subject from the data set is matched to a hypothetical person from the parent population, matched on the characteristics of the parent population. The number at risk printed here is the number of those hypothetical subjects who are still part of the calculation. In particular, for the Ederer method all hypotheticals are retained for all time, so n.risk will be a constant.
### See Also
plot.survfit, summary.survexp, print.survfit, survexp.
https://physics.stackexchange.com/questions/310141/two-conducting-spheres-connected-by-a-wire/310142 | # Two conducting spheres connected by a wire
There are two conducting spheres of different charge and a conducting wire. After they are connected by the wire, charge flows between the spheres. The charge distributes itself so that the spheres are at the same potential, but I have not been able to find an explanation for this. Why wouldn't the charge be distributed so that the electric field is the same, since that is what is moving the charges in the wire?
• In the given link, and more precisely in Electrically connected charged balls, you can find a full explanation and solution. The instant the two balls are connected there exists a potential difference, and a potential difference means a non-zero electric field $\:\mathbf{E}=-\boldsymbol{\nabla}\phi\:$. – Frobenius Feb 6 '17 at 18:48
• This field is responsible for the motion of charge from the ball at the higher potential to the other ball until the whole system (the two balls and the wire) becomes an equipotential region. Then in this region $\:\mathbf{E}=\boldsymbol{0}\:$, so there is no charge motion. The system is balanced. – Frobenius Feb 6 '17 at 18:49
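To make the equilibrium condition concrete: for two spheres far enough apart that mutual induction is negligible, equal potential $kQ_1/R_1 = kQ_2/R_2$ means the total charge splits in proportion to the radii. A quick numeric check (the helper `share_charge` and the chosen values are our own):

```python
K = 8.99e9  # Coulomb constant, N*m^2/C^2

def share_charge(q_total, r1, r2):
    """Split a total charge between two connected spheres so their potentials match."""
    q1 = q_total * r1 / (r1 + r2)
    q2 = q_total * r2 / (r1 + r2)
    return q1, q2

q1, q2 = share_charge(3e-9, 0.01, 0.02)   # 3 nC shared between 1 cm and 2 cm spheres
v1 = K * q1 / 0.01
v2 = K * q2 / 0.02
print(v1, v2)   # both potentials agree (about 899 V)
```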
https://www.hackmath.net/en/math-problem/1609 | # Grower
Grower harvested 190 kg of apples. Pears harvested 10 times less.
a) How many kg pears harvested?
b) How many apples and pears harvested?
c) How many kg harvested pears less than apples?
Result
a = 19 kg
b = 209 kg
c = 171 kg
#### Solution:
$j = 190 \ \text{kg}$
$a = j/10 = 190/10 = 19 \ \text{kg}$
$b = a+j = 19+190 = 209 \ \text{kg}$
$c = j-a = 190-19 = 171 \ \text{kg}$
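The arithmetic can be checked with a few lines of Python:

```python
apples = 190
pears = apples // 10          # pears harvested "10 times less"
total = apples + pears        # apples and pears together
difference = apples - pears   # how many kg fewer pears than apples
print(pears, total, difference)   # 19 209 171
```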
## Next similar math problems:
1. Farmers
Farmers loaded into a truck of fruit and vegetables intended for the store. 10 boxes of 5 kg pears, 8 boxes of 6 kg plums, 7 boxes 9 kg of carrots and 10 bags of 50 kg of potatoes. How many kilograms of fruit and vegetables loaded in total? How many kg cou
2. Parking field
There were nine cars in the parking field. The motorcycles were three times less. How many motorcycles are here and how many wheels are in the parking field?
3. Bakers
Baker Martin baked 5 times more cakes than Dennis. How many cakes baked Dennis if Martin bake 25 cakes?
4. Fruit candies
Zuzka and Janka ate fruit candies. Zuzka ate 10 and Janka ate 3 times more. How many sweets did both girls eat together?
5. Number
Which number is 17 times larger than the number 6?
6. Cars
Johnny has 370 cars. Peter twice less. How many cars have Peter?
7. The school
The school has 268 boys. Girls are 60 more than boys. How many children are there at school?
8. Car factory
A workshop had produced 25 cars. While they produced about 15 fewer than they had planned. How many cars they produced in the hall yet? How many cars are still to manufacture?
9. Snacks
The school attends 344 pupils. Half of them take snacks. 13 pupils who took snacks did not attend school. How many snacks left?
10. Fishing boat
The fishing boat caught 14 fish in one day. How many fish will catch 4 fishing boats in 8 days?
11. The lift
The lift can fit 4 people. How many uphill rides must the lift make to move up 12 passengers?
12. Car parking
The car park has 80 cars when 53 cars leaves. How many car are there?
13. Airport
On blue airport are 23 aircraft, flew 7 aircraft, the arrival 9 aircraft. How many planes were a blue airport tonight?
14. Skiing
Ski gondola starts at a distance of 975 m from the top and is 389 meters long. I went to the bottom station to the top station. How many meters I must go to the top of hill?
15. Book
Louise read from their favorite book for three days 18 pages. Every day read the same number of pages. How many pages she read every day?
16. Flour
2 kg of flour costs 100 CZK. How much does it cost half a kilogram?
17. Euro
I have 1 euro. How much will I have when I spend it?
http://slideplayer.com/slide/2342847/ | # Ordinary Least-Squares
## Presentation on theme: "Ordinary Least-Squares"— Presentation transcript:
Outline:
- Linear regression
- Geometry of least-squares
- Discussion of the Gauss-Markov theorem
One-dimensional regression
Find a line that represents the "best" linear relationship. Problem: the data does not go through a single line, so we look for the line that minimizes the sum of squared vertical errors; we are looking for the parameters that minimize this error.
Matrix notation
Collecting the measurements and parameters into a matrix and vectors, we can rewrite the error function using linear algebra.
Multidimensional linear regression
The same formulation extends to a model with m parameters and n measurements.
Minimizing
In the matrix A, each column corresponds to a parameter (1..m) and each row to a measurement (1..n). At the minimizer the error function is flat (zero gradient) and does not go down in any direction around it: the Hessian is positive semi-definite (in 1-D a non-negative second derivative, in 2-D a positive semi-definite matrix). Setting the gradient to zero yields the normal equation, which always holds at the minimum:
A^T A x = A^T b
Geometric interpretation
b is a vector in R^n. The columns of A define a vector space range(A), and Ax is an arbitrary vector in range(A). The least-squares solution is the x for which Ax is the orthogonal projection of b onto range(A).
The normal equation A^T A x = A^T b:
- Existence: it always has a solution.
- Uniqueness: the solution is unique if the columns of A are linearly independent.
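For the simplest case, fitting a line y = c0 + c1·x, the normal equation reduces to a 2×2 system that can be solved directly. A minimal pure-Python sketch (`fit_line` is our own name; it assumes the columns of A are independent, i.e. not all x values are equal):

```python
def fit_line(xs, ys):
    """Least-squares line y = c0 + c1*x via the normal equation A^T A c = A^T b."""
    n = len(xs)
    # Entries of A^T A and A^T b for A = [[1, x_i] for each measurement]
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    sy = sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx              # nonzero when the columns of A are independent
    c0 = (sy * sxx - sx * sxy) / det     # intercept, by Cramer's rule
    c1 = (n * sxy - sx * sy) / det       # slope
    return c0, c1

print(fit_line([0, 1, 2], [1, 3, 5]))    # (1.0, 2.0): the exact line y = 1 + 2x is recovered
```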
Under-constrained problem
When the columns of A are not independent the problem is under-constrained: the data were poorly selected, or one or more of the parameters are redundant. Remedy: add constraints.
How good is the least-squares criterion?
Optimality: the Gauss-Markov theorem. If the measurement errors have zero mean, equal variance, and are uncorrelated, then the least-squares estimate is the best (minimum-variance) linear unbiased estimator.
Closing sketches contrast the cases where these assumptions hold and fail: no errors in a_i vs. errors in a_i; homogeneous vs. non-homogeneous errors; no outliers vs. outliers.
https://nbviewer.org/github/addfor/tutorials/blob/master/machine_learning/ml05v04_ensemble_methods.ipynb | # Ensemble Methods Basic concepts¶
In [1]:
import addutils.toc ; addutils.toc.js(ipy_notebook=True)
Out[1]:
In [2]:
import scipy.io
import numpy as np
import pandas as pd
from time import time
from sklearn import datasets, model_selection, metrics, ensemble, tree
from IPython.core.display import Image
from addutils import css_notebook
import matplotlib.pyplot as plt
%matplotlib inline
css_notebook()
Out[2]:
In [3]:
import bokeh.plotting as bk
from bokeh.models import (GraphRenderer, StaticLayoutProvider, Rect,
ColumnDataSource, Range1d, LabelSet, Label)
bk.output_notebook()
## 1 Introduction¶
Ensemble Methods combine the predictions of several models in order to improve generalizability / robustness over a single model.
Among many ensemble methods, two of the most popular families of methods are:
• Bootstrap aggregation or Bagging (also called Averaging): train multiple models on random samples drawn with replacement (values can be duplicated) from the dataset and then average (or vote over) the predictions. Bagging works best with high-variance (complex) models: it decreases variance while leaving bias unaffected.
• RandomForestClassifier / RandomForestRegressor
splits consider a random subset of the features (controlled by max_features, e.g. a fraction 0 < max_features < 1) and then the most discriminative threshold is used.
• ExtraTreesClassifier / ExtraTreesRegressor
similar to RF, but thresholds are drawn at random for each candidate feature.
• Boosting: incrementally build an ensemble of weak classifiers to produce a powerful 'committee'. A weak classifier is only slightly better than random guessing. In boosting, each new model is trained on a re-weighted version of the data to emphasize the training instances that previous models mis-classified. With respect to the bias-variance decomposition, boosting works the other way around from bagging: it starts with low-variance, high-bias models and gradually reduces bias at each step. Increasing the number of boosting steps tends to over-fit the data, and the algorithm is computationally more expensive than bagging.
• AdaBoostClassifier / AdaBoostRegressor
• GradientBoostingClassifier / GradientBoostingRegressor
In principle bagging and boosting are techniques that can be used with a variety of algorithms. In practice (especially in the case of bagging) the preferred choice is trees (a low-bias, high-variance algorithm).
### 1.1 Decision Trees¶
Decision Trees are supervised learning algorithms used for classification and regression. They work by partitioning the feature space into a set of rectangles, and then fit a simple model (i.e. a constant value) in each one. The following algorithm is the one used by CART, one of the most popular decision tree algorithm.
Suppose we have a regression problem with two variables $X_1$, $X_2$ and response variable $Y$ (a simple example of such a space is visible in the picture below). At the beginning the tree chooses a variable, let's say $X_1$, and splits the region in two at a certain point $X_1 = t_1$, with values in $R_1$ for $X_1 \leq t_1$ and values in $R_2$ for $X_1 > t_1$. The two regions are further divided, each time choosing a variable and a split point (for example $X_2$ and $t_2$), until a certain criterion is met and the algorithm terminates. The response value in each region is the average value of $Y$ in that region. For classification problems the class of a region is the majority class of the $Y$ values that fall in that region.
In [4]:
Image("images/tree.png")
Out[4]:
In the case of regression the best combination of variable and split point is determined with a greedy strategy: for each variable the algorithm chooses the best splitting point (the one that minimizes the residual sum of squares in the two regions), and then picks the best $\langle$variable, split point$\rangle$ pair among all variables.
For classification problems the criterion for choosing the splitting point is usually the Gini index. Gini index (or Gini impurity) is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it were randomly labeled according to the distribution of labels in the subset. Gini impurity can be computed by summing the probability of each item being chosen times the probability of a mistake in categorizing that item. The formula for the gini impurity measure is: $\sum_{k=1}^K p_{mk}(1-p_{mk})$, where $m$ is the terminal node and $K$ is the number of classes. It reaches its minimum (zero) when all cases in the node fall into a single target category. Gini impurity reaches its maximum value when all classes in the table have equal probability.
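As a quick numeric illustration of the formula above (`gini_impurity` is our own helper, operating on the class proportions $p_{mk}$ of a node):

```python
def gini_impurity(proportions):
    """Gini impurity of a node: sum_k p_k * (1 - p_k) = 1 - sum_k p_k**2."""
    return sum(p * (1 - p) for p in proportions)

print(gini_impurity([1.0]))        # 0.0 -> pure node, minimum impurity
print(gini_impurity([0.5, 0.5]))   # 0.5 -> two equiprobable classes, the maximum for K = 2
```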
To illustrate the structure of a decision tree, we provide an example with the sklearn tree class.
NOTE: please install graphviz before running this cell, otherwise a default tree image will be displayed. For windows users:
• go to graphviz.org and install the software for your version
• close this notebook and jupyter. From the same Anaconda prompt type: PATH=%PATH%;C:\Program Files (x86)\Graphviz2.38\bin and then relaunch jupyter notebook (NOTE: this solution is not permanent. To make it permanent, go to Environment Variables (Control Panel\All Control Panel Items\System\Advanced system settings), click the Environment Variables button; under System variables find the variable path; click Edit... and then add C:\Program Files (x86)\Graphviz[version]\bin to the end of the Variable value: field).
• to confirm you can use dot command in the Command Line (Windows Command Processor), type dot -V which should return the software version.
In [5]:
from sklearn.datasets import load_iris
from sklearn import tree
from io import StringIO
from pydot import graph_from_dot_data
iris = load_iris()  # load the dataset (this line was missing from the extracted notebook)
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
dot_data = StringIO()
tree.export_graphviz(clf, out_file=dot_data)
graph = graph_from_dot_data(dot_data.getvalue())[0]
try:
tree_image = Image(graph.create_png())
except:
print('Graphviz is not installed on your system.\
Please follow installation instructions if you want updated pictures')
tree_image =Image("images/temp.png")
tree_image
Out[5]:
Decision Tree typical properties:
• PROS
• Conceptually simple to draw and interpret
• Can handle categorical predictors and do not require normalization
• CONS
• Tend to learn a too complex model (Overfit, High Variance)
• Susceptible to outliers
• Some concepts (for example XOR and additive functions) are hard to learn
• Tend to favor categorical features with many categories, because the number of binary partitions grows exponentially with the number of categories. For this reason choosing the right split becomes hard causing overfit.
## 2 Random Forests¶
Random Forest (RF) works by building a large ensemble of de-correlated trees and then averaging them. The algorithm uses a modified bagging, where each tree is built on a random subspace. Bagging averages a set of approximately unbiased models to reduce variance. Trees work well in this context because they can capture complex interactions in the data. In bagging, samples are not necessarily independent and thus averaging doesn't account for all variance. The variance of the average of identically distributed variables with pairwise correlation $\rho$ is: $\rho \sigma^2 + \frac{1-\rho}{B}\sigma^2$ where $B$ is the number of trees in the ensemble and $\sigma^2$ is the variance of each variable. As $B$ increases the second term disappears but the first remains, thus the correlation between pairs of trees limits the benefit derived from averaging.
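A quick numeric check of that variance formula (`variance_of_average` is our own helper) shows the correlation floor that averaging cannot remove:

```python
def variance_of_average(rho, sigma2, B):
    """Variance of the mean of B identically distributed variables with pairwise correlation rho."""
    return rho * sigma2 + (1 - rho) / B * sigma2

# With rho = 0.3 the variance drops from sigma^2 toward 0.3 * sigma^2 as B grows,
# no matter how many trees are averaged.
for B in (1, 10, 1000):
    print(B, variance_of_average(0.3, 1.0, B))
```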
The idea of Random Forests is to reduce the correlation of each tree (and thus decrease variance) by randomly selecting a subset of input variables for each split in the tree. This procedure slightly increases bias but achieve a better variance reduction.
Random Forests are a popular method because they work surprisingly well with the default hyperparameters.
The main hyperparameter to adjust is the number of variables selected at random for each split. In sklearn this is called max_features and it's usually optimized with grid search.
The recommended value for max_features is $\sqrt{p}$ (where $p$ is the total number of features) for classification problems or $\lfloor p/3 \rfloor$ for regression problems. These are rules of thumb and they work well for most datasets. In practice it is useful to start with the default and then refine the result. In principle lower values of max_features reduce the correlation between any pair of trees and hence reduce the variance of the average, at the cost of slightly increasing the bias of each tree. Note that in scikit-learn the default value of max_features for regression is $p$ (use all features).
In scikit-learn max_features can be used in several way:
• int $\to$ number of features to use
• float $\to$ percentage of features to use
• auto $\to$ $\sqrt{p}$ (classification) or $p$ (regression)
• none $\to$ all features.
The second important parameter to tune is n_estimators: the number of trees in the forest. Since Random Forests are an averaging method they do not usually overfit by adding more trees, so the larger the better (though more trees take longer to compute, and results stop getting significantly better beyond a sufficient number of trees).
Random Forests are said to hardly overfit the data. This is not always the case: the average of fully grown trees can result in a model that is too rich and with too much variance. If this is a concern, there are a few ways to reduce tree depth, either by specifying the limit directly or by setting the minimum number of training samples in a leaf, or the minimum number of samples required to split. In scikit-learn these parameters are:
• max_depth: if None, nodes are expanded until they are pure or contain fewer than min_samples_split samples.
• min_samples_split: minimum number of samples required to split an internal node. Large values lead to smaller trees, higher bias and smaller variance. The optimal value depends in principle on the noise level in the dataset: in noisy datasets, ensembles of fully grown trees will overfit the data.
• min_samples_leaf minimum number of samples in resulting leafs.
If n_jobs=k then computations are partitioned into k jobs, and run on k cores of the machine. If n_jobs=-1 then all cores available on the machine are used.
In [6]:
pd.options.display.notebook_repr_html = True
# We skip the scaling because the tree-based models are almost insensitive to scaling
df = pd.DataFrame(iris.data, columns=iris.feature_names)
# Split Training and Validation Sets
idx_train, idx_valid = model_selection.train_test_split(df.index, test_size=0.15)
df_train, df_valid = df.iloc[idx_train], df.iloc[idx_valid]
y_train, y_valid = iris.target[idx_train], iris.target[idx_valid]
print("Training set / Validation set number of samples: {0}, {1}".format(df_train.shape[0],
df_valid.shape[0]))
print("Number of features: {0}".format(df_train.shape[1]))
Training set / Validation set number of samples: 127, 23
Number of features: 4
In [7]:
rfc = ensemble.RandomForestClassifier()
params = {'n_estimators':[5, 15, 30, 50, 75, 100],
'max_features':[2, 3, 4],
'max_depth':[2, 4, 6, 8],
'random_state':[0]}
t0 = time()
grid = model_selection.GridSearchCV(rfc, params, cv=15, n_jobs=-1)
grid.fit(df_train, y_train)
rfc_best = grid.best_estimator_
print(metrics.confusion_matrix(rfc_best.predict(df_valid), y_valid))
print('\nBest Params: N est:%3i - Mx feat:%2i - Mx dpth:%2i - F1:%.3f'\
%(rfc_best.n_estimators, rfc_best.max_features, rfc_best.max_depth,
metrics.f1_score(rfc_best.predict(df_valid), y_valid, average='micro')))
print(rfc_best)
print('Done in %0.3f[s]' %(time() - t0))
[[8 0 0]
[0 7 0]
[0 1 7]]
Best Params: N est: 15 - Mx feat: 3 - Mx dpth: 2 - F1:0.957
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=2, max_features=3, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=15, n_jobs=1,
oob_score=False, random_state=0, verbose=0, warm_start=False)
Done in 7.268[s]
Let's have a look to one of the Random Forest Trees:
In [8]:
idx = 1
dot_data = StringIO()
tree.export_graphviz(rfc_best.estimators_[idx], out_file=dot_data)
graph = graph_from_dot_data(dot_data.getvalue())[0]
try:
    tree_image = Image(graph.create_png())
except:
    tree_image = Image("images/temp1.png")
tree_image
Out[8]:
samples is the number of samples that will be classified in the specific leaf if the tree is fed with the bootstrap set
value is the number of samples that will be classified in the specific leaf if the tree is fed with the original dataset:
• bootstrap samples = 81
• total samples = 127
• sum of all elements in 'value' = 127
Random Forests properties:
• Properties derived from Trees:
• Can handle categorical predictors and do not require normalization
• Support natively multiclass problems
• Some concepts (for example XOR and additive functions) are hard to learn
• Tend to favor categorical features with many categories
• Typical properties:
• In sklearn classification trees use a probability distribution in the leaves rather than majority voting. This method produces a better overall prediction and at the same time can also provide a probability measure of class membership (not only a pure class vote).
• If the number of relevant features is small, Random Forests can perform poorly, because at each split the probability of picking irrelevant variables is higher
Random Forests Advanced topics
Other than regression and classification, Random Forests provide much more information that is useful for different tasks. Some of it applies equally well to other ensemble methods such as Gradient Boosting Trees. Here we review the main concepts, while the detailed descriptions are given in the advanced notebook.
• Out Of Bag (OOB) estimate: during the building phase of each tree some samples are left out. These samples are called OOB samples. Using OOB samples a generalization measure can be calculated without relying on a Validation Test Set. This measure is called Out Of Bag score (or OOB estimate).
• Variable importance: trees (and ensembles of trees) can provide a measure indicating how important or useful a variable is in predicting the outcome. Scikit-learn calculates the variable importance from the improvement to the Gini index that every variable provides at each split. There is at least one other, more reliable, algorithm to calculate the variable importance that is freely available in the Addfor libraries.
• Feature selection: allows reducing dimensionality, and thus improving algorithm speed and convergence, while keeping most of the predictive capability. The procedure can be automated by repeatedly removing the least important feature until a certain stopping criterion (e.g. a decrease in accuracy) is met.
• Partial dependence: it shows the relation between the target and a chosen set of variables (at most two at a time), marginalizing over the other variables. The chosen variables are usually the most important, and this plot is used to gain insight about the function learned by the ensemble and how it models the dependence between the target and the most important variables.
• Proximity measure: a random forest can grow an $N \times N$ proximity matrix, constructed by passing the OOB samples through each tree and increasing the proximity of two samples if they end up in the same terminal node. Plotting this matrix should provide insight on which data points are effectively close, at least as learned by the random forest classifier. However it tends to produce similar graphs and its utility has been doubted.
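A minimal sketch of the first two points (the OOB estimate and the Gini-based variable importance) on the iris data, with the imports restated so the snippet is self-contained:

```python
from sklearn import datasets, ensemble

iris = datasets.load_iris()

# oob_score=True computes a generalization estimate from the samples
# left out of each bootstrap, with no separate validation set
rfc = ensemble.RandomForestClassifier(n_estimators=100, oob_score=True,
                                      random_state=0)
rfc.fit(iris.data, iris.target)
print("OOB score: %.3f" % rfc.oob_score_)

# Gini-based variable importances, normalized to sum to 1
for name, imp in zip(iris.feature_names, rfc.feature_importances_):
    print("%-20s %.3f" % (name, imp))
```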
## 3 Extremely Randomized Trees¶
In Extremely Randomized Trees (or Extra Trees, ET), randomness goes one step further in the way splits are computed. As in random forests, a random subset of candidate features is used, but instead of looking for the most discriminative thresholds, thresholds are drawn at random for each candidate feature. The relation with the output is retained by selecting the best couple $<$feature, random-threshold$>$ as the splitting point (if max_features = 1, the trees would be totally random).
It is possible to interpret this choice in different ways. If a feature is important, a significant fraction of the trees will have (approximately) the same feature with the same split point at the same position in the tree. This increases the correlation between trees, hence increasing variance. By randomly selecting the threshold the algorithm introduces a slight increase in bias, but it usually allows reducing the variance of the model a bit more.
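The split-selection rule can be sketched in a few lines of NumPy. This is a toy illustration of the idea only, not the scikit-learn implementation; the function name and signature are invented for the example:

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def extra_tree_split(X, y, n_candidate_features, rng):
    """Return (weighted impurity, feature, threshold) for the best
    <feature, random-threshold> couple among a random feature subset."""
    n_samples, n_features = X.shape
    candidates = rng.choice(n_features, n_candidate_features, replace=False)
    best = None
    for j in candidates:
        # the threshold is drawn at random, NOT optimized as in Random Forest
        t = rng.uniform(X[:, j].min(), X[:, j].max())
        left = X[:, j] <= t
        if left.all() or not left.any():
            continue  # degenerate split, skip
        score = (left.sum() * gini(y[left])
                 + (~left).sum() * gini(y[~left])) / n_samples
        if best is None or score < best[0]:
            best = (score, j, t)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))
y = (X[:, 0] > 0).astype(int)
best = extra_tree_split(X, y, 3, rng)
print("impurity %.3f on feature %d at threshold %.3f" % best)
```

The couple is still chosen by impurity decrease; only the thresholds are random, which is exactly what keeps the relation with the output while decorrelating the trees.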
In [9]:
rfc = ensemble.ExtraTreesClassifier()
params = {'n_estimators':[5, 15, 30, 50, 75, 100],
'max_features':[2, 3, 4],
'max_depth':[2, 4, 6, 8]}
t0 = time()
grid = model_selection.GridSearchCV(rfc, params, cv=15, n_jobs=-1)
grid.fit(df_train, y_train)
rfc_best = grid.best_estimator_
print(metrics.confusion_matrix(rfc_best.predict(df_valid), y_valid))
print('\nBest Params: N est:%3i - Mx feat:%2i - Mx dpth:%2i - F1:%.3f'\
%(rfc_best.n_estimators, rfc_best.max_features, rfc_best.max_depth,
metrics.f1_score(rfc_best.predict(df_valid), y_valid, average='micro')))
print(rfc_best)
print('Done in %0.3f[s]' %(time() - t0))
[[8 0 0]
[0 8 0]
[0 0 7]]
Best Params: N est: 15 - Mx feat: 3 - Mx dpth: 2 - F1:1.000
ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='gini',
max_depth=2, max_features=3, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=15, n_jobs=1,
oob_score=False, random_state=None, verbose=0, warm_start=False)
Done in 7.035[s]
Extremely Randomized Trees properties:
• Properties derived from Trees:
• Can handle categorical predictors and do not require normalization
• Support natively multiclass problems
• Some concepts (for example XOR and additive functions) are hard to learn
• Tend to favor categorical features with many categories
• Typical properties:
• Since the splitting point is drawn at random, the computational cost of selecting the split point is reduced.
• In scikit-learn the implementation is similar to that of Random Forest (except for the random choice of threshold), whereas the original algorithm is somewhat different (no bagging).
## 4 AdaBoost¶
AdaBoost, short for Adaptive Boosting, is a meta-algorithm that can be used in conjunction with many other learning algorithms. The core principle of AdaBoost is to fit a sequence of weak learners (i.e., models that are only slightly better than random guessing, such as small decision trees) on repeatedly modified versions of the data. Each iteration in the sequence puts more weight on difficult examples (examples that were misclassified in previous iterations). The predictions from all of them are then combined through a weighted majority vote (or sum) to produce the final prediction. AdaBoost is sensitive to noisy data and outliers. The classifiers it uses can be weak (i.e., display a substantial error rate), but as long as their performance is slightly better than random, they will improve the final model.
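The reweighting loop can be sketched from scratch in a few lines (a toy illustration of discrete AdaBoost with decision stumps, not the scikit-learn implementation that is used in the next cell; the function names are invented for the example):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=20):
    """Discrete AdaBoost for labels in {-1, +1}, with depth-1 trees
    (stumps) as weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # start from uniform weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1, random_state=0)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = w[pred != y].sum()            # weighted error of this round
        if err >= 0.5:                      # no better than random: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / (err + 1e-12))
        w *= np.exp(-alpha * y * pred)      # up-weight the misclassified points
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Weighted majority vote of the weak learners."""
    return np.sign(sum(a * s.predict(X) for s, a in zip(stumps, alphas)))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)   # a diagonal boundary
stumps, alphas = adaboost_fit(X, y)
acc = (adaboost_predict(stumps, alphas, X) == y).mean()
print("training accuracy: %.3f" % acc)
```

Note how axis-aligned stumps, each barely better than random on this diagonal boundary, combine into a much stronger classifier.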
In [10]:
adb = ensemble.AdaBoostClassifier()
params = {'n_estimators':[10, 20, 30, 40],
'learning_rate':[0.1, 0.2, 0.5, 0.7, 1.0]}
t0 = time()
grid = model_selection.GridSearchCV(adb, params, cv=15)
grid.fit(df_train, y_train)
adb_best = grid.best_estimator_
print('Done in'.ljust(20), ': %.3f[s]' %(time() - t0))
for key, value in grid.best_params_.items():
    print(key.ljust(20), ':', value)
print('F1 Score'.ljust(20), ': %.3f' %(metrics.f1_score(adb_best.predict(df_valid),
                                                        y_valid, average='micro')))
print('\nConfusion Matrix:')
print(metrics.confusion_matrix(adb_best.predict(df_valid), y_valid))
Done in : 7.679[s]
learning_rate : 1.0
n_estimators : 20
F1 Score : 0.913
Confusion Matrix:
[[8 0 0]
[0 7 1]
[0 1 6]]
AdaBoost can be used with a variety of base estimators. This can be an advantage over Gradient Boosting, which uses trees, because it makes it possible to explore more variants in the analysis. However Gradient Boosting uses a slightly different algorithm and, since its base estimator is predetermined, it needs one less parameter to tune. For its particular choice of loss function AdaBoost is sensitive to outliers: the exponential loss places much more emphasis on badly misclassified observations. Gradient Boosting, instead, supports robust loss functions and is less sensitive to outliers.
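A small illustration of swapping the base estimator (the parameter is named estimator in recent scikit-learn releases and base_estimator in older ones, hence the guard):

```python
from sklearn import datasets, ensemble, tree
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()
# a weak learner slightly deeper than the default decision stump
base = tree.DecisionTreeClassifier(max_depth=2)

try:   # recent scikit-learn (>= 1.2)
    ada = ensemble.AdaBoostClassifier(estimator=base, n_estimators=30,
                                      random_state=0)
except TypeError:  # older scikit-learn
    ada = ensemble.AdaBoostClassifier(base_estimator=base, n_estimators=30,
                                      random_state=0)

scores = cross_val_score(ada, iris.data, iris.target, cv=5)
print("mean CV accuracy: %.3f" % scores.mean())
```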
## 5 Gradient Boosting Regression Trees (GBRT)¶
Gradient Boosting is a technique to build additive regression models by sequentially fitting a simple parametrized function (weak learner) to the current "pseudo residuals" by least squares at each iteration. The pseudo residuals are the gradient of the loss functional being minimized, with respect to the model values at each data point, evaluated at the current step. Basically it is a numerical optimization in function space rather than parameter space.
Gradient Tree Boosting or Gradient Boosted Regression Trees (GBRT) is a special case of Gradient Boosting where the weak learners are regression trees. It's made for regression and can be adapted to classification. The method was invented by Jerome H. Friedman in 1999.
The main meta-parameters to adjust are the Tree Size and the amount of Regularization. For each of these meta-parameters scikit-learn offers a series of knobs. Let's briefly review them.
Tree Size
The size of each tree is an important parameter. If trees are too large they tend to decrease performance and increase computational costs. The optimal tree size is problem dependent and can be controlled with these parameters:
• max_depth: it controls the maximum allowed level of interaction between variables. With max_depth = 2 the tree has up to one root node, two internal nodes and four leaves. With this tree the model will include the effects of the interaction between at most two variables. The depth is the level of possible variable interaction. The interaction order between variables is generally unknown, but in most cases it is low. In many applications max_depth = 2 is too low, while max_depth > 10 is unlikely to be required. Practical experience indicates typical values for this parameter in the range $4 \leq$ max_depth $\leq 8$.
• max_leaf_nodes: an alternative way to control the depth of the trees. A tree with max_leaf_nodes = n has at most n - 1 split nodes and can model interactions of order max_leaf_nodes - 1; this behavior is similar to max_depth = n - 1, but the resulting trees are slightly unbalanced and are less sensitive to additivity. Moreover it should be faster to train, at the expense of a slightly higher training error.
• min_samples_leaf: it puts a constraint on the number of samples in each leaf, hence it reduces the effect of outliers (you cannot have, for example, a leaf with a single sample).
Regularization
Controlling the number of boosting iterations is also problem dependent. Each iteration reduces the training error, so that given a sufficient number of iterations the training error can be made arbitrarily small. However this can cause overfitting, thus there is an optimal number of iterations that must be found. There are also other ways to perform regularization. Let's review the main parameters in this area:
• shrinkage:
• n_estimators: The number of boosting stages to perform (default=100). It is the main parameter to control regularization.
• learning_rate: this controls the Shrinkage, which is another form of regularization. It's a scale factor applied to the tree predictions. The default is $0.1$. A decrease in learning_rate (an increase in Shrinkage) has a "concept reinforcement" effect: the redundancy between trees increases. A model with high Shrinkage usually requires more trees, but shows a much lower variance.
• subsampling:
• subsample: Choosing subsample $< 1.0$ leads to a reduction of variance and an increase in bias. It's the fraction of samples to be used for fitting the individual base learners. This parameter and max_features are similar to the ones used in Random Forest and serve the same purpose: to introduce randomization and improve on variance.
• max_features: The number of features to consider when looking for the best split. The lower it is, the greater the reduction of variance, but also the greater the increase in bias. As for Random Forest: int $\to$ number of features to use, float $\to$ percentage of features to use, auto $\to$ $\sqrt{n\_feat}$, none $\to$ all features.
Other parameters include the choice of the loss function: several loss functions can be used, as specified with the parameter loss.
GBRT also allows computing an OOB estimate from the samples not included in the bootstrap sample. The OOB scores are stored in the attribute oob_improvement_: oob_improvement_[i] holds the improvement in loss at iteration i on the out-of-bag samples, and can be used for model selection, for example to set the optimal number of iterations. OOB estimates are usually very pessimistic with respect to cross-validation, but the latter is too time-consuming.
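For example (subsample must be below 1.0, otherwise no samples are left out of bag and the estimate is not meaningful):

```python
import numpy as np
from sklearn import datasets, ensemble

X, y = datasets.make_hastie_10_2(n_samples=2000, random_state=0)
# subsample < 1.0 is required for the OOB estimate to exist
gbc = ensemble.GradientBoostingClassifier(n_estimators=200, subsample=0.5,
                                          random_state=0)
gbc.fit(X, y)
# cumulative OOB improvement; its argmax gives a (pessimistic) estimate
# of the optimal number of boosting iterations
best_n = int(np.argmax(np.cumsum(gbc.oob_improvement_))) + 1
print("suggested n_estimators:", best_n)
```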
The scikit-learn implementation also offers an additional parameter warm_start=True that allows adding more trees to an existing model.
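A minimal sketch of the warm-start mechanism:

```python
from sklearn import datasets, ensemble

X, y = datasets.make_hastie_10_2(n_samples=2000, random_state=0)
gbc = ensemble.GradientBoostingClassifier(n_estimators=100, warm_start=True,
                                          random_state=0)
gbc.fit(X, y)
gbc.n_estimators = 150   # raise the target while keeping the 100 fitted trees
gbc.fit(X, y)            # only the 50 additional stages are trained
print("total stages:", len(gbc.estimators_))
```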
### 5.1 Gradient Boosting Parameter Tuning¶
In the following examples we show how each meta-parameter affects GBRT performance.
In this example we use a synthetic dataset used in the book Elements of Statistical Learning by Hastie et al. and available in scikit-learn. The function make_hastie_10_2 generates a dataset for classification. The features $X_1, \ldots, X_{10}$ are standard independent Gaussians and the target function is defined by:
$$Y = \begin{cases} 1 & if \sum_{j=1}^{10} X_j^2 > \chi_{10}^2(0.5),\\ -1 & otherwise. \end{cases}$$
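The same target can be generated by hand, which makes the definition concrete ($\chi_{10}^2(0.5)$ is the median of a chi-squared variable with 10 degrees of freedom, roughly 9.34):

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.RandomState(42)
X = rng.standard_normal((1000, 10))       # standard independent Gaussians
threshold = chi2.ppf(0.5, df=10)          # chi-squared median, ~9.34
y = np.where((X ** 2).sum(axis=1) > threshold, 1.0, -1.0)
print("fraction of +1 labels: %.2f" % (y == 1).mean())  # ~0.5 by construction
```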
In [11]:
X_hastie, y_hastie = datasets.make_hastie_10_2(n_samples=12000, random_state=42)
X_hastie = X_hastie.astype(np.float32)
# map labels from {-1, 1} to {0, 1}
labels, y_hastie = np.unique(y_hastie, return_inverse=True)
X_hastie_train, X_hastie_test = X_hastie[:2000], X_hastie[2000:]
y_hastie_train, y_hastie_test = y_hastie[:2000], y_hastie[2000:]
Tree size
In the following examples we will try to understand how tree size parameters affects the resulting tree shape and then we will see how tree size affects GBRT error varying the number of estimators.
Let's see how max_leaf_nodes and max_depth affect the resulting tree structure. First we choose max_leaf_nodes = 4 and later we choose max_depth = 2. If max_leaf_nodes is specified, max_depth is ignored; the default value of max_leaf_nodes is None and in that case only max_depth is used. Despite having the same number of leaves (4), the trees are slightly different.
In the first example (max_leaf_nodes = 4) the tree is grown in a greedy best-first fashion, at each split the node with the highest impurity is chosen to be further split while the node with lower impurity becomes a leaf. The resulting tree is unbalanced with leaves at every level.
In [12]:
gbrt = ensemble.GradientBoostingClassifier(n_estimators=1000,
max_leaf_nodes=4,
random_state=42)
gbrt.fit(X_hastie_train, y_hastie_train)
y_gbrt = gbrt.predict(X_hastie_test)
print("F1 score: {0}".format(metrics.f1_score(y_gbrt, y_hastie_test)))
F1 score: 0.9279123998783332
In [13]:
idx = 1
dot_data = StringIO()
tree.export_graphviz(gbrt.estimators_[idx][0], out_file=dot_data)
graph = graph_from_dot_data(dot_data.getvalue())[0]
try:
    tree_image = Image(graph.create_png())
except:
    tree_image = Image("images/temp2.png")
tree_image
Out[13]:
In the following example, specifying the depth implies that every internal node is expanded until the desired depth is reached. The resulting tree is balanced with $2^d$ leaves, where $d$ is the depth of the tree.
In [14]:
gbrt = ensemble.GradientBoostingClassifier(n_estimators=1000,
max_depth=2,
random_state=42)
gbrt.fit(X_hastie_train, y_hastie_train)
y_gbrt = gbrt.predict(X_hastie_test)
print("F1 score: {0}".format(metrics.f1_score(y_gbrt, y_hastie_test)))
F1 score: 0.9214766469508465
In [15]:
idx = 1
dot_data = StringIO()
tree.export_graphviz(gbrt.estimators_[idx][0], out_file=dot_data)
graph = graph_from_dot_data(dot_data.getvalue())[0]
try:
    tree_image = Image(graph.create_png())
except:
    tree_image = Image("images/temp3.png")
tree_image
Out[15]:
The difference between the two trees is really subtle. One can argue that the first tree looks like it was pruned. Allowing unbalance in the tree-growing process can result in a tree that better follows the interaction of the variables (a variable that interacts with another results in a split on the first, followed by a split on the second in only one of the two branches). In contrast, allowing a larger number of leaves (the second tree) can potentially lead to a smoother function and a tree with leaves that are more pure.
The picture below demonstrates the effect of interaction order on this dataset. Since the simulated data is additive (a sum of quadratic monomials), using an interaction order $>2$ should create unnecessary variance and perform poorly as the number of iterations increases. As can be seen, an interaction of order $2$ improves the test error, once a sufficient number of trees is built. This behavior can be explained by noting that GBRT is additive in nature, just like the function that creates the target. Constraining GBRT to use shallow trees forces the algorithm to mimic the generative function: it constructs the first tree with only a single variable, then the next tree fits the residuals (i.e. the other variables). Allowing the trees to be deeper, each tree captures a more complex model (high variance), resulting in overfitting for lower numbers of boosting iterations.
In [16]:
original_params = {'n_estimators': 1000,
'random_state': 42}
#TOOLS = "pan,box_zoom,reset,save,box_select"
fig = bk.figure(plot_width=700,
plot_height=500,
title="Tree size")
# tools=TOOLS)
for label, color, setting in [('Depth 2', 'green',
{'max_depth': 2}),
('Depth 6 ', 'turquoise',
{'max_depth': 6}),
('Depth 10', 'magenta',
{'max_depth': 10})]:
    params = dict(original_params)
    params.update(setting)
    clf1 = ensemble.GradientBoostingClassifier(**params)
    clf1.fit(X_hastie_train, y_hastie_train)
    test_error = np.zeros((params['n_estimators'],), dtype=np.float64)
    for i, y_hastie_pred in enumerate(clf1.staged_decision_function(X_hastie_test)):
        # clf1.loss_ assumes that y_test[i] is in {0, 1}
        test_error[i] = clf1.loss_(y_hastie_test, y_hastie_pred)
    # plot the test loss (not train_score_, which is in-sample)
    fig.line((np.arange(test_error.shape[0]) + 1)[::5],
             test_error[::5],
             color=color,
             legend=label)
fig.xaxis.axis_label = "Number of Estimators"
fig.xaxis.axis_label_text_font_size = '11pt'
fig.yaxis.axis_label = "Test Error"
fig.yaxis.axis_label_text_font_size = '11pt'
bk.show(fig)
Regularization
Illustration of the effect of different regularization strategies for Gradient Boosting. The loss function used is binomial deviance. Regularization via shrinkage (learning_rate $< 1.0$) improves performance considerably. In combination with shrinkage, stochastic gradient boosting (subsample $< 1.0$) can produce more accurate models by reducing the variance via bagging. Subsampling without shrinkage usually does poorly. Another strategy to reduce the variance is by subsampling the features analogous to the random splits in Random Forests (via the max_features parameter).
In [17]:
original_params2 = {'n_estimators': 1000,
'max_leaf_nodes': 4,
'max_depth': None,
'random_state': 42,
'min_samples_split': 5}
fig = bk.figure(plot_width=700,
plot_height=500,
title="Regularization")
for label, color, setting in [('No shrinkage', 'orange',
{'learning_rate': 1.0, 'subsample': 1.0}),
('learning_rate=0.1', 'turquoise',
{'learning_rate': 0.1, 'subsample': 1.0}),
('subsample=0.5', 'blue',
{'learning_rate': 1.0, 'subsample': 0.5}),
('learning_rate=0.1, subsample=0.5', 'gray',
{'learning_rate': 0.1, 'subsample': 0.5}),
('learning_rate=0.1, max_features=2', 'magenta',
{'learning_rate': 0.1, 'max_features': 2})]:
    params2 = dict(original_params2)
    params2.update(setting)
    clf2 = ensemble.GradientBoostingClassifier(**params2)
    clf2.fit(X_hastie_train, y_hastie_train)
    # compute test set deviance
    test_deviance = np.zeros((params2['n_estimators'],), dtype=np.float64)
    for i, y_hastie_pred in enumerate(clf2.staged_decision_function(X_hastie_test)):
        # clf2.loss_ assumes that y_test[i] is in {0, 1}
        test_deviance[i] = clf2.loss_(y_hastie_test, y_hastie_pred)
    fig.line((np.arange(test_deviance.shape[0]) + 1)[::5],
             test_deviance[::5],
             color=color,
             legend=label)
fig.xaxis.axis_label = "Boosting Iterations"
fig.xaxis.axis_label_text_font_size = '10pt'
fig.yaxis.axis_label = "Test Set Deviance"
fig.yaxis.axis_label_text_font_size = '10pt'
bk.show(fig)
Hyperparameter Tuning: This is a possible approach to tune hyperparameters in Gradient Boosting Regression Trees. Note that it is a difficult task and there is no unique way to do it.
1. set n_estimators to a high value
2. tune hyperparameters via grid search
3. finally set n_estimators even higher and tune learning_rate
Step 1 and 2: do a grid search by using the maximum number of n_estimators and tune other hyperparameter
In [18]:
gtb = ensemble.GradientBoostingClassifier(n_estimators = 1000)
params = {'max_depth':[4, 6],
'min_samples_leaf':[3, 5, 9],
'learning_rate':[0.1, 0.05, 0.02],
'subsample':[0.5, 1.0],
'max_features':[2,3,4]}
t0 = time()
grid = model_selection.GridSearchCV(gtb, params, n_jobs=-1)
grid.fit(df_train, y_train)
gtb_best = grid.best_estimator_
print('Done in %0.3f[s]' %(time() - t0))
Done in 36.034[s]
In [19]:
for key, value in grid.best_params_.items():
print(key.ljust(20), ':', value)
print('F1 Score'.ljust(20), ': %.3f' %(metrics.f1_score(gtb_best.predict(df_valid),
y_valid, average='micro')))
print('\nConfusion Matrix:')
print(metrics.confusion_matrix(gtb_best.predict(df_valid), y_valid))
learning_rate : 0.1
max_depth : 4
max_features : 4
min_samples_leaf : 3
subsample : 1.0
F1 Score : 0.957
Confusion Matrix:
[[8 0 0]
[0 7 0]
[0 1 7]]
Step 3: increase n_estimators and fine tune learning_rate:
In [20]:
gtb = ensemble.GradientBoostingClassifier()
params = {'n_estimators':[3000],
'max_depth':[6],
'min_samples_leaf':[9],
'learning_rate':[0.05, 0.02, 0.01, 0.005],
'subsample':[0.5],
'max_features':[2]}
t0 = time()
grid = model_selection.GridSearchCV(gtb, params, cv=7, n_jobs=-1)
grid.fit(df_train, y_train)
gtb_best = grid.best_estimator_
print('Done in %0.3f[s]' %(time() - t0))
Done in 14.628[s]
In [21]:
for key, value in grid.best_params_.items():
print(key.ljust(20), ':', value)
print('F1 Score'.ljust(20), ': %.3f' %(metrics.f1_score(gtb_best.predict(df_valid),
y_valid, average='micro')))
print('\nConfusion Matrix:')
print(metrics.confusion_matrix(gtb_best.predict(df_valid), y_valid))
learning_rate : 0.05
max_depth : 6
max_features : 2
min_samples_leaf : 9
n_estimators : 3000
subsample : 0.5
F1 Score : 0.957
Confusion Matrix:
[[8 0 0]
[0 7 0]
[0 1 7]]
GBRT typical properties:
• PROS:
• Natural handling of data of mixed type (= heterogeneous features)
• Predictive power.
• Robustness to outliers in input space (via robust loss functions)
• Support for different Loss functions.
• Automatically detects non-linear feature interactions
• Naturally fits additive functions.
• scikit-learn implementation supports warm start; it is possible to add additional estimators to an already fitted model.
• CONS:
• Requires careful tuning (RF are faster to tune, they use essentially one parameter)
• Slow to train (but fast in prediction)
• Cannot extrapolate (it is not possible to predict beyond the minimum and maximum limits of the response variable in the training data, common to many Machine Learning algorithms).
• Scalability issue: due to its sequential nature it is hard to parallelize.
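The extrapolation limitation is easy to demonstrate on a toy linear target (a small sketch, not from the original notebook):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

X = np.arange(100, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel()                   # a simple linear target

gbr = GradientBoostingRegressor(n_estimators=200, random_state=0)
gbr.fit(X, y)

p_inside = gbr.predict([[50.0]])[0]   # interior point: fitted well
p_edge = gbr.predict([[99.0]])[0]     # largest training point
p_far = gbr.predict([[1000.0]])[0]    # far outside the training range
# any x beyond the data traverses the same leaves as the largest
# training point, so the prediction saturates instead of extrapolating
print(p_inside, p_edge, p_far)
```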
Visit www.add-for.com for more tutorials and updates. | 2022-05-24 03:10:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5982362031936646, "perplexity": 2173.520129044301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00547.warc.gz"} |
# Huygens' principle and laser wave dispersion
1. Mar 22, 2008
### Ulysees
According to wikipedia below:
> Huygen's principle states that each point of an advancing wave front is in fact the center of a fresh disturbance and the source of a new train of waves; and that the advancing wave as a whole may be regarded as the sum of all the secondary waves arising from points in the medium already traversed.
http://en.wikipedia.org/wiki/Photon_dynamics_in_the_double-slit_experiment [Broken]
Then how is it possible that laser light moves in a narrow beam with very small dispersion sideways?
Can radiowaves be produced to be like that? Cause that would make an extremely high directivity antenna for satellite communications etc.
Last edited by a moderator: May 3, 2017
2. Mar 24, 2008
### Andy Resnick
Laser beams disperse just like any other beam. Perhaps you are wondering how any well-collimated beam can exist, given Huygens' principle?
3. Mar 24, 2008
### Ulysees
Sure. It was mentioned:
Exactly. In fact I'd like to simulate it with sound waves if possible. Maybe with a large number of speakers driven differently or something.
4. Mar 24, 2008
### Staff: Mentor
A laser beam that is (say) 1 mm wide is still about ten thousand times wider than a single wavelength, which is on the order of $10^{-7}$ m.
5. Mar 24, 2008
### Ulysees
So what happens at the edge then? If we illuminate a camera chip directly with a laser beam, what does the edge look like?
6. Mar 24, 2008
### Staff: Mentor
7. Mar 24, 2008
### Ulysees
Alright. It's hard to imagine how, with all that smoothness at the edge, the beam still maintains its direction.
Does the Huygens principle apply to sound waves too? Can the same directionality be achieved with sound waves?
8. Mar 25, 2008
### Andy Resnick
FWIW, you are in good company, some of the finest scientific minds in history have written explanations of this. I have been trying to think of a simple way to explain this- here goes:
First, consider an abstract representation of the source- a hole in an opaque screen, located at z = 0. You have no idea what happens at z < 0, all you know is that at z = 0, the intensity is 1 in the hole and 0 everywhere else. If that's all there was, then indeed, you would expect the light to wildly diverge at z>0. But it doesn't- why?
Because light is a *vector* field rather than a scalar field. The field in the hole is not simply 1- it's got a direction associated with it. Intuitively, you can see that if the vectors all point along z, then the light should continue to go along z. If the vectors point 'out', then the divergence will be large (or conversely, can converge). If you like, it's possible to work in funny coordinates that take this into account- look up the "Rayleigh length" for a Gaussian beam.
The edge is a problem- the field is discontinuous, which implies that something diverged. Since infinities do not actually exist, what this means is that the field 'spreads out' in order to maintain a smooth function.
In practical terms, to make a perfectly collimated beam, the beam diameter must be infinitely large. Gaussian beams (beams from laser cavities) can be made well-collimated by passing them through a spatial filter (to make a pinhole source), and then to a lens. *But*, the best performance is when the pinhole clips the Gaussian near the zeros, not arbitrarily in the middle. This is to minimize the 'spreading' effect at the edge (which I now call diffraction).
Does this help?
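As a rough order-of-magnitude check of the numbers discussed in this thread (an editorial aside, not part of the original exchange; the wavelength and waist are assumed typical HeNe values): for a Gaussian beam the far-field divergence half-angle is $\theta = \lambda/(\pi w_0)$ and the Rayleigh length is $z_R = \pi w_0^2/\lambda$.

```python
import math

wavelength = 633e-9   # HeNe laser, metres (assumed value)
w0 = 0.5e-3           # beam waist radius, metres (typical raw beam)

z_R = math.pi * w0 ** 2 / wavelength    # Rayleigh length
theta = wavelength / (math.pi * w0)     # far-field divergence half-angle

print("Rayleigh length: %.2f m" % z_R)
print("divergence: %.1e rad, i.e. about %.1f mm per metre" % (theta, theta * 1e3))
```

This lands in the same ballpark as the "1 mm per metre" raw-beam divergence mentioned below, which is why a millimetre-wide beam can look perfectly straight over a room.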
9. Mar 25, 2008
### Ulysees
Is a Gaussian beam one with a Gaussian density function along a line perpendicular to its direction?
An example of a spatial filter that makes a pinhole source? Is a small hole through an opaque sheet an example? Is yes, wouldn't passing the beam through the hole make it diverge a lot?
That makes sense. If a beam diverges 1 mm per metre, then a lens with a focal distance of 1 metre should make a 1 mm-wide beam perfectly parallel?
I don't understand this bit. Do you mean that illuminating a pinhole with the very edge of the beam, would be ideal?
Last edited: Mar 25, 2008
10. Mar 26, 2008
### Andy Resnick
Ulysees,
Let's go through point-by-point.
1)Is a Gaussian beam one with a Gaussian density function along a line perpendicular to its direction?
Yes, the beam within an optical cavity behaves as Hermite-Gaussian (or Laguerre-Gaussian) functions for the transverse field density. A TEM00 beam has a Gaussian ($\sim \exp(-r^2)$) dependence.
2)An example of a spatial filter that makes a pinhole source? Is a small hole through an opaque sheet an example? Is yes, wouldn't passing the beam through the hole make it diverge a lot?
A proper spatial filter is a microscope objective focused to a pinhole. And yes, the beam can diverge a lot: the numerical aperture of the microscope objective sets the angle.
3) That makes sense. If a beam diverges 1 mm per metre, then a lens with a focal distance of 1 metre should make a 1 mm-wide beam perfectly parallel?
Well, usually the NA is closer to 0.5, so the divergence is much higher. But the raw beam coming out of a laser has a rough divergence of 1 mm/meter (whatever that is in radians). The 'collimated' beam will not be perfectly parallel in either case, because the source (pinhole or exit pupil) has a size. Using smaller pinholes will in the end give a better collimation.
4) I don't understand this bit. Do you mean that illuminating a pinhole with the very edge of the beam, would be ideal?
No, the pinhole needs to be correctly sized. It's always centered on the beam axis. But the size of the pinhole should be matched to the size of the focused beam- specifically, the size should be equal to the size of the central peak of the Airy disk.
11. Mar 26, 2008
### Ulysees
If we imagine a long, thin and straight tube made of light-absorbent material with a light source at one end, will the light coming out the other end be as collimated as the ratio diametre/length suggests?
I think not, but do you know the maths for it, any analytical solution how divergent the output beam would be?
If this works, it could be done with sound too, and make a sound laser.
Last edited: Mar 26, 2008
12. Mar 26, 2008
### Ulysees
And by the way, no need to say laser is monochromatic or coherent, sound can have these properties easily.
13. Mar 27, 2008
### Andy Resnick
Definitely not. The tube is not relevant- the only thing that matters for the propagation of light once it leaves the tube are the field values at the end face of the tube.
"light pencils" cannot exist. That is a fundamental conceptual error unfortunately aided by geometrical optics. A related conceptual error is the concept of "combustion from afar" (Archimedes legend). The two errors are:
1) parallel rays do not exist
2) even if they did, they could not carry energy.
If you can find it, an excellent read is "On the possible and impossible in optics", by G. G. Slyusarev (1962). It's a DTIC document, FTD-TT-62-175 (accession number 886286, I think).
I appreciate the idea of manipulating coherent sound- there's been some interesting work (odd overlap between military and advertising interests) with directional broadcast techniques:
http://www.temple.edu/ispr/examples/ex02_06_23.html [Broken]
Last edited by a moderator: May 3, 2017
14. Mar 27, 2008
### Ulysees
Which is as was expected, but what are the equations here? I'd like to know the maths by which we derive the intensity profile below.
http://img262.imageshack.us/img262/4459/image1ls6.gif [Broken]
The flat part I can do from geometric optics: it has a width given by:
$$w = D(L+d)/L$$
How do we derive the part due to diffusion, above and below the flat part? I think it can only be done numerically with a method like TLM. But do you know an analytical solution? Maybe something based on Huygens' principle, so we get a chance to understand how the principle accounts for collimated waves?
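The purely geometric part can be checked in a few lines; the wave-optics skirt needs the treatment discussed later in the thread. A minimal sketch of the width formula above, plus the opening angle implied by it (the dimensions are made-up illustrative values):

```python
import math

def geometric_width(D, L, d):
    """Width of the geometric (fully illuminated) part of the profile:
    a source of width D at one end of a tube of length L, observed on a
    screen a distance d past the exit -- w = D * (L + d) / L."""
    return D * (L + d) / L

def full_angle(D, L):
    """The flat core widens at a rate D/L per unit distance, i.e. a
    full geometric opening angle of about atan(D/L) radians."""
    return math.atan(D / L)

# Example: 1 mm source, 1 m tube, screen 2 m past the exit.
w = geometric_width(1e-3, 1.0, 2.0)
theta = full_angle(1e-3, 1.0)
print(f"geometric width: {w*1e3:.1f} mm, opening angle: {theta*1e3:.2f} mrad")
```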
Last edited by a moderator: May 3, 2017
15. Mar 27, 2008
### Claude Bile
Firstly, I think you mean diffraction (not diffusion).
It is possible to get a solution analytically by solving the scalar wave equation and using the paraxial approximation $d^2E/dz^2 = 0$. You do however need to make some assumptions about the solution (that it is beam-like for example). Note too that since the solution is a scalar function, it does not take into account vector effects such as polarisation.
Claude.
16. Mar 28, 2008
### Andy Resnick
I don't understand the picture- there's a point source within a long tube, the interior walls of which are covered by absorbent material?
Usually, when an incoherent source is used, people work in terms of the intensity instead of the field. In this case, sometimes people speak of the 'diffusion' of energy (intensity) rather than diffraction (field). They are similar concepts, but lead to very different and surprising results: for example, using incoherent light when imaging can provide twice the resolution over using coherent illumination.
17. Mar 28, 2008
### Ulysees
Exactly. The curved edges of the beam that are labeled "diffusion" are not rays of light, they are just the edges beyond which the intensity is below a threshold.
Claude, I definitely didn't mean diffraction, diffraction requires different materials, eg glass and air, or a varying diffraction index causing the rays to bend. "Diffusion" is not accurate either, because it is normally used for the scattering due to air molecules. I want to understand the spreading due to the wave nature of light, in a perfect vacuum. What is that spreading called and how is it calculated?
Thanks. But what is that paraxial approximation? Looks like assuming the E field varies linearly along the axis of the beam:
$E = az + b$
at x=0, y=0
What equation would you be solving analytically under this assumption? Maxwell's?
Last edited: Mar 28, 2008
18. Mar 30, 2008
### Claude Bile
I think you do mean diffraction (it sounds like you are confusing diffraction with refraction?).
With regard to the paraxial wave approximation (I made a slight error in my previous post by the way...);
First, if you assume a monochromatic wave, the wave equation (which can be derived from Maxwell's equations) reduces to the Helmholtz equation;
$$\nabla^2E+k^2E=0$$
Now define an envelope function, U such that;
$$E(r) = U(r)e^{ikz}$$
Substituting this into the Helmholtz equation yields;
$$\frac{d^2U}{dx^2}e^{ikz} + \frac{d^2U}{dy^2}e^{ikz} + \frac{d^2U}{dz^2}e^{ikz} + 2ik\frac{dU}{dz}e^{ikz}=0$$
The paraxial approximation is that the $d^2U/dz^2$ term reduces to 0 (not the $d^2E/dz^2$ term as I mistakenly said in my previous post). This approximation is, in essence, saying that since the solution is beam-like, the envelope is slowly varying with z.
This yields the Paraxial Wave Equation;
$$\frac{d^2U}{dx^2}e^{ikz} + \frac{d^2U}{dy^2}e^{ikz} + 2ik\frac{dU}{dz}e^{ikz}=0$$
The Gaussian Beam is a solution to this equation.
EDIT: The derivatives should be partial derivatives.
Claude.
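One way to convince yourself that the Gaussian beam really solves this equation is a finite-difference check on the complex q-parameter form of the envelope, $U = e^{ikr^2/2q}/q$ with $q(z) = z + iz_0$, which is equivalent (up to a constant) to the closed form quoted later in the thread. A minimal sketch; the HeNe-like wavelength and waist are illustrative values, not from the thread:

```python
import cmath, math

wavelength = 633e-9                  # HeNe line (illustrative)
k = 2 * math.pi / wavelength
w0 = 1e-3                            # 1 mm beam waist (illustrative)
z0 = math.pi * w0**2 / wavelength    # Rayleigh range, about 4.96 m

def U(x, y, z):
    """Gaussian-beam envelope in complex-q form: exp(ik r^2 / 2q) / q."""
    q = z + 1j * z0
    return cmath.exp(1j * k * (x * x + y * y) / (2 * q)) / q

# Central finite differences at an off-axis point; the paraxial wave
# equation Uxx + Uyy + 2ik Uz = 0 should be satisfied to truncation error.
x, y, z = 5e-4, 5e-4, 1.0
h, hz = 1e-5, 1e-3
Uxx = (U(x + h, y, z) - 2 * U(x, y, z) + U(x - h, y, z)) / h**2
Uyy = (U(x, y + h, z) - 2 * U(x, y, z) + U(x, y - h, z)) / h**2
Uz = (U(x, y, z + hz) - U(x, y, z - hz)) / (2 * hz)

residual = Uxx + Uyy + 2j * k * Uz
rel = abs(residual) / abs(2j * k * Uz)
print(f"relative residual: {rel:.2e}")  # small -> equation satisfied
```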
19. Mar 30, 2008
### Ulysees
That's what I was looking for, thank you.
So if instead you do not assume the envelope U(r) varies linearly along z, you get something close to a Gaussian U(r) which can only be derived numerically.
It seems strange to me that the paraxial approximation is close to the real thing. Isn't the approximation saying that the envelope U(r) does NOT approach zero at z=infinity asymptotically, but linearly, like below?
U(r)=(az+b)G(r),
G(r)=Gaussian
Do actual light beams look more like an inverse square function of z as we approach infinity?
Last edited: Mar 30, 2008
20. Mar 31, 2008
### Claude Bile
Well, the paraxial approximation is essentially saying that a beam diverges at a fixed angle for large z, which is a fairly accurate approximation.
If you assume a solution of the form (noting that $r$ is the distance from the beam axis, $r^2 = x^2 + y^2$, not the position vector);
$$U(x,y,z) = E_0e^{iQ(z)r^2}e^{iP(z)}$$
The Gaussian solution can be derived analytically. The actual solution is of the form;
$$U(x,y,z) = E_0\frac{w_0}{w(z)}e^{-i\arctan(z/z_0)}e^{-r^2/w^2(z)}e^{ikr^2/2R(z)}$$
Where;
$E_0$ is the E-field amplitude.
$w_0$ is the spot radius at the beam waist.
$w(z)$ is the spot radius as a function of distance.
$z_0$ is the Rayleigh Range (or the Confocal parameter) of the beam.
$R(z)$ is the Radius of Curvature of the beam as a function of distance, where;
$$R(z) = \frac{1}{z}(z^2+z_0^2)$$
The 1st exponent is the Gouy (longitudinal) phase term, the 2nd exponent is the transverse beam amplitude and the 3rd exponent is the transverse phase term. Also, note that there is some curvature in the beam profile, however it is very small, which is why the $d^2U/dz^2$ term is assumed to be negligible.
Claude.
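The quantities in this solution can be tabulated in a few lines. A sketch with illustrative HeNe-like numbers; the relations $w(z) = w_0\sqrt{1+(z/z_0)^2}$ and far-field half-angle $\theta = \lambda/(\pi w_0)$ are the textbook Gaussian-beam formulas, not stated explicitly above:

```python
import math

wavelength = 633e-9                  # HeNe (illustrative)
w0 = 1e-3                            # beam-waist radius, 1 mm (illustrative)
z0 = math.pi * w0**2 / wavelength    # Rayleigh range

def w(z):
    """Spot radius w(z) = w0 * sqrt(1 + (z/z0)^2)."""
    return w0 * math.sqrt(1 + (z / z0) ** 2)

def R(z):
    """Wavefront radius of curvature R(z) = (z^2 + z0^2) / z."""
    return (z * z + z0 * z0) / z

def gouy(z):
    """Longitudinal (Gouy) phase, arctan(z/z0)."""
    return math.atan(z / z0)

theta = wavelength / (math.pi * w0)  # far-field divergence half-angle
print(f"z0 = {z0:.2f} m, divergence = {theta*1e3:.3f} mrad")
print(f"w(z0)/w0 = {w(z0)/w0:.4f}, R(z0) = {R(z0):.2f} m")
```

At the Rayleigh range the spot has grown by a factor sqrt(2) and the wavefront curvature radius reaches its minimum, 2*z0.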
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=BSSHB5_2015_v9n7_439
Title & Authors
The Fabrication and Property Evaluation of Poly-crystalline CdTe based Photon Counting X-ray Sensor
Kang, Sang Sik; Park, Ji Koon;
Abstract
The electrical signals of a conventional medical radiation imaging sensor are obtained by the charge-integration method. In this study, a polycrystalline cadmium telluride (p-CdTe) film was fabricated by thermal evaporation to develop a photon-counting sensor with excellent resolution at low exposure dose. The physical properties (SEM, XRD) and the electrical properties (leakage current, X-ray sensitivity, SNR) of the fabricated p-CdTe sensor were evaluated. As a result, a leakage current below $5\,\mathrm{nA/cm^2}$ and an X-ray sensitivity of $7\,\mu\mathrm{C/cm^2{\cdot}R}$ were obtained below $1\,\mathrm{V/\mu m}$. In addition, the signal-to-noise ratio exceeded 5000 at the operating voltage.
Keywords
photon counting;polycrystalline;cadmium telluride;charge collection efficiency;thermal evaporation;
Language
Korean
https://www.academic-quant-news.com/2021/04/research-articles-for-2021-04-12.html | Research articles for the 2021-04-12
A Fast Evidential Approach for Stock Forecasting
Tianxiang Zhan,Fuyuan Xiao
arXiv
In the framework of evidence theory, data fusion combines the confidence functions of multiple information sources into a combined confidence function. Stock price prediction is a central concern of economics, and stock price forecasts can provide reference data. The Dempster combination rule is a classic method for fusing information from different sources. By using the Dempster combination rule with confidence functions based on the entire time series, fused at each time point and at future time points, together with the preliminary forecast value obtained through the temporal relationship, an accurate forecast value can be recovered. This article introduces this evidence-theoretic prediction method. The method has good running performance, can respond rapidly to large amounts of stock price data, and has far-reaching significance.
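For readers unfamiliar with the Dempster combination rule mentioned above, a minimal sketch (the two mass functions are made-up illustrative values, not from the paper): basic probability assignments over subsets of a frame of discernment are combined by intersecting focal elements and renormalizing away the conflicting mass.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose keys are
    frozensets (focal elements) and whose values are masses summing to 1."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb       # mass assigned to the empty set
    norm = 1.0 - conflict                 # renormalization constant
    return {c: mass / norm for c, mass in combined.items()}

# Two sources over the frame {up, down}: one leans "up", one leans "down".
theta = frozenset({"up", "down"})
m1 = {frozenset({"up"}): 0.6, theta: 0.4}
m2 = {frozenset({"down"}): 0.7, theta: 0.3}
m = dempster_combine(m1, m2)
for focal, mass in sorted(m.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 4))
```

Here the conflicting mass 0.6 * 0.7 = 0.42 is discarded and the rest rescaled, so the combined masses still sum to one.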
A New Macro-Financial Condition Index for the Euro Area
Morana, Claudio
SSRN
In this paper, we introduce a new time-domain decomposition for weakly stationary or trend stationary processes, based on trigonometric polynomial modelling of the underlying component of an economic time series. The method is explicitly devised to disentangle medium to long-term and short-term fluctuations in macroeconomic and financial series, in order to accurately measure the financial cycle and the concurrent long swings in economic activity. The implementation of this decomposition is straightforward and relies on standard regression analysis and general to specific model reduction. Full support for the proposed method is provided by Monte Carlo simulation. In the paper, we also provide a multivariate extension, involving sequential univariate decompositions and Principal Components Analysis. Based on this multivariate approach, we introduce a set of new composite indexes of macro-financial conditions for the euro area and assess their information content. In particular, with reference to the current pandemic, the indicators suggest that most of the GDP contraction has been of short-term, cyclical nature. This is likely due to the prompt monetary and fiscal policy responses. Yet our evidence suggests that the financial cycle might have currently reached a peak area. Hence, the risk for further, deeper disruptions is high, particularly if a new sovereign/corporate debt crisis is not ultimately avoided.
Cobandag Guloglu, Zeynep,Ekinci, Cumhur
SSRN
We analyze the performance of five different methods appearing in the market microstructure literature in predicting effective and quoted bid-ask spreads (Roll, LOT Mixed, Effective Tick, High-Low and Closing Percent Quoted Spread proxies). With data from index futures, currency futures and gold futures traded in Borsa Istanbul and taking percent effective and percent quoted spreads obtained from intraday trade and quote data as benchmarks, we calculate and compare the correlations and root mean square errors of the spread measures. Results show that none of the proxies is successful enough in estimating effective or quoted spread although under normal market conditions, Effective Tick appears to perform best.
Pascal Michaillat,Emmanuel Saez
arXiv
This paper develops a new model of business cycles. The model is economical in that it is solved with an aggregate demand-aggregate supply diagram, and the effects of shocks and policies are obtained by comparative statics. The model builds on two unconventional assumptions. First, producers and consumers meet through a matching function. Thus, the model features unemployment, which fluctuates in response to aggregate demand and supply shocks. Second, wealth enters the utility function, so the model allows for permanent zero-lower-bound episodes. In the model, the optimal monetary policy is to set the interest rate at the level that eliminates the unemployment gap. This optimal interest rate is computed from the prevailing unemployment gap and monetary multiplier (the effect of the nominal interest rate on the unemployment rate). If the unemployment gap is exceedingly large, monetary policy cannot eliminate it before reaching the zero lower bound, but a wealth tax can.
An algorithm for the pricing and timing of the option to make a two-stage investment with loan guarantees
Dong, Linjia,Yang, Zhaojun
SSRN
We develop a jump-diffusion model for a guarantee-investment combination financing mode (G-I mode) that is recently popular in financial practice. We assume that a borrower has exclusively an option to invest in a project in two stages. The project's cash flow follows a double exponential jump-diffusion process and it is increased by a growth factor once the second-stage investment is exercised. The first-stage investment cost is financed by a bank loan with the guarantee provided by an insurer, who promises to provide the second-stage investment cost as well as take the lender's all default losses. In return for the guarantee and investment, the borrower pays the insurer a guarantee fee upon first investment and a fraction of equity upon second investment. The fraction of equity depends on the uncertain cash flow level when the first investment is exercised, which makes the timing and pricing of the option to invest in a project interesting and challenging. We provide closed-form solutions and produce a numerical algorithm for the timing and pricing of the real option.
Analysis of bank leverage via dynamical systems and deep neural networks
Fabrizio Lillo,Giulia Livieri,Stefano Marmi,Anton Solomko,Sandro Vaienti
arXiv
We consider a model of a simple financial system consisting of a leveraged investor that invests in a risky asset and manages risk by using Value-at-Risk (VaR). The VaR is estimated by using past data via an adaptive expectation scheme. We show that the leverage dynamics can be described by a dynamical system of slow-fast type associated with a unimodal map on [0,1] with an additive heteroscedastic noise whose variance is related to the portfolio rebalancing frequency to target leverage. In absence of noise the model is purely deterministic and the parameter space splits in two regions: (i) a region with a globally attracting fixed point or a 2-cycle; (ii) a dynamical core region, where the map could exhibit chaotic behavior. Whenever the model is randomly perturbed, we prove the existence of a unique stationary density with bounded variation, the stochastic stability of the process and the almost certain existence and continuity of the Lyapunov exponent for the stationary measure. We then use deep neural networks to estimate map parameters from a short time series. Using this method, we estimate the model in a large dataset of US commercial banks over the period 2001-2014. We find that the parameters of a substantial fraction of banks lie in the dynamical core, and their leverage time series are consistent with a chaotic behavior. We also present evidence that the time series of the leverage of large banks tend to exhibit chaoticity more frequently than those of small banks.
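The abstract does not reproduce the actual map, so as a generic illustration of the diagnostics involved, the sketch below iterates the logistic map (a standard unimodal map on [0,1], used purely as a stand-in for the paper's leverage map) and estimates its Lyapunov exponent as the orbit average of log|f'(x)|; the heteroscedastic noise of the paper's model is omitted for brevity:

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100_000, burn=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r x (1 - x)
    as the orbit average of log|f'(x)| = log|r (1 - 2x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

# r = 4: fully chaotic regime, exponent close to ln 2 ~ 0.693.
# r = 3.2: stable 2-cycle, exponent is negative.
print(round(lyapunov_logistic(4.0), 3))
print(lyapunov_logistic(3.2) < 0)
```

A positive estimated exponent is the usual numerical signature of the chaotic behavior the authors report for banks whose parameters lie in the dynamical core.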
Assessing the Impact of COVID-19 on Trade: a Machine Learning Counterfactual Analysis
Marco Dueñas,Víctor Ortiz,Massimo Riccaboni,Francesco Serti
arXiv
By interpreting exporters' dynamics as a complex learning process, this paper constitutes the first attempt to investigate the effectiveness of different Machine Learning (ML) techniques in predicting firms' trade status. We focus on the probability of Colombian firms surviving in the export market under two different scenarios: a COVID-19 setting and a non-COVID-19 counterfactual situation. By comparing the resulting predictions, we estimate the individual treatment effect of the COVID-19 shock on firms' outcomes. Finally, we use recursive partitioning methods to identify subgroups with differential treatment effects. We find that, besides the temporal dimension, the main factors predicting treatment heterogeneity are interactions between firm size and industry.
Assessing the practicability of the condition used for dynamic equilibrium in Pasinetti theory of distribution
A Jayakrishnan,Anil Lal S
arXiv
In this note we assess the condition $K_w/K=S_w/S$ and interpret its meaning for Pasinetti's theory of distribution \cite{pasinetti1962rate}. This condition leads the theory to enforce the result $s_w \rightarrow 0$ as $P_w \rightarrow 0$, which is Pasinetti's description of the behavior of the workers. We find that Pasinetti's claim, that in the long run the workers' propensity to save does not influence the distribution of income between profits and wages, cannot be generalized. This claim is found to be valid only when $W \gg P_w$ or $P_w=0$ with $W \neq 0$. In practice, Pasinetti's condition restricts the actual savings of one of the agents to a level below its full saving capacity. An implied relationship between the propensities to save of workers and capitalists shows that Pasinetti's condition can be practiced only through a contract for a constant value of $R=s_w/s_c$, to be agreed upon between the workers and the capitalists. It is shown that Pasinetti's condition cannot be described as a dynamic equilibrium of economic growth. Implementation of this condition (a) may lead to accumulation of unsaved income, (b) reduces growth of capital, (c) is not practicable and (d) is not warranted. We also present simple mathematical steps for the derivation of Pasinetti's final equation compared to those presented in \cite{pasinetti1962rate}.
Board Committees and Director Departures
Jagannathan, Murali,Krishnamurthy, Srinivasan,Spizman, Joshua D.
SSRN
We examine whether directors utilize private information obtained through their committee memberships to depart from firms prior to the revelation of their poor performance. Such departures raise the concern that directors leave the firm when they are most needed. Utilizing private information to make decisions in their personal interest may also violate the directors' fiduciary duties. We focus on departures of audit committee members since information regarding earnings quality should be available to them prior to public release. The departure of audit committee members who serve on multiple boards is coincident with a deterioration in earnings quality. Other directors do not appear to time their departure based on declines in earnings quality. Results from examining the reasons behind this finding are consistent with the director's preference to lead a "quiet life" and a desire to lower their exposure to litigation risk rather than to protect their reputation in the director market.
Changes in the DJIA: Market Reactions and Impact of Estimation Window
Ryan, Patricia A,Villupuram, Sriram V.
SSRN
Changes in the DJIA from 1929-2019 are examined to evaluate the immediate and long-term market reaction after a component change in the DJIA. Using multiple event study methodologies, there is a clear increase in wealth when a firm is added to the DJIA and a decrease in wealth around the time of deletion from the DJIA. Additions earn positive abnormal returns regardless of estimation window. The choice of estimation window is critical for deletions as we show that this is the reason for the difference in results in the literature. Using a post-estimation window, deletions have a more significant negative wealth effect. Using pre-estimation window, returns are negative post announcement, but not at the announcement. Long term, firms added to the DJIA have positive abnormal returns in the second year after inclusion. Deletions from the DJIA after the Great Depression have negative returns three years after removal thus implying a potential investment opportunity upon DJIA changes.
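The estimation-window sensitivity the authors emphasize can be seen in a toy market-model event study. A stdlib sketch with synthetic return series (not DJIA data): abnormal returns are residuals from an OLS market model fitted on either a pre-event or a post-event window, and a window that overlaps the event contaminates the estimates.

```python
def ols(x, y):
    """Slope and intercept of y on x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return my - beta * mx, beta

def car(stock, market, est_idx, event_idx):
    """Cumulative abnormal return over event_idx, with the market model
    estimated on est_idx (a pre- or post-event window)."""
    alpha, beta = ols([market[i] for i in est_idx],
                      [stock[i] for i in est_idx])
    return sum(stock[i] - (alpha + beta * market[i]) for i in event_idx)

# Synthetic data: stock = 2 * market, plus a one-day +5% jump at t = 10.
market = [0.01 * ((-1) ** t) for t in range(20)]
stock = [2 * m for m in market]
stock[10] += 0.05
pre, post, event = range(0, 10), range(10, 20), [10]
print(round(car(stock, market, pre, event), 4))   # 0.05: clean pre-event window
print(round(car(stock, market, post, event), 4))  # 0.04: jump contaminates the fit
```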
Closed-form option pricing for exponential Lévy models: a residue approach
Aguilar, Jean-Philippe,Kirkby, Justin
SSRN
Exponential Lévy processes provide a natural and tractable generalization of the classic Black-Scholes-Merton model, capable of capturing observed market implied volatility skews. In the existing literature, closed-form option pricing formulas are sparse for exponential Lévy models, outside of special cases such as Merton's jump diffusion, and complex numerical techniques are required even to price European options. To bridge the gap, this work provides a comprehensive and unified pricing framework for vanilla and exotic European options under the Variance Gamma (VG), Finite Moment Log Stable (FMLS), one-sided Tempered Stable (TS), and Normal Inverse Gaussian (NIG) models. We utilize the Mellin Transform and residue calculus to obtain closed-form series representations for the price of several European options, including vanillas, digitals, power, and log options. These formulas provide nice theoretical representations, but are also efficient to evaluate in practice, as numerous numerical experiments demonstrate. The closed-form nature of these option pricing formulas makes them ideal for adoption in practical settings, as they do not require complicated pricing methods to achieve high accuracy prices, and the resulting pricing error is reliably controllable.
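These models generalize the Black-Scholes-Merton special case; for orientation, the classic BSM closed form itself fits in a few lines (this is the textbook formula, not the paper's Mellin-transform series; parameter values are illustrative):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money call: S = K = 100, r = 5%, sigma = 20%, one year to expiry.
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 4))  # ~10.4506
```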
Combining dimensionality reduction with neural networks for realized volatility forecasting
He, Lidan,Bucci, Andrea,Liu, Zhi
SSRN
The application of artificial neural networks to finance has received a great deal of attention from both investors and researchers, especially as a forecasting method. When the number of predictors is high, these methods suffer from the so-called "curse of dimensionality" and produce biased forecasts. In this paper, we relied on dimensionality reduction methods to alleviate this issue when a wide set of financial and macroeconomic variables is considered in the prediction of stock market volatility. Specifically, we combined Bayesian Model Averaging (BMA), Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF) and Least Absolute Shrinkage and Selection Operator (LASSO) with hybrid artificial neural networks to forecast realized volatility. The results showed that the reduced models were able to perform similarly to, or even outperform, the full models in terms of predictive accuracy.
Crash Probability Anomaly in the Chinese Stock Market
Fang, Yi,Niu, Hui,Tong, Xiangda
SSRN
This study investigates the cross-sectional pricing of stock price crash probability in the Chinese stock market. We find a negative cross-sectional correlation between crash probability and stock returns. Meanwhile, we discover that the crash probability anomaly is affected by market-wide sentiment, is stronger in high-priced stocks, and is not related to company size. These findings are diametrically opposite to those for the U.S. market.
Demand-pull and technology-push: What drives the direction of technological change? -- An empirical network-based approach
Kerstin Hötte
arXiv
Demand-pull and technology-push are linked to an empirical two-layer network-based on coupled cross-industrial input-output (IO) and patent citation links among 155 4-digit (NAICS) US-industries in 1976-2006 to study the evolution of industry hierarchies and link formation. Both layers co-evolve, but differently: The patent network became denser and increasingly skewed, while market hierarchies are balanced and sluggish in change. Industries became more similar by patent citations, but less by IO linkages. Having similar R&D capabilities as other big industries is positively related to innovation and growth, but relying on the same market inputs is unfavorable but may incite industries to explore other technological pathways. A tentative interpretation is the non-rivalry of intangible knowledge. This may strengthen existing R&D trajectories. Growth in the market is constrained by competition and market pressure may trigger a re-direction in both layers. This work is limited by its reliance on endogenously evolving classifications.
Discount Rate Risk in Private Equity: Evidence from Secondary Market Transactions
Boyer, Brian H.,Nadauld, Taylor,Vorkink, Keith,Weisbach, Michael S.
SSRN
Standard measures of PE performance based on cash flows overlook discount rate risk. An index constructed from prices paid in secondary market transactions indicates that PE discount rates vary considerably. While the standard alpha for our index is zero, measures of performance based on cash flow data for funds in our index are large and positive. To illustrate that results are not driven by idiosyncrasies of PE secondary markets, we obtain similar results using cash flows and returns of synthetic funds that invest in small cap stocks. Ignoring variation in PE discount rates can lead to a misallocation of capital.
Disposition effect across distinct investor categories in a tax-free context: are team dynamics and cognitive dissonance behind it?
de Groot, Alexander,Núñez-Letamendia, Laura,de Groot, Olivier
SSRN
Building on recent literature and employing survival analysis on trading data, we document differences in the strength and direction of the disposition effect on distinct categories of investors: (i) individuals not receiving professional advice; (ii) individuals receiving it; (iii) professional managers of delegated retail portfolios; (iv) professional managers of funds/institutional portfolios. We also find that the disposition effect is contingent upon paper gain-loss magnitudes in a more complex way than the V-shape proposed by the literature, and that market ups-downs do not exert influence on the propensity to this bias. We interpret our results using alternative explanations of investor behavior.
Earning management, agency cost and value: Non-rent seeking and rent seeking prone firms - Indian evidence
Ganguli, Santanu K.
SSRN
The study explores and contrasts earnings management and its nature in non-rent-seeking and rent-seeking-prone firms, along with the respective valuation implications in light of the agency problem (cost) in India, where rent seeking is central to the economy. The empirical results suggest that cash flow management by non-rent-seeking firms, which have lower agency costs, is likely to be beneficial from a valuation standpoint, as it may help address information asymmetry about future growth potential. In contrast to the existing literature, accrual management is considered detrimental from a valuation perspective for these firms. For rent-seeking-prone firms, where agency costs are documented to be higher, both cash flow management and production cost management are opportunistic, as they are likely aimed at reaping private benefits; accordingly, both are inversely related to value. The findings of the study are robust and consistent.
Environmental assessment of a new generation battery: The magnesium-sulfur system
Claudia Tomasini Montenegro,Jens F. Peters,Manuel Baumann,Zhirong Zhao-Karger,Christopher Wolter,Marcel Weil
arXiv
As environmental concerns mostly drive the electrification of our economy and the corresponding increase in demand for battery storage systems, information about the potential environmental impacts of the different battery systems is required. However, this kind of information is scarce for emerging post-lithium systems such as the magnesium-sulfur (MgS) battery. Therefore, we use life cycle assessment following a cradle-to-gate perspective to quantify the cumulative energy demand and potential environmental impacts per Wh of the storage capacity of a hypothetical MgS battery (46 Wh/kg). Furthermore, we also estimate global warming potential (0.33 kg CO2 eq/Wh), fossil depletion potential (0.09 kg oil eq/Wh), ozone depletion potential (2.5E-08 kg CFC-11/Wh) and metal depletion potential (0.044 kg Fe eq/Wh) associated with the MgS battery production. The battery is modelled based on an existing prototype MgS pouch cell and hypothetically optimised according to the current state of the art in lithium-ion batteries (LIB), exploring future improvement potentials. It turns out that the initial (non-optimised) prototype cell cannot compete with current LIB in terms of energy density or environmental performance, mainly due to the high share of non-active components, decreasing its performance substantially. Therefore, if the assumed evolutions of the MgS cell composition are achieved to overcome current design hurdles and reach lifespan, efficiency, cost and safety levels comparable to those of existing LIB, then the MgS battery has significant potential to outperform both existing LIB and lithium-sulfur batteries.
Financial Heterogeneity and the Dynamics of Credit Rationing in Japan
Mizobata, Hirokazu
SSRN
A perceptual gap between banks and firms exists in Japan, impeding the credit channel of monetary policy. Banks believe that bankable customers are scarce, while firms believe that banks do not issue loans without collateral or guarantees. To explain this gap, I focus on the dispersion in the degree of financial constraints across listed Japanese firms from FY1991 to FY2015. I construct a firm-specific and time-varying measure of financial constraints through structural estimation, and investigate its distribution over time. The results reveal a right-skewed distribution for the index of financial constraints, indicating that many firms face minor financial constraints, while a few face severe financial constraints. The spread between the 75th and 25th percentiles of the index of financial constraints increased after the bubble burst, indicating that Japan's financial heterogeneity has become more pronounced in recent years. Finally, decomposing financial heterogeneity into within- and between-industry effects shows that the observed financial inequality is due to the increase in inequality among firms within narrowly defined industries.
Financial Markets Prediction with Deep Learning
Jia Wang,Tong Sun,Benyuan Liu,Yu Cao,Degang Wang
arXiv
Financial markets are difficult to predict due to their complex system dynamics. Although there have been some recent studies that use machine learning techniques for financial market prediction, they do not offer satisfactory performance on financial returns. We propose a novel one-dimensional convolutional neural network (CNN) model to predict financial market movement. The customized one-dimensional convolutional layers scan financial trading data through time, while different types of data, such as prices and volume, share parameters (kernels) with each other. Our model automatically extracts features instead of using traditional technical indicators and thus can avoid biases caused by the selection of technical indicators and pre-defined coefficients in technical indicators. We evaluate the performance of our prediction model with strict backtesting on historical trading data of six futures from January 2010 to October 2017. The experimental results show that our CNN model can effectively extract more generalized and informative features than traditional technical indicators, and achieves more robust and profitable financial performance than previous machine learning approaches.
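The abstract does not include code; as a minimal illustration of the shared-kernel, one-dimensional convolution it describes, the sketch below (function name, shapes, and toy inputs are my own assumptions) scans a two-channel price/volume series through time with a single kernel whose parameters are shared across time steps.

```python
import numpy as np

def conv1d_shared(x, kernel):
    """Apply one 1-D convolutional filter to a multi-channel time series.
    x: (channels, T) array, e.g. rows = price returns and volume.
    kernel: (channels, k) array of weights, shared across all time steps,
    mirroring the parameter sharing described in the abstract."""
    channels, T = x.shape
    k = kernel.shape[1]
    # Slide the kernel along the time axis; no padding, stride 1.
    return np.array([np.sum(x[:, t:t + k] * kernel)
                     for t in range(T - k + 1)])

# Toy check: a 2-channel series of ones with a 2x3 kernel of ones
features = conv1d_shared(np.ones((2, 5)), np.ones((2, 3)))
```

In a full model many such filters would be stacked and their outputs fed to dense layers; this sketch only shows the time-scanning, parameter-sharing mechanics.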
Financial vulnerability and seeking expert advice: Evidence from a survey experiment
Delis, Manthos D.,Galariotis, Emilios C.,Monne, Jerome
SSRN
The role of a bank advisor is especially important for guiding and counseling financially distressed individuals. Using a randomized controlled survey experiment conducted on a representative sample of French individuals and priming the financial vulnerability of half the respondents, we examine attitudes toward bank advisors. We find that priming deters low-income individuals from showing an extremely negative attitude toward seeking banking advice (positive effect); it also deters them from showing an extremely positive attitude (negative effect). We also find that acute financial distress partially drives the positive effect, and a lack of financial literacy partially drives the negative effect.
Frequency-Dependent Higher Moment Risks
Baruník, Jozef,Kurka, Josef
SSRN
Based on intraday data for a large cross-section of individual stocks and exchange traded funds, we show that short-term as well as long-term fluctuations of realized market and average idiosyncratic higher moment risks are priced in the cross-section of asset returns. Specifically, we find that market and average idiosyncratic volatility and kurtosis are significantly priced by investors mainly in the long run, even when controlling for market moments and other factors, while skewness is mostly a short-run phenomenon. A conditional pricing model capturing the time variation of moments confirms a downward-sloping term structure of skewness risk and an upward-sloping term structure of kurtosis risk; moreover, the term structures connected to market skewness risk and average idiosyncratic skewness risk exhibit different dynamics.
Geographic income diversification of large European banks: better or worse?
Gerek, Caner,Tuncez, Ahmet M.
SSRN
This study examines the impact of geographic income diversification of large European banks on performance by using unique hand-collected European banking data. Dividing total operating income into three regions, the home country, the rest of Europe and the rest of the world, we find evidence that geographic income diversification reduces bank performance. Moreover, we separately analyze the net effects of shifting operations from the home country to the rest of Europe and to the rest of the world, and find that they reduce bank performance except for banks that are already more concentrated in these regions. We also analyze only two regions (home and foreign), control for the effect of board nationality diversity, and show that our results hold.
Global pathways to sustainable development to 2030 and beyond
Enayat A. Moallemi,Sibel Eker,Lei Gao,Michalis Hadjikakou,Jan Kwakkel,Patrick M. Reed,Michael Obersteiner,Brett A. Bryan
arXiv
Progress to-date towards the UN Agenda 2030 has fallen short of expectations. We undertake a model-based global integrated assessment to project future progress by 2030, 2050, and 2100 and to characterise the transformations needed to deliver the global Sustainable Development Goals and an increasingly ambitious 21st century sustainability agenda. Our results quantify the scale and pace of transformations required through eight key entry points: increasing education access, powering sustainable economic development, controlling global population growth, lowering energy intensity across sectors, decarbonising energy systems, promoting healthy food diets, limiting agricultural land expansion, and reducing global emissions intensity. Our findings indicate many actions that appear to make a limited contribution to initial progress are in fact vital for accelerating change towards sustainable development later in the century.
Google Search and Stock Returns: A Study on Bist 100 Stocks
Bulut, Ali Eray,Ekinci, Cumhur
SSRN
This study investigates whether there is a relationship between Google search and stock returns after we account for market, size, and value. We analyze weekly data on BIST 100 stocks from 2012 to 2017. Our results reveal that Google search is associated with positive returns, especially in small-capitalization stocks, but high search volume in the current period does not predict positive returns in the next period. The relationship is stronger (weaker) for sports and real estate (commercial and banking) firms. We provide additional evidence for market, size, and value factors. Institutional interest in the stock, more than firm size, can explain the relation between search volume and stock returns.
Higher Realized Moments and Stock Return Predictability
Rehman, Seema,Sharif, Saqib,Ullah, Wali
SSRN
This study exploits information contained in high-frequency sample data by computing higher realized moments of individual firms in the emerging stock market of Pakistan. Furthermore, the relation of higher moments with future stock returns is examined by constructing decile portfolios based on weekly realized volatility, skewness and kurtosis to predict the next-week return of the trading strategy that takes a long position in the portfolio of stocks having a high realized moment and a short position in the portfolio of stocks having a low realized moment. The long-short spread is significant for equal-weighted weekly returns based on realized volatility. The long-short weekly return is positive and highly significant for realized skewness, 1.659 and 1.969 (in bps) with t-statistics of 7.92 and 14.027 for value- and equal-weighted portfolios respectively. The result for realized skewness is also supported by Carhart's alphas. Similar results are obtained for realized kurtosis, 0.427 and 0.664 (in bps) of long-short return, with t-statistics of 2.079 and 4.049 for value- and equal-weighted portfolios respectively. The evidence suggests that realized skewness and kurtosis can predict next week's moment-based cross-sectional stock returns.
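The paper does not spell out its estimators; the sketch below assumes the standard realized-moment definitions commonly used for intraday returns (realized variance, and realized skewness/kurtosis scaled by powers of realized variance), which may differ in detail from the authors' implementation.

```python
import numpy as np

def realized_moments(intraday_returns):
    """Realized volatility, skewness and kurtosis from one period's
    intraday returns, using the standard definitions:
      RV    = sum r_i^2
      RSkew = sqrt(n) * sum r_i^3 / RV^(3/2)
      RKurt = n * sum r_i^4 / RV^2
    """
    r = np.asarray(intraday_returns, dtype=float)
    n = r.size
    rv = np.sum(r ** 2)
    rskew = np.sqrt(n) * np.sum(r ** 3) / rv ** 1.5
    rkurt = n * np.sum(r ** 4) / rv ** 2
    return rv, rskew, rkurt

# Gaussian returns: realized skewness near 0, realized kurtosis near 3
rng = np.random.default_rng(1)
rv, rskew, rkurt = realized_moments(0.001 * rng.standard_normal(10_000))
```

Stocks would then be sorted weekly into deciles on each realized moment, and the decile spread tracked out of sample.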
Influences on Brand Loyalty Among Thai Female Cosmetic Consumers
Taghipour, Amirhossein
SSRN
This study aims to examine and identify variables and factors which influence Thai women’spurchasing decision of Korean cosmetics. The data were compiled from questionnaires givento 400 female respondents living in Bangkok, who were between the ages of 14-30 and whohad purchased Korean cosmetics in the past. The methodology used in this study is thecorrelation coefficient relationship and ANOVA to test the hypotheses. The results illustratedthat the country of origin (COO) has a relationship with the perceived quality of thecosmetics and consequently, to brand equity. There are differences between packaging, price,and perceived quality for customers, in which packaging has more influence on satisfaction.In addition, customer loyalty was affected indifferently by brand equity and customersatisfaction.
Least Squares Monte Carlo applied to Dynamic Monetary Utility Functions
Hampus Engsner
arXiv
In this paper we explore ways of numerically computing recursive dynamic monetary risk measures and utility functions. Computationally, this problem suffers from the curse of dimensionality and nested simulations are unfeasible if there are more than two time steps. The approach considered in this paper is to use a Least Squares Monte Carlo (LSM) algorithm to tackle this problem, a method which has been primarily considered for valuing American derivatives, or more general stopping time problems, as these also give rise to backward recursions with corresponding challenges in terms of numerical computation. We give some overarching consistency results for the LSM algorithm in a general setting as well as explore numerically its performance for recursive Cost-of-Capital valuation, a special case of a dynamic monetary utility function.
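As context for the backward recursion the abstract refers to, here is a minimal sketch of the classic Longstaff-Schwartz LSM applied to an American put, the standard stopping-time application mentioned above. The polynomial basis, parameters, and path counts are illustrative assumptions, not the paper's setup (which targets dynamic monetary utility functions).

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, steps, paths, seed=0):
    """Longstaff-Schwartz least squares Monte Carlo for an American put:
    simulate GBM paths, then recurse backwards, regressing discounted
    continuation cash flows on a polynomial basis in the spot price to
    decide early exercise on in-the-money paths."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((paths, steps))
    # GBM paths; column t holds the price at time (t+1)*dt
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)  # exercise value at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)           # discount one step back
        itm = K - S[:, t] > 0
        if itm.sum() < 10:
            continue
        # Regress discounted continuation cash flows on a quadratic basis
        coef = np.polyfit(S[itm, t], cash[itm], 2)
        continuation = np.polyval(coef, S[itm, t])
        exercise = K - S[itm, t]
        cash[itm] = np.where(exercise > continuation, exercise, cash[itm])
    return float(np.exp(-r * dt) * cash.mean())

# Benchmark case from Longstaff-Schwartz (2001): value near 4.47
price = lsm_american_put(S0=36, K=40, r=0.06, sigma=0.2, T=1.0,
                         steps=50, paths=20_000)
```

The recursive risk-measure computations in the paper replace the option payoff and discounting with conditional utility evaluations, but the regression-based approximation of the conditional expectation is the same device.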
Long Term Bias, Incentives, and Agency Costs
Kastiel, Kobi
SSRN
The problem of managerial short-termism has long preoccupied policymakers, researchers, and practitioners. These groups have given much less attention, however, to the converse problem of managerial long-termism. Michal Barzuza and Eric Talley fill this gap in their pioneering article, Long-Term Bias. Relying on the behavioral finance and psychology literatures, the authors provide a novel and thought-provoking analysis of managerial long-term bias, which may be just as detrimental as the more widely condemned short-term bias. This invited Comment to Barzuza and Talley's article advances three claims. First, it argues that proper incentives, created by executive compensation, heightened risk of early termination, market responses and shareholder pressures, are likely to turn most managers more realistic and thus to mitigate their long-term biases. Second, it explains how, in reality, it could be almost impossible to distinguish between long-term bias and traditional agency theories of empire building and pet projects. Ultimately, both long-termist and self-interested managers systematically harm shareholders; both choose to ignore shareholder interests and waste free cash flow on inferior business investments. This also explains why the cure to both long-term bias and agency costs is similar: reducing the relative insulation of the board from shareholders' disciplinary power. Finally, this Comment expresses strong support for most of Barzuza and Talley's normative conclusions, with one important exception: their acceptance of the use of dual-class stock. With a perpetual lock on control and a limited equity stake, corporate leaders will be immune to any "institutional brake" on all forms of long-termist overinvestment. If anything, the analysis of Barzuza and Talley provides an additional strong justification to oppose the use of perpetual dual-class stock.
Modelling uncertainty in financial tail risk: a forecasting combination and weighted quantile approach
Giuseppe Storti,Chao Wang
arXiv
A novel forecasting combination and weighted quantile based tail risk forecasting framework is proposed, aiming to reduce the impact of modelling uncertainty in financial tail risk forecasting. The proposed approach is based on a two-step estimation procedure. The first step involves the combination of Value-at-Risk (VaR) forecasts at a grid of different quantile levels. A range of parametric and semi-parametric models is selected as the model universe which is incorporated in the forecasting combination procedure. The quantile forecasting combination weights are estimated by optimizing the quantile loss. In the second step, the Expected Shortfall (ES) is computed as a weighted average of combined quantiles. The quantiles weighting structure used to generate the ES forecast is determined by minimizing a strictly consistent joint VaR and ES loss function of the Fissler-Ziegel class. The proposed framework is applied to six stock market indices and its forecasting performance is compared to each individual model in the model universe and a simple average approach. The forecasting results based on a number of evaluations support the proposed framework.
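The first combination step can be sketched for the two-model case: choose the convex weight on candidate VaR series that minimizes the in-sample quantile (pinball) loss. This is a simplification under stated assumptions — the paper combines a whole model universe across a grid of quantile levels and then fits ES weights via a joint VaR-ES loss, both omitted here.

```python
import numpy as np

def pinball_loss(returns, var_forecast, alpha):
    """Quantile (pinball) loss that a correct alpha-VaR series minimizes."""
    u = returns - var_forecast
    return np.mean(np.maximum(alpha * u, (alpha - 1.0) * u))

def combine_var(returns, var_a, var_b, alpha=0.05):
    """Two-model sketch of the combination step: grid-search the convex
    weight minimizing in-sample quantile loss at level alpha."""
    grid = np.linspace(0.0, 1.0, 101)
    losses = [pinball_loss(returns, w * var_a + (1 - w) * var_b, alpha)
              for w in grid]
    w = float(grid[int(np.argmin(losses))])
    return w, w * var_a + (1 - w) * var_b

# A well-calibrated 5% VaR model versus an overly conservative one:
# the data-driven weight should load heavily on the calibrated model.
rng = np.random.default_rng(7)
rets = rng.standard_normal(5_000)
w, combined = combine_var(rets, np.full(5_000, -1.645), np.full(5_000, -3.0))
```

In the full framework this weighting is repeated at each quantile level of the grid, and ES is then formed as a weighted average of the combined quantiles.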
Momentum Effect in the Oman Stock Market Over the Period of 2005-2018
Gharaibeh, Omar
SSRN
The purpose of this paper is to investigate the profitability of the momentum effects on the Oman Stock Market (OSM). This study uses the monthly returns of all stocks listed on the OSM, with a total of 107 companies used in the study for the period from 2005 to 2018. According to the methodology developed by Jegadeesh and Titman (1993), this study builds momentum portfolios based on various sizes. Moreover, the January effect is also examined to recognize if this effect is related to the momentum effect. The results find that there is evidence of momentum returns and these returns are statistically and economically significant. The sub-periods confirmed the profitability of the momentum strategy. This paper shows that momentum returns are evident at different sizes; big, medium, and small-sized portfolios. Besides, the result shows that the classic January effect does not play an important role in the momentum returns. Thus, the implication is that the momentum should not take into account the annual, seasonal, and size returns. The capital asset pricing model (CAPM) or the three-factor model cannot explain momentum returns generated by individual stocks in the Oman Stock Market. These results are useful to academia and investors alike.
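The Jegadeesh and Titman (1993) construction the paper follows can be sketched as a simple decile sort; the function name and the toy inputs below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def momentum_long_short(formation_returns, holding_returns, n_deciles=10):
    """Jegadeesh-Titman style momentum sort (sketched): rank stocks on
    formation-period returns, go long the top decile ("winners") and
    short the bottom decile ("losers"), equal-weighted."""
    order = np.argsort(formation_returns)
    n = len(order) // n_deciles
    losers, winners = order[:n], order[-n:]
    return holding_returns[winners].mean() - holding_returns[losers].mean()

# Toy example: holding returns perfectly persistent with formation returns,
# so the winner-minus-loser spread is mechanically positive.
past = np.arange(100) / 100.0
spread = momentum_long_short(past, past)
```

The paper repeats such sorts within size groups and by calendar month to separate the momentum premium from size and January effects.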
Monetary Policy and Asset Valuation
Bianchi, Francesco,Lettau, Martin,Ludvigson, Sydney C.
SSRN
We document large, longer-term, joint regime shifts in asset valuations and the real federal funds rate-r* spread. To interpret these findings, we estimate a novel macro-finance model of monetary transmission and find that the documented regimes coincide with shifts in the parameters of a policy rule, with long-term consequences for the real interest rate. Estimates imply that two-thirds of the decline in the real interest rate since the early 1980s is attributable to regime changes in monetary policy. The model explains how infrequent changes in the monetary policy stance can generate persistent changes in asset valuations and the equity premium.
Monetary policy, Twitter and financial markets: evidence from social media traffic
Masciandaro, Donato,Romelli, Davide,Rubera, Gaia
SSRN
How does central bank communication affect financial markets? This paper shows that the monetary policy announcements of three major central banks, i.e. the European Central Bank, the Federal Reserve and the Bank of England, trigger significant discussions on monetary policy on Twitter. Using machine learning techniques we identify Twitter messages related to monetary policy around the release of monetary policy decisions and we build a metric of the similarity between the policy announcement and Twitter traffic before and after the announcement. We interpret large changes in the similarity of tweets and announcements as a proxy for monetary policy surprise and show that market volatility spikes after the announcement whenever changes in similarity are high. These findings suggest that social media discussions on central bank communication are aligned with bond and stock market reactions.
Mortality Forecasting Using Stacked Regression Ensembles
SSRN
We present a stacked regression ensemble method that optimally combines different mortality models to reduce the mean squared errors of mortality rate forecasts and mitigate model selection risk. Stacked regression uses a supervised machine learning algorithm to approximate the horizon-specific weights by minimizing the cross-validation criterion for each forecasting horizon. The horizon-specific weights facilitate the development of a mortality model combination customized to each horizon. Unlike other model combination methods, stacked regression simultaneously solves model selection and estimates model combinations to improve model forecasts. Our numerical illustrations based on 44 populations from the Human Mortality Database demonstrate that stacking mortality models increases predictive accuracy. Using one-year-ahead to 15-year-ahead out-of-sample mean squared errors, we find that stacked regression improves mortality forecast accuracy by 13% - 49% and 19% - 90% over the individual mortality models for males and females, respectively. Therefore, combining the mortality rate forecasts provides lower out-of-sample point forecast errors than selecting the single best individual mortality method. Stacked regression ensemble also achieves better predictive accuracy than other model combination methods, namely Simple Model Averaging, Bayesian Model Averaging, and Model Confidence Set. Our results support the stacked regression ensemble approach over individual mortality models and other model combination methods in forecasting mortality rates. We also provide a user-friendly open-source R package, CoMoMo, that combines multiple mortality rate forecasts using different model combination techniques.
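The stacking step can be sketched as a regression of realized rates on the individual models' forecasts. This is a deliberate simplification: the paper estimates horizon-specific weights by minimizing a cross-validation criterion, whereas the sketch below pools observations and uses plain least squares, and all names and toy data are assumptions.

```python
import numpy as np

def stacking_weights(model_forecasts, realized):
    """Stacking step (simplified): regress realized rates on the columns of
    held-out model forecasts to obtain combination weights; the combined
    forecast is then model_forecasts @ weights."""
    w, *_ = np.linalg.lstsq(model_forecasts, realized, rcond=None)
    return w

# Model A tracks the truth; model B is pure noise -- stacking should put
# nearly all weight on A and nearly none on B.
rng = np.random.default_rng(3)
truth = rng.standard_normal(500)
F = np.column_stack([truth + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])
w = stacking_weights(F, truth)
```

Fitting one such weight vector per forecasting horizon yields the horizon-specific combinations described in the abstract.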
Multivariate Systemic Optimal Risk Transfer Equilibrium
Alessandro Doldi,Marco Frittelli
arXiv
A Systemic Optimal Risk Transfer Equilibrium (SORTE) was introduced in: "Systemic optimal risk transfer equilibrium", Mathematics and Financial Economics (2021), for the analysis of the equilibrium among financial institutions or in insurance-reinsurance markets. A SORTE conjugates the classical Bühlmann notion of a Risk Exchange Equilibrium with a capital allocation principle based on systemic expected utility optimization. In this paper we extend such a notion to the case when the value function to be optimized is multivariate in a general sense, and it is not simply given by the sum of univariate utility functions. This takes into account the fact that preferences of single agents might depend on the actions of other participants in the game. Technically, the extension of SORTE to the new setup requires developing a theory for multivariate utility functions and selecting at the same time a suitable framework for the duality theory. Conceptually, this more general framework allows us to introduce and study a Nash Equilibrium property of the optimizer. We prove existence, uniqueness, and the Nash Equilibrium property of the newly defined Multivariate Systemic Optimal Risk Transfer Equilibrium.
Oil-US Stock Market Nexus: Some insights about the New Coronavirus Crisis
Claudiu Albulescu,Michel Mina,Cornel Oros
arXiv
We provide a new investigation of the relationship between oil and stock prices in the context of the outbreak of the new coronavirus crisis. Specifically, we assess to what extent the uncertainty induced by COVID-19 affects the interaction between oil and the United States (US) stock markets. To this end, we use a wavelet approach and daily data from February 18, 2020 to August 15, 2020. We identify the lead-lag relationship between oil and stock prices, and the intensity of this relationship at different frequency cycles and moments in time. Our unique findings show that co-movements between oil and stock prices manifest at 3-5-day cycle and are stronger in the first part of March and the second part of April 2020, when oil prices are leading stock prices. The partial wavelet coherence analysis, controlling for the effect of COVID-19 and US economic policy-induced uncertainty, reveals that the coronavirus crisis amplifies the shock propagation between oil and stock prices.
On ESG Investing: Heterogeneous Preferences, Information, and Asset Prices
Goldstein, Itay,Kopytov, Alexandr,Shen, Lin,Xiang, Haotian
SSRN
We study how environmental, social and governance (ESG) investing reshapes information aggregation and price formation. We develop a rational expectations equilibrium model in which traditional and green investors are informed about monetary and non-monetary risks but have distinct preferences over them. Because of the preference heterogeneity, traditional and green investors trade in opposite directions based on the same information and make the price noisier to each other. We show that an increase in the share of green investors and an improvement in the quality of non-monetary information can reduce overall price informativeness and increase firm's cost of capital. Our analyses provide a rich set of testable implications.
Online Appendix: The Impact of Cryptocurrency Regulation on Trading Markets
Feinstein, Brian D.,Werbach, Kevin
SSRN
This file contains additional analyses to support Feinstein & Werbach, "The Impact of Cryptocurrency Regulation on Trading Markets," Journal of Financial Regulation (forthcoming).
Option to survive or surrender: carbon asset management and optimization in thermal power enterprises from China
Yue Liu,Lixin Tian,Zhuyun Xie,Zaili Zhen,Huaping Sun
arXiv
Carbon emission right allowance is a double-edged sword: one edge is to reduce emissions, as originally intended; the other has in practice slain many less developed coal-consuming enterprises, especially those in the thermal power industry. Partially governed at the hilt, held by the authority, the body of this sword is the price of carbon emission rights. How thermal power plants should dance on this blade motivates this research. Considering the impact of price fluctuations of carbon emission right allowances, we investigate the operation of Chinese thermal power plants by modeling the decision-making as an optimal stopping problem, established in a stochastic environment with the carbon emission allowance price process simulated by geometric Brownian motion. Under the overall goal of maximizing ultimate profitability, the optimal stopping indicates the timing of suspension or halt of production; hence the optimal stopping boundary curve marks the edge of life and death for the enterprise. Applying this methodology, real cases of failure and survival of several representative Chinese thermal power plants were analyzed to explore the industry ecotope, leading to the findings that: 1) the survival environment of existing thermal power plants becomes harsher when facing more pressure from the newborn carbon-finance market; 2) the boundaries of the survival environment are mainly drawn by technical improvements that raise the utilization rate of carbon emissions. Based on the same optimal stopping model, an outlook for the industry is drawn with a demarcation surface defining the vivosphere of thermal power plants with different levels of profitability. This finding provides benchmarks for enterprises struggling for survival and for policy makers scheming better supervision and necessary intervention.
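The price process assumed in the paper can be sketched directly: geometric Brownian motion paths for the allowance price, checked against a stopping boundary. The flat boundary and all parameter values below are purely illustrative assumptions; the paper's boundary is a curve solved from the optimal stopping problem.

```python
import numpy as np

def simulate_gbm(p0, mu, sigma, T, steps, paths, seed=11):
    """Simulate allowance-price paths under geometric Brownian motion,
    dP = mu*P dt + sigma*P dW, as assumed for the carbon price process."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    shocks = ((mu - 0.5 * sigma ** 2) * dt
              + sigma * np.sqrt(dt) * rng.standard_normal((paths, steps)))
    return p0 * np.exp(np.cumsum(shocks, axis=1))

def first_breach_fraction(price_paths, boundary):
    """Fraction of paths that ever cross a stopping boundary; a constant
    level stands in for the paper's solved boundary curve."""
    return float(np.mean(price_paths.max(axis=1) >= boundary))

paths = simulate_gbm(p0=30.0, mu=0.05, sigma=0.2, T=1.0,
                     steps=252, paths=20_000)
terminal_mean = float(paths[:, -1].mean())  # near p0 * exp(mu * T)
```

Raising the boundary lowers the breach fraction, which is the mechanism behind the demarcation surface separating plants of different profitability levels.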
Realization Utility with Path-Dependent Reference Points
Kong, Linghui,Qin, Cong,Yue, Xingye
SSRN
In this paper, we propose a tractable model to study the impact of path-dependent reference points on optimal trading strategies of a realization utility investor. We find that when reference points are adaptive to prior paper gains and losses, two interesting effects arise endogenously: (a) Discount effect, i.e., a constant subjective discount factor in effect becomes a stochastic one; and (b) Mean-reverting effect, i.e., a constant investment opportunity in effect becomes a stochastic one with expected return being mean-reverting, which provides a new interpretation of belief in mean reversion. In addition, these two effects offset each other in the state of paper gains while get reinforced in the state of paper losses, leading to a more salient disposition effect. The model can be easily extended to incorporate other factors such as asymmetric adaptation of reference points, jump risks in the underlying stocks, and liquidation shocks, yielding new interesting trading behaviors.
Relationship between Framing Bias and Big Five Personality Traits of Individual Investors
Sachan, Abhishek,Chugan, Pawan K.
SSRN
Returns depend upon the decisions of investors, but investors' biases challenge their ability to take rational decisions. The study of biases and their relationships with personality traits helps to understand how biases originate, the ways in which they may affect investors, and which personality types could be more susceptible to them. There is evidence that biases are related to the personality traits of investors, and this study focuses on one such relationship, between framing bias and personality traits. Given the qualitative nature of the variables under study, the relationship was established by statistically significant coefficients of a logistic regression equation, where the bias variable was dependent and the big five personality traits were independent. The score of the personality trait which had a significant relationship was cross-tabulated with the bias variable; the chi-square test indicated a statistically significant relationship. The results lead to the conclusion that an investor with a higher score of agreeableness has a higher probability of having framing bias. It is also discussed that an agreeable person may demonstrate the irrationality discussed in prospect theory more than others, as the framing effects were measured using gain and loss frames. Since the study deals with frames of communication, it points to the effects of personality traits on communication between portfolio managers and clients. The study's contribution for portfolio managers is that an agreeable client may not actually agree to a rational decision if the communication is not in the right frame.
Resource Mobilization by Listed Entities in International Capital Markets
Gosavi, Chinmay
SSRN
Raising funds from international markets is a key factor for any listed entity, as it not only caters to the need for funds but also increases the goodwill and reputation of the company on an international platform. It helps the company broaden its shareholder base and enhance investor quality. Nowadays many Indian listed entities are attracted to international capital markets, as the underlying instruments are listed and traded on international stock exchanges and hence free from delivery and settlement problems. Further, foreign investors are not required to comply with the rigid formalities and regulations which they would otherwise face in the case of investment through other Foreign Direct Investment (FDI) routes. The present research paper undertakes a critical study of various methods and instruments of tapping international capital markets. This paper examines the regulatory provisions to issue instruments in international capital markets.
Risk transference constraints in optimal reinsurance
Balbás, Alejandro,Balbás, Beatriz,Balbás, Raquel,Heras, Antonio
SSRN
This paper deals with the optimal reinsurance problem and involves the goals of both insurer and reinsurer. An important novelty may be the incorporation of the background risk that the reinsurer uses in order to diversify (or hedge) the risk ceded by the insurer. Accordingly, general methods to prevent the reinsurer moral hazard must be extended, and a new constraint must be satisfied by the selected reinsurance contract, namely, "the reinsurer increment of risk must be lower than the contract premium". Simultaneously, since the contract must be attractive to the insurer too, "the contract premium must be lower than the insurer risk reduction". Integrating both ideas, "the contract premium must be higher than the reinsurer risk growth and lower than the insurer risk mitigation". Bearing in mind both requirements, that is, the protection against the moral hazard and the spread containing the contract premium, the optimal reinsurance problem is studied under very general conditions about the involved risk measures and premium principles, general solutions are provided, and a practical illustrative example is presented.
Road safety for fleets of vehicles
Dionne, Georges,Desjardins, Denise,Angers, Jean-François
RePEC
Road safety for fleets of vehicles has been neglected in the insurance literature, mainly because appropriate data is not available. This paper makes a threefold contribution: 1) Produce statistics on current fleets' road safety offences using a panel of 20 years of data on truck fleets; 2) relate these offences to fleets' accidents; and 3) identify and classify the riskiest fleets for insurance ratemaking based on past experience in managing road safety. Our results show a substantial heterogeneity between fleets in terms of road safety.
Selfish Shareholders: Corporate Donations During COVID-19
Michele Fioretti,Victor Saint-Jean,Simon C. Smith
arXiv
During the onset of the COVID-19 pandemic, conflicting incentives caused most shareholders to oppose corporate social responsibility (CSR), measured by firms' charitable donations, since it would further burden firms' already strained finances. Those shareholders that favored donations, large individual investors, did so to bolster their own images, as they are typically synonymous with the donating firms. Image gains do not pass through to institutional shareholders, who instead preferred to donate themselves rather than having the firms they invested in donate. Taken together, our results cast doubt on large corporations' willingness to demand costly CSR measures across firms in their portfolios.
Semiparametric GARCH Models with Long Memory Applied to Value at Risk and Expected Shortfall
SSRN
In this paper new semiparametric GARCH models with long memory are introduced. The estimation of the nonparametric scale function is carried out by an adapted version of the SEMIFAR algorithm (Beran et al., 2002). Following the revised recommendations by the Basel Committee to measure market risk in the banks' trading books (Basel Committee on Banking Supervision, 2013), the semiparametric GARCH models are applied to obtain rolling one-step-ahead forecasts of the Value at Risk (VaR) and Expected Shortfall (ES) for market risk assets. In addition, standard regulatory traffic light tests (Basel Committee on Banking Supervision, 1996) and a newly introduced traffic light test for the ES are carried out for all models. The practical relevance of our proposal is demonstrated by a comparative study. Our results indicate that semiparametric long memory GARCH models are an attractive alternative to their conventional, parametric counterparts.
Short Term Trading Models Using Hurst Exponent and Machine Learning
Sidhu, Gursewak Singh,Ibrahim Ali Metwaly, Ali,Tiwari, Animesh,Bhattacharyya, Ritabrata
SSRN
Predicting the direction of stock indices has always been an appealing topic which has motivated researchers over the years to develop better predictive models. Recently, machine learning (ML) based models have been frequently deployed to forecast the direction of classic financial time series data. In the 1950s, the Hurst exponent was introduced as a statistical measure to classify various time series. This research analyzes the effectiveness of using machine learning and the Hurst exponent along with popular technical indicators for short-term trading predictions. In this study we explore the use of the Hurst exponent to segment data for a short-term machine learning model in order to improve the trading strategy. A comparative analysis has been carried out between the performance of a standalone short-term model and a segmented model (segments based on a Hurst exponent cutoff) on the S&P 500, SSE Composite Index, Gold SPDR Shares and Bitcoin. This new approach is introduced in order to reach the optimum integration between machine learning and the Hurst exponent.
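The abstract does not say which Hurst estimator the authors use; a common choice, sketched below under that assumption, is the variance-of-lagged-differences method, where the exponent is the slope of a log-log fit of difference dispersion against lag.

```python
import numpy as np

def hurst_exponent(series, max_lag=20):
    """Estimate the Hurst exponent H via the variance-of-lagged-differences
    method: for a fractional-Brownian-like series, the std of lag-q
    differences scales like q**H, so H is the slope of a log-log fit.
    H ~ 0.5: random walk; H > 0.5: trending; H < 0.5: mean-reverting."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return float(H)

# Sanity check: a pure random walk should come out near H = 0.5
rng = np.random.default_rng(5)
H = hurst_exponent(np.cumsum(rng.standard_normal(5_000)))
```

A segmented strategy of the kind described would then route each window of the price series to a different short-term model depending on whether its estimated H falls above or below the chosen cutoff.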
Social Networks and Market Reactions to Earnings News
Hirshleifer, David A.,Peng, Lin,Wang, Qiguang
SSRN
Using social network data from Facebook, we show that earnings announcements made by firms located in counties with higher investor social network centrality attract more attention from both retail and institutional investors. For such firms, the immediate price and volume reactions to earnings announcements are stronger, and post-announcement drift is weaker. Such firms have lower post-announcement persistence of return volatility but higher persistence in investor attention and trading volume. These effects are stronger for small firms, firms with poor analyst and media coverage, and for stocks with salient returns. Our evidence suggests a dual role of social networks---they facilitate the incorporation of public information into prices, but also trigger persistent excessive trading.
Squeezing Shorts Through Social News Platforms
Tengulov, Angel,Allen, Franklin,Nowak, Eric,Pirovano, Matteo
SSRN
At the end of January 2021, a group of stocks listed on US stock exchanges experienced sudden surges in their stock prices, which, coupled with high short interest, led to brief short squeeze episodes. We argue that these short squeezes were the result of coordinated trading by investors, who discussed their trading strategies on social news platforms. In addition, option markets played a central role in these events. Using hand-collected data we provide the first rigorous academic study of these short squeezes and show that they significantly impeded market quality not only of the stocks at issue but also of their competitors. This evidence calls for tighter monitoring of social news platforms and a better understanding of the interlinkages between these platforms, derivatives markets and equity markets.
Strategic Choice of Presentation Format: The Case of ETR Reconciliations
Chychyla, Roman,Falsetta, Diana,Ramnath, Sundaresh
SSRN
To minimize costs related to unfavorable perceptions of their tax-related activities, firms with low effective tax rates (ETR) could avoid, where possible, explicit mentions of their effective tax rates. Using this reputational cost perspective we study an item of required disclosure in the income tax footnote of the 10-K, the ETR reconciliation table, where firms can choose a presentation format that reveals the tax rate (the percentage format) or one that avoids explicit mention of the effective tax rate (the dollar format). We find that firms with low ETRs are 24 percent more likely to use the dollar format, and are also less likely to mention their tax rates elsewhere in their disclosures, consistent with the choice of dollar format reflecting a firm's overall tax disclosure strategy. Analysts' tax expense forecasts are less accurate for dollar format firms, suggesting higher processing costs associated with tax-related disclosures for these firms.
The Reliability of Geometric Brownian Motion Forecasts of S&P 500 Index Values
Sinha, Amit K.
SSRN
This manuscript extends the literature on the application of geometric Brownian motion. Forecasted drift and diffusion terms, estimated separately and recursively, are plugged into the framework to forecast S&P 500 index values. Expected index values are estimated from one hundred thousand simulated index values and their probabilities. The results of comparing expected index values to actual values indicate that while reliable predictions of S&P 500 index values can be obtained at monthly, quarterly and annual frequencies, the reliability may decrease in that order.
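The simulation step described here is straightforward to reproduce. The sketch below is my own minimal version, not the paper's code, and the parameter values are illustrative: it draws terminal values of a geometric Brownian motion S_T = S0 * exp((mu - sigma^2/2)T + sigma*sqrt(T)*Z) and averages them to obtain an expected index value.

```python
import math
import random

def gbm_expected_terminal(s0, mu, sigma, horizon, n_paths=100_000, seed=42):
    """Monte Carlo estimate of E[S_T] under geometric Brownian motion."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        total += s0 * math.exp((mu - 0.5 * sigma ** 2) * horizon
                               + sigma * math.sqrt(horizon) * z)
    return total / n_paths

# illustrative drift/diffusion; the exact GBM mean is S0 * exp(mu * T),
# so the Monte Carlo estimate should land close to it
est = gbm_expected_terminal(s0=4000.0, mu=0.07, sigma=0.18, horizon=1.0)
```

In the paper's recursive setup, mu and sigma would be re-estimated at each forecast date before simulating.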
The Distribution of Investor Beliefs, Stock Ownership and Stock Returns
Hardouvelis, Gikas A.,Karalas, Georgios,Vayanos, Dimitri
SSRN
We study theoretically and empirically the relationship between investor beliefs, ownership dispersion and stock returns. We find that high dispersion, measured by high breadth or low Herfindahl index, forecasts returns positively for large stocks, as in Chen, Hong, and Stein (2002), but negatively for small stocks. We explain that relationship in a difference-of-opinion model in which stocks differ in the size of investor disagreements and the extent of belief polarization. These differences are characterized by range and kurtosis, respectively. Proxying investor beliefs by analyst forecasts, we find that range and kurtosis affect ownership dispersion in the way that our model predicts.
The EU Sustainable Governance Consultation and the Missing Link to Soft Law
Ferrarini, Guido ,Siri, Michele,Zhu, Shanshan
SSRN
In this paper, we investigate whether reform of EU company law is needed to make corporate governance more sustainable through an analysis of some of the key questions found in the European Commission's questionnaire in its public consultation on sustainable corporate governance. We also consider some issues, which the Commission paid scant attention to in its questionnaire, such as the role of corporate governance codes and other types of soft law, mainly of international origin, in promoting sustainable governance. In addition, we underline that the EU legislator has adopted several measures in recent years, which offer better prospects for sustainable governance than the reform of directors' duties the Commission is currently planning. We conclude that the failure to take corporate governance codes and the existing regulatory framework into account could seriously impair pending reforms of directors' duties and their link to sustainability.
The Effect of Sport in Online Dating: Evidence from Causal Machine Learning
Daniel Boller,Michael Lechner,Gabriel Okasa
arXiv
Online dating emerged as a key platform for human mating. Previous research focused on socio-demographic characteristics to explain human mating in online dating environments, neglecting the commonly recognized relevance of sport. This research investigates the effect of sport activity on human mating by exploiting a unique data set from an online dating platform. Thereby, we leverage recent advances in the causal machine learning literature to estimate the causal effect of sport frequency on the contact chances. We find that for male users, doing sport on a weekly basis increases the probability to receive a first message from a woman by 50%, relative to not doing sport at all. For female users, we do not find evidence for such an effect. In addition, for male users the effect increases with higher income.
The Efficient Hedging Frontier with Deep Neural Networks
Zheng Gong,Carmine Ventre,John O'Hara
arXiv
The trade-off between risks and returns gives rise to multi-criteria optimisation problems that are well understood in finance, efficient frontiers being the tool to navigate their set of optimal solutions. Motivated by the recent advances in the use of deep neural networks in the context of hedging vanilla options when markets have frictions, we introduce the Efficient Hedging Frontier (EHF) by enriching the pipeline with a filtering step that allows trading off costs and risks. This way, a trader's risk preference is matched with an expected hedging cost on the frontier, and the corresponding hedging strategy can be computed with a deep neural network.
We further develop our framework to improve the EHF and find better hedging strategies. By adding a random forest classifier to the pipeline to forecast market movements, we show how the frontier shifts towards lower costs and reduced risks, which indicates that the overall hedging performances have improved. In addition, by designing a new recurrent neural network, we also find strategies on the frontier where hedging costs are even lower.
The Growth of Passive Indexing and Smart-Beta: Competitive Effects on Actively Managed Funds
Densmore, Michael
SSRN
Using a sample of US equity funds, I investigate the extent to which competition from low-cost index funds affects fees, performance, and survival rates of actively managed funds. I measure the intensity of competition using the market value of holdings overlap between the portfolios of index entrants and active incumbents. Disentangling the competitive effects of traditional index funds (market index) from smart-beta index funds (factor index), I provide evidence that factor index fund entry is negatively related to changes in actively managed net fees but no significant impact of market index fund entry. Additionally, I find that both factor and market index entry are negatively related to active incumbent survival rates and that this effect is most pronounced for relatively expensive active incumbents. Importantly, I show that entry of index funds has had an attenuating effect on dispersion in fees across actively managed funds. Lastly, I find evidence that factor index entry has had an attenuating effect on active incumbent future performance.
The Impact of Investor Sentiment on Catering Incentives around the World
Kim, Kihun,Byun, Jinho,Liao, Rose C.,Pan, Carrie H.
SSRN
This study tests the catering theory of dividend policies in twenty-one countries from 1991 to 2017. First, we show that there are important differences in corporate dividend policies across countries. Second, we find that the catering incentive is stronger when investor sentiment is low. Third, firms domiciled in countries with strong legal protections for investors are more likely to cater to investors, especially when investor sentiment is low. Our findings shed light on the factors contributing to the fluctuations in dividend catering around the world.
The Kuroda Bazooka
Gunji, Hiroshi
SSRN
In this study, we examine the impact of the monetary easing policy announced by the Bank of Japan on October 31, 2014 on households' willingness to borrow. This policy, called the Kuroda Bazooka, was not anticipated by the private sector, so it can be regarded as an exogenous shock. We use an interrupted time-series analysis to estimate the effects of the Kuroda Bazooka, a technique often used in medicine but not yet widely used in economics. The analysis of the data before and after the shock shows that the Kuroda Bazooka increased household borrowing intention by about 10%.
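An interrupted time-series design compares post-intervention observations with the counterfactual projected from the pre-intervention trend. A minimal sketch of that idea follows (my own illustration with synthetic data, not the paper's specification, which would typically also include trend-change and control terms).

```python
def its_level_shift(series, break_idx):
    """Fit a linear trend on the pre-break segment by ordinary least
    squares, project it over the post-break segment, and return the
    mean gap between actual values and the counterfactual projection."""
    pre_t = list(range(break_idx))
    pre_y = series[:break_idx]
    n = len(pre_t)
    mean_t = sum(pre_t) / n
    mean_y = sum(pre_y) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(pre_t, pre_y))
             / sum((t - mean_t) ** 2 for t in pre_t))
    intercept = mean_y - slope * mean_t
    gaps = [y - (intercept + slope * (break_idx + i))
            for i, y in enumerate(series[break_idx:])]
    return sum(gaps) / len(gaps)

# synthetic borrowing-intention index: mild trend, +5-point jump at t=30
data = [50.0 + 0.1 * t for t in range(30)] + \
       [55.0 + 0.1 * t for t in range(30, 60)]
effect = its_level_shift(data, 30)   # recovers the 5-point level shift
```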
The Mechanisms of Loan Market Efficiency
SSRN
This article develops an account of the mechanisms of efficiency of corporate loan markets, that is, the secondary markets in which loans made to corporate borrowers are traded. In our account: 1) professionally informed trading, incorporating information about the quality of the loan terms offered to borrowers, is the primary source of corporate loan market efficiency, and 2) antitrust law is among the principal policy tools that can foster loan market efficiency by policing market participants' efforts to restrict activist loan market investors from accessing information in the loan market. The main objective of fostering loan market efficiency is to allow activist investors to incorporate information about the erosion of the quality of the underwritten terms into loan prices and to prompt corrections in mispricing in primary markets, thereby contributing to the tightening of the terms subsequently offered in primary markets. From a policy perspective, efficient loan markets can help alleviate the concerns around the erosion of underwriting standards that have become widespread in recent years.
The Microstructure of Cointegrated Assets
Stoikov, Sasha,Decrem, Peter,Hua, Yikai,Shen, Anne
SSRN
We define the micro price of multiple cointegrated assets. This yields a notion of fair prices as a function of the observable state of multiple order books. We compute the microprices of two highly cointegrated assets, using Level-1 data collected on Interactive Brokers. We design an execution algorithm based on this two-dimensional microprice and show that it can save half of the bid-ask spread cost. The code for this paper is available here: https://github.com/xhshenxin/Micro_Price
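The simplest one-asset relative of the micro price is the imbalance-weighted mid-price computed from Level-1 quotes; the paper's micro price refines this idea and extends it to cointegrated pairs. A sketch of the one-asset version (my own illustration, not the authors' estimator):

```python
def weighted_mid(bid, bid_size, ask, ask_size):
    """Imbalance-weighted mid-price from Level-1 quotes: when the bid
    queue dominates, the fair price leans toward the ask, and vice
    versa. Equal queue sizes reduce it to the ordinary mid-price."""
    imbalance = bid_size / (bid_size + ask_size)
    return imbalance * ask + (1.0 - imbalance) * bid

plain_mid = weighted_mid(99.0, 100, 100.0, 100)   # balanced book: 99.5
leaning   = weighted_mid(99.0, 900, 100.0, 100)   # heavy bid queue: 99.9
```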
The link between unemployment and real economic growth in developed countries
Ivan Kitov
arXiv
Ten years ago we presented a modified version of Okun's law for the biggest developed economies and reported its excellent predictive power. In this study, we revisit the original models using the estimates of real GDP per capita and unemployment rate between 2010 and 2019. The initial results show that the change in unemployment rate can be accurately predicted by variations in the rate of real economic growth. There is a discrete version of the model which is represented by a piecewise linear dependence of the annual increment in unemployment rate on the annual rate of change in real GDP per capita. The lengths of the country-dependent time segments are defined by breaks in the GDP measurement units associated with definitional revisions to the nominal GDP and GDP deflator (dGDP). The difference between the CPI and dGDP indices since the beginning of measurements reveals the years of such breaks. Statistically, the link between the studied variables in the revised models is characterized by a coefficient of determination in the range from R2=0.866 (Australia) to R2=0.977 (France). The residual errors can likely be associated with measurement errors, e.g. the estimates of real GDP per capita from various sources differ by tens of percent. The obtained results confirm the original finding on the absence of structural unemployment in the studied developed countries.
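Within one time segment the model is a straight line, du = a*g + b, fit by ordinary least squares; the piecewise version refits a and b between GDP-definition breaks. A self-contained sketch of the single-segment fit (the numbers are synthetic and my own, not the paper's data):

```python
def okun_fit(growth, du):
    """OLS fit of du = a*g + b plus the R^2 of the fit, where g is the
    annual growth rate of real GDP per capita and du is the annual
    change in the unemployment rate."""
    n = len(growth)
    mg = sum(growth) / n
    md = sum(du) / n
    a = (sum((g - mg) * (d - md) for g, d in zip(growth, du))
         / sum((g - mg) ** 2 for g in growth))
    b = md - a * mg
    ss_res = sum((d - (a * g + b)) ** 2 for g, d in zip(growth, du))
    ss_tot = sum((d - md) ** 2 for d in du)
    return a, b, 1.0 - ss_res / ss_tot

# synthetic segment generated exactly from du = -0.5*g + 0.6
growth = [3.0, 2.0, 1.0, -1.0, 2.5, 0.5]
du = [-0.5 * g + 0.6 for g in growth]
a, b, r2 = okun_fit(growth, du)   # recovers a = -0.5, b = 0.6, r2 = 1
```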
Uncovering commercial activity in informal cities
Daniel Straulino,Juan C. Saldarriaga,Jairo A. Gómez,Juan C. Duque,Neave O'Clery
arXiv
Knowledge of the spatial organisation of economic activity within a city is key to policy concerns. However, in developing cities with high levels of informality, this information is often unavailable. Recent progress in machine learning together with the availability of street imagery offers an affordable and easily automated solution. Here we propose an algorithm that can detect what we call 'visible firms' using street view imagery. Using Medellín, Colombia as a case study, we illustrate how this approach can be used to uncover previously unseen economic activity. Applying spatial analysis to our dataset we detect a polycentric structure with five distinct clusters located in both the established centre and peripheral areas. Comparing the density of visible and registered firms, we find that informal activity concentrates in poor but densely populated areas. Our findings highlight the large gap between what is captured in official data and the reality on the ground.
Using Social Media to Identify the Effects of Congressional Partisanship on Asset Prices
Bianchi, Francesco,Gomez Cram, Roberto,Kung, Howard
SSRN
We measure the individual and collective viewpoints of US Congress members on various economic policies by scraping their Twitter accounts. Tweets that criticize (support) a particular company are associated with a significant negative (positive) stock price reaction in a narrow time window around the tweet. A sharp partisan divide emerges, with Republicans and Democrats coordinated in both their support and opposition for different industries emanating from disparate legislative agendas. Members of congress coordinate within parties to push legislation through their social media accounts. As an illustrative and relevant example, we analyze the Tax Cuts and Jobs Act of 2017 and document significant aggregate stock market responses to the real-time evolution of partisan viewpoints about the bill.
Vaccine allocation to blue-collar workers
László Czaller,Gergő Tóth,Balázs Lengyel
arXiv
Vaccination may be the solution to the pandemic-induced health crisis, but the allocation of vaccines is a complex task in which economic and social considerations can be important. The central problem is to use the limited number of vaccines in a country to reduce the risk of infection and mitigate economic uncertainty at the same time. In this paper, we propose a simple economic model for vaccine allocation across two types of workers: white-collars can work from home, while blue-collars must work on site. These worker types are complementary to each other, thus a negative shock to the supply of either one decreases the demand for the other, which leads to unemployment. Using parameters of blue- and white-collar labor supply, their infection risks, productivity losses at home office during lock-down, and available vaccines, we express the optimal share of vaccines allocated to blue-collars. The model points to the dominance of blue-collar vaccination, especially during waves when their relative infection risks increase and when the number of available vaccines is limited. Taking labor supply data from 28 European countries, we quantify blue-collar vaccine allocation that minimizes unemployment across levels of blue- and white-collar infection risks. The model favours blue-collar vaccination identically across European countries in case of vaccine scarcity. As more vaccines become available, economies that host large shares of employees in home office shall increasingly immunize them in case blue-collar infection risks can be kept down. Our results highlight that vaccination plans should include workers and rank them by type of occupation. We propose that prioritizing blue-collar workers during infection waves and early vaccination can also favour the economy besides helping the most vulnerable who can transmit more infection.
Chen, Jun,Ewens, Michael
SSRN
Although an extensive literature shows that startups are financially constrained and that constraints vary by geography, the source of these constraints is still relatively unknown. We explore intermediary financing constraints, a channel studied in the banking literature, but only implicitly addressed in the venture capital (VC) literature. Our empirical setting is the VC fundraising and startup financing environment around the passage of the Volcker Rule, which restricted banks' ability to invest in venture capital funds as limited partners (LPs). The rule change disproportionately impacted regions of the U.S. historically lacking in VC financing. We find that a one standard deviation increase in VCs' exposure to the loss of banks as LPs led to an 18% decline in fund size and about a 10% decrease in the likelihood of raising a follow-on fund. Startups were not completely cushioned from the additional constraints on their VCs: capital raised fell and pre-money valuations declined. Overall, VC financing constraints manifest as fewer, smaller funds that change investment strategy and experience increases in bargaining power. Last, we show that the rule change increased the likelihood startups moved out of impacted states, thus exacerbating the geographic disparity in high-growth entrepreneurship.
WTO GPA and Sustainable Procurement as Tools for Transitioning to a Circular Economy
Sareesh Rawat
arXiv
We live in an age of consumption with an ever-increasing demand of already scarce resources and equally fast growing problems of waste generation and climate change. To tackle these difficult issues, we must learn from mother nature. Just like waste does not exist in nature, we must strive to create circular ecosystems where waste is minimized and energy is conserved. This paper focuses on how public procurement can help us transition to a more circular economy, while navigating international trade laws that govern it.
What is a Protection Gap? Homeowners Insurance as a Case Study
Feinman, Jay M.
SSRN
In the past few years, the insurance community has paid increasing attention to the "protection gap": the extent to which significant losses are not covered by insurance. The Geneva Association, the insurers' global think tank, has pioneered the concept, and it has become widely adopted. Insurance always presents gaps in coverage; not all risks are insured or indeed insurable. The protection gap concept necessarily embodies a normative component: that insureds with limited coverage, potential insureds who lack insurance, and society as a whole suffer when certain gaps in insurance exist. It is this normative component of the protection gap concept that has not been fully developed and is the subject of this article. Part I of the article explains the commonly used definitions of the protection gap. The most commonly used definition, the "risk protection gap", is purely empirical, measuring the difference between total losses and insured losses. Analytically superior but harder to operationalize is the "insurance protection gap," which is the difference between the amount of insurance that is economically beneficial and the amount of insurance in place. The insurance protection gap properly introduces a normative element to the concept, but it does not capture all of the considerations at stake. Part I offers a different definition: In a particular context, the protection gap is the difference between the amount of insurance that is in place and the amount of insurance that should be in place. Part II of the article expands on the definition and discusses how much insurance "should be" in place. The method begins by defining a particular insurance context and then constructs policyholder expectations in that context. To define a baseline against which a protection gap should be measured, however, policyholder expectations must be reasonable.
Therefore, the risks at issue must be insurable, the insurance must not be undermined by other effectiveness issues, and the social effects of coverage or its absence must be taken into account. Part III illustrates how the article's definition of the protection gap can be applied by analyzing several issues in homeowners insurance. A major problem, and a clear instance of the protection gap, is the extent to which homeowners frequently are underinsured for their losses. The most frequently discussed protection gap involves disaster losses, so this part applies the analysis to flood losses. The part concludes by considering whether several more mundane issues constitute protection gaps: damage caused by rain runoff, and the matching of damaged and undamaged property.
When the banking gets tough, the large get going: How capital regulation is driving consolidation
Maragopoulos, Nikos
SSRN
In response to the Global Financial Crisis (2007-2009), capital regulation was significantly enhanced through the adoption of regulatory measures aimed to improve the resilience of banks, among others, by increasing the required quantity and quality of capital held. Particular emphasis was placed on addressing the "too-big-to-fail" problem, seeking to reduce the incentives for large banks to become ever larger and more systemically relevant. The present paper examines whether banks' size affects the level of applicable capital requirements and the amount of (CET1) capital that banks are required to hold. Based on an analysis of the capital requirements for 108 ECB-supervised banks, it is demonstrated that the arrangements governing the calibration of the bank-specific elements of capital requirements (i.e. Pillar 2 Requirement, systemic buffer, Pillar 2 Guidance), as well as other bank-specific characteristics relevant to capital requirements (i.e. banks' ability to issue AT1 and Tier 2 instruments, RW density), tend to favour G-SIBs and other large banks with assets exceeding €200bn. On average, G-SIBs are subject to a CET1 requirement c. 3pp lower than banks with assets of less than €30bn, mainly due to the fact that they take advantage of the AT1 and Tier 2 allowances granted by CRR/CRD V. Also, given that large banks have a significantly lower RW density, for every billion of assets held, the amount of CET1 capital that G-SIBs are required to hold is nearly half the amount that small banks must keep under the applicable capital requirements. The discrepancies relating to the approach for the determination of the systemic buffer (i.e. highest of G-SII buffer, O-SII buffer, or systemic risk buffer), which is still determined at national level, contribute to the creation of an unlevel playing field in the Banking Union, as banks with similar asset size and systemic relevance are treated in a different manner because they are located in different Member States.
Therefore, the SSMR should be amended to transfer the competence for the determination of the systemic buffer to the ECB, to ensure that the Banking Union will be treated as a single jurisdiction and that the determination of the systemic buffer will be made based on uniform conditions. As demonstrated in this analysis, capital regulation functions as an incentive, rather than as an obstacle, to the further increase of banks' size. In light of the need for further consolidation of the banking sector in the Banking Union, banks subject to lower (CET1) capital requirements, as is the case for large banks, need less CET1 capital to finance potential M&As, mostly by leveraging their ability to tap capital markets for AT1 and Tier 2 issuances. Thus, from a capital requirements perspective, large banks have every incentive to expand their operations, either on a domestic or cross-border basis, at the expense of smaller banks.
Why is Corporate Virtue in the Eye of the Beholder? The Case of ESG Ratings
Christensen, Dane M.,Serafeim, George,Sikochi, Siko
SSRN
Despite the rising use of environmental, social, and governance (ESG) ratings, there is substantial disagreement across rating agencies regarding what rating to give to individual firms. As what drives this disagreement is unclear, we examine whether a firm's ESG disclosure helps explain some of this disagreement. We predict and find that greater ESG disclosure actually leads to greater ESG rating disagreement. These findings hold using firm fixed effects, and using a difference-in-differences design with mandatory ESG disclosure shocks. We also find that raters disagree more about ESG outcome metrics than input metrics (policies), and that disclosure appears to amplify disagreement more for outcomes. Lastly, we examine consequences of ESG disagreement and find that greater ESG disagreement is associated with higher return volatility, larger absolute price movements, and a lower likelihood of issuing external financing. Overall, our findings highlight that ESG disclosure generally exacerbates ESG rating disagreement rather than resolving it.
Überblick über die aktuelle Entwicklung von Financial Literacy unter Berücksichtigung der Möglichkeiten der privaten Altersvorsorge [Overview of the Current Development of Financial Literacy with Consideration of Private Retirement Provision Options]
Daus, Viktoria,Krahnhof, Philippe,Zureck, Alexander
SSRN
Financial literacy has gained steadily in importance since 2005, without awareness of this competence being present in the minds of the population. The term is used to measure the financial education of pupils and adults. The article answers the question: "What does financial literacy mean, and why is it so significant in the context of financial provision?" Accordingly, this paper provides an overview of the financial literacy topic with particular attention to the options for private retirement provision from the perspective of German retail clients. Among other things, the article illustrates the investment in the education system needed to build up and secure financial education in Germany on a sustainable basis. The focus of this work is on conveying general fundamentals in the context of financial literacy.
https://planetmath.org/WeierstrassEquationOfAnEllipticCurve

# Weierstrass equation of an elliptic curve
Recall that an elliptic curve over a field $K$ is a projective nonsingular curve $E$ defined over $K$ of genus $1$ together with a point $O\in E$ defined over $K$.
###### Definition.
Let $K$ be an arbitrary field. A Weierstrass equation for an elliptic curve $E/K$ is an equation of the form:
$y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}$
where $a_{1},a_{2},a_{3},a_{4},a_{6}$ are constants in $K$.
All elliptic curves have a Weierstrass model in $\mathbb{P}^{2}(K)$, the projective plane over $K$. This is a simple application of the Riemann-Roch theorem for curves (http://planetmath.org/node/RiemannRochTheorem):
###### Theorem.
Let $E$ be an elliptic curve defined over a field $K$. Then there exist rational functions $x,y\in K(E)$ such that the map $\psi:E\to\mathbb{P}^{2}(K)$ sending $P$ to $[x(P),y(P),1]$ is an isomorphism of $E/K$ to the projective curve given by
$y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}$
where $a_{1},a_{2},a_{3},a_{4},a_{6}$ are constants in $K$.
Moreover, the following proposition specifies any possible change of variables.
###### Proposition 1.
Let $E/K$ be an elliptic curve given by a Weierstrass model of the form:
$y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}$
with $a_{i}\in K$. Then:
1. The only changes of variables $(x,y)\mapsto(x^{\prime},y^{\prime})$ preserving the projective point $[0,1,0]$ and resulting in another Weierstrass equation are of the form:
$x=u^{2}x^{\prime}+r,\quad y=u^{3}y^{\prime}+su^{2}x^{\prime}+t$
with $u,r,s,t\in K$ and $u\neq 0$.
2. Any two Weierstrass equations for $E/K$ differ by a change of variables of the form given in $(1)$.
Once we have one Weierstrass model for a given elliptic curve $E/K$, and as long as the characteristic of $K$ is not $2$ or $3$, there exists a change of variables (of the form given in the previous proposition) which simplifies the model considerably.
###### Corollary.
Let $K$ be a field of characteristic different from $2$ or $3$. Let $E$ be an elliptic curve defined over $K$. Then there exists a Weierstrass model for $E$ of the form:
$y^{2}=x^{3}+Ax+B$
where $A,B$ are elements of $K$.
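For the short model of the corollary, the discriminant and $j$-invariant take standard closed forms (stated here for reference; they follow from the general formulas for $c_4$, $c_6$ and $\Delta$):

```latex
\Delta = -16\,\bigl(4A^{3} + 27B^{2}\bigr),
\qquad
j = 1728\,\frac{4A^{3}}{4A^{3} + 27B^{2}}.
```

In particular, the curve $y^{2}=x^{3}+Ax+B$ is nonsingular exactly when $\Delta\neq 0$, i.e. when $x^{3}+Ax+B$ has no repeated root.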
Finally, remember that the $j$-invariant of an elliptic curve is invariant under isomorphism, but the discriminant depends on the model chosen.
###### Proposition 2.
Let $E/K$ be an elliptic curve and let
$E_{1}:y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6},\quad E_{2}:y^{\prime 2}+a^{\prime}_{1}x^{\prime}y^{\prime}+a^{\prime}_{3}y^{\prime}=x^{\prime 3}+a^{\prime}_{2}x^{\prime 2}+a^{\prime}_{4}x^{\prime}+a^{\prime}_{6}$
be two distinct Weierstrass models for $E/K$. Then (by Prop. 1) there exists a change of variables $(x,y)\mapsto(x^{\prime},y^{\prime})$ of the form:
$x=u^{2}x^{\prime}+r,\quad y=u^{3}y^{\prime}+su^{2}x^{\prime}+t$
with $u,r,s,t\in K$ and $u\neq 0$. Moreover, $j(E_{1})=j(E_{2})$, i.e. the $j$-invariants are equal ($j(E)$ is defined in this entry: http://planetmath.org/node/JInvariant) and $\Delta(E_{1})=u^{12}\Delta(E_{2})$, where $\Delta(E_{i})$ is the discriminant (as defined here: http://planetmath.org/node/JInvariant).
Title: Weierstrass equation of an elliptic curve. Canonical name: WeierstrassEquationOfAnEllipticCurve. Date of creation / last modified: 2013-03-22 15:48:00. Owner: alozano (2414). Type of object: Definition. MSC: 11G05, 14H52, 11G07. Defines: Weierstrass model, Weierstrass equation.
http://math.stackexchange.com/questions/204462/family-of-prime-ideals-in-mathbbzx-y | # Family of prime ideals in $\mathbb{Z}[x,y]$
Problem: let $m$ be a positive integer. Find a necessary and sufficient condition on $m$ so that $I=(m, x^2+y^2)$ is a prime ideal in $R=\mathbb{Z}[x,y]$.
An easy necessary condition is: $m$ is a prime number, and $m\neq 2$. (in fact, if $m=2$, then $x^2+2xy+y^2\in I\Rightarrow (x+y)^2\in I$ but $x+y\notin I$). I am stuck proving sufficiency (I don't know if this is really a sufficient condition).
The conditions $m\neq2$ and $m$ prime are indeed necessary but not sufficient.
So, conversely, given an odd prime $m=p$, what is the condition for $I$ to be prime, or equivalently for the ring $(\mathbb Z/p\mathbb Z)[X,Y]/(X^2+Y^2)=\mathbb F_p[X,Y]/(X^2+Y^2)$ to be a domain?
Since $\mathbb F_p[X,Y]$ is a unique factorization domain, the condition is exactly that the polynomial $X^2+Y^2$ be irreducible in $\mathbb F_p[X,Y]$.
A little calculation (that I'll leave to you) shows that this is the case exactly if $-1$ is not a square in $\mathbb F_p$. And finally, deciding whether $-1$ is a square modulo $p$ is a very classical question that you can look up in a textbook or solve for yourself, using the fact that the multiplicative group $\mathbb F_p^*$ is cyclic (the answer involves the residue of $p$ modulo $4$).
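As a quick numerical sanity check of the criterion (illustrative sketch only): $-1$ is a square mod an odd prime $p$ exactly when $p\equiv 1\pmod 4$, and in that case $X^2+Y^2\equiv(X+iY)(X-iY)$ factors mod $p$, so $(m,x^2+y^2)$ is prime exactly for primes $m\equiv 3\pmod 4$.

```python
# Euler's criterion: a nonzero a is a square mod an odd prime p iff a^((p-1)/2) ≡ 1 (mod p).
def is_square_mod(a, p):
    return pow(a % p, (p - 1) // 2, p) == 1

def odd_primes(limit):
    # naive trial-division sieve, fine for small limits
    return [n for n in range(3, limit) if all(n % d for d in range(2, int(n**0.5) + 1))]

# -1 is a square mod p  <=>  p ≡ 1 (mod 4)
for p in odd_primes(200):
    assert is_square_mod(-1, p) == (p % 4 == 1)

# Explicit witness for p = 13 ≡ 1 (mod 4): find i with i^2 ≡ -1,
# so X^2 + Y^2 = (X + iY)(X - iY) mod 13 and (13, x^2 + y^2) is not prime.
p = 13
i = next(x for x in range(1, p) if (x * x + 1) % p == 0)
assert (i * i + 1) % p == 0
```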
https://de.zxc.wiki/wiki/Magnetic_Marker_Monitoring | # Magnetic marker monitoring
Magnetic marker monitoring was designed to record, analyze and optimize movements in closed systems that are difficult to access. In gastroenterology, magnetic marker monitoring is used to identify specific motility patterns during the gastrointestinal passage of a magnetic marker and thus to diagnose functional diseases of the gastrointestinal tract.
Particular attention in the motility analysis is given to gastric emptying disorders, inflammatory bowel diseases (Crohn's disease, ulcerative colitis), gastroparesis, celiac disease and diabetes mellitus. In these diseases, a significant change in motility within the digestive organs is assumed.
## Principle
After the patient has been given a bio-inert capsule with a magnetic core (e.g. neodymium-iron-boron, NdFeB), he or she is placed under an array of magnetic-field-sensitive sensors (e.g. AMR sensors). The sensors measure the quasi-static magnetic field that surrounds the marker. The exact alignment and position of the marker are determined by comparing the field distribution resulting from a simulation of the current marker position with the real, measured magnetic field distribution. The data obtained are recorded and analyzed using special software. This exact tracking of the marker through the gastrointestinal tract (GI tract) enables the patient's passage time and motility patterns to be examined.

Another application of magnetic marker monitoring is in drug development. Knowledge of the absorption of active ingredients is of central importance for the manufacture of pharmaceutical products. In order to examine the absorption properties in different sections of the intestine, a magnetic capsule is used (for example in the MAARS method) which releases the active ingredient in a controlled manner, triggered by the user. The capsule consists of individual segments that are held together by magnetic forces. The external magnetic stray field of the capsule is used for localization in the gastrointestinal tract. By means of a controlled demagnetization, the capsule disintegrates into its individual segments and releases the active substance it contains in the target volume.
## Physical fundamentals for locating a magnetic marker
### Maxwell's equations
Field distribution for any dipole magnet in the coordinate system
All macroscopic properties of electromagnetic fields can be described with the help of Maxwell's equations. They contain the electric field $E$, the magnetic field $H$ and the magnetic flux density $B$ in vacuum, the electric charge density $\rho$, the displacement density $D$ and the current density $j$. In differential form they are:
$$\operatorname{rot}\mathbf{H}=\frac{\partial\mathbf{D}}{\partial t}+\mathbf{j}\,,\qquad \operatorname{rot}\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t}\,,\qquad \operatorname{div}\mathbf{D}=\rho\,,\qquad \operatorname{div}\mathbf{B}=0$$
The description for the field of a magnetic dipole can be derived from these equations:
$$\mathbf{H}(\mathbf{r},\boldsymbol{\mu})=\frac{1}{4\pi}\left(-\frac{\boldsymbol{\mu}}{r^{3}}+\frac{3\,(\mathbf{r}\cdot\boldsymbol{\mu})\,\mathbf{r}}{r^{5}}\right),\qquad\text{with }\boldsymbol{\mu}=\text{magnetic moment}$$
There are six degrees of freedom for the clear determination of the marker position, five for the position of the dipole in space (X, Y and Z as Cartesian coordinates and φ and θ for describing the orientation of the marker) and the magnetic moment μ as the sixth degree of freedom. If these variables are known, the magnetic field strength can be determined at any point in space. Since the marker localization is an inverse problem , the position of the marker cannot, conversely, be explicitly stated from six independent measurements. For this reason, the problem of determining the position is sensibly solved with the aid of the least squares method.
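As a concrete check of the dipole formula above, here is a small sketch in plain Python (unit magnetic moment; names are illustrative) that evaluates $\mathbf{H}$ and verifies the characteristic $1/r^{3}$ falloff on the dipole axis:

```python
import math

def dipole_field(r_vec, mu_vec):
    """H(r, mu) = (1/4π) · (−mu/r³ + 3 (r·mu) r / r⁵) for a point dipole."""
    r = math.sqrt(sum(c * c for c in r_vec))
    r_dot_mu = sum(a * b for a, b in zip(r_vec, mu_vec))
    return [(-m / r**3 + 3.0 * r_dot_mu * c / r**5) / (4.0 * math.pi)
            for c, m in zip(r_vec, mu_vec)]

mu = (0.0, 0.0, 1.0)  # unit moment along z

# On the dipole axis: H_z = 2/(4π r³), so doubling r divides the field by 8.
h1 = dipole_field((0.0, 0.0, 1.0), mu)[2]
h2 = dipole_field((0.0, 0.0, 2.0), mu)[2]
print(h1 / h2)  # ≈ 8 (the 1/r³ falloff)

# In the equatorial plane (r ⊥ mu): H_z = −1/(4π r³).
heq = dipole_field((1.0, 0.0, 0.0), mu)[2]
```

This rapid falloff is why a dense sensor array close to the body is needed for accurate localization.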
### The quality function
For a number $n$ of sensors, at least equal to the number of degrees of freedom to be determined, the magnetic field $H$ of a simulated marker is calculated at the sensor positions and compared with the measured sensor signals. For this purpose, all squared deviations between simulated and actual magnetic field strengths are summed up to form an error or quality function:
$$Q=\sum_{i=1}^{n}\left\{\mathbf{H}(\mathbf{r},\boldsymbol{\mu})_{i}^{M}-\mathbf{H}(\mathbf{r},\boldsymbol{\mu})_{i}^{S}\right\}^{2},\qquad\text{with }M=\text{measurement},\ S=\text{simulation}$$
The position of the marker to be localized is changed with a suitable strategy ( gradient method or fuzzy methods) until the difference between the sensor signal and the simulated field is minimal. The position determined in this way corresponds to the true position of the marker. To increase the accuracy and to minimize the influence of the static errors in the sensor signals, a large number of sensors is used. In order to minimize the influence of external disturbances, various modifications of the quality function can be created. For example, the sensors can be evaluated with regard to their sensitivity. The quality function is then modified in the following form:
$$Q=\sum_{i=1}^{n}\frac{\left\{\mathbf{H}(\mathbf{r},\boldsymbol{\mu})_{i}^{M}-\mathbf{H}(\mathbf{r},\boldsymbol{\mu})_{i}^{S}\right\}^{2}}{\Delta\mathbf{H}_{i}^{2}},\qquad\text{with }M=\text{measurement},\ S=\text{simulation}$$
where $\Delta H_{i}^{2}$ corresponds to the spread of the individual sensor signals. A further increase in accuracy is achieved through the introduction of the gradiometer principle. To do this, various sensor signals are linked to one another in order to eliminate external interference fields. The quality function for a first-order gradiometer is as follows:
$$Q=\sum_{i=1}^{n}\left\{\left(\mathbf{H}_{i}^{M}-\mathbf{H}_{j}^{M}\right)-\left(\mathbf{H}_{i}^{S}-\mathbf{H}_{j}^{S}\right)\right\}^{2},\qquad\text{with }M=\text{measurement},\ S=\text{simulation},\ (i,j)=\text{a gradiometer pair of sensors}$$
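To make the least-squares localization principle concrete, here is a toy Python sketch (illustrative only: a z-oriented unit dipole, a 5×5 planar sensor array measuring only the z-field component, and a brute-force grid search standing in for the gradient or fuzzy optimizer mentioned above):

```python
import math
from itertools import product

def hz(marker, sensor):
    """z-component of the field of a unit z-oriented point dipole at `marker`."""
    rx, ry, rz = (s - m for s, m in zip(sensor, marker))
    r = math.sqrt(rx * rx + ry * ry + rz * rz)
    return (-1.0 / r**3 + 3.0 * rz * rz / r**5) / (4.0 * math.pi)

# 5 × 5 planar sensor array at z = 0
sensors = [(float(x), float(y), 0.0) for x in range(-2, 3) for y in range(-2, 3)]

true_marker = (0.4, -0.6, -1.2)          # marker somewhere below the array
measured = [hz(true_marker, s) for s in sensors]

def quality(candidate):
    """Q = Σ_i (H_i^measured − H_i^simulated)² over all sensors."""
    return sum((m - hz(candidate, s)) ** 2 for m, s in zip(measured, sensors))

# Brute-force search over a 0.2-spaced grid of candidate positions
grid = [(x / 5.0, y / 5.0, z / 5.0)
        for x, y, z in product(range(-10, 11), range(-10, 11), range(-10, -2))]
best = min(grid, key=quality)
print(best)  # recovers the true marker position, where Q vanishes
```

In practice the grid search is replaced by an iterative minimizer and the full six degrees of freedom (position, orientation, moment) are fitted, but the structure of the problem is the same: minimize $Q$ over candidate marker states.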
## Used magnetic field sensors
There are various measuring device arrangements and magnetic field sensors, all of which are based on the three-dimensional localization of magnetic markers. "Superconducting quantum interference device" sensors (SQUIDs) enable the detection of the smallest signals, down to 10⁻¹⁵ Tesla. A measurement with these sensors is very complex because the sensors have to be cooled (low-temperature SQUIDs with liquid helium, high-temperature SQUIDs with liquid nitrogen). Due to the high sensitivity of the sensors to magnetic fields, magnetic shielding is generally required. The application of SQUIDs is therefore very expensive and remains limited to experimental purposes. Another type of sensor are Hall sensors (named after Edwin Hall), which have a sensitivity of up to 10⁻⁸ Tesla and are therefore above urban disturbances (magnetic disturbances e.g. from hospital beds or elevators). They do not need magnetic shielding and work at room temperature. However, to achieve a large range, the magnetic markers required with Hall sensors prove to be very large and are therefore unsuitable for medical applications.
Thus, in clinical practice, it is primarily AMR sensors that are used. At 10⁻¹⁰ Tesla, their sensitivity is slightly below urban disturbances. With this type of sensor, measurements can be made at room temperature in a normal examination room with small magnets and with sufficient accuracy. This method is therefore easy to carry out and inexpensive. The position of such a magnet is determined by evaluating the magnetic stray field surrounding it. After it has been ingested by a patient, its current location and the frequencies, activities and speeds with which it moves can be determined. The behavior of the magnetic marker corresponds to that of indigestible food components in the GI tract, e.g. cherry stones.
## Application
Drug release profile with MAARS capsule
A mere monitoring of the passage of a capsule and the motility pattern can provide information about the course of a therapy or illness for all gastrointestinal dysfunctions in which a changed motility of the gastrointestinal tract is part of the symptoms. Particular mention should be made of gastroparesis, celiac disease, Crohn's disease, ulcerative colitis, diabetes mellitus and diarrhea. Changes in motility due to medication, food components and operations can also be assessed very well using magnetic marker monitoring. A targeted release of the active ingredient is of particular importance for the development of medicinal substances, as this enables absorption in different areas of the intestine to be determined and an optimized formulation of the medicinal product to be found. Through a combination of monitoring and controlled drug release, pharmacokinetic data regarding bioavailability and the drug release profile can be recorded. The release profile shown in the picture was generated with the "Magnetic active agent release system" (MAARS).
Probably the most important advantage over other diagnostic methods in gastroenterology, such as endoscopy , is the painless and minimally invasive examination of the patient. In contrast to scintigraphic methods, no radioactive substances are used. For drug research and development, there are advantages primarily from the fact that drug studies can be carried out quickly and easily.
## References
1. H. Richert: Development of a magnetic 3-D monitoring system using the example of the non-invasive examination of the human gastrointestinal tract . (Dissertation, Friedrich Schiller University, Jena 2003).
2. Wilfried Andrä, Henri Danan, Klaus Eitner, Michael Hocke, Hans-Helmar Kramer, Henry Parusel, Pieter Saupe, Christoph Werner, Matthias E. Bellemann: A novel magnetic method for examination of bowel motility. In: Medical Physics. Vol. 32, 2005, pp. 2942–2944, doi:10.1118/1.2012788.
3. Michael Hocke, Ulrike Schöne, Hendryk Richert, Peter Görnert, Jutta Keller, Peter Layer, Andreas Stallmach: Every slow-wave impulse is associated with motor activity of the human stomach. In: American Journal of Physiology-Gastrointestinal and Liver Physiology. Vol. 296, 2009, pp. G709–G716, doi:10.1152/ajpgi.90318.2008, PMID 19095766.
4. Felber J., Pätzold S., Richert H., Stallmach A.: 3D-MAGMA: A novel way of measuring gastrointestinal motility in patients with infectious diarrhoea. In: Gut. Vol. 60, 2011, pp. 153–154, doi:10.1136/gut.2011.239301.325.
5. a b c Clinical magnetic monitoring system, 3D-MAGMA (Memento from August 15, 2013 in the Internet Archive), capsule, measuring system, marker path
6. a b c Release capsule Magnetic drug release, MAARS process (Memento from March 10, 2013 in the Internet Archive), MAARS process
7. O. Kosch, W. Weitschies, L. Trahms: On-line localization of magnetic markers for clinical applications and drug delivery studies. In: Biomag 2004: Proceedings of the 14th International Conference on Biomagnetism, Boston, Massachusetts, USA, August 8–12, 2004. 2004, pp. 261–262.
8. Werner Weitschies, Olaf Kosch, Hubert Mönnikes, Lutz Trahms: Magnetic Marker Monitoring: An application of biomagnetic measurement instrumentation and principles for the determination of the gastrointestinal behavior of magnetically marked solid dosage forms. In: Advanced Drug Delivery Reviews. Vol. 57, No. 8, 2005, pp. 1210–1222, doi:10.1016/j.addr.2005.01.025.
9. V. Schlageter, B. Thevoz, Y. de Ribaupierre, B. Meyrat, N. Lutz, P. Kucera: Noninvasive examination of gastrointestinal motility by using magneto-detection. In: Neurogastroenterol Motil. No. 10, 1998, p. 105.
10. E. Stathopoulos, V. Schlageter, B. Meyrat, Y. Ribaupierre, P. Kucera: Magnetic pill tracking: a novel non-invasive tool for investigation of human digestive motility. In: Neurogastroenterology & Motility. Vol. 17, No. 1, 2005, pp. 148–154, doi:10.1111/j.1365-2982.2004.00587.x.
11. H. Richert, S. Wangemann, O. Surzhenko, J. Heinrich, K. Eitner, M. Hocke, P. Görnert: Magnetic Monitoring of the Human Gastrointestinal Tract. In: Biomedical Engineering. Vol. 49, 2004, pp. 718–719.
12. Hendryk Richert, Olaf Kosch, Peter Görnert: Magnetic Monitoring as a Diagnostic Method for Investigating Motility in the Human Digestive System. In: W. Andrae, H. Nowak (Eds.): Magnetism in Medicine. WILEY-VCH, Weinheim, pp. 481–498, doi:10.1002/9783527610174.ch4b.
13. Biopharmacy and Pharmaceutical Technology (Memento from January 1, 2009 in the Internet Archive), example with the MAARS method
https://www.autoitscript.com/forum/topic/29080-array-file-and-dir-list/ | Followers 0
# Array file and dir list
## 35 posts in this topic
#1 · Posted (edited)
I am trying to break out both the directories and the files from a folder. What I want is for each folder to be listed, and each file under that folder to be listed as well, beneath its folder heading.
edit - new code - almost there
#include <GuiConstants.au3>
#include <GuiListView.au3>
#Include <File.au3>
#Include <Array.au3>
Dim $Btn_TestScroll, $Btn_Exit, $msg, $Status, $i
$dirList = _FileListToArray(@FavoritesDir, '*', 2)
If (Not IsArray($dirList)) And (@error = 1) Then
    MsgBox(0, "", "No Files\Folders Found.")
    Exit
EndIf
GUICreate("ListView Scroll", 392, 322)
$listview = GUICtrlCreateListView("URL", 40, 30, 310, 149)
;$FileList =12
For $TEST1 In $dirList
    GUICtrlCreateListViewItem($TEST1, $listview)
    $FileList = _FileListToArray(@FavoritesDir & "\" & $TEST1, '*.url', 1)
    If (Not IsArray($FileList)) And (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Exit
    EndIf
    $rows = UBound($FileList)
    $dims = UBound($FileList, 0)
    For $r = 0 To UBound($FileList, 1) - 1
        For $dims In $FileList - $r
            GUICtrlCreateListViewItem($dims, $listview)
        Next
    Next
Next
$Btn_TestScroll = GUICtrlCreateButton("Scroll Test", 150, 230, 90, 40)
$Btn_Exit = GUICtrlCreateButton("Exit", 300, 260, 70, 30)
$Status = GUICtrlCreateLabel("Remember items are zero-indexed", 0, 302, 392, 20, BitOR($SS_SUNKEN, $SS_CENTER))
GUISetState()
While 1
    $msg = GUIGetMsg()
    Select
        Case $msg = $GUI_EVENT_CLOSE Or $msg = $Btn_Exit
            ExitLoop
        Case $msg = $Btn_TestScroll
            For $i = 10 To 50 Step 10
                _GUICtrlListViewScroll($listview, 0, $i)
                Sleep(500)
                _GUICtrlListViewScroll($listview, 0, $i - ($i * 2))
                Sleep(500)
            Next
    EndSelect
WEnd
Exit

When I do this I get each folder listed countless times. Can anyone point me in the right direction. Thanks

EDIT - Changed code - but same thing happens
EDIT 2 - thought I found my error - but now it says it must be declared as an object?

Edited by nitekram

All by me:
"Sometimes you have to go back to where you started, to get to where you want to go."
"Everybody catches up with everyone, eventually"
"As you teach others, you are really teaching yourself."
From my dad: "Do not worry about yesterday, as the only thing that you can control is tomorrow."
Programming Tips · Excel Changes · ControlHover.UDF · GDI_Plus · Draw_On_Screen · GDI Basics · GDI_More_Basics · GDI Rotate · GDI Graph · GDI CheckExistingItems · GDI Trajectory · Replace $ghGDIPDll with $__g_hGDIPDll · DLL 101? · Array via Object · GDI Swimlane · GDI Plus French 101 Site · GDI Examples UEZ · GDI Basic Clock · GDI Detection · Ternary operator

BUMP - plus I changed the code - but still same issue.

#3 · Posted (edited)

Never mind I found it - I had declared a variable and never took it out.
Edit - I feel really stupid saying I found it - but it still is messed up - any help

#4 · Posted (edited)

OK - I have to BUMP this as I believe those that read it the last time might have thought that I was able to figure it out. I think I need to use UBound but I am unable to figure it out.
EDIT - Added code that I am trying to figure out.
EDIT - Cleaned up code with TidyIT.
EDIT - DELETED CODE - ALL $#$!#$$!## UP - I have no clue. Can you tell?

New Code - still does not do what I want
#include <GuiConstants.au3>
#include <GuiListView.au3>
#Include <File.au3>
#Include <Array.au3>
Dim $Btn_TestScroll, $Btn_Exit, $msg, $Status, $i
$dirList = _FileListToArray(@FavoritesDir, '*', 2)
If (Not IsArray($dirList)) And (@error = 1) Then
    MsgBox(0, "", "No Files\Folders Found.")
    Exit
EndIf
GUICreate("ListView Scroll", 392, 322)
$listview = GUICtrlCreateListView("URL", 40, 30, 310, 149)
For $TEST1 In $dirList
    GUICtrlCreateListViewItem($TEST1, $listview)
    $FileList = _FileListToArray(@FavoritesDir & "\" & $TEST1, '*.url', 1)
    If (Not IsArray($FileList)) And (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Exit
    EndIf
    $rows = UBound($FileList)
    $dims = UBound($FileList, 0)
    ;MsgBox(0, "The " & $dims & "-dimensional array has", _
    ;       $rows & " rows, " & $cols & " columns")
    ;Display $myArray's contents
    $output = ""
    For $r = 0 To UBound($FileList, 1) - 1
        $output = $output & @LF
    Next
    ;MsgBox(4096, "Array Contents", $output)
    ;MsgBox(0, "test", $output)
    GUICtrlCreateListViewItem($FileList[$rows], $listview)
Next
$Btn_TestScroll = GUICtrlCreateButton("Scroll Test", 150, 230, 90, 40)
$Btn_Exit = GUICtrlCreateButton("Exit", 300, 260, 70, 30)
$Status = GUICtrlCreateLabel("Remember items are zero-indexed", 0, 302, 392, 20, BitOR($SS_SUNKEN, $SS_CENTER))
GUISetState()
While 1
    $msg = GUIGetMsg()
    Select
        Case $msg = $GUI_EVENT_CLOSE Or $msg = $Btn_Exit
            ExitLoop
        Case $msg = $Btn_TestScroll
            For $i = 10 To 50 Step 10
                _GUICtrlListViewScroll($listview, 0, $i)
                Sleep(500)
                _GUICtrlListViewScroll($listview, 0, $i - ($i * 2))
                Sleep(500)
            Next
    EndSelect
WEnd
Exit

BUMP since no one is reading or replying - newest code - so close

#include <GuiConstants.au3>
#include <GuiListView.au3>
#Include <File.au3>
#Include <Array.au3>
Dim $Btn_TestScroll, $Btn_Exit, $msg, $Status, $i
$dirList=_FileListToArray(@FavoritesDir,'*',2) If (Not IsArray($dirList)) and (@Error=1) Then
MsgBox (0,"","No Files\Folders Found.")
Exit
EndIf
GUICreate("ListView Scroll", 392, 322)
$listview = GUICtrlCreateListView("URL", 40, 30, 310, 149)
;$FileList =12
For $TEST1 IN$dirlist
GUICtrlCreateListViewItem($TEST1,$listview)
$FileList=_FileListToArray(@FavoritesDir & "\" &$TEST1,'*.url',1)
    If (Not IsArray($FileList)) And (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Exit
    EndIf
    $rows = UBound($FileList)
    $dims = UBound($FileList, 0)
    ;MsgBox(0, "The " & $dims & "-dimensional array has", _
; $rows & " rows, " &$cols & " columns")
;Display $myArray's contents For$r = 0 to UBound($FileList,1) - 1 GUICtrlCreateListViewItem($r, $listview)$FileList=_FileListToArray(@FavoritesDir & "\" & $TEST1,'*.url',1) If (Not IsArray($FileList)) and (@Error=1) Then
MsgBox (0,"","No Files\Folders Found.")
Exit
EndIf
For $rows in$fileList
GUICtrlCreateListViewItem($rows,$listview)
Next
Next
;MsgBox(4096, "Array Contents", $output)
;MsgBox(0, "test", $output)
Next
#cs
For $test2 in$fileList
GUICtrlCreateListViewItem($TEST1,$listview)
Next
#ce
$Btn_TestScroll = GUICtrlCreateButton("Scroll Test", 150, 230, 90, 40)
$Btn_Exit = GUICtrlCreateButton("Exit", 300, 260, 70, 30)
$Status = GUICtrlCreateLabel("Remember items are zero-indexed", 0, 302, 392, 20, BitOR($SS_SUNKEN, $SS_CENTER))
GUISetState()
While 1
    $msg = GUIGetMsg()
Select
Case $msg =$GUI_EVENT_CLOSE Or $msg =$Btn_Exit
ExitLoop
Case $msg =$Btn_TestScroll
For $i = 10 To 50 Step 10
    _GUICtrlListViewScroll($listview, 0, $i)
    Sleep(500)
    _GUICtrlListViewScroll($listview, 0, $i - ($i * 2))
Sleep( 500 )
Next
EndSelect
WEnd
Exit
#include <GuiConstants.au3>
#include <GuiListView.au3>
#Include <File.au3>
#Include <Array.au3>
Dim $Btn_TestScroll, $Btn_Exit, $msg, $Status, $i
GUICreate("ListView Scroll", 392, 322)
$listview = GUICtrlCreateListView("URL", 40, 30, 310, 149)
_GUICtrlListViewSetColumnWidth($listview, 0, $LVSCW_AUTOSIZE_USEHEADER)
$Btn_TestScroll = GUICtrlCreateButton("Scroll Test", 150, 230, 90, 40)
$Btn_Exit = GUICtrlCreateButton("Exit", 300, 260, 70, 30)
$Status = GUICtrlCreateLabel("Remember items are zero-indexed", 0, 302, 392, 20, BitOR($SS_SUNKEN, $SS_CENTER))
GUISetState()
_LoadListView($listview)
While 1
    $msg = GUIGetMsg()
    Select
        Case $msg = $GUI_EVENT_CLOSE Or $msg = $Btn_Exit
            ExitLoop
        Case $msg = $Btn_TestScroll
            For $i = 10 To 50 Step 10
                _GUICtrlListViewScroll($listview, 0, $i)
                Sleep(500)
                _GUICtrlListViewScroll($listview, 0, $i - ($i * 2))
                Sleep(500)
            Next
    EndSelect
WEnd
Exit

Func _LoadListView(ByRef $listview)
    $dirList = _FileListToArray(@FavoritesDir, '*', 2)
    If (Not IsArray($dirList)) And (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Exit
    EndIf
    For $TEST1 = 1 To $dirList[0]
        GUICtrlCreateListViewItem($dirList[$TEST1], $listview)
        $FileList = _FileListToArray(@FavoritesDir & "\" & $dirList[$TEST1], '*.url', 1)
        If (Not IsArray($FileList)) And (@error = 1) Then
            MsgBox(0, "", "No Files\Folders Found.")
            Exit
        EndIf
        ;Display $myArray's contents
        For $r = 1 To $FileList[0]
            GUICtrlCreateListViewItem($dirList[$TEST1] & " :--: " & $FileList[$r], $listview)
            ;================================================================================================
            ; not sure what you're trying to do here
            ;~ $FileList2 = _FileListToArray(@FavoritesDir & "\" & $dirList[$TEST1], '*.url', 1)
            ;~ If (Not IsArray($FileList)) And (@error = 1) Then
            ;~     MsgBox(0, "", "No Files\Folders Found.")
            ;~     Exit
            ;~ EndIf
            ;~ For $rows = 1 To $FileList2[0]
            ;~     GUICtrlCreateListViewItem($FileList2[$rows], $listview)
            ;~ Next
            ;================================================================================================
        Next
    Next
EndFunc   ;==>_LoadListView
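For readers who want to see the same "folder heading, then its files" pattern outside AutoIt, here is a minimal Python sketch of the idea (the directory layout and the `:--: ` separator are just illustrative; ordinary lists replace `_FileListToArray`'s convention of storing the item count in element [0]):

```python
import os
import tempfile

def build_rows(favorites_dir):
    """List each subfolder, then each .url file under it, tagged with its folder."""
    rows = []
    for folder in sorted(os.listdir(favorites_dir)):
        full = os.path.join(favorites_dir, folder)
        if not os.path.isdir(full):
            continue
        rows.append(folder)  # the folder heading
        for name in sorted(os.listdir(full)):
            if name.endswith(".url"):
                rows.append(f"{folder} :--: {name}")  # file under its folder heading
    return rows

# Demo with a throwaway directory tree standing in for @FavoritesDir
with tempfile.TemporaryDirectory() as fav:
    os.makedirs(os.path.join(fav, "News"))
    os.makedirs(os.path.join(fav, "Tools"))
    open(os.path.join(fav, "News", "bbc.url"), "w").close()
    open(os.path.join(fav, "Tools", "autoit.url"), "w").close()
    rows = build_rows(fav)

print(rows)
# ['News', 'News :--: bbc.url', 'Tools', 'Tools :--: autoit.url']
```

The structure mirrors `_LoadListView` above: one outer loop over folders, one inner loop over that folder's files, appending one list-view row per iteration.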
Don't argue with an idiot; people watching may not be able to tell the difference.
##### Share on other sites
Bump - I added the newest code to the first post - it is really close now. If anyone wants to help - please do.
Close?
#10 · Posted (edited)
I thank you and wish that I had seen your reply an hour ago - my mind hurts from changing things around - and I still never got close. I am going to study this code and see what I do not understand and hope to ask more questions. I have to go to bed as I am done for the night.
Again I thank you and would have never ever been able to do this without your help. I wish I just could get this.
EDIT - you never even used UBound() - now I am completely lost.
Edited by nitekram
All by me:
"Sometimes you have to go back to where you started, to get to where you want to go."
"Everybody catches up with everyone, eventually"
"As you teach others, you are really teaching yourself."
"Do not worry about yesterday, as the only thing that you can control is tomorrow."
Programming Tips
Excel Changes
ControlHover.UDF
GDI_Plus
Draw_On_Screen
GDI Basics
GDI_More_Basics
GDI Rotate
GDI Graph
GDI CheckExistingItems
GDI Trajectory
Replace $ghGDIPDll with$__g_hGDIPDll
DLL 101?
Array via Object
GDI Swimlane
GDI Plus French 101 Site
GDI Examples UEZ
GDI Basic Clock
GDI Detection
# Ternary operator
Close?
Well I thought I was close - meaning that the folders were displaying and the files were displaying - just the files were displaying 3 times 3. I have now taken a look at your code and found that you are using [0] for what I was trying to use UBound() for. Is there another example of using UBound()? As I stated in another post, $output does not display anything (from the help file).
Here is the thread in question.
Thanks for any help - I am still trying to understand these terms and the way they work together.
well being you are using _FileListToArray, the [0] = Number of Files\Folders returned
you only need to use UBound if you don't know the size of the array
quick example:
Dim $array[Random(1, 100, 1)]
MsgBox(0, "Ubound", UBound($array))
The example doesn't populate anything into the array, therefore the msgbox displays nothing but line feeds.
Edited by gafrost
Dim $myArray[10][20] ;element 0,0 to 9,19
$rows = UBound($myArray)
$cols = UBound($myArray, 2)
$dims = UBound($myArray, 0)
MsgBox(0, "The " & $dims & "-dimensional array has", _
        $rows & " rows, " & $cols & " columns")
; populate for display purposes
For $r = 0 To UBound($myArray, 1) - 1
    For $c = 0 To UBound($myArray, 2) - 1
        $myArray[$r][$c] = $c
    Next
Next
;Display $myArray's contents
$output = ""
For $r = 0 To UBound($myArray, 1) - 1
    $output = $output & @LF
    For $c = 0 To UBound($myArray, 2) - 1
        $output = $output & $myArray[$r][$c] & " "
    Next
Next
MsgBox(4096, "Array Contents", $output)
I guess my confusion is that UBound() is for arrays whose size is unknown (as you stated) - yet in the help file example, the array is defined as $myArray[10][20]. But now that you have given an example it is a little clearer (like mud). I am going to do some searches on the forum and read some more. Thanks again.

Being you do know the dimensions of the array, it could have been written as such:

Dim $myArray[10][20] ;element 0,0 to 9,19
$rows = UBound($myArray)
$cols = UBound($myArray, 2)
$dims = UBound($myArray, 0)
MsgBox(0, "The " & $dims & "-dimensional array has", _
        $rows & " rows, " & $cols & " columns")
; populate for display purposes
For $r = 0 To 9
    For $c = 0 To 19
        $myArray[$r][$c] = $c
    Next
Next
;Display $myArray's contents
$output = ""
For $r = 0 To 9
    $output = $output & @LF
    For $c = 0 To 19
        $output = $output & $myArray[$r][$c] & " "
    Next
Next
MsgBox(4096, "Array Contents", $output)

#16 · Posted (edited)

Sorry about this - but I copied the code from the forum - removed comments for the most part and then got an error. But the same script runs fine on my home PC - any ideas?

#include <GuiConstants.au3>
#include <GuiListView.au3>
#include <File.au3>
#include <Array.au3>
Dim $Btn_TestScroll, $Btn_Exit, $msg, $Status, $i
GUICreate("ListView Scroll", 392, 322)
$listview = GUICtrlCreateListView("URL", 40, 30, 310, 149)
_GUICtrlListViewSetColumnWidth($listview, 0, $LVSCW_AUTOSIZE_USEHEADER)
$Btn_TestScroll = GUICtrlCreateButton("Scroll Test", 150, 230, 90, 40)
$Btn_Exit = GUICtrlCreateButton("Exit", 300, 260, 70, 30)
$Status = GUICtrlCreateLabel("Remember items are zero-indexed", 0, 302, 392, 20, BitOR($SS_SUNKEN, $SS_CENTER))
GUISetState()
_LoadListView($listview)
While 1
    $msg = GUIGetMsg()
    Select
        Case $msg = $GUI_EVENT_CLOSE Or $msg = $Btn_Exit
            ExitLoop
        Case $msg = $Btn_TestScroll
            For $i = 10 To 50 Step 10
                _GUICtrlListViewScroll($listview, 0, $i)
                Sleep(500)
                _GUICtrlListViewScroll($listview, 0, $i - ($i * 2))
                Sleep(500)
            Next
    EndSelect
WEnd
Exit
Func _LoadListView(ByRef $listview)
    $dirList = _FileListToArray(@FavoritesDir, '*', 2)
    If (Not IsArray($dirList)) And (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Exit
    EndIf
    For $TEST1 = 1 To $dirList[0]
        GUICtrlCreateListViewItem($dirList[$TEST1], $listview)
        $FileList = _FileListToArray(@FavoritesDir & "\" & $dirList[$TEST1], '*.url', 1)
        If (Not IsArray($FileList)) And (@error = 1) Then
            MsgBox(0, "", "No Files\Folders Found.")
            Exit
        EndIf
        ;Display $myArray's contents
        For $r = 1 To $FileList[0]
            GUICtrlCreateListViewItem($dirList[$TEST1] & " :--: " & $FileList[$r], $listview)
        Next
    Next
EndFunc
>Running: (3.1.1.111):C:\Program Files\AutoIt3\beta\autoit3.exe "Z:\scripts\favorites_list.au3"
Z:\scripts\favorites_list.au3 (56) : ==> Subscript used with non-Array variable.:
EDIT - on the difference between the computers: I have a lot more favorites on my work computer than on my home computer - could that be the cause?
Edited by nitekram
#17 · Posted (edited)
had same problem at work try:
Func _LoadListView(ByRef $listview)
    $dirList = _FileListToArray(@FavoritesDir, '*', 2)
    If (Not IsArray($dirList)) Or (@error = 1) Then
        MsgBox(0, "", "No Files\Folders Found.")
        Return
    EndIf
    For $TEST1 = 1 To $dirList[0]
        GUICtrlCreateListViewItem($dirList[$TEST1], $listview)
        $FileList = _FileListToArray(@FavoritesDir & "\" & $dirList[$TEST1], '*.url', 1)
        If (Not IsArray($FileList)) Or (@error = 1) Then
            ContinueLoop
        EndIf
        ;Display $myArray's contents
        For $r = 1 To $FileList[0]
            GUICtrlCreateListViewItem($dirList[$TEST1] & " :--: " & $FileList[$r], $listview)
        Next
    Next
EndFunc
Edited by gafrost
That worked - can you explain why?
Looking at the function in the include file, @error is not set but 0 is returned when no files are found.
To my thinking error should be set in this scenario also.
So, because I have folders at work that contain only folders and no files, the first code did not work, but because the second code continues the loop either way, it worked?
https://www.parabola.unsw.edu.au/2010-2019/volume-47-2011/issue-3/article/editorial | # Editorial
Welcome to a packed issue to close the year in 2011.
Congratulations to all of the students, their parents and teachers who had success in the 50th Annual UNSW School Mathematics Competition. The competition problems, solutions and names of prizewinners are included in this issue.
Mathematical problems might seem far removed from everyday life but really this is not the case. In fact, as Mrs Fibonacci says in the wonderful picture book by Jon Scieszka and Lane Smith, Maths Curse: "YOU KNOW, you can think of almost everything as a maths problem".
https://infrageeks.com/post/2023-02-23.how-to-safely-run-esxi-on-the-internet-part-2/ | # How to (safely) run ESXi on the internet part 2
The first part of this series was about getting the basics set up so you can more safely use a rented ESXi bare metal server on the internet. However, this may not cover all of your needs for making this part of your production environment, so here are a few additional pieces.
### Site to site VPN connection
In the original article we were content with just having a VPN connection from your computer to manage the ESXi and the virtual machines that are hosted there. This is OK, but in many cases, you’ll probably want to just have access directly to the VMs without having to explicitly open a VPN connection every time and perhaps the VMs may need to be able to reach back to your local network to get some information from other machines. For this, we’re going to add a site-to-site VPN connection so that the hosted LAN subnet is reachable transparently from your local network.
I’m doing this with pfSense on both ends to make things easy.
On the hosted pfSense go to VPN > OpenVPN > Servers > Add.
I tend to name these with both end of the connection so it’s going to be office2ovh01:
• Server mode: Peer to peer (Shared Key)
• Device mode: tun - Layer 3 Tunnel Mode
• Protocol: UDP in ipv4 only
• Interface: WAN
• Local Port: pick something other than 1194 which is already reserved for the client OpenVPN connection
• Check Automatically generate a shared key
• Enable Data Encryption Negotiation (optional since you control both ends, you can manually select the protocol you wish to use)
• otherwise the defaults are fine
Tunnel Settings (sticking to IP v4 only):
• IPv4 Tunnel Network: an unused subnet
• IPv4 Remote Network(s): This will be the subnet at the other location eg. your office
• Concurrent connections: 1
Otherwise, the defaults should be fine.
If you edit the VPN Server configuration now, there will be a 2048-bit key in the Shared Key value. We need to copy this for the configuration on the client VPN connection.
At the other end, with another pfSense router: VPN > OpenVPN > Clients > Add. We use the same values, but with the additional fields:
• Server host or address: the WAN IP address of the hosted pfSense
• Server port: the value we chose for the server to listen on
Tunnel settings:
• IPv4 Tunnel Network: the same subnet used on the server
• IPv4 Remote Network(s): This will be the subnet on the hosted LAN segment (192.168.20.0/24 in the example)
• Concurrent connections: 1
Instead of autogenerating a shared key, we uncheck this box and paste in the key generated on the other side.
Now the one thing that will be missing since we didn’t use the wizard is a firewall rule allowing connections on the port number that we chose. For this: Firewall > Rules > WAN and duplicate the existing OpenVPN rule, and modify the port number to custom and enter the port number that was used in the previous configuration.
Once the configuration is finished on both sides, it should connect automatically and you can check under Status > OpenVPN.
OVH Note: In my initial testing, I ran across a number of issues with the standard configuration, especially once I got to the later stage of running replication jobs that push a lot of traffic across the tunnel. It seems that OVH does quite a bit of networking shenanigans with UDP traffic, probably as anti-DDOS mitigation and BitTorrent throttling. As a result, I had to switch these connections over to using TCP instead of UDP, which is generally frowned upon, but this is one of the rare cases where it's probably the best choice at the moment. You'll need to make a couple of changes to make this work. In the OpenVPN configuration on both sides, select TCP on IPv4 as the protocol, and update the firewall rule to allow for TCP instead of UDP.
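For reference, the change amounts to flipping the protocol directive in the OpenVPN config that pfSense generates from the GUI fields. The raw client-side directives look roughly like this (the IP, port and key filename are illustrative placeholders, not values from a real config):

```
proto tcp-client            # instead of udp; the server side uses tcp-server
remote 203.0.113.10 1195    # hosted WAN IP and the custom port chosen earlier
dev tun
secret office2ovh01.key     # the shared key pasted from the server config
```

The only functional difference from the UDP setup is the proto line plus the matching TCP firewall rule on the WAN.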
### Making our VMs available on the internet
The best way to do this while keeping things relatively secure, and without buying lots of additional IP addresses, is to use a reverse proxy. Nginx does a really good job of this and has the very useful feature that it can map hostnames in URL requests to different internal IP addresses, as well as tight integration with certbot to handle all of your certificate management.
There are a ton of tutorials on the web so I won’t go over this in detail, but it’s another critical point in ensuring that your system is well protected.
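As a minimal sketch (the hostname and internal address are made up), one nginx server block per public hostname is enough to forward traffic to a VM on the hosted LAN; certbot can then be pointed at the same block to add TLS:

```nginx
server {
    listen 80;
    server_name app.example.com;   # public hostname mapped to one internal VM

    location / {
        proxy_pass http://192.168.20.15:8080;   # VM on the hosted LAN segment
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Adding more VMs is then just a matter of adding more server blocks with different server_name values.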
## Data Protection
Now we’ve done the basics to protect our VMs from the internet, but now all of our data is in a single place. You could use a regular backup product like Veeam, Nakivo or even Active Backup for Business from Synology or we could leverage a ZFS solution to add some smarts to the basic ESXi storage in order to replicate it elsewhere. It’s worth noting that running both a backup system and a replication copy is a good idea.
You can use something like TrueNAS for this kind of thing, which gets you a nice graphical interface on top of ZFS and includes many nice affordances for setting up remote replication, but it comes at the price of requiring 8 GB of RAM. So it's going to be a tradeoff of pretty UI vs RAM requirements. In my case, the new box has 64 GB of RAM, so I can afford it here as I have a bunch of little VMs that aren't too greedy about RAM. In other installations I use a barebones Ubuntu VM with 2-4 GB of RAM, noting that ZFS replication is OS agnostic, so you can replicate from an Ubuntu server to a TrueNAS server and vice-versa.
If you go the roll-your-own approach with a Linux machine with ZFS, there are a number of tools to help with configuring and automating the replication, such as zrepl, zfs-replicate or zettarepl. I've also got a set of scripts that you can wrap up in a bash script, but frankly the other options are more mature.
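All of these tools ultimately drive the same two ZFS primitives: snapshot and incremental send/receive. A hand-rolled hourly job boils down to something like the following (the pool, dataset and host names here are made up for illustration):

```
# take the new hourly snapshot
zfs snapshot tank/ovh-vm01@hourly-1300

# ship only the delta since the previous snapshot to the remote pool
zfs send -i tank/ovh-vm01@hourly-1200 tank/ovh-vm01@hourly-1300 \
    | ssh backup-host zfs receive -F office/ovh-vm01
```

The very first run has no earlier snapshot to diff against, so it's a plain zfs send of the initial snapshot; the tools above mostly add scheduling, retention and error handling around this core.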
So it’s on to uploading the TrueNAS ISO. In this case I’m using TrueNAS Core which is a little lighter weight than TrueNAS Scale and none of the new features are particularly useful since this is going to be a dedicated file server, not a generic host for VMs, HCI, Kubernetes and so on.
### Passthrough disk
The basics of this approach are to install a VM with the OS on the standard datastore and then find some additional storage to assign to this VM as an additional drive. In a simple configuration this can just be creating a large virtual disk using up the rest of the available space on the existing VMFS datastore (leaving at least 10% for snapshots) or some other storage available on the server.
In many cases, the bare metal servers that are rented out often have two drives that are configured in a software RAID 1 for operating systems like Windows and Linux. However, ESXi has no such built in option for software RAID. So this means that you will usually have a second drive available that can be reserved for the storage drive for your storage VM. You can check this under Storage > Devices and if there are two disks, clicking on them will show the partition layout. The first one on my system is clearly the drive hosting the default VMFS datastore:
While the second one is empty:
Before we get to installing the VM though, we're going to set things up by reserving the second drive to be physically dedicated to the storage VM, rather than formatting it as VMFS and creating a virtual disk. We can set the drive to work in passthrough mode under Host > Manage > Hardware > PCI Devices where we find the two drives. Unless the standard installation is really strange, it should be the drive with the highest address:
Hit the Toggle passthrough button at the top and it should be immediately available to assign to a VM. In some cases, it may require you to reboot the ESXi.
Note: This may not work on some configurations where a single SATA controller is responsible for both of the drives. In this case, you’ll need to create a VMFS datastore on the second disk and then create a large VMDK file. It’s not optimal from a performance perspective, but gets the job done for small environments. I had one running this way for over 5 years without any issues. NVMe devices are guaranteed to be autonomous, so that may factor into your server selection criteria in addition to the performance benefits over a SATA SSD.
### Storage network
Then there are a few little best practices to follow that are not necessarily important in an all-in-one configuration, but I like to do them anyway. Specifically, ESXi storage traffic should be on an isolated network so it’s not competing with the VM traffic. So for this we’re going to create another vSwitch with no uplinks and call it Storage.
Networking > Virtual switches > Add standard virtual switch.
Then create a port group named VM Storage: Networking > Port groups > Add port group.
So now we have an interface for connecting the VM to a private network reserved for storage. But the ESXi needs an interface in this network too. Networking > VMkernel NICs > Add VMkernel NIC. We’ll want to set a static IP in a private subnet of your choice, 192.168.100.0/24 in this case.
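If you'd rather script this than click through the Host UI, the same VMkernel interface can be created from the ESXi shell; the vmk1 interface name, the port group name and the IP values below assume the setup described above:

```
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VM Storage"
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static
```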
### Create the TrueNAS VM
Virtual Machines > Create/register VM :
1. Create Virtual Machine
2. Name : ovh-truenas01
Compatibility: default
Guest OS Family: Other
Guest OS version: FreeBSD 13 or later versions (64-bit)
3. Select storage: the default datastore
4. Customise settings:
For the custom settings, you’ll want to attach the TrueNAS ISO image, and then use the Add other device > PCI Device which will automatically add the drive since it’s the only PCI device available for passthrough. We’ll also add a second network adapter that will be connected to the Storage-VM port group.
Before booting the VM, there's a limitation on VMs that use passthrough devices: they need to have their memory reserved. So we'll need to go back into the VM configuration, unroll the Memory section and set the reservation to the 8 GB we have assigned to this machine.
Note: There appears to be a bug in the Host UI in the version of ESXi 7.0 that I’m using and the option to set the reservation is greyed out and inaccessible. If this is the case, you can set the required values manually by editing the VM under the VM Options tab > Advanced > Edit Configuration > Add Parameter to add the following entries:
Key: sched.mem.min Value: 8192
Key: sched.mem.minSize Value: 8192
Key: sched.mem.shares Value: normal
Key: sched.mem.pin Value: TRUE
Then it’s the standard installation process, making sure to install the OS onto the 16GB disk, and not the second drive. Once the VM is booted you can get the IP address by checking the console and use a browser to finish the configuration.
### TrueNAS Configuration
First steps involve the networking. Under Networking > Global Configuration you’ll want to set:
• hostname
• domain
• DNS servers
• IPv4 default gateway
Under Networking > Interfaces > vmx0 > Edit, disable DHCP and set a static IP address for the management interface on the LAN subnet. After hitting Apply, you’ll need to confirm that you want to test the changes and then immediately connect to the new IP address so that you can confirm that it worked.
Then we need to edit the vmx1 interface that is connected to the Storage subnet, giving it an appropriate IP address in the subnet you defined for the ESXi interface with the same routine of Apply, Test Changes, and then Save Changes.
And finally onto the storage side of things. Under Storage > Pools > Add follow the wizard through following steps:
1. Create new pool
2. Give it a name (ovh-nfs01), select the disk that was passed through and click ADD VDEV or the arrow pointing to the right. Since this is a single disk, it can’t be protected by the ZFS RAID features so we’ll have to force the creation of a stripe vdev.
3. Finally Create the pool
Now it’s time to configure the volume to be presented to the ESXi server. You can choose either iSCSI or NFS. iSCSI tends to be better performing and more complicated to setup, but on the TrueNAS side, you just have an opaque blob of storage so you can’t go in directly and look at the files. NFS is generally a little simpler and you can more easily manipulate the files, so that’s what I tend to use in these cases.
The first step is going to be creating a file system in the pool to hold the VMs, This is hidden in the button with 3 vertical dots on the pool and select Add Dataset.
We'll need to give it a name (ovh-vm01) and set Sync to Disabled, as otherwise the NFS performance is awful. Then we will share this filesystem over NFS under Sharing > Unix Shares (NFS) > Add. In the Paths section, we navigate to our new file system so we get a path like /mnt/ovh-nfs01/ovh-vm01. If we want to push the security up a notch, you can set the Authorised Hosts to the IP address that we gave to the ESXi VMkernel interface in the Storage subnet in the Advanced Options. This will offer the option to enable the NFS Service.
We also need to update the permissions to allow the ESXi root connection to work properly. I tend to just open the shell and use chmod:
chmod -R a+rwx /mnt/ovh-nfs01/ovh-vm01
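If you want to sanity-check what that recursive a+rwx actually does before pointing it at the datastore, you can reproduce it on a throwaway directory (the paths below are scratch paths, not the TrueNAS mount):

```shell
# create a scratch tree standing in for the NFS dataset
mkdir -p /tmp/nfs-perm-demo/sub
touch /tmp/nfs-perm-demo/sub/disk.vmdk

# grant read/write/execute to user, group and others, recursively
chmod -R a+rwx /tmp/nfs-perm-demo

# the octal mode is now 777 all the way down
stat -c '%a' /tmp/nfs-perm-demo/sub/disk.vmdk   # prints 777
```

In other words, it removes ownership mismatches as a failure mode for the root NFS mount, at the cost of giving up permission-based protection inside the dataset.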
Back to the ESXi, we’re going to mount the NFS datastore under Storage > Datastore > New Datastore.
1. Mount NFS Datastore
2. Name: ovh-vm01
NFS server: IP address of the TrueNAS server on the storage subnet
NFS share: the full path of the filesystem, starting with /mnt (/mnt/ovh-nfs01/ovh-vm01)
NFS version: 3 will do fine
Note: In the “I’m completely baffled” category, this fails for no apparent reason on the machine I’m using at the moment. None of the usual troubleshooting suspects for NFS issues seem to be in play. If this happens you should be able to fall back on the ESXi command-line with:
esxcli storage nfs add --host=192.168.100.2 --share=/mnt/ovh-nfs01/ovh-vm01 --volume-name=ovh-vm01
Which works fine. 🤷♂️
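Whichever path worked, you can confirm the datastore from the same shell. The column layout below is from memory, so treat it as indicative rather than exact:

```
esxcli storage nfs list
# Volume Name  Host           Share                    Accessible  Mounted
# ovh-vm01     192.168.100.2  /mnt/ovh-nfs01/ovh-vm01  true        true
```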
### Replication
In my case, I already have another TrueNAS server at the office, so I’m going to replicate the ovh-vm01 filesystem over to it on an hourly basis. On the TrueNAS VM this is configured under Tasks > Replication Tasks > Add.
There’s a lot to fill in here to get this working. On the left side, we select Source Location > On this System and unroll the folder and select ovh-vm01.
On the right, the destination is On a Different System and we create a new SSH Connection. We can use the Semi-automatic option if the destination is another TrueNAS Core system, with the URL for the other machine, the root account and password, and then generating a private key pair for the session. For internal replication I tend to disable the Cipher since this is internal traffic only.
Note: If you have configured MFA on the destination machine you will have to disable it temporarily to create the initial connection. You can reenable it afterwards.
For the destination path, I pick a pool and then add “/ovh-vm01” to the end to create a new filesystem to hold the copy. Then it’s about choosing a schedule (I use hourly) and a retention rule on the destination. I’ve got a lot more space at the office so I bump this up to four weeks. By default the local retention is 2 weeks and can be modified in the automatically created Periodic Snapshot Tasks.
So now even if the datacenter burns down you have an up to date copy of your VMs that you can quickly put into production locally or upload to another datacenter or service provider. | 2023-03-28 05:49:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22914238274097443, "perplexity": 2154.2492392107733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948765.13/warc/CC-MAIN-20230328042424-20230328072424-00725.warc.gz"} |
http://mathhelpforum.com/discrete-math/30991-how-many-paths.html | # Math Help - How many paths...?
1. ## How many paths...?
I know this is a simple question...however this is just to reassure my findings!
Any input would be v much appreciated!
2. What is the number in the first row and third column or the third row and first column of the matrix $A^2$? | 2015-08-03 20:43:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6548153758049011, "perplexity": 619.183049729524}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990112.92/warc/CC-MAIN-20150728002310-00153-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/from-quantum-field-theory-to-quantum-mechanics.341423/ | # From quantum field theory to quantum mechanics
1. Sep 29, 2009
### luxxio
I need references on the topic. Thanks.
2. Sep 29, 2009
### meopemuk
luxxio,
the answer depends on your definition of the term "quantum mechanics". There is a broad definition of this term and a narrow one.
In the broad definition "quantum mechanics" is a theory operating with Hilbert spaces, wave functions, Hermitian operators, etc. In this case, there is no separation between QFT and QM. QFT is simply a particular case of the general quantum mechanical formalism.
In the narrow definition "quantum mechanics" is a quantum theory describing systems with fixed numbers of particles. In this case the answer is not that simple, because the number of ("bare") particles in any QFT system (including the vacuum and 1-particle systems) is changing all the time: virtual particles and pairs are constantly emitted and absorbed. The best explanation of how the traditional QM with fixed number of particles follows from the QFT (where the number of particles is not fixed) can be found in the "dressed particle" formalism:
O. W. Greenberg and S. S. Schweber, "Clothed particle operators in simple models of quantum field theory", Nuovo Cim. 8 (1958), 378.
You can use Google Scholar to find more recent references to this rather old idea.
Last edited: Sep 29, 2009
3. Sep 29, 2009
### RUTA
If you want the path integral counterpart to the Schrodinger Equation from the transition amplitude of QFT, see Chapter 1 of Zee, A.: Quantum Field Theory in a Nutshell. Princeton University Press, Princeton (2003).
4. Sep 29, 2009
### ice109
5. Sep 29, 2009
### Born2bwire
I picked up that book just the other day and I agree that it is a good introductory source from what I have seen of the first couple of chapters. Feynman's Path Integral text is also a very good reference but it seems that it only had one printing so it may be difficult to get. | 2018-01-23 18:48:09 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8033087253570557, "perplexity": 598.1357113174934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892059.90/warc/CC-MAIN-20180123171440-20180123191440-00343.warc.gz"} |
http://icpc.njust.edu.cn/Problem/CF/839E/ | # Mother of Dragons
Time Limit: 2 seconds
Memory Limit: 256 megabytes
## Description
There are n castles in the Lannister's Kingdom and some walls connect two castles, no two castles are connected by more than one wall, no wall connects a castle to itself.
Sir Jaime Lannister has discovered that Daenerys Targaryen is going to attack his kingdom soon. Therefore he wants to defend his kingdom. He has k liters of a strange liquid. He wants to distribute that liquid among the castles, so each castle may contain some liquid (possibly zero or a non-integer number of liters). After that the stability of a wall is defined as follows: if the wall connects two castles a and b, and they contain x and y liters of that liquid, respectively, then the stability of that wall is x·y.
Your task is to print the maximum possible sum of stabilities of the walls that Sir Jaime Lannister can achieve.
## Input
The first line of the input contains two integers n and k (1 ≤ n ≤ 40, 1 ≤ k ≤ 1000).
Then n lines follow. The i-th of these lines contains n integers ai, 1, ai, 2, ..., ai, n (0 ≤ ai, j ≤ 1). If castles i and j are connected by a wall, then ai, j = 1. Otherwise it is equal to 0.
It is guaranteed that ai, j = aj, i and ai, i = 0 for all 1 ≤ i, j ≤ n.
## Output
Print the maximum possible sum of stabilities of the walls that Sir Jaime Lannister can achieve.
Your answer will be considered correct if its absolute or relative error does not exceed 10^-6.
## Sample Input
Input
3 1
0 1 0
1 0 0
0 0 0

Output
0.250000000000

Input
4 4
0 1 0 1
1 0 1 0
0 1 0 1
1 0 1 0

Output
4.000000000000
## Sample Output
None
## Hint
In the first sample, we can assign 0.5, 0.5, 0 liters of liquid to castles 1, 2, 3, respectively, to get the maximum sum (0.25).
In the second sample, we can assign 1.0, 1.0, 1.0, 1.0 liters of liquid to castles 1, 2, 3, 4, respectively, to get the maximum sum (4.0).
None | 2019-07-23 01:03:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6104105710983276, "perplexity": 1040.0823200652994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00020.warc.gz"} |
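One well-known way to solve this problem (a sketch, not the official editorial): by the Motzkin–Straus theorem, an optimal distribution spreads the liquid evenly over a maximum clique of size q, giving an answer of k²·(q−1)/(2q). The exponential clique search below is fine for small graphs; the full n ≤ 40 bound would need a meet-in-the-middle clique algorithm.

```python
def max_stability(n, k, adj):
    # Motzkin-Straus: the optimum puts k/q liters on each vertex of a
    # maximum clique of size q, giving C(q, 2) * (k/q)^2 = k^2*(q-1)/(2q).
    nbr = [sum(1 << j for j in range(n) if adj[i][j]) for i in range(n)]
    best = 1  # a single vertex is always a clique

    def expand(cand, size):
        nonlocal best
        if size > best:
            best = size
        while cand:
            if size + bin(cand).count("1") <= best:
                return  # prune: cannot beat the current best clique
            v = cand.bit_length() - 1
            cand ^= 1 << v
            expand(cand & nbr[v], size + 1)

    expand((1 << n) - 1, 0)
    q = best
    return k * k * (q - 1) / (2 * q)
```

On the two samples above this gives 0.25 (q = 2, k = 1) and 4.0 (q = 2, k = 4), matching the expected outputs.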
https://www.physicsforums.com/threads/where-does-the-1-n-factor-in-the-dicke-model-arise-from.794474/ | # Where does the 1/√N factor in the Dicke model arise from?
Hello colleagues, hope you can help me.
The Dicke model describes a system of N two-level atoms cooperatively interacting with a single mode of an electromagnetic field as follows:
$$\hat{H}_{D}=\omega_{A}\hat{J}_{z}+\omega_{F}\hat{a}^{\dagger}\hat{a}-\frac{\gamma}{\sqrt{N}}\left(\hat{J}_{-}\hat{a}^{\dagger}+\hat{J}_{+}\hat{a}\right)-\frac{\gamma}{\sqrt{N}}\left(\hat{J}_{+}\hat{a}^{\dagger}+\hat{J}_{-}\hat{a}\right)$$
In the case of a single atom the factor turns into 1, and the hamiltonian of the N atoms should be the sum of N single atom hamiltonians, right? So, where does the 1/√N factor arise from?
Thanks! | 2022-06-28 00:19:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7665209174156189, "perplexity": 760.7968704362099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00515.warc.gz"} |
https://socratic.org/questions/how-do-you-write-3y-2x-6-in-standard-form | # How do you write 3y= 2x+6 in standard form?
May 17, 2018
$3 y = 2 x + 6$ in standard form is $2x - 3y = -6$
#### Explanation:
The formula for a linear equation in standard form is:
$A x + B y = C$,
where A and B are not both equal to $0$.
Convert the following equation to standard form.
$3 y = 2 x + 6$
Subtract $2 x$ from both sides.
$- 2 x + 3 y = 6$
Multiply both sides by $- 1$. This will reverse the signs, so that the coefficient for A becomes positive.
$2 x - 3 y = - 6$ | 2022-05-26 02:54:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8277341723442078, "perplexity": 911.6018565200242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00017.warc.gz"} |
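As a quick sanity check (illustrative only), the original form and the standard form should be satisfied by exactly the same points:

```python
# Points satisfying 3y = 2x + 6 should also satisfy 2x - 3y = -6.
for x in range(-5, 6):
    y = (2 * x + 6) / 3                     # solve the original form for y
    assert abs((2 * x - 3 * y) + 6) < 1e-9  # the standard form holds too
```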
https://hal-cea.archives-ouvertes.fr/X-LSI/hal-03323780v2 | # The exact meaning of the angular-momentum and spin operators in quantum mechanics
Abstract : The theory of angular momentum and spin in quantum mechanics seems to defy common-sense intuition. We render the theory intelligible again by pointing out that this apparent impenetrability merely stems from an {\em undue} parallel interpretation of the algebraic expressions for the angular-momentum and spin operators in the group representation theory of SO(3) and SU(2). E.g. the correct meaning of ${\hat{L}}_{z} = {\hbar\over{\imath}}\,(x{\partial\over{\partial y}} - y{\partial\over{\partial x}} )$ is not that it is the operator for the $z$-component $L_{z}$ of the angular momentum ${\mathbf{L}}$, but rather the expression of the operator for the angular momentum ${\mathbf{L}}$ when it is aligned with the $z$-axis. Hence what we are used to note (erroneously) as ${\hat{L}}_{z}$ is not a scalar but a vector operator. The same applies {\em mutatis mutandis} for the spin operators. In the correct interpretation, the whole algebraic formalism is just the group representation theory for the rotations of three-dimensional Euclidean geometry. It is thus mere, elementary high-school mathematics (in a less usual, more technical guise) and as such totally exempt of any physics, let alone quantum mysteries. The change of interpretation has no impact on the algebraic results, such that they remain in agreement with experimental data. It is all only a matter of the correct geometrical meaning of the algebra. All these statements are proved within the framework of the group representation theory for SO(3) and SU(2) which is the basic tool used to describe rotational motion in quantum mechanics.
Keywords :
Document type :
Preprints, Working Papers, ...
Domain :
https://hal.archives-ouvertes.fr/hal-03323780
Contributor : Gerrit Coddens Connect in order to contact the contributor
Submitted on : Saturday, September 11, 2021 - 1:27:29 PM
Last modification on : Wednesday, September 15, 2021 - 3:24:34 AM
### File
Rotational-motions-3-for-HAL.p...
Files produced by the author(s)
### Identifiers
• HAL Id : hal-03323780, version 2
### Citation
Gerrit Coddens. The exact meaning of the angular-momentum and spin operators in quantum mechanics. 2021. ⟨hal-03323780v2⟩
Record views | 2021-12-04 17:06:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.792823851108551, "perplexity": 760.334969696183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362999.66/warc/CC-MAIN-20211204154554-20211204184554-00243.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/general-chemistry-10th-edition/chapter-1-chemistry-and-measurement-questions-and-problems-page-35/1-84 | ## General Chemistry 10th Edition
9.6 x $10^{-8}$ mm is the distance between the hydrogen and oxygen atoms in a water molecule.
We are required to convert 0.96 Å to mm. We know that 1 Å =$10^{-10}$ m. 1 m =$10^{3}$ mm So we can write: 0.96 Å = 0.96 x $10^{-10}$ m 0.96 Å = 0.96 x $10^{-10}$ x $10^{3}$ mm 0.96 Å = 0.96 x $10^{-7}$ mm 0.96 Å = 9.6 x $10^{-8}$ mm So the distance between any atom hydrogen and oxygen is 9.6 x $10^{-8}$ mm . | 2018-09-24 17:59:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6868200302124023, "perplexity": 963.5479813249738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160620.68/warc/CC-MAIN-20180924165426-20180924185826-00368.warc.gz"} |
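The chain of conversions above can be checked in a couple of lines (illustrative sketch):

```python
ANGSTROM_TO_M = 1e-10   # 1 Angstrom = 1e-10 m
M_TO_MM = 1e3           # 1 m = 1e3 mm

d_mm = 0.96 * ANGSTROM_TO_M * M_TO_MM  # 0.96 Angstrom in mm
assert abs(d_mm - 9.6e-8) < 1e-20
```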
https://byjus.com/question-answer/a-ray-of-light-is-incident-at-50-o-on-the-middle-of-one-of/ | Question
A ray of light is incident at $${50^o}$$ on the middle of one of two mirrors arranged at an angle of $${60^o}$$ between them. The ray then touches the second mirror, gets reflected back to the first mirror, making an angle of incidence of
A
50o
B
60o
C
70o
D
80o
Solution
The correct option is C $${70^o}$$

Let the required angle be $$\theta$$. From the geometry of the figure:

In $$\Delta ABC$$: $$\alpha=180^o-(60^o+40^o)=80^o$$
$$\Rightarrow \beta =90^o-80^o=10^o$$

In $$\Delta ABD$$: $$\angle A=60^o$$, $$\angle B=(\alpha+2\beta)=(80^o+2\times 10^o)=100^o$$ and $$\angle D=(90^o-\theta)$$

Since $$\angle A+\angle B +\angle D=180^o$$: $$60^o+100^o+(90^o-\theta)=180^o \Rightarrow \theta=70^o$$

Physics
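The angle chase can be replayed numerically (a sketch of the same steps):

```python
alpha = 180 - (60 + 40)     # triangle ABC: 80 degrees
beta = 90 - alpha           # 10 degrees
angle_B = alpha + 2 * beta  # 100 degrees
# Triangle ABD: 60 + angle_B + (90 - theta) = 180
theta = 60 + angle_B + 90 - 180
assert theta == 70
```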
View More | 2022-01-23 09:10:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.466286301612854, "perplexity": 2863.502487967919}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00528.warc.gz"} |
http://tex.stackexchange.com/questions/18304/double-square-braces-like-these-exp | # Double square braces like these ⟦exp⟧?
How do I create mathematical equation double square braces like these ⟦exp⟧ in latex? I failed to find it on google (I created these using word 2010 equation editor). I would like these to be of adjustable height like regular braces if possible.
-
Have a look here, and you should be able to quickly find what you are looking for: Exact duplicate of How to look up a math symbol? – Alan Munn May 15 '11 at 20:41
@Alan: Agree, with the proviso that it took me three goes at drawing the symbol before it came up with the right answer. Detexify ought to have a "wait until I've finished drawing" option. – Andrew Stacey May 15 '11 at 20:44
@Andrew I use detexify on my phone, so it waits for me. But generally I find it faster to look things up in the Comprehensive Symbols guide. – Alan Munn May 15 '11 at 20:46
@Andrew: If you keep drawing it reaccesses the image. It came up in my first attempt. Ok, I got `[` after I finished the first part was finished but it jumped to the correct one when I was finished. – Martin Scharrer May 15 '11 at 20:49
Thanks for all these wonderful suggestion, I will look them up next time :) – LightningIsMyName May 15 '11 at 20:50
``````\usepackage{stmaryrd}
...
$\llbracket a+b\rrbracket$
``````
You find hundreds of symbols in the "Comprehensive LaTeX List of Symbols" which is on CTAN and, possibly, also on your system (`texdoc symbols`).
-
In this modern "internet age", it's useful to point people in the direction of detexify as well as the list of symbols. – Andrew Stacey May 15 '11 at 20:46
@Andrew: you're probably right, I keep forgetting "detexify". However this resource doesn't suggest `\llbracket` for ⟦, but only `\textlbrackdbl` :( – egreg May 15 '11 at 20:50
It shows `\llbracket`, but not always depending how I draw it. – Martin Scharrer May 15 '11 at 20:55
egreg: I agree that it's not always perfect - see my comment on the main question! But one thing that helps is us "training" it (which I did, once it had found the right symbol). – Andrew Stacey May 15 '11 at 21:03 | 2013-12-08 19:31:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8036161065101624, "perplexity": 1949.251286937276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163800358/warc/CC-MAIN-20131204133000-00000-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-5sqrt30-sqrt3 | # How do you simplify 5sqrt30-sqrt3?
May 18, 2015
We can factor this subtraction using the common element in both numbers, as follows:
$5 \sqrt{3 \cdot 10} - \sqrt{3}$
$5 \sqrt{3} \sqrt{10} - \sqrt{3}$
Note that both elements contain $\sqrt{3}$
Then,
$\sqrt{3} \left(5 \sqrt{10} - 1\right)$ | 2020-07-10 23:10:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5878716111183167, "perplexity": 1947.2208573051346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655912255.54/warc/CC-MAIN-20200710210528-20200711000528-00140.warc.gz"} |
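The factorization can be verified numerically (a quick check, not a proof):

```python
import math

lhs = 5 * math.sqrt(30) - math.sqrt(3)
rhs = math.sqrt(3) * (5 * math.sqrt(10) - 1)
assert math.isclose(lhs, rhs)  # both sides agree to floating-point precision
```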
http://mathoverflow.net/revisions/89601/list | As long as connected groups of isometries are concerned, Grassmann manifolds are symmetric spaces, so the identity component of its isometry group is $G$ in its symmetric presentation $G/H$ ($G$ connected) as a homogeneous space, namely, $SO(n)$ for $n$ odd and $SO(n)/\mathbf Z_2$ for $n$ even in the real case, and $PU(n)=SU(n)/\mathbf Z_n$ in the complex case. (Note that $U(n)$ acts on the left on the Grassmannian with a $U(1)$-kernel (its center), so the effectivized group is the projectivization $PU(n)$. Moreover the center of $U(n)$ meets $SU(n)\subset U(n)$ along its center, which consists of $\omega I$ where $\omega$ is an $n$-th root of unit.)
Further, Cartan described the full isometry groups of symmetric spaces, and an explicit result is easy to figure out in the case of Grassmann manifolds. I do not remember now, but you can find Cartan's description in the book of O. Loos on symmetric spaces, the second volume. I tend to agree with Ryan when he writes that in the case of Grassmann manifolds, the full isometry group should be $G\times N_G(H)$.
About Stiefel manifolds: with the metric you describe, they are normal homogeneous spaces $G/H$, i. e. have the metric induced from a bi-invariant Riemannian metric on $G$. There is a recent paper by S. Reggiani with a very effective way of computing the identity component of isometry groups of normal homogeneous spaces in here.
Since Stiefel manifolds fiber over Grassmann manifolds, I think it shouldn't be very hard to use this fiber bundle to figure out their full isometry group.

Added: I looked up Loos, "Symmetric spaces, II", Theorem 4.4 and the ensuing Table 10 on page 156 for the full isometry group of the real and complex Grassmannians. If I understand correctly, indeed in the case of complex Grassmannians $SU(n)/S(U(p)\times U(n-p))$, every isometry comes from left multiplication by elements from $SU(n)$ except for two cases: an isometry induced by complex conjugation; and mapping a $p$-plane to its orthogonal complement in case $n=2p\geq4$. In the case of real unoriented Grassmannians $SO(n)/S(O(p)\times O(n-p))$, every isometry comes from left multiplication by an element of $O(n)$ except for: mapping a $p$-plane to its orthogonal complement in case $n=2p\geq4$; the symmetric group $S_3$ in case $n=2p=8$, coming from outer automorphisms of $\mathfrak{so}(8)$.
1
As long as connected groups of isometries are concerned, Grassmann manifolds are symmetric spaces, so the identity component of its isometry group is $G$ in its symmetric presentation $G/H$ ($G$ connected) as a homogeneous space, namely, $SO(n)$ in the real case and $PU(n)=SU(n)/\mathbf Z_n$ in the complex case. (Note that $U(n)$ acts on the left on the Grassmannian with a $U(1)$-kernel (its center), so the effectivized group is the projectivization $PU(n)$. Moreover the center of $U(n)$ meets $SU(n)\subset U(n)$ along its center, which consists of $\omega I$ where $\omega$ is an $n$-th root of unit.)
Further, Cartan described the full isometry groups of symmetric spaces, and an explicit result is easy to figure out in the case of Grassmann manifolds. I do not remember now, but you can find Cartan's description in the book of O. Loos on symmetric spaces, the second volume. I tend to agree with Ryan when he writes that in the case of Grassmann manifolds, the full isometry group should be $G\times N_G(H)$.
About Stiefel manifolds: with the metric you describe, they are normal homogeneous spaces $G/H$, i. e. have the metric induced from a bi-invariant Riemannian metric on $G$. There is a recent paper by S. Reggiani with a very effective way of computing the identity component of isometry groups of normal homogeneous spaces in here.
Since Stiefel manifolds fiber over Grassmann manifolds, I think it shouldn't be very hard to use this fiber bundle to figure out their full isometry group. | 2013-05-19 01:18:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8659482002258301, "perplexity": 131.24936166105388}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383081/warc/CC-MAIN-20130516092623-00091-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-solve-the-system-of-equations-3x-4y-20-and-x-10y-16#639484 | # How do you solve the system of equations -3x-4y=20 and x-10y=16?
$x = - 4$ and $y = - 2$
#### Explanation:
Given system of equations
$- 3 x - 4 y = 20 \ldots \left(1\right)$
$x - 10 y = 16 \ldots \left(2\right)$
Multiplying (2) by $3$ & adding to (1) as follows
$- 3 x - 4 y + 3 \left(x - 10 y\right) = 20 + 3 \setminus \cdot 16$
$- 34 y = 68$
$y = - 2$
setting $y = - 2$ in (1) we get
$x = \frac{- 4 y - 20}{3}$
$= \frac{- 4 \left(- 2\right) - 20}{3}$
$= - 4$
Jul 9, 2018
Express $x$ as a function of $y$ and replace in the second equation.
#### Explanation:
To get a very fast answer to your problem, just use one of the two equations to express $x$ or $y$. In this case, let's do it with $x$.
So, we have the following system:
1) $-3x-4y=20$
2) $x-10y=16$
If we express $x$ in 2), then we have:
$x = 16 + 10 y$
Then we replace $x$ in 1) with what we obtained with 2) and we get:
$- 3 \cdot \left(16 + 10 y\right) - 4 y = 20$
Then develop the brackets:
$- 48 - 30 y - 4 y = 20$
Solve for $y$:
$- 34 y = 68$
$y = - 2$
Then replace $y$ in 2) by what you found:
$x - 10 \cdot \left(- 2\right) = 16$
Solve for $x$:
$x = - 4$
Finished! | 2021-09-22 17:49:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 35, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9270195960998535, "perplexity": 746.7955695672932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00413.warc.gz"} |
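Both derivations above can be checked with a generic 2×2 solver based on Cramer's rule (an illustrative sketch):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1      # assumes a unique solution (det != 0)
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

x, y = solve_2x2(-3, -4, 20, 1, -10, 16)
assert (x, y) == (-4.0, -2.0)
```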
http://physics.stackexchange.com/questions/20854/altering-venus-rotational-speed-to-match-earths-via-weather-manipulation | # Altering Venus rotational speed to match Earth's via weather manipulation
Venus rotates at approximately 6.5 km/h (at its equator); Earth rotates at approximately 1650 km/h. How fast could we speed up Venus's rotation via weather manipulation only? For example, a giant fractal lens between Venus and the Sun at a gravitational equilibrium point, redirecting all the energy hitting Venus so as to heat one portion while shadowing the others, in such a way as to create a belt of wind circling the equator. There was a report recently showing an alteration of Venus's rotation period: http://www.esa.int/esaSC/SEM0TLSXXXG_index_0.html With a dense 90 PSI atmosphere and that much energy available, how long would it take for the frictional energy of the atmosphere to speed up the rotation of Venus to the Earth equivalent?
-
Using data from here, increasing Venus' rotational speed to match Earth's would require about $\:\mathrm{1.5\times 10^{29}\ J}$.
Its insolation is about $\:\mathrm{3\times 10^{17}\ W}$, so assuming that somehow all this energy could be transferred to rotation, it would take about 16000 years - not absurdly long actually.
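The figures quoted can be reproduced from rough planetary data; all the inputs below are assumptions of this sketch (Venus mass ~4.87e24 kg, radius ~6052 km, moment-of-inertia factor ~0.34, solar constant at Venus ~2600 W/m²):

```python
import math

M = 4.867e24                       # Venus mass, kg (assumed)
R = 6.052e6                        # Venus radius, m (assumed)
I = 0.34 * M * R**2                # moment of inertia, kg m^2
omega = 2 * math.pi / 86164        # Earth's sidereal spin rate, rad/s

delta_E = 0.5 * I * omega**2       # Venus's current spin is negligible
power = 2600 * math.pi * R**2      # sunlight intercepted by Venus, W
years = delta_E / power / 3.156e7  # 3.156e7 seconds per year

# delta_E comes out ~1.6e29 J and years ~17000, in line with the answer.
```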
-
Thank you for your interest in this question. Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count). | 2016-07-28 04:54:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6722439527511597, "perplexity": 597.5238003685915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827791.21/warc/CC-MAIN-20160723071027-00148-ip-10-185-27-174.ec2.internal.warc.gz"} |
https://en.wikipedia.org/wiki/Rectangular_potential_barrier | # Rectangular potential barrier
In quantum mechanics, the rectangular (or, at times, square) potential barrier is a standard one-dimensional problem that demonstrates the phenomena of wave-mechanical tunneling (also called "quantum tunneling") and wave-mechanical reflection. The problem consists of solving the one-dimensional time-independent Schrödinger equation for a particle encountering a rectangular potential energy barrier. It is usually assumed, as here, that a free particle impinges on the barrier from the left.
Although classically a particle behaving as a point mass would be reflected, a particle actually behaving as a matter wave has a non-zero probability of penetrating the barrier and continuing its travel as a wave on the other side. In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schrödinger's wave-equation allows these coefficients to be calculated.
## Calculation
Scattering at a finite potential barrier of height ${\displaystyle V_{0}}$. The amplitudes and direction of left and right moving waves are indicated. In red, those waves used for the derivation of the reflection and transmission amplitude. ${\displaystyle E>V_{0}}$ for this illustration.
The time-independent Schrödinger equation for the wave function ${\displaystyle \psi (x)}$ reads
${\displaystyle H\psi (x)=\left[-{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}+V(x)\right]\psi (x)=E\psi (x)}$
where ${\displaystyle H}$ is the Hamiltonian, ${\displaystyle \hbar }$ is the (reduced) Planck constant, ${\displaystyle m}$ is the mass, ${\displaystyle E}$ the energy of the particle and
${\displaystyle V(x)=V_{0}[\Theta (x)-\Theta (x-a)]}$
is the barrier potential with height ${\displaystyle V_{0}>0}$ and width ${\displaystyle a}$. ${\displaystyle \Theta (x)=0,\;x<0;\;\Theta (x)=1,\;x>0}$
is the Heaviside step function, i.e.
${\displaystyle V(x)={\begin{cases}0&{\text{if }}x<0{\text{ or }}x>a\\V_{0}&{\text{if }}0<x<a\end{cases}}}$
The barrier is positioned between ${\displaystyle x=0}$ and ${\displaystyle x=a}$. The barrier can be shifted to any ${\displaystyle x}$ position without changing the results. The first term in the Hamiltonian, ${\displaystyle -{\frac {\hbar ^{2}}{2m}}{\frac {d^{2}}{dx^{2}}}\psi }$ is the kinetic energy.
The barrier divides the space in three parts (${\displaystyle x<0}$, ${\displaystyle 0<x<a}$, ${\displaystyle x>a}$). In any of these parts, the potential is constant, meaning that the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left and right moving waves (see free particle). If ${\displaystyle E>V_{0}}$
${\displaystyle \psi _{L}(x)=A_{r}e^{ik_{0}x}+A_{l}e^{-ik_{0}x}\quad x<0}$
${\displaystyle \psi _{C}(x)=B_{r}e^{ik_{1}x}+B_{l}e^{-ik_{1}x}\quad 0<x<a}$
${\displaystyle \psi _{R}(x)=C_{r}e^{ik_{0}x}+C_{l}e^{-ik_{0}x}\quad x>a}$
where the wave numbers are related to the energy via
${\displaystyle k_{0}={\sqrt {2mE/\hbar ^{2}}}\quad \quad \quad \quad x<0\quad or\quad x>a}$
${\displaystyle k_{1}={\sqrt {2m(E-V_{0})/\hbar ^{2}}}\quad 0<x<a}$.
The index ${\displaystyle r}$/${\displaystyle l}$ on the coefficients ${\displaystyle A}$ and ${\displaystyle B}$ denotes the direction of the velocity vector. Note that, if the energy of the particle is below the barrier height, ${\displaystyle k_{1}}$ becomes imaginary and the wave function is exponentially decaying within the barrier. Nevertheless, we keep the notation r/l even though the waves are not propagating anymore in this case. Here we assumed ${\displaystyle E\neq V_{0}}$. The case ${\displaystyle E=V_{0}}$ is treated below.
The coefficients ${\displaystyle A,B,C}$ have to be found from the boundary conditions of the wave function at ${\displaystyle x=0}$ and ${\displaystyle x=a}$. The wave function and its derivative have to be continuous everywhere, so
${\displaystyle \psi _{L}(0)=\psi _{C}(0)}$
${\displaystyle {\frac {d}{dx}}\psi _{L}(0)={\frac {d}{dx}}\psi _{C}(0)}$
${\displaystyle \psi _{C}(a)=\psi _{R}(a)}$
${\displaystyle {\frac {d}{dx}}\psi _{C}(a)={\frac {d}{dx}}\psi _{R}(a)}$.
Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients
${\displaystyle A_{r}+A_{l}=B_{r}+B_{l}}$
${\displaystyle ik_{0}(A_{r}-A_{l})=ik_{1}(B_{r}-B_{l})}$
${\displaystyle B_{r}e^{iak_{1}}+B_{l}e^{-iak_{1}}=C_{r}e^{iak_{0}}+C_{l}e^{-iak_{0}}}$
${\displaystyle ik_{1}(B_{r}e^{iak_{1}}-B_{l}e^{-iak_{1}})=ik_{0}(C_{r}e^{iak_{0}}-C_{l}e^{-iak_{0}})}$.
## E = V0
If the energy equals the barrier height, the second differential of the wavefunction inside the barrier region is 0, and hence the solutions of the Schrödinger equation are not exponentials anymore but linear functions of the space coordinate
${\displaystyle \psi _{C}(x)=B_{1}+B_{2}x\quad 0<x<a}$
The complete solution of the Schrödinger equation is found in the same way as above by matching wave functions and their derivatives at ${\displaystyle x=0}$ and ${\displaystyle x=a}$. That results in the following restrictions on the coefficients:
${\displaystyle A_{r}+A_{l}=B_{1}\,\!}$
${\displaystyle ik_{0}(A_{r}-A_{l})=B_{2}\,\!}$
${\displaystyle B_{1}+B_{2}a=C_{r}e^{iak_{0}}+C_{l}e^{-iak_{0}}}$
${\displaystyle B_{2}=ik_{0}(C_{r}e^{iak_{0}}-C_{l}e^{-iak_{0}})}$.
## Transmission and reflection
At this point, it is instructive to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy ${\displaystyle E}$ larger than the barrier height ${\displaystyle V_{0}}$ would always pass the barrier, and a classical particle with ${\displaystyle E<V_{0}}$ incident on the barrier would always get reflected.
To study the quantum case, consider the following situation: a particle incident on the barrier from the left side (${\displaystyle A_{r}}$). It may be reflected (${\displaystyle A_{l}}$) or transmitted (${\displaystyle C_{r}}$).
To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations ${\displaystyle A_{r}=1}$ (incoming particle), ${\displaystyle A_{l}=r}$ (reflection), ${\displaystyle C_{l}}$=0 (no incoming particle from the right), and ${\displaystyle C_{r}=t}$ (transmission). We then eliminate the coefficients ${\displaystyle B_{l},B_{r}}$ from the equation and solve for ${\displaystyle r}$ and ${\displaystyle t}$.
The result is:
${\displaystyle t={\frac {4k_{0}k_{1}e^{-ia(k_{0}-k_{1})}}{(k_{0}+k_{1})^{2}-e^{2iak_{1}}(k_{0}-k_{1})^{2}}}}$
${\displaystyle r={\frac {(k_{0}^{2}-k_{1}^{2})\sin(ak_{1})}{2ik_{0}k_{1}\cos(ak_{1})+(k_{0}^{2}+k_{1}^{2})\sin(ak_{1})}}.}$
Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. Note that these expressions hold for any energy ${\displaystyle E>0}$.
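As a sanity check, the amplitude formulas above can be evaluated numerically and tested for flux conservation, |r|² + |t|² = 1 (a minimal sketch in units ħ = m = 1; the helper name `amplitudes` is ours, not from the article — `cmath` handles the E < V₀ case, where k₁ is imaginary, automatically):

```python
import cmath

def amplitudes(E, V0, a):
    """t and r for the rectangular barrier, per the formulas above (hbar = m = 1)."""
    k0 = cmath.sqrt(2 * E)
    k1 = cmath.sqrt(2 * (E - V0))  # purely imaginary when E < V0
    t = (4 * k0 * k1 * cmath.exp(-1j * a * (k0 - k1))
         / ((k0 + k1)**2 - cmath.exp(2j * a * k1) * (k0 - k1)**2))
    r = ((k0**2 - k1**2) * cmath.sin(a * k1)
         / (2j * k0 * k1 * cmath.cos(a * k1) + (k0**2 + k1**2) * cmath.sin(a * k1)))
    return t, r

for E in (0.5, 2.0):  # one energy below and one above a barrier of height 1
    t, r = amplitudes(E, V0=1.0, a=1.0)
    print(E, abs(t)**2 + abs(r)**2)  # both sums come out as 1
```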
## Analysis of the obtained expressions
### E < V0
Transmission probability of a finite potential barrier for ${\displaystyle {\sqrt {2mV_{0}}}a/\hbar }$=1, 3, and 7. Dashed: classical result. Solid line: quantum mechanics.
The surprising result is that for energies less than the barrier height, ${\displaystyle E<V_{0}}$, there is a non-zero probability
${\displaystyle T=|t|^{2}={\frac {1}{1+{\frac {V_{0}^{2}\sinh ^{2}(k_{1}a)}{4E(V_{0}-E)}}}}}$
for the particle to be transmitted through the barrier, with ${\displaystyle k_{1}={\sqrt {2m(V_{0}-E)/\hbar ^{2}}}}$. This effect, which differs from the classical case, is called quantum tunneling. The transmission is exponentially suppressed with the barrier width, which can be understood from the functional form of the wave function: Outside of the barrier it oscillates with wave vector ${\displaystyle k_{0}}$, whereas within the barrier it is exponentially damped over a distance ${\displaystyle 1/k_{1}}$. If the barrier is much wider than this decay length, the left and right part are virtually independent and tunneling as a consequence is suppressed.
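The exponential suppression with barrier width can be made concrete by evaluating the transmission probability above (a sketch in units ħ = m = 1; the helper name `T_tunnel` is our own):

```python
import math

def T_tunnel(E, V0, a):
    """Transmission probability for E < V0, per the formula above (hbar = m = 1)."""
    k1 = math.sqrt(2 * (V0 - E))
    return 1 / (1 + V0**2 * math.sinh(k1 * a)**2 / (4 * E * (V0 - E)))

# Transmission falls off roughly like exp(-2*k1*a) as the barrier widens
for a in (1, 2, 4, 8):
    print(a, T_tunnel(E=0.5, V0=1.0, a=a))
```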
### E > V0
In this case
${\displaystyle T=|t|^{2}={\frac {1}{1+{\frac {V_{0}^{2}\sin ^{2}(k_{1}a)}{4E(E-V_{0})}}}}}$,
where ${\displaystyle k_{1}={\sqrt {2m(E-V_{0})/\hbar ^{2}}}}$.
Equally surprising is that for energies larger than the barrier height, ${\displaystyle E>V_{0}}$, the particle may be reflected from the barrier with a non-zero probability
${\displaystyle \,R=|r|^{2}=1-T.}$
The transmission and reflection probabilities are in fact oscillating with ${\displaystyle k_{1}a}$. The classical result of perfect transmission without any reflection (${\displaystyle T=1}$, ${\displaystyle R=0}$) is reproduced not only in the limit of high energy ${\displaystyle E\gg V_{0}}$ but also when the energy and barrier width satisfy ${\displaystyle k_{1}a=n\pi }$, where ${\displaystyle n=0,1,2,...}$ (see peaks near ${\displaystyle E/V_{0}=1.2}$ and 1.8 in the above figure). Note that the probabilities and amplitudes as written are for any energy (above/below) the barrier height.
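The resonance condition ${\displaystyle k_{1}a=n\pi }$ can be verified directly from the E > V₀ formula (sketch in units ħ = m = 1; `T_over` is our own helper name):

```python
import math

def T_over(E, V0, a):
    """Transmission probability for E > V0, per the formula above (hbar = m = 1)."""
    k1 = math.sqrt(2 * (E - V0))
    return 1 / (1 + V0**2 * math.sin(k1 * a)**2 / (4 * E * (E - V0)))

V0, a = 1.0, 1.0
E_res = V0 + math.pi**2 / (2 * a**2)  # chosen so that k1*a = pi
print(T_over(E_res, V0, a))           # perfect transmission at the resonance
print(T_over(1.5, V0, a))             # generic energy: partial reflection, T < 1
```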
### E = V0
The transmission probability at ${\displaystyle E=V_{0}}$ evaluates to
${\displaystyle T={\frac {1}{1+ma^{2}V_{0}/2\hbar ^{2}}}}$.
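This value is the E → V₀ limit of the general formula, since sin(k₁a) ≈ k₁a for small k₁ — a quick numerical check (sketch in units ħ = m = 1; both helper names are ours):

```python
import math

def T_at_barrier_top(V0, a):
    # Closed form at E = V0 from the text (hbar = m = 1)
    return 1 / (1 + a**2 * V0 / 2)

def T_near_barrier_top(V0, a, eps=1e-9):
    # General E > V0 formula evaluated just above the barrier height
    E = V0 * (1 + eps)
    k1 = math.sqrt(2 * (E - V0))
    return 1 / (1 + V0**2 * math.sin(k1 * a)**2 / (4 * E * (E - V0)))

print(T_at_barrier_top(1.0, 1.0), T_near_barrier_top(1.0, 1.0))  # both ~2/3
```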
## Remarks and applications
The calculation presented above may at first seem unrealistic and hardly useful. However it has proved to be a suitable model for a variety of real-life systems. One such example are interfaces between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass ${\displaystyle m}$. Often the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a barrier potential as above. Electrons may then tunnel from one material to the other giving rise to a current.
The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the gap between the tip of the STM and the underlying object. Since the tunnel current depends exponentially on the barrier width, this device is extremely sensitive to height variations on the examined sample.
The above model is one-dimensional, while space is three-dimensional. One should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others; they are separable. The Schrödinger equation may then be reduced to the case considered here by an ansatz for the wave function of the type: ${\displaystyle \Psi (x,y,z)=\psi (x)\phi (y,z)}$.
For another, related model of a barrier, see Delta potential barrier (QM), which can be regarded as a special case of the finite potential barrier. All results from this article immediately apply to the delta potential barrier by taking the limits ${\displaystyle V_{0}\to \infty ,\quad a\to 0}$ while keeping ${\displaystyle V_{0}a=\lambda }$ constant.
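That limit can be checked numerically against the standard delta-barrier transmission T = 1/(1 + mλ²/(2ħ²E)) (a sketch in units ħ = m = 1; the variable names are ours):

```python
import math

lam, E = 1.0, 0.5                      # fixed "area" V0*a = lam
T_delta = 1 / (1 + lam**2 / (2 * E))   # delta-barrier result (hbar = m = 1)

for a in (1e-1, 1e-2, 1e-3):
    V0 = lam / a                       # V0 -> infinity as a -> 0
    k1 = math.sqrt(2 * (V0 - E))
    T = 1 / (1 + V0**2 * math.sinh(k1 * a)**2 / (4 * E * (V0 - E)))
    print(a, T, T_delta)               # T converges to T_delta
```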
https://quizlet.com/724408922/database-midterm-part-1-flash-cards/ | # Database Midterm Part 1
Four types of NoSQL databases
Terms in this set (18)
- Store and process terabytes to petabytes of data in real time, far more than relational systems can handle.
- Horizontal scaling with replication and distribution over inexpensive servers.
- Flexible schema: NoSQL systems are capable of handling structured, semi-structured, and unstructured data.
- Weaker concurrency model: NoSQL systems do not conform to the ACID (Atomicity, Consistency, Isolation, Durability) properties of relational systems, sacrificing the consistency of data in favor of availability and scalability (partition tolerance).
- Simple call-level interface
- Parallel processing: Leverage the Hadoop technology to support efficient parallel processing capabilities
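The "flexible schema" point above can be illustrated with document-style records, where each record may carry different fields (a toy sketch, not tied to any particular NoSQL product; all field names are invented):

```python
# Document-style records: no fixed schema is enforced up front
records = [
    {"id": 1, "name": "Ada", "email": "ada@example.com"},   # structured
    {"id": 2, "name": "Ben", "tags": ["vip", "beta"]},      # extra field
    {"id": 3, "raw_log": "GET /index 200 12ms"},            # unstructured blob
]

# A query simply skips records lacking the requested field
names = [r["name"] for r in records if "name" in r]
print(names)  # ['Ada', 'Ben']
```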
- Volume: Sheer amount of data being generated in zettabytes.
- Variety: Structured and unstructured data are generated in various data types
- Velocity: the speed in which the data is being generated and moves around or stored.
- Veracity: detecting and correcting noise; that is, the correctness and validity of the data.
Large info that cannot be memorized
- Sharing data
- Security control for authentication and authorization
- Recoverability to deal with hardware/software crashes
- Integrity to maintain the meaning of data over updates
- Applications development is independent from data
- Platform and hardware storage are independent from data
https://math.stackexchange.com/questions/507605/the-size-of-the-maximum-matching-is-bounded-by-the-size-of-the-minimum-vertex-co | The size of the maximum matching is bounded by the size of the minimum vertex cover
Prove, using the weak duality theorem of linear programming, that:
For any graph $G$ (not necessarily bipartite), the size of the maximum matching is at most the size of the minimum vertex cover.
I am a student taking an advanced course in combinatorics, and I actually do not know where to start in the proof, because this is a general graph, not a bipartite one. So hints would be really appreciated. Thanks in advance.
• What are your thoughts on the problem, what have you tried? – Seirios Sep 28 '13 at 6:57
• the question is obvious, i need using weak duality theorem to prove that size of Max Matching <= Min Vertex Cover – Showen Disel Sep 28 '13 at 7:12
• Hello, welcome to Math.SE. Please read this post and the others there for information on writing a good question for this site. In particular, people will be more willing to help if you edit your question to include some motivation, and an explanation of your own attempts. – Lord_Farin Sep 28 '13 at 7:42
• I am a student doing advanced course in combinatorial and Actually i do not know where to start in the proof, because this is a general graph not bipartite. So hints would really appreciated, Thanks in advance – Showen Disel Sep 28 '13 at 8:32
• Thank you for your response. I will try to get the question reopened. – Lord_Farin Sep 28 '13 at 10:02
1 Answer
Let $G=(V,E)$ be a graph, let $M\subseteq E(G)$ be a maximum matching of the graph $G$, and let $C\subseteq V(G)$ be the minimum vertex cover of $G$. Since edges in $M$ are disjoint in the sense that no two share an endpoint, each vertex in $v\in C$ covers at most one edge in $M$. Thus $|C|\ge |M|$.
• can you give an example where the cardinality of C and M aren't equal? – Andrew Cassidy Nov 19 '15 at 17:27
• Yes: a $K_{1,4}$ with an edge added between "neighboring" vertices. In that case $|C|=1$ and $|M|=2$. – blazs Nov 19 '15 at 20:06
• thank you, but what does the symbol K mean? – Andrew Cassidy Nov 19 '15 at 20:19
• It's a star. My example is a star with 4 edges, where you add an edge between a pair of "neighboring" nonadjacent vertices. – blazs Nov 19 '15 at 20:21
• so K(1, 4) in an adjacency list is: {"center": ["1", "2", "3", "4"], "1": ["center"], "2": ["center"], "3": ["center"], "4": ["center"]}. let's add a connect between 1 and 2 which I think is what you're talking about: {"center": ["1", "2", "3", "4"], "1": ["center", "2"], "2": ["center", "1"], "3": ["center"], "4": ["center"]}. The C now is ("center", "2") or ("center", "2"). So cardinality is 2 not 1? – Andrew Cassidy Nov 19 '15 at 21:17
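The graph debated in these comments (a star K(1,4) with an extra edge between leaves 1 and 2) can be settled by brute force — both the maximum matching and the minimum vertex cover have size 2 here, consistent with the inequality |M| ≤ |C| (an illustrative sketch; the function names are ours):

```python
from itertools import combinations

vertices = ["c", "1", "2", "3", "4"]
edges = [("c", "1"), ("c", "2"), ("c", "3"), ("c", "4"), ("1", "2")]

def max_matching_size():
    # Largest set of pairwise vertex-disjoint edges, by exhaustive search
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):
                return k
    return 0

def min_vertex_cover_size():
    # Smallest vertex set touching every edge, by exhaustive search
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            if all(u in subset or v in subset for u, v in edges):
                return k

print(max_matching_size(), min_vertex_cover_size())  # 2 2
```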
http://sagemath.org/doc/reference/modfrm/sage/modular/modform/find_generators.html | # Graded Rings of Modular Forms
This module contains functions to find generators for the graded ring of modular forms of given level.
AUTHORS:
• William Stein (2007-08-24): first version
class sage.modular.modform.find_generators.ModularFormsRing(group, base_ring=Rational Field)
The ring of modular forms (of weights 0 or at least 2) for a congruence subgroup of $${\rm SL}_2(\ZZ)$$, with coefficients in a specified base ring.
INPUT:
• group – a congruence subgroup of $${\rm SL}_2(\ZZ)$$, or a positive integer $$N$$ (interpreted as $$\Gamma_0(N)$$)
• base_ring (ring, default: $$\QQ$$) – a base ring, which should be $$\QQ$$, $$\ZZ$$, or the integers mod $$p$$ for some prime $$p$$.
EXAMPLES:
sage: ModularFormsRing(Gamma1(13))
Ring of modular forms for Congruence Subgroup Gamma1(13) with coefficients in Rational Field
sage: m = ModularFormsRing(4); m
Ring of modular forms for Congruence Subgroup Gamma0(4) with coefficients in Rational Field
sage: m.modular_forms_of_weight(2)
Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(4) of weight 2 over Rational Field
sage: m.modular_forms_of_weight(10)
Modular Forms space of dimension 6 for Congruence Subgroup Gamma0(4) of weight 10 over Rational Field
True
sage: m.generators()
[(2, 1 + 24*q^2 + 24*q^4 + 96*q^6 + 24*q^8 + O(q^10)),
(2, q + 4*q^3 + 6*q^5 + 8*q^7 + 13*q^9 + O(q^10))]
sage: m.q_expansion_basis(2,10)
[1 + 24*q^2 + 24*q^4 + 96*q^6 + 24*q^8 + O(q^10),
q + 4*q^3 + 6*q^5 + 8*q^7 + 13*q^9 + O(q^10)]
sage: m.q_expansion_basis(3,10)
[]
sage: m.q_expansion_basis(10,10)
[1 + 10560*q^6 + 3960*q^8 + O(q^10),
q - 8056*q^7 - 30855*q^9 + O(q^10),
q^2 - 796*q^6 - 8192*q^8 + O(q^10),
q^3 + 66*q^7 + 832*q^9 + O(q^10),
q^4 + 40*q^6 + 528*q^8 + O(q^10),
q^5 + 20*q^7 + 190*q^9 + O(q^10)]
TESTS:
Check that trac ticket #15037 is fixed:
sage: ModularFormsRing(3.4)
Traceback (most recent call last):
...
ValueError: Group (=3.40000000000000) should be a congruence subgroup
sage: ModularFormsRing(Gamma0(2), base_ring=PolynomialRing(ZZ,x))
Traceback (most recent call last):
...
ValueError: Base ring (=Univariate Polynomial Ring in x over Integer Ring) should be QQ, ZZ or a finite prime field
base_ring()
Return the coefficient ring of this modular forms ring.
EXAMPLE:
sage: ModularFormsRing(Gamma1(13)).base_ring()
Rational Field
sage: ModularFormsRing(Gamma1(13), base_ring = ZZ).base_ring()
Integer Ring
cuspidal_ideal_generators(maxweight=8, prec=None)
Calculate generators for the ideal of cuspidal forms in this ring, as a module over the whole ring.
EXAMPLE:
sage: ModularFormsRing(Gamma0(3)).cuspidal_ideal_generators(maxweight=12)
[(6, q - 6*q^2 + 9*q^3 + 4*q^4 + O(q^5), q - 6*q^2 + 9*q^3 + 4*q^4 + 6*q^5 + O(q^6))]
sage: [k for k,f,F in ModularFormsRing(13, base_ring=ZZ).cuspidal_ideal_generators(maxweight=14)]
[4, 4, 4, 6, 6, 12]
cuspidal_submodule_q_expansion_basis(weight, prec=None)
Calculate a basis of $$q$$-expansions for the space of cusp forms of weight weight for this group.
INPUT:
• weight (integer) – the weight
• prec (integer or None) – precision of $$q$$-expansions to return
ALGORITHM: Uses the method cuspidal_ideal_generators() to calculate generators of the ideal of cusp forms inside this ring. Then multiply these up to weight weight using the generators of the whole modular form space returned by q_expansion_basis().
EXAMPLES:
sage: R = ModularFormsRing(Gamma0(3))
sage: R.cuspidal_submodule_q_expansion_basis(20)
[q - 8532*q^6 - 88442*q^7 + O(q^8), q^2 + 207*q^6 + 24516*q^7 + O(q^8), q^3 + 456*q^6 + O(q^8), q^4 - 135*q^6 - 926*q^7 + O(q^8), q^5 + 18*q^6 + 135*q^7 + O(q^8)]
We compute a basis of a space of very large weight, quickly (using this module) and slowly (using modular symbols), and verify that the answers are the same.
sage: A = R.cuspidal_submodule_q_expansion_basis(80, prec=30) # long time (1s on sage.math, 2013)
sage: B = R.modular_forms_of_weight(80).cuspidal_submodule().q_expansion_basis(prec=30) # long time (19s on sage.math, 2013)
sage: A == B # long time
True
gen_forms(maxweight=8, start_gens=[], start_weight=2)
This function calculates a list of modular forms generating this ring (as an algebra over the appropriate base ring). It differs from generators() only in that it returns Sage modular form objects, rather than bare $$q$$-expansions; and if the base ring is a finite field, the modular forms returned will be forms in characteristic 0 with integral $$q$$-expansions whose reductions modulo $$p$$ generate the ring of modular forms mod $$p$$.
INPUT:
• maxweight (integer, default: 8) – calculate forms generating all forms up to this weight.
• start_gens (list, default: []) – a list of modular forms. If this list is nonempty, we find a minimal generating set containing these forms.
• start_weight (integer, default: 2) – calculate the graded subalgebra of forms of weight at least start_weight.
Note
If called with the default values of start_gens (an empty list) and start_weight (2), the values will be cached for re-use on subsequent calls to this function. (This cache is shared with generators()). If called with non-default values for these parameters, caching will be disabled.
EXAMPLE:
sage: A = ModularFormsRing(Gamma0(11), Zmod(5)).gen_forms(); A
[1 + 12*q^2 + 12*q^3 + 12*q^4 + 12*q^5 + O(q^6), q - 2*q^2 - q^3 + 2*q^4 + q^5 + O(q^6), q - 9*q^4 - 10*q^5 + O(q^6)]
sage: A[0].parent()
Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
generators(maxweight=8, prec=10, start_gens=[], start_weight=2)
If $$R$$ is the base ring of self, then this function calculates a set of modular forms which generate the $$R$$-algebra of all modular forms of weight up to maxweight with coefficients in $$R$$.
INPUT:
• maxweight (integer, default: 8) – check up to this weight for generators
• prec (integer, default: 10) – return $$q$$-expansions to this precision
• start_gens (list, default: []) – list of pairs $$(k, f)$$, or triples $$(k, f, F)$$, where:
• $$k$$ is an integer,
• $$f$$ is the $$q$$-expansion of a modular form of weight $$k$$, as a power series over the base ring of self,
• $$F$$ (if provided) is a modular form object corresponding to F.
If this list is nonempty, we find a minimal generating set containing these forms. If $$F$$ is not supplied, then $$f$$ needs to have sufficiently large precision (an error will be raised if this is not the case); otherwise, more terms will be calculated from the modular form object $$F$$.
• start_weight (integer, default: 2) – calculate the graded subalgebra of forms of weight at least start_weight.
OUTPUT:
a list of pairs (k, f), where f is the q-expansion to precision prec of a modular form of weight k.
gen_forms(), which does exactly the same thing, but returns Sage modular form objects rather than bare power series, and keeps track of a lifting to characteristic 0 when the base ring is a finite field.
Note
If called with the default values of start_gens (an empty list) and start_weight (2), the values will be cached for re-use on subsequent calls to this function. (This cache is shared with gen_forms()). If called with non-default values for these parameters, caching will be disabled.
EXAMPLES:
sage: ModularFormsRing(SL2Z).generators()
[(4, 1 + 240*q + 2160*q^2 + 6720*q^3 + 17520*q^4 + 30240*q^5 + 60480*q^6 + 82560*q^7 + 140400*q^8 + 181680*q^9 + O(q^10)), (6, 1 - 504*q - 16632*q^2 - 122976*q^3 - 532728*q^4 - 1575504*q^5 - 4058208*q^6 - 8471232*q^7 - 17047800*q^8 - 29883672*q^9 + O(q^10))]
sage: s = ModularFormsRing(SL2Z).generators(maxweight=5, prec=3); s
[(4, 1 + 240*q + 2160*q^2 + O(q^3))]
sage: s[0][1].parent()
Power Series Ring in q over Rational Field
sage: ModularFormsRing(1).generators(prec=4)
[(4, 1 + 240*q + 2160*q^2 + 6720*q^3 + O(q^4)), (6, 1 - 504*q - 16632*q^2 - 122976*q^3 + O(q^4))]
sage: ModularFormsRing(2).generators(prec=12)
[(2, 1 + 24*q + 24*q^2 + 96*q^3 + 24*q^4 + 144*q^5 + 96*q^6 + 192*q^7 + 24*q^8 + 312*q^9 + 144*q^10 + 288*q^11 + O(q^12)), (4, 1 + 240*q^2 + 2160*q^4 + 6720*q^6 + 17520*q^8 + 30240*q^10 + O(q^12))]
sage: ModularFormsRing(4).generators(maxweight=2, prec=20)
[(2, 1 + 24*q^2 + 24*q^4 + 96*q^6 + 24*q^8 + 144*q^10 + 96*q^12 + 192*q^14 + 24*q^16 + 312*q^18 + O(q^20)), (2, q + 4*q^3 + 6*q^5 + 8*q^7 + 13*q^9 + 12*q^11 + 14*q^13 + 24*q^15 + 18*q^17 + 20*q^19 + O(q^20))]
Here we see that for $$\Gamma_0(11)$$ taking a basis of forms in weights 2 and 4 is enough to generate everything up to weight 12 (and probably everything else):
sage: v = ModularFormsRing(11).generators(maxweight=12)
sage: len(v)
3
sage: [k for k, _ in v]
[2, 2, 4]
sage: dimension_modular_forms(11,2)
2
sage: dimension_modular_forms(11,4)
4
For congruence subgroups not containing -1, we miss out some forms since we can’t calculate weight 1 forms at present, but we can still find generators for the ring of forms of weight $$\ge 2$$:
sage: ModularFormsRing(Gamma1(4)).generators(prec=10, maxweight=10)
[(2, 1 + 24*q^2 + 24*q^4 + 96*q^6 + 24*q^8 + O(q^10)),
(2, q + 4*q^3 + 6*q^5 + 8*q^7 + 13*q^9 + O(q^10)),
(3, 1 + 12*q^2 + 64*q^3 + 60*q^4 + 160*q^6 + 384*q^7 + 252*q^8 + O(q^10)),
(3, q + 4*q^2 + 8*q^3 + 16*q^4 + 26*q^5 + 32*q^6 + 48*q^7 + 64*q^8 + 73*q^9 + O(q^10))]
Using different base rings will change the generators:
sage: ModularFormsRing(Gamma0(13)).generators(maxweight=12, prec=4)
[(2, 1 + 2*q + 6*q^2 + 8*q^3 + O(q^4)), (4, 1 + O(q^4)), (4, q + O(q^4)), (4, q^2 + O(q^4)), (4, q^3 + O(q^4)), (6, 1 + O(q^4)), (6, q + O(q^4))]
sage: ModularFormsRing(Gamma0(13),base_ring=ZZ).generators(maxweight=12, prec=4)
[(2, 1 + 2*q + 6*q^2 + 8*q^3 + O(q^4)), (4, O(q^4)), (4, q^3 + O(q^4)), (4, q^2 + O(q^4)), (4, q + O(q^4)), (6, O(q^4)), (6, O(q^4)), (12, O(q^4))]
sage: [k for k,f in ModularFormsRing(1, QQ).generators(maxweight=12)]
[4, 6]
sage: [k for k,f in ModularFormsRing(1, ZZ).generators(maxweight=12)]
[4, 6, 12]
sage: [k for k,f in ModularFormsRing(1, Zmod(5)).generators(maxweight=12)]
[4, 6]
sage: [k for k,f in ModularFormsRing(1, Zmod(2)).generators(maxweight=12)]
[4, 6, 12]
An example where start_gens are specified:
sage: M = ModularForms(11, 2); f = (M.0 + M.1).qexp(8)
sage: ModularFormsRing(11).generators(start_gens = [(2, f)])
Traceback (most recent call last):
...
ValueError: Requested precision cannot be higher than precision of approximate starting generators!
sage: f = (M.0 + M.1).qexp(10); f
1 + 17/5*q + 26/5*q^2 + 43/5*q^3 + 94/5*q^4 + 77/5*q^5 + 154/5*q^6 + 86/5*q^7 + 36*q^8 + 146/5*q^9 + O(q^10)
sage: ModularFormsRing(11).generators(start_gens = [(2, f)])
[(2, 1 + 17/5*q + 26/5*q^2 + 43/5*q^3 + 94/5*q^4 + 77/5*q^5 + 154/5*q^6 + 86/5*q^7 + 36*q^8 + 146/5*q^9 + O(q^10)), (2, 1 + 12*q^2 + 12*q^3 + 12*q^4 + 12*q^5 + 24*q^6 + 24*q^7 + 36*q^8 + 36*q^9 + O(q^10)), (4, 1 + O(q^10))]
group()
Return the congruence subgroup for which this is the ring of modular forms.
EXAMPLE:
sage: R = ModularFormsRing(Gamma1(13))
sage: R.group() is Gamma1(13)
True
modular_forms_of_weight(weight)
Return the space of modular forms on this group of the given weight.
EXAMPLES:
sage: R = ModularFormsRing(13)
sage: R.modular_forms_of_weight(10)
Modular Forms space of dimension 11 for Congruence Subgroup Gamma0(13) of weight 10 over Rational Field
sage: ModularFormsRing(Gamma1(13)).modular_forms_of_weight(3)
Modular Forms space of dimension 20 for Congruence Subgroup Gamma1(13) of weight 3 over Rational Field
q_expansion_basis(weight, prec=None, use_random=True)
Calculate a basis of q-expansions for the space of modular forms of the given weight for this group, calculated using the ring generators given by find_generators.
INPUT:
• weight (integer) – the weight
• prec (integer or None, default: None) – power series precision. If None, the precision defaults to the Sturm bound for the requested level and weight.
• use_random (boolean, default: True) – whether or not to use a randomized algorithm when building up the space of forms at the given weight from known generators of small weight.
EXAMPLES:
sage: m = ModularFormsRing(Gamma0(4))
sage: m.q_expansion_basis(2,10)
[1 + 24*q^2 + 24*q^4 + 96*q^6 + 24*q^8 + O(q^10),
q + 4*q^3 + 6*q^5 + 8*q^7 + 13*q^9 + O(q^10)]
sage: m.q_expansion_basis(3,10)
[]
sage: X = ModularFormsRing(SL2Z)
sage: X.q_expansion_basis(12, 10)
[1 + 196560*q^2 + 16773120*q^3 + 398034000*q^4 + 4629381120*q^5 + 34417656000*q^6 + 187489935360*q^7 + 814879774800*q^8 + 2975551488000*q^9 + O(q^10),
q - 24*q^2 + 252*q^3 - 1472*q^4 + 4830*q^5 - 6048*q^6 - 16744*q^7 + 84480*q^8 - 113643*q^9 + O(q^10)]
We calculate a basis of a massive modular forms space, in two ways. Using this module is about twice as fast as Sage’s generic code.
sage: A = ModularFormsRing(11).q_expansion_basis(30, prec=40) # long time (5s)
sage: B = ModularForms(Gamma0(11), 30).q_echelon_basis(prec=40) # long time (9s)
sage: A == B # long time
True
Check that absurdly small values of prec don’t mess things up:
sage: ModularFormsRing(11).q_expansion_basis(10, prec=5)
[1 + O(q^5), q + O(q^5), q^2 + O(q^5), q^3 + O(q^5), q^4 + O(q^5), O(q^5), O(q^5), O(q^5), O(q^5), O(q^5)]
sage.modular.modform.find_generators.basis_for_modform_space(*args)
This function, which existed in earlier versions of Sage, has now been replaced by the q_expansion_basis() method of ModularFormsRing objects.
EXAMPLE:
sage: from sage.modular.modform.find_generators import basis_for_modform_space
sage: basis_for_modform_space()
Traceback (most recent call last):
...
NotImplementedError: basis_for_modform_space has been removed -- use ModularFormsRing.q_expansion_basis()
sage.modular.modform.find_generators.find_generators(*args)
This function, which existed in earlier versions of Sage, has now been replaced by the generators() method of ModularFormsRing objects.
EXAMPLE:
sage: from sage.modular.modform.find_generators import find_generators
sage: find_generators()
Traceback (most recent call last):
...
NotImplementedError: find_generators has been removed -- use ModularFormsRing.generators()
https://www.researchgate.net/publication/9990259_Prolonged_action_potentials_from_single_nodes_of_Ranvier
# Prolonged action potentials from single nodes of Ranvier
## Abstract
The duration of action potentials from single nodes of Ranvier can be increased by several methods. Extraction of water from the node (e.g. by 2 to 3 M glycerin) causes increased durations up to 1000 msec. 1 to 5 min. after application of the glycerin the duration of the action potential again decreases to the normal value. Another type of prolonged action potential can be observed in solutions which contain K or Rb ions at concentrations between 50 mM and 2 M. The nodes respond only if the resting potential is restored by anodal current. The kinetics of these action potentials is slightly different. Their maximal durations are longer (up to 10 sec.). Like the normal action potential, they are initiated by cathodal make or anodal break. They also occur in external solutions which contain no sodium. The same type of action potentials as in KCl is found when the node is depolarized for some time (15 to 90 sec., 100 to 200 mv.) and is then stimulated by cathodal current. These action potentials require no K or Na ions in the external medium. Their maximal duration increases with the strength and duration of the preceding depolarization. The possible origin of the action potentials in KCl and after depolarization, and their relation to the normal action potentials and the negative after-potential are discussed.
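The abstract above reports pulse durations stretched from milliseconds up to about 10 s depending on conditions. This spectrum of durations can be caricatured with the FitzHugh-Nagumo model (a later abstraction, not Mueller's preparation or his kinetic model): slowing the recovery variable's rate constant stretches the pulse by roughly the same factor. A minimal sketch, with all constants illustrative:

```python
# Toy illustration (FitzHugh-Nagumo caricature, not the node-of-Ranvier
# membrane itself): slowing the recovery rate constant eps stretches the
# pulse, qualitatively like the prolonged action potentials described above.

def pulse_duration(eps, t_max=400.0, dt=0.01):
    """Return how long (model time units) v stays above 0 after a brief stimulus."""
    v, w = -1.0, -0.4                  # near the resting state
    above, t = 0.0, 0.0
    while t < t_max:
        stim = 1.0 if t < 1.0 else 0.0          # brief suprathreshold kick
        dv = v - v ** 3 / 3 - w + stim
        dw = eps * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        if v > 0:
            above += dt
        t += dt
    return above

normal = pulse_duration(eps=0.08)
slowed = pulse_duration(eps=0.008)     # 10x slower recovery -> much longer pulse
```

With eps = 0.008 the pulse outlasts the eps = 0.08 pulse by roughly an order of magnitude, mirroring how glycerin or high-K conditions stretch the nodal response without abolishing its all-or-none character.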
... Furthermore, when for instance changing the temperature, ion concentration or media viscosity, "fast" (10 m/s) pulses can be slowed down (Mueller, 1958) and "slow" pulses can be sped up (Hill and Osterhout, 1938). Importantly, the range of changes is at least 2, presumably rather 3, orders of magnitude. ... In our approach the penetration depth (Eq. ...
Article
This article attempts to review our work in the field since 2008, to put it in a coherent framework, and to take a courageous look at the bigger picture. It summarizes our approach, successes and open questions in starting from physical principles when approaching living systems. It stresses the importance of conservation laws versus material and/or structural approaches to living systems commonly taken in (molecular) biology. Indeed, we claim that the crucial system in biology isn't a molecule or a molecular class whatsoever, but the interface created by biomolecules in water. It is the physical or thermodynamic state of this 2D interface and the action of conservation laws on it which determines biological function, an approach I refer to as the “state-to-function approach”, in stark contrast to the structure-function approach. Three key ideas, all based on physical principles, particularly the 2nd law of thermodynamics and momentum conservation, are presented and experimentally confirmed. In Idea One we bridge physical state and biological function directly, e.g. by demonstrating the control of enzymatic activity and ion conductivity via thermodynamic state. Idea Two presents the role of momentum conservation in biological communication, specifically applied to the principles of nerve pulse propagation. Idea Three finally introduces a physical concept of specificity which is free of structural requirements and includes the specific interaction between pulses and enzymes. We finally discuss the extent of applicability and the universality of the mentioned ideas by presenting some impressive similarities between seemingly remote biological processes, such as cell growth and pulse propagation. We close with the question of how far a thermodynamic approach can bring insight into the concept of cell adaptation, the evolution of organs, or a deeper understanding of health and disease.
... Interestingly, the biphasic pulse shape obtained in the monolayer does not require separate proteins and accompanying ion fluxes to explain different phases (rising, falling, undershoot) as in an action potential. Despite these similarities, however, we believe that absolute velocity and pulse shape are not proper criteria for testing a new theory of nerve pulse propagation, as both vary tremendously, in cells as well as in lipid monolayers, depending on composition, excitation and state of the membrane interface [50][51][52]. Rather, it is the variation in velocity as a function of state $c_g(p, T)$, the variation in pulse shape as a function of the degree of nonlinearity $c'_g(p, T)$ and the existence of a threshold that can be explained thermodynamically [22,53,54,55], as seen in this study. But before making further such comparisons, the amplitude-velocity relation [56,57], the existence of a refractory period [58] and the behaviour of two pulses under collision need to be explored for a comprehensive understanding of these nonlinear effects. ...
Article
Full-text available
Biological membranes by virtue of their elastic properties should be capable of propagating localized perturbations analogous to sound waves. However, the existence and the possible role of such waves in communication in biology remains unexplored. Here we report the first observations of 2D solitary elastic pulses in lipid interfaces, excited mechanically and detected by FRET. We demonstrate that the nonlinearity near a maximum in the susceptibility of the lipid monolayer results in solitary pulses that also have a threshold for excitation. These experiments clearly demonstrate that the state of the interface regulates the propagation of pulses both qualitatively and quantitatively. We elaborate on the striking similarity of the observed phenomenon to nerve pulse propagation and a thermodynamic basis of cell signaling in general.
... Steady-state $V_m$-$I_K$ characteristics with a region of negative slope conductance have been observed in high [K]0 in a number of preparations including the node of Ranvier (Mueller, 1958), the giant axon of the squid (Moore, 1959; Ehrenstein and Gilbert, 1966; Lecar, Ehrenstein, Binstock, and Taylor, 1967), and the lobster giant axon used in our experiments (Julian et al., 1962 b). Fig. 15 shows plots of the intensity of the fluctuations N and of the potassium current $I_K$ recorded from a node which exhibited negative slope conductance. ...
Article
Random fluctuations in the steady-state current of neural membrane were measured in the giant lobster axon by means of a low noise voltage-clamp system. The power density spectrum S(f) of the fluctuations was evaluated between 20 and 5120 Hz and found to be of the type 1/f. Mean values of the potassium, sodium, and leakage currents I(K), I(Na), and I(L) were also measured by usual voltage-clamp techniques. Comparisons between these two types of data recorded under a number of different experimental conditions, such as presence of tetrodotoxin (TTX), substitution of calcium by lanthanum, and changes in the external concentration of potassium, have strongly suggested that the intensity of the fluctuations is related to the magnitude of I(K).
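The reported spectrum S(f) of type 1/f corresponds to a straight line of slope -1 on log-log axes. A stdlib-only sanity check on synthetic data (not the lobster-axon measurements): build a signal whose component amplitudes fall as 1/sqrt(f), compute its periodogram with a naive DFT, and recover the slope.

```python
import math, cmath

# Synthetic check (not the lobster-axon data): components with amplitude
# ~ 1/sqrt(k) give a periodogram S(k) ~ 1/k, i.e. a log-log slope near -1,
# the form reported for the membrane-noise spectrum.

N = 256
ks = list(range(1, 33))                      # frequency bins 1..32
x = [sum(math.cos(2 * math.pi * k * n / N + 0.37 * k) / math.sqrt(k) for k in ks)
     for n in range(N)]

def periodogram_bin(x, k):
    """|DFT_k|^2 / N via a naive DFT (fine for a sketch of this size)."""
    n_pts = len(x)
    X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts) for n in range(n_pts))
    return abs(X) ** 2 / n_pts

log_f = [math.log(k) for k in ks]
log_s = [math.log(periodogram_bin(x, k)) for k in ks]

# least-squares slope of log S against log f
mf = sum(log_f) / len(ks)
ms = sum(log_s) / len(ks)
slope = (sum((a - mf) * (b - ms) for a, b in zip(log_f, log_s))
         / sum((a - mf) ** 2 for a in log_f))
# slope comes out close to -1 for this 1/f construction
```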
... The evidence in Figs 9 and 10 that graded current pulses evoke PNPs in an all-or-nothing manner suggests that the PNP is a regenerative phenomenon (see Discussion). Regenerative depolarizations have also been recorded from single nodes bathed in high [K+] (Mueller, 1958). ...
Article
Full-text available
1. We have studied action potentials and after-potentials evoked in the internodal region of visualized lizard intramuscular nerve fibres by stimulation of the proximal nerve trunk. Voltage recordings were obtained using microelectrodes inserted into the axon (intra-axonal) or into the layers of myelin (peri-internodal), with the goal of studying conditions required to activate internodal K+ currents. 2. Peri-internodal recordings made using K2SO4-, KCl- or NaCl-filled electrodes exhibited a negligible resting potential (less than 2 mV), but showed action potentials with peak amplitudes of up to 78 mV and a duration less than or equal to that of the intra-axonally recorded action potential. 3. Following ionophoretic application of potassium from a peri-internodal microelectrode, the peri-internodal action potential was followed by a prolonged (hundreds of milliseconds) negative plateau. This plateau was not seen following peri-internodal ionophoresis of sodium. The prolonged negative potential (PNP) was confined to the K(+)-injected internode: it could be recorded by a second peri-internodal microelectrode inserted into the same internode, but not into an adjacent internode. 4. The peri-internodally recorded PNP was accompanied by an equally prolonged intra-axonal depolarizing after-potential, and by an increase in the conductance of the internodal axolemma. However, the K+ ionophoresis that produced the PNP had little or no detectable effect on the intra-axonally or peri-internodally recorded resting potential or action potential. These findings suggest that the PNP is generated by an inward current across the axolemma of the K(+)-injected internode, through channels opened following the action potential. 5. Following peri-internodal K+ ionophoresis a PNP could also be evoked by passage of depolarizing current pulses through an intra-axonal electrode or by passage of negative current pulses through an electrode in the K(+)-filled peri-internodal region. 
The threshold for evoking a PNP was less than the threshold for evoking an action potential, and the PNP persisted in 10 microM-tetrodotoxin. Thus the PNP is evoked by depolarization of the axolemma rather than by Na+ influx. 6. The PNP was reversibly blocked by tetraethylammonium (TEA, 2-10 mM), but was not blocked by 100 microM-3,4-diaminopyridine or 5 mM-4-aminopyridine.(ABSTRACT TRUNCATED AT 400 WORDS)
... Interestingly, Rb+ PNPs lasted much longer than K+ PNPs: in five experiments Rb+ PNPs lasted more than 10 s, and during 1 Hz stimulation the PNP lasted 794 ± 373 ms (n = 5), compared to a mean duration of 359 ms for K+ PNPs at this frequency. This PNP prolongation may be related to rubidium's ability to slow inactivation of delayed rectifier channels (Plant, 1986; see also Mueller, 1958). The mean amplitude of action potentials recorded with Rb+-filled microelectrodes (37.3 ± 14 mV, n = 10) did not differ significantly from that recorded with K+- and Na+-filled microelectrodes. ...
Article
Full-text available
1. Voltage changes associated with currents crossing the internodal axolemma were monitored using a microelectrode inserted into the myelin sheath (peri-internodal region) of rat phrenic nerve fibres. This microelectrode was also used to change the potential and the ionic environment in the peri-internodal region. 2. Following stimulation of the proximal nerve trunk, the peri-internodal electrode recorded a positive-going action potential whose amplitude increased (up to 75 mV) with increasing depth of microelectrode penetration into the myelin. The resting potential recorded by the peri-internodal electrode remained within 4 mV of bath ground. 3. Confocal imaging of fibres injected peri-internodally with the fluorescent dye Lucifer Yellow revealed a staining pattern consistent with spread of dye throughout the myelin sheath of the injected internode. 4. After ionophoresis of K+ (but not Na+) into the peri-internodal region, the action potential was followed by a prolonged negative potential (PNP) lasting hundreds of milliseconds to several seconds. The duration of the PNP increased as the frequency of stimulation decreased. PNPs could also be evoked by sub-threshold depolarization of the internodal axolemma with peri-internodally applied current pulses. In the absence of action potentials or applied depolarization PNPs sometimes appeared spontaneously. 5. Peri-internodal application of Rb+ also produced evoked and spontaneous PNPs. These PNPs had longer durations (up to 20 s) than those recorded from K(+)-loaded internodes. 6. Spontaneous action potentials sometimes appeared during the onset of the PNP, suggesting that PNPs are associated with depolarization of the underlying axon. 7. Passage of current pulses during the PNP demonstrated that the PNP is associated with an increased conductance of the pathway linking the peri-internodal recording site to the bath. 
At least part of this conductance increase occurs across the internodal axolemma, since peri-internodally recorded action potentials evoked during the PNP had larger amplitudes than those evoked before or after the PNP. 8. PNPs were suppressed by tetraethylammonium (TEA, 10-20 mM) and by 4-aminopyridine (1 mM). 9. These results suggest that the PNPs recorded in K(+)- or Rb(+)-loaded myelin sheaths are produced by a regenerative K+ or Rb+ current that enters the internodal axolemma via K+ channels opened by action potentials or subthreshold depolarizations. 10. When normal extracellular [K+] was preserved (by using Na+ rather than K+ salts in the peri-internodal electrode), action potentials recorded within the myelin sheath were instead followed by a brief, positive after-potential that was inhibited by TEA.(ABSTRACT TRUNCATED AT 400 WORDS)
... On the other hand a system consisting only of a variable Na conductance and K conductance would also give the same potential forms. The results presented in a previous paper (Mueller, 1958 a) showed that the node is able to give action potentials in KCl alone. If in this case the electromotances are assumed to be due to changes of the K conductance, one has to postulate an additional K inactivation process which would correspond to Qs. ...
Article
The kinetics of interaction between potential, chemical equilibrium, and electromotance in the excitable system of nerve are analyzed. The theoretical system has the following properties: It gives rise to two electromotances each of which depends directly on a chemical equilibrium. The equilibria are determined by the potential across the system. After a sudden potential shift the equilibria reach their new value with an exponential time course, the time constant of which is determined by the rate constants of the two reactions. The rate constants are different due to different activation energies. The two electromotances give rise to potentials of opposite sign. The total potential produced by the system is equal to the sum of the two potentials. The two equilibria are thus determined by any externally applied potential as well as by the sum of the internally produced potentials. The dependence of the equilibria on the potential is calculated from first principles. The equations which describe this system are solved by an analogue computer, which gives instantaneous solutions of the total internal potential as a function of time and any voltage applied from an external source. Comparison between recorded and computed action potentials shows excellent agreement under all experimental conditions. The electromotances might originate from a Ca++—Na+—K+ exchange at fixed negative sites in the Schwann cell.
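The scheme described here (two potential-dependent chemical equilibria relaxing exponentially at different rates, each driving an electromotance of opposite sign whose sum feeds back on both equilibria) can also be integrated numerically; the original was solved on an analogue computer. In the sketch below only the feedback structure follows the abstract; the sigmoid targets, the gain, and the time constants are invented placeholders, not the paper's fitted values.

```python
import math

# Numerical sketch of the two-electromotance scheme. Only the feedback loop
# follows the abstract; all functional forms and constants are illustrative.

def target(v, v_half=30.0, k=8.0):
    """Potential-dependent steady-state value of a chemical equilibrium."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def run(v_ext=40.0, t_stim=3.0, tau_fast=1.0, tau_slow=8.0, dt=0.01, t_max=60.0):
    q1 = q2 = target(0.0)                     # both equilibria start at rest
    trace, t = [], 0.0
    while t < t_max:
        stim = v_ext if t < t_stim else 0.0
        v = 100.0 * q1 - 100.0 * q2 + stim    # sum of opposite-sign potentials
        q1 += dt * (target(v) - q1) / tau_fast   # fast (depolarizing) reaction
        q2 += dt * (target(v) - q2) / tau_slow   # slow (repolarizing) reaction
        trace.append(v)
        t += dt
    return trace

trace = run()   # regenerative upstroke, plateau, undershoot, return to rest
```

Because the fast equilibrium tracks the potential before the slow one does, a brief applied potential regenerates into a full excursion, and the slow reaction's catch-up terminates it, which is the qualitative behaviour the analogue-computer solutions reproduced.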
Article
Duration and amplitude of normal and prolonged action potentials from single nodes of Ranvier vary as functions of potential changes induced by currents from an external source. The quantitative relations between externally applied potential and the resulting potential generated within the system are analyzed in order to obtain information about the kinetics of the electromotance,—potential,—and chemical changes taking place during excitation. The following preliminary conclusions are drawn: A depolarizing and a repolarizing process (positive and negative electromotance) increase and decrease with the potential. For a sudden potential displacement the negative electromotance reaches its new value at a faster rate than the positive electromotance. Since the individual values of the two electromotances depend on the potential and since they both generate a potential which is proportional to the difference of their absolute values, the values of either electromotance are determined by this difference as well as by any externally induced potential change.
Article
Depolarizations applied to voltage-clamped cells bathed in the normal solution disclose an initial inward current followed by a delayed outward current. The maximum slope conductance for the peak initial current is about 30 times the leak conductance, but the maximum slope conductance for the delayed current is only about 10 times the leak conductance. During depolarizations for as long as 30 sec, the outward current does not maintain a steady level, but declines first exponentially with a time constant of about 6 msec; it then tends to increase for the next few seconds; finally, it declines slowly with a half-time of about 5 sec. Concomitant with the changes of the outward current, the membrane conductance changes, although virtually no change in electromotive force occurs. Thus, the changes in the membrane conductance represent two phases of K inactivation, one rapidly developing, the other slowly occurring, and a phase of K reactivation, which is interposed between the two inactivations. In isosmotic KCl solution after a conditioning hyperpolarization there occurs an increase in K permeability upon depolarization. When the depolarizations are maintained, the increase of K permeability undergoes changes similar to those observed in the normal medium. The significance of the K inactivation is discussed in relation to the after-potential of the nerve cells.
Article
The extra impulses of the slowly adapting stretch receptor neuron of the lobster were evoked by different kinds of after-depolarizations which appeared with rising temperature. One of them, called slow after-depolarization, was studied by recording extra- and intracellularly at different cell regions under varying experimental conditions. The slow after-depolarization developed after the first full-sized action potential of the multiple spike discharge. With rising temperature its amplitude increased from about 10 mV to 30–40 mV (from the resting membrane potential), and its duration from about 25 msec to several seconds. Depending on its length the slow after-depolarization was able to evoke between one and several hundred extra spikes. There was an inverse relationship between the safety factor for propagation of the active process along the somato-dendritic membrane and the duration and, to some extent, the amplitude of the slow after-depolarization. In any discharge a premature termination of the slow after-depolarization could be provoked by a short pulse of intracellularly injected anodal current or by activation of the inhibitory system of the stretch receptor organ. The membrane resistance was reduced during the course of the slow after-depolarization. The findings support the idea of an asynchronous activation and subsequent asynchronous activity in action current generating sites of the dendritic membrane.
Article
In electroplaques of several gymnotid fishes hyperpolarizing or depolarizing currents can evoke all-or-none responses that are due to increase in membrane resistance as much as 10- to 12-fold. During a response the emf of the membrane shifts little, if at all, when the cell either is at its normal resting potential, or is depolarized by increasing external K, and in the case of depolarizing responses when either Cl or an impermeant anion is present. Thus, the increase in resistance is due mainly, or perhaps entirely, to decrease in K permeability, termed depolarizing or hyperpolarizing K inactivation, respectively. In voltage clamp measurements the current-voltage relation shows a negative resistance region. This characteristic accounts for the all-or-none initiation and termination of the responses demonstrable in current clamp experiments. Depolarizing inactivation is initiated and reversed too rapidly to measure with present techniques in cells in high K. Both time courses are slowed in cells studied in normal Ringer's. Once established, the high resistance state is maintained as long as an outward current is applied. Hyperpolarizing inactivation occurs in normal Ringer's or with moderate excess K. Its onset is more rapid with stronger stimuli. During prolonged currents it is not maintained; i.e., there is a secondary increase in conductance. Hyperpolarizing inactivation responses exhibit a long refractory period, presumably because of persistence of this secondary increase in conductance.
Article
The effects of change in the osmotic pressure of the bathing medium on central neural activity in the isolated, hemisected frog spinal cord were studied. The bathing medium was made hyperosmotic by either increasing all constituents (except NaHCO3) proportionally or by adding a nonelectrolyte, mannitol; and hyposmotic by decreasing all constituents (except NaHCO3) proportionally. Changes in osmotic pressure depressed the monosynaptic lateral column-evoked ventral root response (LC-VRR). Depression occurred with changes of only 25 mosmol/kg H2O in bathing medium osmolality. This osmotic effect was fully expressed within the first 5 min of exposure. The osmotic effect was probably not due to a change in the excitability of the motoneurons as there was no change in the antidromic field potential recorded in the ventral horn following stimulation of the ventral root after exposure to the nonisosmotic bathing media that had depressed the LC-VRR.Exposure to nonisosmotic bathing media produced an increase in the excitability of the lateral column terminals as shown by an increase in the lateral column field potentials antidromically evoked by stimulation in the ventral horn. This increase mirrored the depression in the LC-VRR. The depression in the LC-VRR and the augmentation of the MN-LC are reversibly blocked by picrotoxin, a gamma-aminobutyric acid (GABA) antagonist. This suggests that the osmotic effect on isolated frog spinal cord may be due either to the release of endogenous GABA or perhaps to a change in extracellular K+ concentration.
Article
The interesting recent findings in the frog node of Ranvier of an action potential in isosmotic potassium chloride by Mueller [1], and of two stable states in 20–40 mM potassium chloride by Stämpfli [2], raise the question as to whether or not similar phenomena are to be found in the squid axon membrane.
Article
When bimolecular lipid membranes adsorb appropriate, as yet unidentified, molecules obtained from various biological sources, their resistance falls from 10^8 Ω cm^2 to 10^3–10^5 Ω cm^2 and they then become “active” or “electrically excitable” in the sense that their resistance changes reversibly and regeneratively between two definite values in response to suprathreshold applied voltages. The detailed kinetics of these resistance changes are similar to the “action potential” of frog nerve in 0.1 M KCl and are identical to those found in the marine alga Valonia and electronic semiconductor tunnel diodes. In this paper, the kinetic and steady-state aspects of these resistance changes are presented, and a theory is developed which accounts for the observed phenomena. This theory is compared with the theory of tunnel diodes.
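The two-value, regenerative resistance change compared above to a tunnel diode can be sketched as a state machine with hysteresis. The switching thresholds below are invented for the sketch; only the two resistance values take their order of magnitude from the abstract.

```python
# Two-state resistance with hysteresis (thresholds V_ON/V_OFF are invented;
# the resistance values take only their order of magnitude from the text).

R_HIGH, R_LOW = 1e8, 1e4      # ohm cm^2, before and after "activation"
V_ON, V_OFF = 120.0, 40.0     # hypothetical switching thresholds, mV

def sweep(voltages):
    """Return the resistance state observed at each voltage of a sweep."""
    r = R_HIGH
    states = []
    for v in voltages:
        if r == R_HIGH and v >= V_ON:
            r = R_LOW             # regenerative switch at the upper threshold
        elif r == R_LOW and v <= V_OFF:
            r = R_HIGH            # switch back at the lower threshold
        states.append(r)
    return states

# Up-then-down sweep: at 80 mV the state differs between the two limbs.
vs = list(range(0, 161, 10)) + list(range(160, -1, -10))
states = sweep(vs)
first_80 = vs.index(80)                       # rising limb: still high-R
last_80 = len(vs) - 1 - vs[::-1].index(80)    # falling limb: still low-R
```

The mismatch between the two visits to 80 mV is the hysteresis loop that gives the membrane, like a tunnel diode, two stable resistance values over a range of applied voltage.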
Article
Frog Ranvier's nodes immersed in potassium-rich media are capable of producing action potentials when the membrane is hyperpolarized by anodal currents [1]. This observation was confirmed in frog sartorius muscles by one of us, who found that action potentials were restored by anodal polarization in frog sartorius muscles which depolarized in calcium-free media [2]. The so-called 'hyperpolarizing response' was also observed in muscle fibres soaked in both potassium-rich and calcium-free media. This experimental evidence seems to suggest that removal of calcium from the membrane in these two solutions might be responsible for the loss of membrane excitability, as suggested in a previous communication [2]. The rate of output of calcium-45 loaded on muscles, therefore, was studied in both calcium-free and potassium-rich solutions.
Article
Full-text available
This series of three papers presents data on a system of neurons, the large supramedullary cells (SMC) of the puffer, Spheroides maculatus, in terms of the physiological properties of the individual cells, of their afferent and efferent connections, and of their interconnections. Some of these findings are verified by available anatomical data, but others suggest structures that must be sought for in the light of the demonstration that these cells are not sensory neurons. Analysis on so broad a scale was made possible by the accessibility of the cells in a compact cluster on the dorsal surface of the spinal cord. Simultaneous recordings were made intracellularly and extracellularly from individual cells or from several, frequently with registration of the afferent or efferent activity as well. The passive and active electrical properties of the SMC are essentially similar to those of other neurons, but various response characteristics have been observed which are related to different excitabilities of different parts of the neuron, and to specific anatomical features. The SMC produce spikes to direct stimuli by intracellular depolarization, or by indirect synaptic excitation from many afferent paths, including tactile stimulation of the skin. Responses that were evoked by intracellular stimulation of a single cell cause an efferent discharge bilaterally in many dorsal roots, but not in the ventral. Sometimes several distinct spikes occurred in the same root, and behaved independently. Thus, a number of axons are efferent from each neuron. They are large unmyelinated fibers which give rise to the elevation of slowest conduction in the compound action potential of the dorsal root. A similar component is absent in the ventral root action potential. Antidromic stimulation of the axons causes small potentials in the cell body, indicating that the antidromic spikes are blocked distantly to the soma, probably in the axon branches. 
The failure of antidromic invasion is correlated with differences in excitability of the axons and the neurite from which they arise. As recorded in the cell body, the postsynaptic potentials associated with stimulation of afferent fibers in the dorsal roots or cranial nerves are too small to discharge the soma spike. The indirect spike has two components, the first of which is due to the synaptically initiated activity of the neurite and which invades the cell body. The second component is then produced when the soma is fired. The neurite impulse arises at some distance from the cell body and propagates centrifugally as well as centripetally. An indirect stimulus frequently produces repetitive spikes which are observed to occur synchronously in all the cells examined at one time. Each discharge gives rise to a large efferent volley in each of the dorsal roots and cranial nerves examined. The synchronized responses of all the SMC to indirect stimulation occur with slightly different latencies. They are due to a combination of excitation by synaptic bombardment from the afferent pathways and by excitatory interconnections among the SMC. Direct stimulation of a cell may also excite all the others. This spread of activity is facilitated by repetitive direct excitation of the cell as well as by indirect stimulation.
Article
Recent experiments have shown that abnormal action potentials can be produced in (sodium-free) potassium-rich solutions [1]. In order to get more information on the ions which cross the membrane during this type of activity, the relation between the potassium concentration and the height of the action potential was examined.
Article
The crustacean single nerve fiber gives rise to trains of impulses during a prolonged depolarizing stimulus. It is well known that the alkaloid veratrine itself causes a prolonged depolarization; and consequently it was of interest to investigate the effect of this chemically produced depolarization on repetitive firing in the single axon and compare it with the effect of depolarization by an applied stimulating current or by a potassium-rich solution. It was found that veratrine depolarization, though similar in some respects to a potassium-rich depolarization or a depolarizing current effect, was in many respects quite different.
Article
Cultured chick embryonic heart cells became partially depolarized and stopped beating when [K+]0 reached 50–70 mM by addition of KCl to the bathing medium. However, some cells beat spontaneously for many days when cultured chronically in media containing 65, 86, and 98 mM [K+]0 ([Na+]0 correspondingly reduced). In high [K+]0, neighboring cells contracted independently of each other, whereas in normal K+ (2.7 mM) they contracted synchronously. The cells contracted in response to mechanical prodding and to externally-applied stimulating current about tenfold greater than normal intensity. Intracellularly-recorded resting potentials (Em) were 1–11 mV and agree with values obtained from cells acutely exposed to high K+. Thus, [K+]i is not significantly changed when cells are chronically bathed in an elevated K+ solution. Large resting and action potentials rapidly returned following change to a low [K+]0 medium. Neither depolarizing nor hyperpolarizing current pulses elicited action potentials in high K+. Most cells stopped beating immediately after impalement. Of the few cells that continued to contract, some displayed small oscillations of Em (2–4 mV) synchronous with beating; however, no oscillations were observed in most cells. Hence, contraction of cells in high K+ need not be controlled by changes in Em. Ba2+ and Sr2+ (5–10 mM) enhanced excitability, allowing anodal-break action potentials to occur. Ba2+ caused no change in resting Em but increased input resistance; Sr2+ rapidly hyperpolarized by 8–25 mV and produced spontaneous action potentials. Since Em = EK in high [K+]0, the Sr2+-induced hyperpolarization may reflect an increased rate of ion pumping.In conclusion, the membrane properties of cultured heart cells chronically exposed to high K+ solutions are no different than those of cells acutely exposed. With either acute or chronic exposure to high K+ solutions, many K+-depolarized cells contract cyclically without accompanying changes in membrane potential.
Article
Membrane potentials of single Ranvier nodes of frog nerve fibres were measured. 1. The spike height ($V_S$), its maximum rate of rise ($\dot V_A$), and the rate of fall of the action potential are reduced. 2. The duration of the action potential of motor fibres of Rana esculenta is lengthened by a factor of more than 3; in sensory fibres of Rana esculenta and motor fibres of Xenopus laevis the factor is about 2. 3. The steady-state relation between $V_S$ or $\dot V_A$ and the membrane potential is not affected. 4. Following a depolarizing pre-pulse of increasing duration, $\dot V_A$ decreases more slowly. Also, the rate of recovery of $\dot V_A$ during the relative refractory period is decreased. 5. The delayed rectification of the membrane in low-sodium solution is reduced and develops more slowly. 6. The membrane resistance increases in normal and potassium-rich Ringer's solutions. The amplitude of potassium depolarization is reduced; no hysteresis of the current-voltage curve is observed in potassium-rich solutions. 7. It is concluded that the prolongation of the action potential is due to the reduced potassium permeability and to the decreased rate of inactivation of the sodium permeability.
Chapter
The ionic theory; Saltatory conduction; Heat and metabolic measurements; The motor endplate
Article
Single stable bimolecular lipid and proteolipid membranes having the inert physical properties of cell membrane can be reconstituted in saline solution. After adsorption of appropriate molecules, these membranes become electrically excitable.
Article
Summary The effect of hypertonic solutions on the action potential of single myelinated nerve fibres is described. Hypertonicity mainly changes the duration of the action potential: Short action potentials obtained in normal Ringer's solution at room temperature are prolonged, long action potentials due to 0.1–1.0 mM NiCl2-Ringer's solution and low temperature are shortened by hypertonicity. The changes in action potential duration are accompanied by small changes in action potential amplitude. In addition, hypertonicity reduces the depolarization produced by 20 mM KCl; inactivation of the sodium-carrying system under cathodal polarization is enhanced.
Article
This publication is concerned with the question of whether excitability and the action potential of the Ranvier node are based upon stationary electrical properties of the excitable membrane. To answer it, the stationary current-voltage behaviour of Ranvier nodes was investigated by means of impressed voltage. The following results were obtained: 1. The stationary current-voltage characteristic of the Ranvier node has a region of negative resistance between 10 and 40 mV depolarization voltage. 2. Because this negative resistance can be shown independently of the ionic milieu necessary for normal excitation, the negative slope characteristic seems to be a stationary material property of the membrane. 3. So far the stationary negative slope characteristic can be demonstrated only in depolarized membranes, since only in the depolarized state does the inward current necessary for recording the negative resistance not have to be drawn from the membrane batteries, but can instead be delivered from the external circuit. 4. Depolarization and repolarization thresholds, as well as the amplitude of K-action potentials, can be referred directly to the current-voltage characteristic recorded during K-depolarization. 5. Physical and physico-chemical influences lead to corresponding changes of the normal action potential and of the stationary characteristic. These qualitative results were obtained from recordings performed alternately in the same preparation. They agree with the assumption that the normal action potential too depends on stationary properties of the membrane.
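The "negative resistance" region can be illustrated with a toy stationary current-voltage curve. The cubic below is purely illustrative (its roots and scale are made-up numbers, not measured values); it just shows how the slope dI/dV of an N-shaped characteristic turns negative over a middle window of depolarization, analogous to the reported 10–40 mV range.

```python
def iv(v_mV):
    """Toy N-shaped stationary I-V characteristic (arbitrary units).

    Cubic with zeros at 0, 30 and 60 mV; the slope is negative in a
    middle window of depolarization, mimicking a negative-resistance region.
    """
    return 1e-3 * v_mV * (v_mV - 30.0) * (v_mV - 60.0)

def slope(v_mV, h=1e-3):
    """Numerical dI/dV by central difference."""
    return (iv(v_mV + h) - iv(v_mV - h)) / (2 * h)

print(slope(5.0))    # positive: ordinary resistance
print(slope(30.0))   # negative: the "negative resistance" window
print(slope(55.0))   # positive again
```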
Article
A solution of novocain in small concentrations (1·10⁻⁸ and 1·10⁻⁷) does not cause changes of action potential (AP) amplitude or of the threshold of depolarization (ΔV) in the node of Ranvier of an isolated frog nerve fiber. A novocain concentration of 5·10⁻⁵ causes a reduction of response amplitude and a rise of the depolarization threshold. Responses are graded, i.e., they increase with an increase in the strength of the stimulus. Novocain solution in a concentration of 1·10⁻⁴ depresses the AP in the node and causes a rise of the threshold of depolarization of up to 200%. Changes caused by novocain are usually not completely reversible, even after long wash-off. Electrical activity depressed by novocain in the node of Ranvier is restored by the action of a direct-current anode. If the anode voltage is sufficiently large, the AP is completely restored and the threshold of depolarization returns to the same level as before. The effect of a direct-current cathode is similar to that of novocain: a cathodal current causes a sharp fall of the AP and a considerable rise of the threshold of depolarization. The possible mechanism of the novocain action and of the restorative effect of the direct-current anode is discussed.
Article
This report presents general principles of operations in neuron networks and is composed of two parts. One is concerned with the theoretical aspects of operations in neuron nets; the other is concerned with the application of some of these principles to the particular problem of speech recognition by artificial neurons. The term "neuron" is used without distinction for real neurons (those found in the brain) and for artificial neurons (those made from electronic components). It is possible to construct artificial neurons which are, as far as input-output relations are concerned, complete analogs of their biological counterpart (Mueller, 1958). The networks shown in the figures in this report have been assembled and tested using artificial neurons.
Article
1. Voltage clamp measurements were performed on single myelinated nerve fibres of the frog Xenopus laevis. 2. During long-lasting depolarizations the potassium current decayed in a fast phase with a time constant of about 0.6 sec and a following slow phase with a time constant between 3.6 sec (V = 0) and 20 sec (V = 100 mV). 3. The decay of the potassium current was the result of an inactivation of the potassium permeability and not of a shift of the potassium equilibrium potential, as shown by experiments in isotonic KCl solution. 4. At a hyperpolarization of –20 mV the potassium inactivation was fully removed. It remained incomplete even at large depolarizations. The steady-state inactivation curve was S-shaped but not symmetrical. 5. The experimental results could be described by extending the Hodgkin-Huxley equations, introducing two terms of potassium inactivation.
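The description above — a fast and a slow inactivating fraction plus an incompletely inactivating remainder — amounts to modelling the potassium current during a maintained depolarization as a sum of two exponentials on top of a steady level. In the sketch below the amplitudes are illustrative placeholders; only the time constants (0.6 s fast; 3.6–20 s slow, here 3.6 s as at V = 0) are taken from the abstract.

```python
import math

def i_k(t_s, i_ss=0.2, a_fast=0.5, tau_fast=0.6, a_slow=0.3, tau_slow=3.6):
    """Normalized potassium current during a long depolarization.

    Decays from 1.0 at t = 0 toward an incompletely inactivated
    steady level i_ss, with fast and slow exponential phases.
    Amplitudes (i_ss, a_fast, a_slow) are assumed, not measured.
    """
    return (i_ss
            + a_fast * math.exp(-t_s / tau_fast)
            + a_slow * math.exp(-t_s / tau_slow))

print(i_k(0.0))    # 1.0: full current at pulse onset
print(i_k(60.0))   # ~0.2: inactivation remains incomplete
```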
Article
Macromolecular crowding is known to modulate chemical equilibria, reaction rates, and molecular binding events, both in aqueous solution and at lipid bilayer membranes, natural barriers which enclose the crowded environments of cells and their subcellular compartments. Previous studies on the effects that macromolecular crowding in aqueous compartments have on conduction through membranes have focused on single-channel ionic conduction through previously formed pores at thermodynamic equilibrium. Here, the effects of macromolecular crowding on the mechanism of pore formation itself were studied using the droplet interface bilayer (DIB) technique with the voltage-dependent pore-forming peptide alamethicin (alm). Macromolecular crowding was varied using 8 kDa molecular weight polyethylene glycol (PEG8k) or 500 kDa dextran (DEX500k) in the two aqueous droplets on both sides of the bilayer membrane. In general, voltage thresholds for pore formation in the presence of crowders in the droplets decreased compared to their values in the absence of crowders, due to excluded volume effects, water binding by PEG, and changes in the ordering of water molecules and hydrogen-bonding interactions involving the polar lipid headgroups. In addition, asymmetric crowder loading (e.g., PEG8k/DEX500k on either side of the membrane) resulted in transmembrane osmotic pressure gradients that either enhanced or degraded electric field induced insertion of alm monomers into the membrane and the subsequent formation of conductive pores.
Article
1. After-potentials following a single spike are measured on Ranvier nodes of isolated frog nerve fibres under various experimental conditions. 2. In ordinary Ringer's solution the spike is followed by a small, short-lasting after-depolarization which has a mean amplitude of 4 mV and declines exponentially with a half-time of 0.1–1.5 msec. 3. The amplitude of the after-depolarization is increased by application of a constant anodal polarizing current; the relation of after-depolarization to resting potential is linear with a mean slope of 0.83. Cathodal polarization, on the other hand, leads to an after-hyperpolarization which reaches a maximum with increasing strength of polarizing current. 4. Non-polarized nodes in K+-free Ringer's solution show an after-hyperpolarization with a mean amplitude of −3.4 mV. 5. Under slight cocaine narcosis cathodal impulses of short duration are followed by a transient hyperpolarization. 6. Nodes in Ringer's solution with 10–40 mM KCl develop long-lasting after-depolarizations of great amplitude, if polarized with an anodal current of sufficient strength and duration. The time course of these after-depolarizations is almost linear; the rate of potential decline depends on the strength of the polarizing current and can be reduced to zero by suitable setting of the current strength. 7. Reduction of [Na+] and [Cl−] in the external solution and application of 2,4-dinitrophenol are without effect on the after-potentials of polarized and non-polarized nodes. 8. It is concluded that a short period of increased K+ permeability and increased Na+ permeability outlasts the spike; in order to explain the long-lasting after-depolarizations of anodally polarized nodes in K+-rich solutions, special reference is made to the N-shaped current-voltage relation which is observed under these conditions.
Article
In single Ranvier nodes of isolated myelinated nerve fibres bathed in K-rich solutions, long-lasting action potentials can be elicited after anodal repolarization. The present paper describes experiments which suggest that the excitable membrane contains a K-transport system analogous to the Na-transport system, and that these action potentials are caused by an influx of potassium down its electrochemical gradient. The evidence is as follows: 1. The amplitude of the action potentials increases linearly with the logarithm of the external K concentration. 2. The changes in membrane resistance during the action potential are consistent with the assumption that the potential shifts are caused by increases and decreases of the K permeability. 3. Neither Cl nor Na in the extracellular solution is necessary for the initiation and termination of the action potentials. The following differences exist between the Na- and the K-transport system: 1. After depolarization of the membrane, the Na conductance is inactivated completely with a half-time of a few milliseconds, whereas the K conductance is inactivated only incompletely, with a half-time of 15 sec. 2. The Na-transport system is blocked by cocaine hydrochloride to a greater extent than the K-transport system. 3. The chronaxie is 0.1–0.3 msec for Na action potentials and 1–4 msec for K action potentials.
Article
The passive ion transport of Na and K through the excitable membrane at the node of Ranvier differs in a number of properties: rate of activation, extent and rate of inactivation, and sensitivity to drugs (cf. Lüttgau 1960). Measuring these properties in the presence of other ions in the extracellular fluid allows one to decide whether the ions in question use the Na or the K transport pathway: 1. Rb and Cs behave like K; differences exist in the extent and rate of inactivation, and two possible explanations are discussed. 2. Ammonium ions behave partly like Na and partly like K. After anodal repolarization, action potentials with a long-lasting after-depolarization can be elicited. The analysis makes it likely that during the action potential the NH4 ions diffuse through the Na channels, and during the after-depolarization through the K channels.
Article
1. Slow muscle fibres in isotonic potassium sulphate saline could be easily repolarized to -90 mV. From this membrane potential a regenerative response could be elicited with short depolarizing pulses. 2. This response is blocked by TEA, suggesting that potassium is the main ion involved. 3. In the presence of TEA, a transient depolarization is recorded when the steady hyperpolarization is withdrawn. This anode-break response is dependent upon the external calcium and is blocked by cobalt, suggesting that it is due to a calcium conductance. 4. The membrane conductance change was continuously recorded with short pulses at the end of the hyperpolarization. The membrane conductance decayed with at least two components, with average t1/2 of 1-2 and 6-8 sec. TEA blocked the slow component, and the fast one was dependent upon calcium and was blocked by cobalt.
Article
The effects of replacement of external and internal K+ ions by Rb+ ions on the two fast components (gf1 and gf2) and slow component (gs) of the K+ conductance (gK) in frog nodes of Ranvier were investigated under voltage- and current-clamp conditions. Fast and slow components of gK were separated by double exponential fits to tail currents following long depolarizing pre-pulses, or by the use of short pre-pulses which activate little gs. gs was also isolated by 1 mM 4-aminopyridine (4-AP). gf1 and gf2 were distinguished in the fast conductance-voltage curve by their different voltage dependences, gf1 activating at more negative potentials. Reversal potential measurements indicated that Rb+ is less permeant than K+, and measurements in 4-AP indicated that the slow component has a lower Rb+ permeability than the fast. In a 50% K+, 50% Rb+ mixture PRb/PK was less than that in 100% Rb+, suggesting that PRb/PK is mole-fraction dependent. With external Rb+ the current-voltage relation was shifted by ca. -10 mV compared to that in K+, an effect on gf (= gf1 + gf2). The slow conductance (gs) and, under similar conditions, the Na+ current-voltage relation were not shifted. gf, calculated from inward tail currents, was reduced with external Rb+ at potentials where gf2 was activated. Instantaneous current-voltage relations following pre-pulses which activate different components of gf confirmed these observations. In K+ the instantaneous current-voltage relation showed some inward rectification which was largely abolished with Rb+. Comparison of gf calculated from outward (go) and inward (gi) currents confirmed this, and showed that inward gf2 was reduced with Rb+ such that go = gi. Outward currents were little affected by external Rb+. External Rb+ slowed the fast inward tail current following all pre-pulses which activate gf, but had no effect on the time course of the slow component of the tail current. Regenerative responses, which occur in high [K+] (+300 nM tetrodotoxin) solutions in current clamp, did not repolarize in Rb+. Voltage-clamp experiments showed that inactivation of inward currents is slowed when Rb+ is the charge carrier. Replacement of internal K+, by application of Rb+ to the cut ends of the fibre, shifted the reversal potential to more positive potentials but had no effect on the conductance or kinetics. External Rb+ has a large number of effects on inward currents, but little effect on outward currents. Internal Rb+ had little effect on outward or inward currents. (ABSTRACT TRUNCATED AT 400 WORDS)
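Separating fast and slow conductance components "by double exponential fits to tail currents" can be sketched with classic exponential peeling: fit the slow component on late samples where the fast one has died away, subtract it, then fit the fast component on the early residual. The synthetic tail below uses made-up amplitudes and time constants, not the frog-node data.

```python
import math

def fit_exponential(ts, ys):
    """Least-squares line through (t, ln y); returns (amplitude, tau)."""
    n = len(ts)
    ls = [math.log(y) for y in ys]
    mt = sum(ts) / n
    ml = sum(ls) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, ls))
             / sum((t - mt) ** 2 for t in ts))
    return math.exp(ml - slope * mt), -1.0 / slope

# Synthetic tail current: fast (tau 3 ms) plus slow (tau 40 ms) components.
ts = [i * 0.01 for i in range(5001)]          # 0 .. 50 ms
tail = [5.0 * math.exp(-t / 3.0) + 2.0 * math.exp(-t / 40.0) for t in ts]

# 1) slow component from late times, where the fast part is negligible
late = [(t, y) for t, y in zip(ts, tail) if t >= 30.0]
a_slow, tau_slow = fit_exponential([t for t, _ in late], [y for _, y in late])

# 2) fast component from the early residual after subtracting the slow fit
early = [(t, y - a_slow * math.exp(-t / tau_slow))
         for t, y in zip(ts, tail) if t <= 5.0]
a_fast, tau_fast = fit_exponential([t for t, _ in early], [y for _, y in early])

print(tau_fast, tau_slow)  # close to the true 3 ms and 40 ms
```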
Article
This report constitutes the first part of an investigation into the nature of the material which induces electrical excitability in experimental bimolecular lipid membranes. The excitability-inducing material (EIM) is released upon growth of Aerobacter cloacae ATCC 961 in a defined medium containing only low molecular weight substances. It has been isolated by adsorption on Kieselgel and subsequent elution with 1% ammonia solution. By TEAE-cellulose column chromatography of EIM solution, protein and RNA moieties, the only detectable components of EIM, have been separated. The separation of moieties results in loss of activity. Partial reactivation can be obtained by mixing 1% solutions of the protein and RNA. This reactivation does not occur when 0.1% solutions (or lower) are mixed. Both moieties obtained by TEAE-cellulose column chromatography show gross homogeneity by electrophoretic and sedimentation studies. The basic properties of the protein moiety, not evident with total EIM, become apparent after its separation from RNA. The chemical, chromatographic and electrophoretic studies reported in this paper raise the possibility that a ribonucleoprotein complex constitutes the functional entity of the material. The paper also presents standardization of the EIM activity assay.
Article
Action potentials are constructed step by step in bimolecular lipid membranes by adjusting the membrane composition, ionic gradients, pH, temperature and the concentration of two proteinaceous adsorbates: an excitability inducing material (EIM) of mol. wt less than 10⁵ and protamine sulfate. They show most bioelectric kinetic phenomena and generally conform to the Hodgkin and Huxley theory for action potentials in nerve. The evidence indicates that the system consists of two ion selective channel types. One, produced by EIM, develops a cationic e.m.f.; the other, resulting from a complex between EIM and protamine, develops an anionic e.m.f. Both contain a double gating mechanism showing two negative resistances which are controlled by the voltage and by chemical factors including membrane lipid composition, ionic strength, pH, some alkaloids, acridine and phenothiazine derivatives and divalent ions. The action potentials result from the interplay of e.m.f.'s and resistances of the two channel populations, each acting as a parallel battery and a voltage dependent variable resistive load on the other, coupled via the membrane potential. Some possible molecular mechanisms responsible for the conductance changes are discussed.
Article
The formation of single, stable bimolecular lipid and proteolipid [1] membranes up to 10 mm² in area has been accomplished routinely in 0.1 M saline solution by methods analogous to the formation of Hooke-Newton 'secondary black' in air soap films [2-4]. By forming such a membrane between two compartments filled with saline its transverse electrical properties can be measured, and controlled chemical investigations can be undertaken.
Article
In Nitella the action curve has two peaks, apparently because both protoplasmic surfaces (inner and outer) are sensitive to K(+). Leaching in distilled water makes the outer surface insensitive to K(+). We may therefore expect the action curve to have only one peak. This expectation is realized. The action curve thus obtained resembles that of Chara which has an outer protoplasmic surface that is normally insensitive to K(+). The facts indicate that the movement of K(+) plays an important part in determining the shape of the action curve.
Article
The effect of direct current, of controlled direction and density, across the protoplasm of impaled cells of Halicystis, is described. Inward currents slightly increase the already positive P.D. (70 to 80 mv.) in a regular polarization curve, which depolarizes equally smoothly when the current is stopped. Outward currents of low density produce similar curves in the opposite direction, decreasing the positive P.D. by some 10 or 20 mv. with recovery on cessation of flow. Above a critical density of outward current, however, a new effect becomes superimposed; an abrupt reversal of the P.D. which now becomes 30 to 60 mv. negative. The reversal curve has a characteristic shape: the original polarization passes into a sigmoid reversal curve, with an abrupt cusp usually following reversal, and an irregular negative value remaining as long as the current flows. Further increases of outward current each produce a small initial cusp, but do not greatly increase the negative P.D. If the current is decreased, there occurs a threshold current density at which the positive P.D. is again recovered, although the outward current continues to flow. This current density (giving positivity) is characteristically less than that required to produce reversal originally, giving the process a hysteretic character. The recovery is more rapid the smaller the current, and takes only a few seconds in the absence of current flow, its course being in a smooth curve, usually without an inflection, thus differing from the S-shaped reversal curve. The reversal produced by outward current flow is compared with that produced by treatment with ammonia. Many formal resemblances suggest that the same mechanism may be involved. Current flow was therefore studied in conjunction with ammonia treatment. Ammonia concentrations below the threshold for reversal were found to lower the threshold for outward currents. 
Subthreshold ammonia concentrations, just too low to produce reversal alone, produced permanent reversal when assisted by a short flow of very small outward currents, the P.D. remaining reversed when the current was stopped. Further increases of outward current, when the P.D. had been already reversed by ammonia, produced only small further increases of negativity. This shows that the two treatments are of equivalent effect, and mutually assist in producing a given effect, but are not additive in the sense of being superimposable to produce a greater effect than either could produce by itself. Since ammonia increases the alkalinity of the sap, and presumably of the protoplasm, when it penetrates, it is possible that the reversal of P.D. by current flow is also due to change of pH. The evidence for increased alkalinity or acidity due to current flow across phase boundaries or membranes is discussed. While an attractive hypothesis, it meets difficulties in H. ovalis where such pH changes are both theoretically questionable and practically ineffective in reversing the P.D. It seems best at the present time to assign the reversal of P.D. to the alteration or destruction of one surface layer of the protoplasm, with reduction or loss of its potential, leaving that at the other surface still intact and manifesting its oppositely directed potential more or less completely. The location of these surfaces is only conjectural, but some evidence indicates that it is the outer surface which is so altered, and reconstructed on recovery of positive P.D. This agrees with the essentially all-or-none character of the reversal. The various treatments which cause reversal may act in quite different ways upon the surface.
Article
The effect of direct current flow upon the potential difference across the protoplasm of impaled Valonia cells was studied. Current density and direction were controlled in a bridge which balanced the ohmic resistances, leaving the changes (increase, decrease, or reversal) of the small, normally negative, bioelectric potential to be recorded continuously, before, during, and after current flow, with a string galvanometer connected into a vacuum tube detector circuit. Two chief states of response were distinguished: State A.-Regular polarization, which begins to build up the instant current starts to flow, the counter E.M.F. increasing most rapidly at that moment, then more and more slowly, and finally reaching a constant value within 1 second or less. The magnitude of counter E.M.F. is proportional to the current density with small currents flowing in either direction across the protoplasm, but falls off at higher density, giving a cusp with recession to lower values; this recession occurs with slightly lower currents outward than inward. Otherwise the curves are much the same for inward and outward currents, for different densities, for charge and discharge, and for successive current flows. There is a slight tendency for the bioelectric potential to become temporarily positive following these current flows. Records in the regular state (State A) show very little effect of increased series resistance on the time constant of counter E.M.F. This seems to indicate that a polarization rather than a static capacity is involved. State B.-Delayed and non-proportional polarization, in which there is no counter E.M.F. developed with small currents in either direction across the protoplasm, nor with very large outward currents. But with inward currents a threshold density is reached at which a counter E.M.F. rather suddenly develops, with a sigmoid curve rising to high positive values (200 mv. or more). There is sometimes a cusp, after which the P.D. remains strongly positive as long as the current flows. It falls off again to negative values on cessation of current flow, more rapidly after short flows, more slowly after longer ones. The curves of charge are usually quite different in shape from those of discharge. Successive current flows of threshold density in rapid succession produce quicker and quicker polarizations, the inflection of the curve often becoming smoothed away. After long interruptions, however, the sigmoid curve reappears. Larger inward currents produce relatively little additional positive P.D.; smaller ones on the other hand, if following soon after, have a greatly increased effectiveness, the threshold for polarization falling considerably. The effect dies away, however, with very small inward currents, even as they continue to flow. Over a medium range of densities, small increments or decrements of continuing inward current produce almost as regular polarizations as in State A. Temporary polarization occurs with outward currents following soon after the threshold inward currents, but the very flow of outward current tends to destroy this, and to decondition the protoplasm, again raising the threshold, for succeeding inward flows. State A is characteristic of a few freshly gathered cells and of most of those which have recovered from injuries of collecting, cleaning, and separating. It persists a short time after such cells are impaled, but usually changes over to State B for a considerable period thereafter. Eventually there is a reappearance of regular polarization; in the transition there is a marked tendency for positive P.D. to be produced after current flow, and during this the polarizations to outward currents may become much larger than those to inward currents. In this it resembles the effects of acidified sea water, and of certain phenolic compounds, e.g. p-cresol, which produce State A in cells previously in State B.
Ammonia on the other hand counteracts these effects, producing delayed polarization to an exaggerated extent. Large polarizations persist when the cells are exposed to potassium-rich solutions, showing it is not the motion of potassium ions (e.g. from the sap) which accounts for the loss or restoration of polarization. It is suggested that inward currents restore a protoplasmic surface responsible for polarization by increasing acidity, while outward currents alter it by increasing alkalinity. Possibly this is by esterification or saponification respectively of a fatty film. For comparison, records of delayed polarization in silver-silver chloride electrodes are included.
Article
An initial report is made on the electrocardiogram of a single heart muscle cell in vivo. The potential variations obtained by electrodes placed on opposite sides of the membrane of a heart muscle fibre are 50 to 100 times as large as those recorded by standard limb leads. The observations support the assumption that during activation the cell interior becomes positive with respect to its surrounding (depolarization, followed by polarization reversal). Induced alterations in shape and form of the action current of a single heart muscle fiber should provide further insight into the nature of the normal and abnormal electrocardiogram.
Bau und Funktion markhaltiger Nervenfasern
• R. Stämpfli
Stämpfli, R., 1952, Bau und Funktion markhaltiger Nervenfasern, Ergebn. Physiol., 47, 70. Tasaki, I., 1953, Nervous Transmission, Springfield, Illinois, Charles C. Thomas Co.
The present concept of the structure of the plasma membrane
• R. Höber
Höber, R., 1930, The present concept of the structure of the plasma membrane, Biol. Bull., 58, 1.
Ueber Membran und Actions Potentials einzelner Myocardfasern des Warm- und Kaltblueterherzens
• W Trautwein
• K Zink
Trautwein, W., and Zink, K., 1952, Ueber Membran und Actions Potentials einzelner Myocardfasern des Warm- und Kaltblueterherzens, Arch. ges. Physiol., 256, 68.
The end of the spike potential of nerve and its relation to the beginning of the negative after potential
• H S Gasser
• H T Graham
Gasser, H. S., and Graham, H. T., 1932, The end of the spike potential of nerve and its relation to the beginning of the negative after potential, Am. J. Physiol., 101, 316.
Beitraege zur Physiol. des marklosen Nerven
• S Garten
Garten, S., 1903, Beitraege zur Physiol. des marklosen Nerven, Jena, Gustav Fischer.
The role of phosphate in the maintenance of the resting potential and selective ion accumulation in frog muscle cells
• G N Ling
Ling, G. N., 1952, The role of phosphate in the maintenance of the resting potential and selective ion accumulation in frog muscle cells, in Phosphorus Metabolism. A Symposium on the Role of Phosphorus in the Metabolism of Plants and Animals, (W. D. McElroy and B. Glass, editors), Baltimore, The Johns Hopkins Press, 2, 748.
The author wishes to express his gratitude to Dr. R. Lorente de Nó for invaluable help and advice during the course of this investigation. BIBLIOGRAPHY Blinks, L. R., 1935 a, The effects of current flow on bioelectric potential. I. Valonia, J. Gen. Physiol., 19, 633.
http://cms.math.ca/cmb/msc/11D41?fromjnl=cmb&jnl=CMB
Search results
Search: MSC category 11D41 (Higher degree equations; Fermat's equation)
Results 1 - 6 of 6
1. CMB 2006 (vol 49 pp. 560)
Luijk, Ronald van
A K3 Surface Associated With Certain Integral Matrices Having Integral Eigenvalues In this article we will show that there are infinitely many symmetric, integral $3 \times 3$ matrices, with zeros on the diagonal, whose eigenvalues are all integral. We will do this by proving that the rational points on a certain non-Kummer, singular K3 surface are dense. We will also compute the entire Néron-Severi group of this surface and find all low degree curves on it. Keywords: symmetric matrices, eigenvalues, elliptic surfaces, K3 surfaces, Néron-Severi group, rational curves, Diophantine equations, arithmetic geometry, algebraic geometry, number theory. Categories: 14G05, 14J28, 11D41
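For a symmetric integral matrix with rows (0, a, b), (a, 0, c), (b, c, 0), the characteristic polynomial is $x^3 - (a^2+b^2+c^2)x - 2abc$, so checking whether all eigenvalues are integral reduces to finding integer roots of a monic cubic. The brute-force sketch below only illustrates the statement (it is nothing like the paper's K3-surface argument); it finds small examples such as a = b = c = 1, with eigenvalues −1, −1, 2.

```python
from itertools import product

def integer_roots_cubic(s, p):
    """All three integer roots of x^3 - s*x - p, or None if any root is non-integral."""
    poly = [1, 0, -s, -p]          # coefficients, highest degree first
    roots = []
    for _ in range(3):
        bound = 1 + max(abs(c) for c in poly)   # Cauchy bound on root size
        root = None
        for x in range(-bound, bound + 1):
            v = 0
            for c in poly:                       # Horner evaluation
                v = v * x + c
            if v == 0:
                root = x
                break
        if root is None:
            return None
        roots.append(root)
        q, acc = [], 0
        for c in poly[:-1]:                      # synthetic division by (x - root)
            acc = acc * root + c
            q.append(acc)
        poly = q
    return sorted(roots)

# char. poly of [[0,a,b],[a,0,c],[b,c,0]] is x^3 - (a^2+b^2+c^2)x - 2abc
examples = [((a, b, c), ev)
            for a, b, c in product(range(1, 7), repeat=3)
            if (ev := integer_roots_cubic(a*a + b*b + c*c, 2*a*b*c)) is not None]
print(examples[0])  # ((1, 1, 1), [-1, -1, 2])
```

As a sanity check, each eigenvalue triple sums to zero (the trace) and multiplies to 2abc (the determinant).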
2. CMB 2005 (vol 48 pp. 636)
Győry, K.; Hajdu, L.; Saradha, N.
Correction to: On the Diophantine Equation $n(n+d)\cdots(n+(k-1)d)=by^l$ In the article under consideration (Canad. Math. Bull. 47 (2004), pp. 373–388), Lemma 6 is not true in the form presented there. Lemma 6 is used only in the proof of part (i) of Theorem 9. We note, however, that part (i) of Theorem 9 in question is a special case of a theorem by Bennett, Bruin, Győry and Hajdu. Category: 11D41
3. CMB 2004 (vol 47 pp. 373)
Győry, K.; Hajdu, L.; Saradha, N.
On the Diophantine Equation $n(n+d)\cdots(n+(k-1)d)=by^l$ We show that the product of four or five consecutive positive terms in arithmetic progression can never be a perfect power whenever the initial term is coprime to the common difference of the arithmetic progression. This is a generalization of the results of Euler and Obláth for the case of squares, and an extension of a theorem of Győry on three terms in arithmetic progressions. Several other results concerning the integral solutions of the equation of the title are also obtained. We extend results of Sander on the rational solutions of the equation in $n,y$ when $b=d=1$. We show that there are only finitely many solutions in $n,d,b,y$ when $k\geq 3$, $l\geq 2$ are fixed and $k+l>6$. Category: 11D41
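The k = 4 case of the theorem can be spot-checked numerically: with gcd(n, d) = 1, the product n(n+d)(n+2d)(n+3d) should never be a perfect power $y^l$ with $l \geq 2$. The ranges below are small and purely illustrative; of course a finite search verifies nothing beyond its own box.

```python
from math import gcd

def is_perfect_power(n):
    """True if n = y**l for some integers y >= 2 and l >= 2."""
    if n < 4:
        return False
    l = 2
    while 2 ** l <= n:                  # smallest possible base is 2
        y = round(n ** (1.0 / l))
        if any(y + e > 0 and (y + e) ** l == n for e in (-1, 0, 1)):
            return True
        l += 1
    return False

# k = 4 terms, initial term coprime to the common difference
counterexamples = [(n, d)
                   for n in range(1, 200)
                   for d in range(1, 30)
                   if gcd(n, d) == 1
                   and is_perfect_power(n * (n + d) * (n + 2*d) * (n + 3*d))]
print(counterexamples)  # expected []
```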
4. CMB 2003 (vol 46 pp. 26)
Bernardi, D.; Halberstadt, E.; Kraus, A.
Remarques sur les points rationnels des variétés de Fermat Let $K$ be a number field of degree at most $2$ over $\mathbb{Q}$. In this work we make some remarks on the question of the existence of two nonzero elements $a$ and $b$ of $K$, and of an integer $n\geq 4$, such that the equation $ax^n + by^n = 1$ has at least three distinct non-trivial points. This study reduces to the search for $K$-rational points on a projective variety of dimension $3$ in $\mathbb{P}^5$, or on a surface in $\mathbb{P}^3$. Category: 11D41
5. CMB 2003 (vol 46 pp. 71)
Cutter, Pamela; Granville, Andrew; Tucker, Thomas J.
The Number of Fields Generated by the Square Root of Values of a Given Polynomial The $abc$-conjecture is applied to various questions involving the number of distinct fields $\mathbb{Q} \bigl( \sqrt{f(n)} \bigr)$, as we vary over integers $n$. Categories:11N32, 11D41
6. CMB 2002 (vol 45 pp. 247)
Kihel, O.; Levesque, C.
On a Few Diophantine Equations Related to Fermat's Last Theorem We combine the deep methods of Frey, Ribet, Serre and Wiles with some results of Darmon, Merel and Poonen to solve certain explicit diophantine equations. In particular, we prove that the area of a primitive Pythagorean triangle is never a perfect power, and that each of the equations $X^4 - 4Y^4 = Z^p$, $X^4 + 4Y^p = Z^2$ has no non-trivial solution. Proofs are short and rest heavily on results whose proofs required Wiles' deep machinery. Keywords: Diophantine equations. Category: 11D41
| 2016-06-26 06:22:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8976724743843079, "perplexity": 1244.6721581917104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00126-ip-10-164-35-72.ec2.internal.warc.gz"}
https://fsolt.org/res/published/hubersolt2004/index.html | inequality
democracy
neoliberalism
Authors
Evelyne Huber
Frederick Solt
Published
June 30, 2004
• Huber, Evelyne, and Frederick Solt. 2004. “Successes and Failures of Neoliberalism in Latin America.” Latin American Research Review 39(3):150-164.
## Abstract
Much of the debate about the effects of neoliberal reforms in Latin America has been carried out at a political and ideological level: the image of an overblown and inefficient state that stifles market forces and private initiative has been contrasted with the model of a lean and efficient state that relies on the market to set free productive energies and thus stimulates growth and solves social problems. With this research note, we aim to make a contribution to the emerging empirically based scholarly literature that investigates the effects of neoliberal policy reforms. We find that, on average, in the Latin American countries neoliberal reforms have failed to put into place policies that firmly advance growth, stability, the reduction of poverty and inequality, and improvements of the human capital base.
## BibTeX Citation
@article{HuberSolt2004,
author = {Huber, Evelyne and Solt, Frederick},
issn = {0023-8791},
journal = {Latin American Research Review},
number = {3},
pages = {150--164},
title = {Successes and Failures of Neoliberalism in Latin America},
volume = {39},
year = {2004}} | 2023-01-31 00:42:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18254439532756805, "perplexity": 9001.836233878394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00045.warc.gz"} |
https://lookformedical.com/web.php?q=body%2C+deceleration&lang=1 | ###### viXra.org e-Print archive, Quantum Gravity and String Theory
Deceleration of Massive Bodies Due to Forehead and Backhead Collisions with Gravitons. Authors: Michael A. Ivanov. Comments: 4 ... The additional deceleration of massive bodies in the model of low-energy quantum gravity due to forehead and backhead ... It is shown that this deceleration $w$ is equal to: $w=-H_{0}c \cdot 4v^{2}/c^{2}\cdot (1-v^{2}/c^{2})^{0.5},$ where $H_{0}$ ... The slope of space creates a new kind of energy that causes objects, such as bodies, particles, and gravitational waves, to ...
http://www.vixra.org/qgst/
###### D33J - Death Valley Oasis - Boomkat
TV VICTOR The Ways Of The Bodies / Timeless Deceleration Tresor Records Electronic ...
https://boomkat.com/products/death-valley-oasis
###### TBI Flashcards by amanda rank | Brainscape
sudden deceleration of the body and head with variable forces transmitted to the surface deeps portions of the brain ... body become rigid in an extended position when examiner pinches victim(decerebrate posturing). Speech:makes sound that examiner ... Browse over 1 million classes created by top students, professors, publishers, and experts, spanning the world's body of " ... Pain: pulls a part of body away when pinched by examiner. Speech: Seems confused or disoriented ...
https://www.brainscape.com/flashcards/tbi-2366311/packs/4189952
###### Print Page - Bockscar 2.0
... this will be never an issue to your body because the deceleration force on your body is very low. if the vehicle start to fly ... all changes from the moving direction into a other direction creates a deceleration effect on your body...... it will be always ... the deceleration moment goes on your body....... if the impact to the ground is a instant stop (albeit for a very short time) ... so, in positon with a helmet on, your eye level is just above the top rail/top of the body panel. there is some rake to the car ...
http://www.landracing.com/forum/index.php?action=printpage
###### Patent WO2016164623A1 - Ambulatory extended-wear electrocardiography and syncope sensor monitor - Google Patents
The recognition of such movements through changes of acceleration and deceleration in the body can be used to detect syncope ... Internal tissues and body structures can adversely affect the current strength and signal fidelity of all body surface ... sensing through placement in a body location that robustly minimizes the effects of tissue and body structure. ... The body of the electrode patch 15 is preferably constructed using a flexible backing 20 formed as an elongated strip 21 of ...
###### System for monitoring repetitive movement - Aquatech Fitness Corp.
One technique uses an accelerometer mounted on the body to detect movement by sensing acceleration and deceleration of the body ... However, because undulation or pitches of the body about the X-axis and rotation of the body about the Y-axis result from the ... Similarly, as the user's body rolls or twists, the body rotates about the Y-axis, with the head, shoulders, and hips moving ... body about a longitudinal axis of the swimmer's body that is parallel to the direction of travel of the swimmer's body, and a ...
http://www.freepatentsonline.com/6955542.html
###### Bone exercise monitor - Wikipedia
The bone is stimulated by the acceleration and deceleration forces also known as G-forces causing impacts on the body ... The monitor measures the accelerations and decelerations of the body and analyzes the results. The daily (and weekly) achieved ...
https://en.wikipedia.org/wiki/Bone_exercise_monitor
###### Life | Free Full-Text | Cineradiographic Analysis of Mouse Postural Response to Alteration of Gravity and Jerk (Gravity...
However, the physiological limits for adaptation or the disruption of body orientation are not known. In this study, we ... Male C57BL6/J mice (n = 6) were exposed to various gravity-deceleration conditions by customized parabolic flight-maneuvers ... the gravity deceleration rate. A certain range of jerk facilitated mouse skeletal stretching efficiently, and a jerk of −0.3~− ... spine and hindlimbs was observed during the initial phase of gravity deceleration. Joint angles widened to 120%-200% of the ...
http://www.mdpi.com/2075-1729/4/2/174/htm
###### Feeding, fins and braking maneuvers: locomotion during prey capture in centrarchid fishes | Journal of Experimental Biology
Ram speed 120 ms prior to maximum gape versus average deceleration for 60 ms after maximum gape (A) and mean change in body ... To gain insights into the mechanisms and timing of deceleration during prey capture, I studied the body and fin kinematics of ... Mechanisms for modulating deceleration. The ability of a predator to modulate deceleration is imperative for arriving at a prey ... Fin function during deceleration. Protraction of the pectoral fins during prey capture is a mechanism of deceleration employed ...
http://jeb.biologists.org/content/210/1/107
###### Power Basketball, a youth basketball coaching and athletic resource: Creating Mobility, Flexibility, and Balance.
Your ability to control your body during deceleration, stopping on a dime, and then accelerating again have much to do with ... When preparing to jump the body must go through a series of events. The initial movement is the load where your body ... These require the body to be able to cut, move, and react quickly. Improper mobility and flexibility in basketball players will ... 4 Walk-Ups:To increase core strength and posterior full body flexibility, mobility, and balance Execution: Begin in a push up ...
###### Autosomal dominant guanosine triphosphate cyclohydrolase I deficiency (Segawa disease).
Besides the neurological symptoms, deceleration of the body length appears in childhood with the onset of motor symptoms. ... deceleration of body length. 10 to 15 years with aggravation of dystonic hypertonus. Around the age of 10 years, the postural ... Autosomal dominant Lewy body parkinsonism in a four-generation family. pdf655 Кб ... To answer the fifth question, it is intriguing to consider the pathophysiology of the stagnation of the body length that ...
https://www.docme.ru/doc/1904133/autosomal-dominant-guanosine-triphosphate-cyclohydrolase-...
###### Benefits of Whole Body Periodic Acceleration (WBPA) -- MEDICA - World Forum for Medicine
... motion of the Exer-Rest adds pulses to your natural vascular pulse with each acceleration and deceleration of the body. These ... The Exer-Rest provides Whole Body Periodic Acceleration (WBPA) therapy by moving the body repetitively head to foot at ... Benefits of Whole Body Periodic Acceleration (WBPA) Non Invasive Monitoring Systems, Inc. (NIMS) introduces a patented ... additional pulses act on the inner lining of blood vessels (endothelium) throughout the body to increase pulsatile sheer stress ...
###### CAR ACCIDENTS - Lyn Lake Chiropractic
Whiplash injury occurs when the body reacts to a deceleration or acceleration force by hyperflexion or hyperextension of the ...
http://www.lynlakechiropractic.com/page.cfm?pageid=14714&articleid=1271
###### Randy J. Schmitz , NC DOCKS (North Carolina Digital Online Collection of Knowledge and Scholarship)
Context: Lower extremity injury often occurs during abrupt deceleration when attempting to change the body's direction. ... Influence of Lean Body Mass and Strength on Landing Energetics. 2012. 410. Purpose: Less lean body mass may limit one's ability ... Lower Body Stiffness and Muscle Activity Differences Between Female Dancers and Basketball Players During Drop Jumps. 2011. 425 ... Biomechanical strategies to decelerate the body in the vertical direction have been implicated as a contributing cause. This ...
http://libres.uncg.edu/ir/clist.aspx?id=1436
###### The molecular basis of the braking action of muscle studied by X-ray interference
A more dramatic example would be deceleration of the body on landing after a jump (Figure 61), when the brakes must be applied ... We normally think of muscles as the motors that drive the movements of the body, but they can also act as brakes to resist an ... 61: The leg extensors decelerate the body after a jump (adapted from Cerretelli, Fisiologia dell'Esercizio, SEU, 2001). ...
http://www.esrf.eu/UsersAndScience/Publications/Highlights/2008/SCM/scm9
###### Hanging on a Line -- Occupational Health & Safety
... deceleration device, lifeline, or a suitable combination of these. ... a body harness, a lanyard, deceleration device, lifeline, or a suitable combination of these. ... 4.2.7 Full Body Harness with Self-retracting Lanyard (FBH +SRL) For integral (FBH +SRL) systems, the FBH constituent shall be ... How well are you protected? The swing action allows you to drop 11 feet, plus the height of your body! You will hit the ground. ...
https://ohsonline.com/articles/2006/07/hanging-on-a-line.aspx
###### CONTRARY BRIN: General Insights into the Future
Reduce velocity so the rock is moving in a tighter orbit, and time the deceleration so a second body can then decelerate it ... The benefit of a lifting body with spoilers would be to put that energy in the super high atmosphere, where it would presumably ... Reduce the density and increase the deceleration in the atmosphere. Make it float so you could use ocean recovery. If foaming ... In practice a lifting body with some control surfaces massing much more than my 111 tonnes would probably be used. An iron ...
http://davidbrin.blogspot.com/2010/04/general-insights-into-future.html
###### Pitch then power: limitations to acceleration in quadrupeds | Biology Letters
Free-body diagram of the stride-average forces acting on a generic quadruped of unvarying body geometry, assuming acceleration/ ... c) Deceleration. Results for deceleration in ponies largely mirror those of acceleration. There appears to be a reduced ... and deceleration for ponies) in a competitive setting. We show that maximum acceleration and deceleration ability may be ... 1989 Scaling body support in mammals: limb posture and muscle mechanics. Science 245, 45-48. (doi:10.1126/science.2740914). ...
http://rsbl.royalsocietypublishing.org/content/early/2009/06/19/rsbl.2009.0360
###### gallup elo
Deceleration of the body's heat production (decrease of metabolic rate) may indirectly decrease body temperature. With a ... body temperature begins to peak in the evening and the onset of sleep initiates a decline in the core body temperature curve, ... specific heat capacity of the body 3.56 kJ kg-1 K-1 [7]). During a 5-min bout, the body would thus act as a 460-W heater ... There are few mechanisms by which the body can lose heat so that its temperature decreases:. 1. Direct heat exchange with the ...
http://baillement.com/dossier/gallup_elo.html
###### Official Suzuki Baleno 2016 safety rating results
Dummy readings of chest deceleration also indicated poor protection of that part of the body. The side wing of the child ... Dummy readings of chest compression indicated marginal protection for this part of the body but good or adequate protection ... This contact led to very high decelerations and protection was rated as poor. ...
https://www.euroncap.com/tr/results/suzuki/baleno/24497
###### Sigma passenger harness : Dropzone.com Skydiving Forums
... composition and flexibility of the human body through a rapid deceleration from 120mph to 16ft/sec over the course of a few ... For the minority of folks that find it uncomfortable, I have found that they are either 1) not of the body type that will allow ... 1) When fitting the passenger harness there is no weight on the leg straps from the body. After properly adjusting your next ... And 2) Despite the most perfectly fitted harness on the ground, even when judged from a suspended harness, the human body is ...
http://www.dropzone.com/cgi-bin/forum/gforum.cgi?post=4381726
###### Case Report: 12CA010
It consists of an anchorage, connectors, a body harness and may include a lanyard, deceleration device, lifeline, or suitable ... It consists of anchorages, connectors, body belt/harness. It may include, lanyards, lifelines, and rope grabs designed for that ... suitably arranged to support the body in a sitting position. ...
https://www.cdph.ca.gov/Programs/CCDPHP/DEODC/OHB/FACE/Pages/12CA010.aspx
###### The Subway vs. My Back - health pain neck | Ask MetaFilter
The forces that repeated, rapid acceleration and deceleration on subway trains place on the body can cause or aggravate back ... Is the subway causing my body to hurt?. I've been riding the NYC subway twice a day for the last few months, and I'm noticing ... Apparently total-body-vibration is not such a good thing in large doses.. Is the subway causing minute but detectable bodily ... In my days as a New Yorker, I discovered that just about every aspect of the city made my body ache.. posted by Lutoslawski at ... | 2019-01-21 03:58:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33575302362442017, "perplexity": 4482.269234279416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583755653.69/warc/CC-MAIN-20190121025613-20190121051613-00416.warc.gz"} |
https://learn.careers360.com/ncert/question-find-the-amount-to-be-paid-at-the-end-of-2-years-on-rs-2400-at-5-percent-per-annum-compounded-annually/ |
# Find the amount to be paid. At the end of 2 years on Rs. 2,400 at 5% per annum compounded annually.
$A_{n}=P\left(1+\frac{R}{100}\right)^{n}$, with $P=\text{Rs }2400$, $n=2$, $R=5$: $A_{2}=2400\times\left(1+\frac{5}{100}\right)^{2}=2400\times(1.05)^{2}=\text{Rs }2646$ | 2020-02-27 16:05:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9043125510215759, "perplexity": 1013.2715771108045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146744.74/warc/CC-MAIN-20200227160355-20200227190355-00293.warc.gz"}
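The compound-interest computation can be reproduced in a couple of lines of Python (an illustrative check, not part of the NCERT solution):

```python
def compound_amount(principal, rate_percent, years):
    """Amount with annual compounding: A = P * (1 + R/100) ** n."""
    return principal * (1 + rate_percent / 100) ** years

amount = compound_amount(2400, 5, 2)
print(round(amount, 2))  # 2646.0, i.e. Rs 2,646
```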
http://mathhelpforum.com/algebra/8088-need-help-equations.html | # Thread: Need Help with equations
1. ## Need Help with equations
5=3-8y
5=60n^-1 -15
4/8=10n/6
a^-1 x+b+c=d
2. Originally Posted by jerryramos
5=3-8y
$5=3-8y$
$-3 + 5=-3 + 3-8y$
$2 = -8y$
$\frac{2}{-8} = \frac{-8y}{-8}$
$y = -\frac{1}{4}$
-Dan
3. Originally Posted by jerryramos
5=60n^-1 -15
$5=\frac{60}{n} -15$
$n \cdot 5=n \cdot \left ( \frac{60}{n} -15 \right )$
$5n = 60 -15n$
$5n + 15n = 60 - 15n + 15n$
$20n = 60$
$\frac{20n}{20} = \frac{60}{20}$
$n = 3$
-Dan
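Both worked answers can be verified by substituting back into the original equations; a quick Python check (my addition, not part of the thread):

```python
# y = -1/4 in 5 = 3 - 8y, and n = 3 in 5 = 60/n - 15
assert 3 - 8 * (-1 / 4) == 5
assert 60 / 3 - 15 == 5
print("both solutions check out")
```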
4. Originally Posted by jerryramos
4/8=10n/6
a^-1 x+b+c=d
Why don't you give these two a try on your own and post what you have done. They are both very similar to the two examples I already posted an answer for.
The last one LOOKS tricky, but it is very straightforward. a, b, c, and d are just numbers. If it helps, pick some random numbers for them and solve the problem that way first. It will show you the steps you have to take to solve it.
-Dan | 2017-04-29 03:00:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7158094048500061, "perplexity": 656.9302511807599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00215-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-6-times-10-8-7-times-10-12 | # How do you simplify $(6 \times 10^{8})(7 \times 10^{-12})$?
##### 1 Answer
May 10, 2017
$= 4.2 \times 10^{-3}$
#### Explanation:
$\left(6 \times 10^{8}\right) \left(7 \times 10^{-12}\right)$
The key to this problem is realizing that every term multiplies and order doesn't matter.
$= 6 \times 7 \times 10^{8} \times 10^{-12}$
$= 42 \times 10^{8 - 12}$ (exponent addition rule)
$= 42 \times 10^{-4}$
$= 4.2 \times 10^{-3}$ | 2021-06-13 04:54:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9072959423065186, "perplexity": 11494.614822644866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487600396.21/warc/CC-MAIN-20210613041713-20210613071713-00273.warc.gz"}
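The same product, evaluated directly — a one-line Python confirmation of the arithmetic above:

```python
# (6 x 10^8) * (7 x 10^-12) in scientific notation
a = (6 * 10**8) * (7 * 10**-12)
print(f"{a:.1e}")  # 4.2e-03
```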
https://www.gradesaver.com/textbooks/science/physics/college-physics-4th-edition/chapter-1-multiple-choice-questions-page-18/3 | ## College Physics (4th Edition)
Published by McGraw-Hill Education
# Chapter 1 - Multiple-Choice Questions: 3
#### Answer
The correct answer is (a) 90 km/h
#### Work Step by Step
$v = (55 ~mi/h)(1.6 ~km/mi) = 88 ~km/h \approx 90 ~km/h$ The correct answer is (a) 90 km/h
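For reference, redoing the conversion with the exact mile-to-kilometre factor (a sketch; the book's rounded factor 1.6 gives 88):

```python
MI_TO_KM = 1.609344          # exact: 1 international mile = 1.609344 km by definition
v_kmh = 55 * MI_TO_KM
print(round(v_kmh, 1))       # 88.5 km/h, so option (a) 90 km/h is the closest choice
```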
| 2018-08-18 20:48:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5510267019271851, "perplexity": 3637.2739771665238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213737.64/warc/CC-MAIN-20180818193409-20180818213409-00560.warc.gz"}
http://bigwww.epfl.ch/tutorials/bultheel9701.html | Review of book
Book Review Akram Aldroubi and Michael Unser, Eds., Wavelets in Medicine and Biology, CRC Press, Boca Raton, FL, 1996, 616 pp. Reviewed by A. Bultheel, Journal of Approximation Theory, vol. 90, no. 3, pp. 458-459, September 1997.
Wavelets have built a strong reputation in the context of signal and image processing. The editors of this book have invited several specialists to contribute a chapter illustrating this in the (bio)medical and biological sciences. The book contains four parts. Part I has two chapters written by the editors themselves giving what they call a “surfing guide” to the theory and implementation of the wavelet transform. These 70 pages are among the better introductions to wavelets available in the literature. Both the continuous and the discrete wavelet transform are dealt with, but in view of the applications to follow, the emphasis is on the latter. The second part deals with medical imaging and tomography. Mathematically speaking, tomography refers to the reconstruction of an image from (noisy) observations or approximations of line integrals which are often called projections. This problem is well known to be ill posed and thus very sensitive to noise. Therefore, a constant observation made in these chapters is that by taking the wavelet transform, it becomes easier to distinguish noise from the clean data. Noise is typically small, of high frequency, and uncorrelated, while the locality of the wavelet basis in both the space and the frequency domains allows one to catch the true image in only a “few” large wavelet coefficients. By a particular way of shrinking the small coefficients, one can “filter out” the noise. Such denoising problems occur in a complex problem setting which depends on the specific application so that several variants and customized versions of this basic idea are explained in these chapters. For example the role of the regularity of the wavelet basis, the basis being orthogonal or biorthogonal, separable or not, the exploitation of redundant transforms versus nonredundant transforms, etc., are all discussed. These denoising techniques are closely related to edge detection and contrast enhancement in images. 
Indeed, edges correspond to high frequencies, just like noise, but edges give large wavelet coefficients at different resolution levels so that they can be distinguished from noise. The decreasing of small coefficients and the increasing of large ones result in a better contrast of images such as radiographs. Statistical methods are also an essential tool in these image processing techniques. For example, if the noise level is unknown, statistical techniques are introduced to estimate the noise threshold. Estimation of the local irregularity can reveal whether or not a pixel is dominated by noise. Hence a mask is defined which can be put on the image and thus noise can be removed without losing the fine details of the true image. The use of wavelet packages is also discussed. Usually, the wavelet transform decomposes a signal in a high- and a low-frequency component. The low-frequency part is again decomposed in two parts and so on. However, when splitting the high-frequency part as well, one obtains a redundant transform from which an optimal basis can be chosen to represent the image. All these techniques are illustrated in practical applications of X-ray computer tomography, magnetic resonance imaging, positron emission tomography, mammography, and many more. Part III deals with biomedical signal processing. These are typically one-dimensional signals, in general time-varying, nonstationary, sometimes transient, and, again, corrupted by noise. We give a sample of the wavelet transform applications in this domain. The excellent time localization property of wavelets is used to find several phenomena in a signal which occur at different frequencies and localize these events in time. For certain stochastic processes, such as action potentials or human heartbeat times, it is essential to estimate the fractal exponent of the process. Here again the wavelets are shown to outperform the Fourier transform. 
Furthermore, the continuous complex wavelet transform is used to analyze electrocardiograms. The modulus maxima and the ±π⁄2 phase crossing show the position of sharp signal transitions while modulus minima correspond to flat segments of the signal. In microvascular pulmonary pressure observations, two signals interfere. Here the signals are separated by using filtering techniques based on wavelets. Part IV uses wavelets for mathematical models in biology. The multiresolution structure of the continuous wavelet transform corresponds to a natural human perception of sounds. Therefore wavelets are well suited to make auditory nerve models. To measure blood velocity, traditional methods are based on the Doppler effect when the movement of reflecting particles in the bloodstream are measured. It is illustrated here how the wideband wavelet transform gives a viable alternative. Event-related potentials are reactions of the brain to certain stimuli. Analysis of such signals is typically done by principal component analysis. However, it is shown that wavelets, due to their locality, allow to the analysis of such signals effectively. When using a priori information, the data can be drastically reduced. Also the structure of macromolecules can be deduced from a wavelet analysis of the energy function. Here the multiresolution of wavelets allows for the grouping of certain molecules. This technique can also be used to represent complex surfaces, like for example in computer tomography. This in a sense closes the circle in this wide variety of applications that are presented in this volume. The book is of great importance for researchers working in medical or biological signal and image analysis. They will learn about wavelet alternatives for classical approaches. The wavelet researcher will certainly gain by learning about the particular problems posed by the applications of this particular, yet important field of wavelet based analysis. 
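The transform–shrink–invert recipe that recurs throughout the denoising chapters can be sketched with a toy one-level Haar transform (my illustration; the chapters use far more elaborate wavelet bases and thresholding rules):

```python
import math

SQ = 1 / math.sqrt(2)  # normalization of the orthonormal Haar basis

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    approx = [(a + b) * SQ for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * SQ for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def inverse_haar(approx, detail):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * SQ, (a - d) * SQ]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small ones (mostly noise) vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

signal = [4.0, 4.1, 3.9, 4.0, 8.0, 8.1, 7.9, 8.0]  # a step edge plus small 'noise'
approx, detail = haar_step(signal)
denoised = inverse_haar(approx, soft_threshold(detail, 0.2))
print([round(v, 2) for v in denoised])  # [4.05, 4.05, 3.95, 3.95, 8.05, 8.05, 7.95, 7.95]
```

The small detail coefficients (the jitter) are suppressed while the large step between 4 and 8 — an "edge" — survives, which is the point the review makes about separating noise from true image features.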
@ARTICLE(http://bigwww.epfl.ch/publications/bultheel9701.html, AUTHOR="Bultheel, A.", TITLE="{A}kram {A}ldroubi and {M}ichael {U}nser, {E}ds., \emph{{W}avelets in {M}edicine and {B}iology}, {CRC} {P}ress, {B}oca {R}aton, {FL}, 1996, 616 pp.", JOURNAL="Journal of Approximation Theory", YEAR="1997", volume="90", number="3", pages="458--459", month="September", note="") | 2018-01-23 01:41:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.535728394985199, "perplexity": 771.7047472382808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891705.93/warc/CC-MAIN-20180123012644-20180123032644-00526.warc.gz"} |
https://www.numerade.com/questions/what-is-the-molecular-formula-of-each-compound-a-empirical-formula-mathrmch_2m4208-mathrmg-mathrmmol/ | 🎉 The Study-to-Win Winning Ticket number has been announced! Go to your Tickets dashboard to see if you won! 🎉View Winning Ticket
### What is the molecular formula of each compound? …
University of Toronto
Problem 40
# What is the molecular formula of each compound?(a) Empirical formula $\mathrm{CH}_{2}(M=42.08 \mathrm{g} / \mathrm{mol})$(b) Empirical formula $\mathrm{NH}_{2}(\mathscr{M}=32.05 \mathrm{g} / \mathrm{mol})$(c) Empirical formula $\mathrm{NO}_{2}(\mathscr{M}=92.02 \mathrm{g} / \mathrm{mol})$(d) Empirical formula CHN $(M=135.14 \mathrm{g} / \mathrm{mol})$
## Video Transcript
The molecular formula is a whole-number multiple of the empirical formula, so in each part we divide the given molar mass by the mass of one empirical formula unit and round to the nearest whole number.

(a) $\mathrm{CH}_{2}$: formula mass $= 12.01 + 2(1.008) \approx 14.03 \mathrm{~g/mol}$. Multiple $= 42.08 / 14.03 \approx 3$, so the molecular formula is $\mathrm{C}_{3}\mathrm{H}_{6}$.

(b) $\mathrm{NH}_{2}$: formula mass $= 14.01 + 2(1.008) \approx 16.03 \mathrm{~g/mol}$. Multiple $= 32.05 / 16.03 \approx 2$, so the molecular formula is $\mathrm{N}_{2}\mathrm{H}_{4}$.

(c) $\mathrm{NO}_{2}$: formula mass $= 14.01 + 2(16.00) \approx 46.01 \mathrm{~g/mol}$. Multiple $= 92.02 / 46.01 = 2$, so the molecular formula is $\mathrm{N}_{2}\mathrm{O}_{4}$.

(d) $\mathrm{CHN}$: formula mass $= 12.01 + 1.008 + 14.01 \approx 27.03 \mathrm{~g/mol}$. Multiple $= 135.14 / 27.03 \approx 5$, so the molecular formula is $\mathrm{C}_{5}\mathrm{H}_{5}\mathrm{N}_{5}$.
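As a quick check, the divide-by-the-empirical-formula-mass step can be scripted (a minimal sketch; the rounded atomic masses used here are assumed textbook values):

```python
# Determine n such that molecular formula = (empirical formula) x n.
# Atomic masses are rounded values (an assumption for illustration).
ATOMIC_MASS = {"C": 12.01, "H": 1.008, "N": 14.01, "O": 16.00}

def multiple(empirical, molar_mass):
    """Divide the molar mass by the empirical-formula mass and round."""
    emp_mass = sum(ATOMIC_MASS[el] * n for el, n in empirical.items())
    return round(molar_mass / emp_mass)

print(multiple({"C": 1, "H": 2}, 42.08))           # 3  -> C3H6
print(multiple({"N": 1, "H": 2}, 32.05))           # 2  -> N2H4
print(multiple({"N": 1, "O": 2}, 92.02))           # 2  -> N2O4
print(multiple({"C": 1, "H": 1, "N": 1}, 135.14))  # 5  -> C5H5N5
```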
https://devblogs.microsoft.com/cppblog/porting-a-c-cli-project-to-net-core/ | # Porting a C++/CLI Project to .NET Core
Mike Rousos
One of the new features of Visual Studio 2019 (beginning with version 16.4) and .NET Core 3.1 is the ability to build C++/CLI projects targeting .NET Core. This can be done either directly with cl.exe and link.exe (using the new /clr:netcore option) or via MSBuild (using <CLRSupport>NetCore</CLRSupport>). In this post, I’ll walk through the steps necessary to migrate a simple C++/CLI interop project to .NET Core. More details can be found in .NET Core documentation.
## The sample project
First, I need to make a sample solution to migrate. I’m going to use an app with a native entry point that displays a Windows Forms form via C++/CLI. Migrating a solution with a managed entry point interoperating with native dependencies via C++/CLI would be just as easy, though. To get started, I’ve created a solution with three projects:
1. NativeApp. A C++ Windows app from Visual Studio’s ‘Windows Desktop Application’ template.
1. This will be the app’s entry point.
2. I’ve updated it to display the managed form (via the CppCliInterop project) and call a method on it when the IDM_ABOUT command is invoked.
2. ManagedLibrary. A C# Windows Forms library targeting .NET Core.
1. This will provide a WinForms form for the native app to display.
2. I’ve added a text box to the form and a method to set the text box’s text. I’ve also multi-targeted this project for .NET Core and .NET Framework so that it can be used with either. This way we can focus on migrating just the C++/CLI portion of the sample.
3. CppCliInterop. A .NET Framework C++/CLI Library.
1. This will be used as the interop layer to connect the app to the managed WinForms library.
2. It references ManagedLibrary and allows native projects to use it.
3. This is the project that needs to be migrated to .NET Core.
The sample code is available on GitHub. When you start the app, if you click on the Help -> About menu, the WinForms form will be displayed with text in its text box supplied by the NativeApp project.
## Migrating a vcxproj to .NET Core
Now for the interesting part – updating the sample app to run on .NET Core. The changes needed are actually quite minimal. If you’ve migrated C# projects to .NET Core before, migrating C++/CLI projects is even simpler because the project file format doesn’t change. With managed projects, .NET Core and .NET Standard projects use the new SDK-style project file format. For C++/CLI projects, though, the same vcxproj format is used to target .NET Core as .NET Framework.
All that’s needed is to make a few changes to the project file. Some of these can be done through the Visual Studio IDE, but others (such as adding WinForms references) can’t be yet. So the easiest way to update the project file, currently, is to just unload the project in VS and edit the vcxproj directly or to use an editor like VS Code or Notepad.
1. Replace <CLRSupport>true</CLRSupport> with <CLRSupport>NetCore</CLRSupport>. This tells the compiler to use /clr:netcore instead of /clr when building.
1. This change can be done through Visual Studio’s project configuration interface if you prefer.
2. Note that <CLRSupport> is specified separately in each configuration/platform-specific property group in the sample project’s project file, so the update needs to be made four different places.
2. Replace <TargetFrameworkVersion>4.7</TargetFrameworkVersion> with <TargetFramework>netcoreapp3.1</TargetFramework>.
1. These settings can be modified through Visual Studio’s project configuration interface in the ‘Advanced’ tab. Note, however, that changing a project’s CLR support setting as described in the previous step won’t change <TargetFrameworkVersion> automatically, so be sure to clear the “.NET Target Framework Version” setting before selecting .NET Core Runtime Support.
3. Replace .NET Framework references (to System, System.Data, System.Windows.Forms, and System.Xml) with the following reference to WinForms components from the Windows Desktop .NET Core SDK. This step doesn’t have Visual Studio IDE support yet, so it must be done by editing the vcxproj directly. Notice that only a reference to the Windows Desktop SDK is needed because the .NET Core SDK (which includes libraries like System, System.Xml, etc.) is included automatically. There are different Framework references for WinForms, WPF, or both (as explained in the migration docs).
1. <FrameworkReference Include="Microsoft.WindowsDesktop.App.WindowsForms" />
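Taken together, the three changes amount to edits along these lines (a sketch only — the property group shown is for a single configuration/platform, and the exact condition strings and layout depend on your project file):

```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
  <CLRSupport>NetCore</CLRSupport>
  <TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>
<ItemGroup>
  <FrameworkReference Include="Microsoft.WindowsDesktop.App.WindowsForms" />
</ItemGroup>
```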
With those changes made, the C++/CLI project will build successfully targeting .NET Core. If you’re using the latest version of Visual Studio 2019 (16.5 or 16.6 preview 1), everything should work at runtime, too, and the migration is done!
Prior to Visual Studio 2019 16.5 preview 2, C++/CLI libraries didn’t generate the .runtimeconfig.json file necessary for C++/CLI libraries to indicate which version of .NET Core they use, so it had to be added manually. So, if you’re using an older version of Visual Studio, you’ll need to create this CppCliInterop.runtimeconfig.json file manually and make sure it’s copied to the output directory:
```json
{
  "runtimeOptions": {
    "tfm": "netcoreapp3.1",
    "framework": {
      "name": "Microsoft.WindowsDesktop.App",
      "version": "3.1.0"
    }
  }
}
```
The app can now run on .NET Core! A migrated version of the source is available in the NetCore branch in the sample’s GitHub repository. Here’s the Windows form running in front of the loaded modules showing coreclr.dll loaded.
## Building without MSBuild
Migrating this sample app to .NET Core was simply a matter of updating the project file to target .NET Core instead of .NET Framework. If you need to build C++/CLI assemblies directly with cl.exe and link.exe, that’s supported, too. The necessary steps are:
1. Use /clr:netcore in place of /clr when calling cl.exe.
2. Reference necessary .NET Core reference assemblies using /FU (.NET Core reference assemblies are typically installed under %ProgramFiles%\dotnet\packs\<SDK>\<Version>\ref).
3. When linking, include the .NET Core app host directory as a LibPath (the .NET Core app host files are typically installed under %ProgramFiles%\dotnet\packs\Microsoft.NETCore.App.Host.win-x64\<Version>\runtime\win-x64\native).
4. Make sure that ijwhost.dll (which is needed to start the .NET Core runtime) is copied locally from the .NET Core app host location. MSBuild does this automatically if building a vcxproj project.
5. Create a .runtimeconfig.json file, as discussed previously.
## A few caveats
As you can see, with Visual Studio 2019 and .NET Core 3.1, targeting .NET Core with C++/CLI projects is easy. There are a few C++/CLI limitations to look out for, though.
1. C++/CLI support is Windows only, even when running on .NET Core. If you need interoperability cross-platform, use platform invokes.
2. C++/CLI projects cannot target .NET Standard – only .NET Core or .NET Framework – and multi-targeting isn’t supported, so building a library that will be used by both .NET Framework and .NET Core callers will require two project files.
3. If a project uses APIs that aren’t available in .NET Core, those calls will need to be updated to .NET Core alternatives. The .NET Portability Analyzer can help to find any Framework dependencies that won’t work on .NET Core.
## Wrap-up and resources
Hopefully this sample shows how to take advantage of the new functionality in Visual Studio 2019 and .NET Core 3.1 to migrate C++/CLI projects to .NET Core. The following links may be useful for further reading. | 2023-02-03 03:10:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25135499238967896, "perplexity": 4720.965825249284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500042.8/warc/CC-MAIN-20230203024018-20230203054018-00011.warc.gz"} |
https://www.shaalaa.com/question-bank-solutions/the-filament-lamp-80-cm-screen-converging-lens-forms-image-it-screen-magnified-three-times-find-distance-lens-filament-focal-length-lens-linear-magnification-m-due-to-spherical-mirrors_27397 | # The Filament of a Lamp is 80 Cm from a Screen and a Converging Lens Forms an Image of It on a Screen, Magnified Three Times. Find the Distance of the Lens from the Filament and the Foca - Science
The filament of a lamp is 80 cm from a screen and a converging lens forms an image of it on a screen, magnified three times. Find the distance of the lens from the filament and the focal length of the lens.
#### Solution
Here, filament of the lamp acts as an object.
v = Image distance
and u = Object distance
According to the question:
v + u = 80 (taking magnitude only) ...(i)
and magnification m = v/u = 3
or, v = 3 u
or, v - 3u = 0 ...(ii)
Solving (i) and (ii), we get:
u = 20 cm
For the focal length of the lens, we have:
Distance of lens from the filament, u = -20 cm
v = 3 × u = 3 × 20 = 60 cm
Applying the lens formula, we get:

1/v - 1/u = 1/f

1/60 + 1/20 = 1/f

(1 + 3)/60 = 1/f

1/f = 4/60

f = 15 cm
So, focal length f = 15 cm
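The arithmetic above can be verified in a few lines (a sketch; the object distance enters with the −20 cm sign convention used in the solution):

```python
# Solve u + v = 80 and v = 3u, then apply the lens formula 1/f = 1/v - 1/u
# with the object distance taken as -u (sign convention).
u = 80 / (1 + 3)            # 20.0 cm (magnitude)
v = 3 * u                   # 60.0 cm
f = 1 / (1 / v - 1 / (-u))  # 1/60 + 1/20 = 4/60 = 1/15
print(u, v, round(f, 6))    # 20.0 60.0 15.0
```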
#### APPEARS IN
Lakhmir Singh Class 10 Physics (Science)
Chapter 5 Refraction of Light
Q 19 | Page 247 | 2021-03-01 15:32:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3454870283603668, "perplexity": 2342.7852499003125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178362741.28/warc/CC-MAIN-20210301151825-20210301181825-00288.warc.gz"} |
https://www.math-only-math.com/circle-formulae.html | # Circle Formulae
Circle formulae will help us to solve different types of problems on circle in co-ordinate geometry.
(i) The equation of a circle with centre at (h, k) and radius equals to ‘a’ units is (x - h)$$^{2}$$ + (y - k)$$^{2}$$ = a$$^{2}$$.
(ii) The general form of the equation of a circle is x$$^{2}$$ + y$$^{2}$$ + 2gx + 2fy + c = 0, where the co-ordinates of the centre are (-g, -f) and radius = $$\mathrm{\sqrt{g^{2} + f^{2} - c}}$$ units.
(iii) The equation of a circle with centre at the origin O and radius equals to ‘a’ is x$$^{2}$$ + y$$^{2}$$ = a$$^{2}$$
(iv) The parametric form of the equation of the circle x$$^{2}$$ + y$$^{2}$$ = r$$^{2}$$ is x = r cos θ, y = r sin θ.
(v) The general second degree equation in x and y (ax$$^{2}$$ + 2hxy + by$$^{2}$$ + 2gx + 2fy + c = 0) represents a circle if coefficient of x$$^{2}$$ (i.e., a) = coefficient of y$$^{2}$$ (i.e., b) and coefficient of xy (i.e., h) = 0.

(vi) The equation of the circle drawn on the straight line joining two given points (x$$_{1}$$, y$$_{1}$$) and (x$$_{2}$$, y$$_{2}$$) as diameter is (x - x$$_{1}$$)(x - x$$_{2}$$) + (y - y$$_{1}$$)(y - y$$_{2}$$) = 0

(vii) A point (x$$_{1}$$, y$$_{1}$$) lies outside, on or inside a circle S = x$$^{2}$$ + y$$^{2}$$ + 2gx + 2fy + c = 0 according as S$$_{1}$$ > 0, = 0 or < 0, where S$$_{1}$$ = x$$_{1}$$$$^{2}$$ + y$$_{1}$$$$^{2}$$ + 2gx$$_{1}$$ + 2fy$$_{1}$$ + c.

(viii) The equation of the common chord of the intersecting circles x$$^{2}$$ + y$$^{2}$$ + 2g$$_{1}$$x + 2f$$_{1}$$y + c$$_{1}$$ = 0 and x$$^{2}$$ + y$$^{2}$$ + 2g$$_{2}$$x + 2f$$_{2}$$y + c$$_{2}$$ = 0 is 2(g$$_{1}$$ - g$$_{2}$$) x + 2(f$$_{1}$$ - f$$_{2}$$) y + c$$_{1}$$ - c$$_{2}$$ = 0.

(ix) The equation of any circle through the points of intersection of the circles x$$^{2}$$ + y$$^{2}$$ + 2g$$_{1}$$x + 2f$$_{1}$$y + c$$_{1}$$ = 0 and x$$^{2}$$ + y$$^{2}$$ + 2g$$_{2}$$x + 2f$$_{2}$$y + c$$_{2}$$ = 0 is x$$^{2}$$ + y$$^{2}$$ + 2g$$_{1}$$ x + 2f$$_{1}$$y + c$$_{1}$$ + k (x$$^{2}$$ + y$$^{2}$$ + 2g$$_{2}$$x + 2f$$_{2}$$y + c$$_{2}$$) = 0 (k ≠ -1).

(x) The equation of a circle concentric with the circle x$$^{2}$$ + y$$^{2}$$ + 2gx + 2fy + c = 0 is x$$^{2}$$ + y$$^{2}$$ + 2gx + 2fy + c' = 0.

(xi) The lengths of intercepts made by the circle x$$^{2}$$ + y$$^{2}$$ + 2gx + 2fy + c = 0 with X and Y axes are 2$$\mathrm{\sqrt{g^{2} - c}}$$ and 2$$\mathrm{\sqrt{f^{2} - c}}$$ respectively.
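As a quick sanity check of formula (ii), expand a known circle and recover its centre and radius (an illustrative sketch):

```python
import math

# Formula (ii): for x^2 + y^2 + 2gx + 2fy + c = 0 the centre is (-g, -f)
# and the radius is sqrt(g^2 + f^2 - c).
def centre_radius(g, f, c):
    return (-g, -f), math.sqrt(g**2 + f**2 - c)

# (x - 1)^2 + (y + 2)^2 = 9 expands to x^2 + y^2 - 2x + 4y - 4 = 0,
# i.e. g = -1, f = 2, c = -4:
print(centre_radius(-1, 2, -4))   # ((1, -2), 3.0)
```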
https://wiki.ubc.ca/Simple_Example_of_Derivatives | # Simple Example of Derivatives
## Implicit Differentiation (An example)
Q Differentiate $y=\arccos(2t/(1+t^{2}))$

A A trick for this question is to rewrite it as $\cos(y)=2t/(1+t^{2})$. Implicit differentiation then gives $-\sin(y)\,y'=(-2t^{2}+2)/(1+t^{2})^{2}$. Since $y=\arccos(\cdot)$ lies in $[0,\pi]$, we have $\sin(y)\geq 0$, so $\sin(y)=|1-t^{2}|/(1+t^{2})$. (There are two ways to get this. One is the identity $\sin^{2}y+\cos^{2}y=1$; the other is to draw a right-angled triangle with angle $y$, adjacent side $2t$ and hypotenuse $1+t^{2}$ — its opposite side is then $|1-t^{2}|$, and $\sin(y)=\text{opp}/\text{hyp}$.) Going back to the derivative and isolating $y'$ gives $y'=-2/(1+t^{2})$ for $|t|<1$ (and $y'=2/(1+t^{2})$ for $|t|>1$).
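A quick numerical check of this answer for |t| < 1 (an illustrative sketch using a central difference):

```python
import math

def y(t):
    return math.acos(2*t / (1 + t*t))

t, h = 0.3, 1e-6
numeric = (y(t + h) - y(t - h)) / (2 * h)   # central-difference derivative
exact = -2 / (1 + t*t)                      # claimed answer for |t| < 1
print(numeric, exact)
```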
Q Difference between secant and a tangent:
A A secant is any line passing through two points of a graph; it intersects the graph in at least two points. When the two points get closer and closer to each other, the secant becomes a tangent to the graph. One particular application is finding the derivative, i.e. the slope of the tangent, at a certain point: first find the slope of the secant through $(x, f(x))$ and $(x+h, f(x+h))$, and then take the limit $h \to 0$.
Q Find the derivative of $f(x)=x/(1+2x)$ using the definition of the derivative (limits)

A $f'(x)=\lim_{h\to 0}\frac{1}{h}\left(\frac{x+h}{1+2(x+h)}-\frac{x}{1+2x}\right)=\lim_{h\to 0}\frac{h}{h\,(1+2x)(1+2(x+h))}=\frac{1}{(1+2x)^{2}}$
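The same result can be sanity-checked numerically (an illustrative sketch):

```python
def f(x):
    return x / (1 + 2*x)

x0, h = 1.0, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central-difference derivative
exact = 1 / (1 + 2*x0)**2                    # 1/(1+2x)^2 at x = 1
print(numeric, exact)
```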
Q Find the tangent to the graph of f(x) at x = a

A The slope of the tangent is $f'(a)$; the equation of the tangent line is $y-f(a)=f'(a)(x-a)$ [using the formula $y-y_{1}=m(x-x_{1})$]
Q Find the derivative of $f(x)=2^{ex}e^{2x}$

A Using the product rule we can write $f'(x)=2^{ex}(e^{2x})'+(2^{ex})'e^{2x}$.

The derivative of $e^{2x}$ is $2e^{2x}$ by the chain rule.

The derivative of $2^{ex}$ is $e\ln(2)\,2^{ex}$ by the chain rule.

So $f'(x)=2^{ex}\cdot 2e^{2x}+e\ln(2)\,2^{ex}e^{2x}$

Hint for the derivative of $2^{ex}$: the derivative of $a^{x}$ is $a^{x}\ln a$ by the chain rule, using the identity $a^{x}=e^{x\ln a}$. The extra factor $e$ comes from differentiating $ex$.
COMMENT Revenue=Price*Number of Sales | 2022-01-28 11:17:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 31, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.915579617023468, "perplexity": 261.2536858761252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305494.6/warc/CC-MAIN-20220128104113-20220128134113-00539.warc.gz"} |
https://forum.azimuthproject.org/discussion/1514/chaos-considered-harmful | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# Chaos considered harmful ?
An event going on called the Rotman Institute Conference on Knowledge and Model in Climate Change http://www.rotman.uwo.ca/videos/
Very interesting reading some of the tweets:
Steve Easterbrook @SMEasterbrook : Fleming: The butterfly effect is misnamed. Lorenz knew the perturbation would have to be really big. Better label: Mothra effect #Rotman2014
Gavin Schmidt @ClimateOfGavin : @smeasterbrook not sure I actually agree with this though. In GCMs smallest possible changes have same effect.
My take is that this speaker Jim Fleming suggested the idea that the chaotic models of climate as originally proposed by Edward Lorenz are not as chaotic as people think. Easterbrook interpreted that by stating that a butterfly was too weak a forcing to be able to change anything, and something more akin to Mothra (a gigantic SciFi moth) was needed to change the trajectory of climate.
I think that there are probably a couple of scales that we need to consider. Events such as hurricanes are likely unpredictable, but they are really inconsequential when compared to the largely deterministic trajectories of phenomena such as ENSO. Same with CO2, as that is a Godzilla of a forcing.
Gavin Schmidt on now!
1.
Gavin Schmidt said this during his talk with regard to natural variability and variance in climate models
"No climate model can be true."
"No physical model of the real world can be true."
(Paraphrased from memory)
Then he rhetorically asked:

"Why do climate modelers pursue this endless task of increasing complexity, instead of simple energy balance models?"
The answer he gave is because simple models do not include all the detailed factors.
" El Ninos and La Ninos are random. ... ENSO can not be predicted more than 6 months in advance."
Which indicates just how challenging the El Nino prediction project is.
During the Q&A he answered a question with this statement:
"If you were to try to 'game' the system, you would fail miserably"
He also said that energy balance models are much more stable over time in comparison to GCM's, which evolve rapidly over generations.
Also that credibility of models is very important in that they should be able to hindcast to prove their value.
There were a grand total of 12 viewers of the streaming video at the end !
Lots of red meat for Azimuthers to chew on.
2.
edited October 2014
I'm glad I looked at the tweet log as I hadn't come across this ENSO paper.
Wittenberg et al., ENSO Modulation: Is It Decadally Predictable? (2014)
The abstract says:
These 40-member reforecast ensembles display potential predictability of the ENSO trajectory, extending up to several years ahead. However, no decadal-scale predictability of ENSO behavior is found. This indicates that multidecadal epochs of extreme ENSO behavior can arise not only intrinsically but also delicately and entirely at random. Previous work had shown that CM2.1 generates strong, reasonably realistic, decadally predictable high-latitude climate signals, as well as tropical and extratropical decadal signals that interact with ENSO. However, those slow variations appear not to lend significant decadal predictability to this model’s ENSO behavior, at least in the absence of external forcings.
What on earth does "delicately" mean? Are they saying both that high-latitude phenomena interact with ENSO and have detectable signals but have no predictive power?
It would be interesting to know what Gavin Schmidt made of the Ludescher et al. paper as he's just tweeted that ENSO is not predictable more than 6 months in advance.
3.
Jim, I think "delicate" may mean sensitive to initial conditions and parameters, likely in the sense of a butterfly flapping its wings -- which is what they are debating in the twitter-storm.
I am still amazed by how much the QBO controls the peaks of ENSO, which makes it a forced stimulus. In contrast, Wittenberg et al think it is all intrinsic, and so presumably emergent.
This is definitely a major difference in modeling of ENSO.
4.
Jim, I emailed you the English text; no images or pdf found.
Basically I cannot follow the paper, I assume he is running a simulator simulating a model. I also cannot see how the results of the paper could be duplicated. I cannot infer any of the conclusions.
If the cornerstone of scientific investigation is ability to duplicate a result from a set of experiments (or data), no science was endeavoured.
Dara
5.
Thanks Dara,
Now I'm just short of the supplementary info for the 2 AMU papers Nathan cited.
6.
Based on what we are trying to do with the El Nino project, if someone says that a behavior occurs "entirely at random" it means that they have essentially punted on trying to predict anything with other than a broad probability measure.
So we go with a causal and more-or-less deterministic mechanism and see how far it will take us. It may work, it may not, but it is worth the risk.
7.
edited October 2014
Based on what we are trying to do with the El Nino project, if someone says that a behavior occurs “entirely at random” it means that they have essentially punted on trying to predict anything with other than a broad probability measure
I disagree Paul, people say those things when they do not know how to solve a complex problem. His comment: entirely at random is what my mother says when I visit her in Toronto and she complains about the weather changing so fast! In other words that is a layman's term.
Paul we are wasting our time reading bogus paper after bogus paper filled with non-technical verbiage. What you are doing is sound, code algorithms plots numerical analysis... and hopefully our fearless leader soon has more time to give us some theoretical framework to start formulating and solving some problems that relate to planetary climate.
Dara
8.
I disagree Paul, people say those things when they do not know how to solve a complex problem.
Dara, That is close to what I meant when I said "punted". In the game of football, when you punt you are giving up. To punt on first down, you are really giving up on solving the problem without even trying. To be charitable, and w/o reading the paper, they probably at least tried.
9.
Paul sorry heh heh heh.
I do not know what is the value for publishing a paper when nothing new is done, no positive results and whatever the author says cannot be duplicated nor verified.
These are superstitions about climate.
Dara
10.
I call these types of arguments with nothing to really back them up [just-so stories](http://en.wikipedia.org/wiki/Just-so_story). One can elevate them to a hypothesis, but unless someone has a way to verify them, they are kind of worthless.
There is a discussion going on at [WUWT](http://wattsupwiththat.com/2014/10/26/yes-virginia-and-everyone-else-there-is-an-el-nio-coming/) today, started by meteorologist Joe Bastardi. Everyone is chipping in with their own opinion on whether the Kelvin wave is fast enough, or whether the warm water is upwelling, or whether the winds are strong enough, etc, to generate an El Nino.
Yet, unless someone has a mathematical model that ties the pieces together, it is just armchair quarterbacking. The grizzled weather guys, such as Bastardi, have some intuition based on how often they study the data, but --- because they can't express their insight in terms of an algorithm --- it is kind of a useless exercise. They take a risk in predicting an El Nino, because they know that if they get it right, their consulting services will become much more valuable the next time around.
Consider that the famed hurricane predictor, [William Gray](http://en.wikipedia.org/wiki/William_M._Gray), was one of the first to make the connection between QBO and ENSO. Yet, after all this time, he never came up with any type of algorithm. Gray is now a global warming skeptic. Neat if we can use the QBO as part of an algorithm to predict ENSO. It gives one extra motivation :)
11.
With the advent of GPM and TRMM satellites networks and the vast volumetric data collected in real-time from atmosphere and upper atmosphere, these guys are all already laid off persona non grata, the real deal will be the algorithms, software and parallelization and we all agree to coronate John as our king to lead us to sound theories and mathematical framework so we could develop code against.
Dara
| 2021-04-11 07:43:29 |
https://competitive-exam.in/questions/discuss/a-box-contains-20-electric-bulbs-out-of-which-4 | # A box contains 20 electric bulbs, out of which 4 are defective. Two bulbs are chosen at random from this box. The probability that at least one of these is defective is
$\frac{7}{19}$
$\frac{6}{19}$
$\frac{5}{19}$
$\frac{4}{19}$
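A quick way to check the options is to compute the complement -- the chance that neither chosen bulb is defective. A sketch in Python (standard library only; `math.comb` requires Python 3.8+):

```python
from fractions import Fraction
from math import comb

# 16 of the 20 bulbs are good; we choose 2 of the 20 at random.
p_none_defective = Fraction(comb(16, 2), comb(20, 2))  # 120/190 = 12/19
p_at_least_one = 1 - p_none_defective
print(p_at_least_one)  # 7/19
```

So the first listed option, $\frac{7}{19}$, is the correct answer.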
| 2020-09-25 13:37:14 |
https://planetmath.org/Disjunction | # disjunction
A disjunction is true if either of its parameters (called disjuncts) is true. Disjunction does not always correspond to "or" in English, since the English "or" is often exclusive (see exclusive or). Disjunction uses the symbol $\lor$ or sometimes $+$ when taken in algebraic context. Hence, disjunction of $a$ and $b$ would be written
$a\lor b$
or
$a+b$
The truth table for disjunction is
| $a$ | $b$ | $a\lor b$ |
| --- | --- | --------- |
| F | F | F |
| F | T | T |
| T | F | T |
| T | T | T |
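The table can be verified mechanically; a small sketch using Python's built-in `or` on booleans:

```python
# Enumerate all four rows of the truth table for inclusive disjunction.
rows = [(a, b, a or b) for a in (False, True) for b in (False, True)]
for a, b, d in rows:
    print(a, b, d)
# The disjunction is False only in the single row where both disjuncts are False.
```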
Title: disjunction (canonical name: Disjunction). Created / last modified: 2013-03-22 11:54:40 by akrowne (2). Numerical id: 14. Entry type: Definition. Classification: msc 03B05. Synonyms: logical or, disjunctive truth function. Related: Conjunction, PropositionalLogic. | 2019-09-23 18:21:02 |
https://mathoverflow.net/questions/218457/lebesgue-differentiation-theorem-holds-on-locally-doubling-space | # Lebesgue differentiation theorem holds on locally doubling space?
It's known that for a metric space with doubling measure $(X,\mu)$, the Lebesgue differentiation theorem holds, i.e. if $f:X\to \mathbb{R}$ is a locally integrable function, then $\mu$-a.e. points are Lebesgue points.
Can we relax the doubling condition to local doubling or local uniformly doubling?
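For reference, since the thread doesn't spell it out: $x$ is a Lebesgue point of $f$ when the averaged oscillation of $f$ around $x$ vanishes,

$$\lim_{r \to 0} \frac{1}{\mu(B(x,r))} \int_{B(x,r)} |f(y)-f(x)| \, d\mu(y) = 0.$$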
• I would guess so, since Lebesgue differentiation theorem is essentially local in nature. – leo monsaingeon Sep 17 '15 at 8:56
The theorem still holds under the weaker, pointwise asymptotic doubling condition
$$\limsup_{r \to 0} \frac{\mu(B(x,2r))}{\mu(B(x,r))}<\infty \ \text{for}\ \mu\text{-a.e.}\ x \in X.$$
This is done in Section 3.4 of the book "Sobolev Spaces on Metric Measure Spaces" by Heinonen, Koskela, Shanmugalingam, and Tyson. | 2021-03-09 00:37:51 |
https://smoothiex12.blogspot.com/2021/11/salvo-warfare-i.html | ## Sunday, November 21, 2021
### Salvo Warfare-I.
People who think that I am on some sort of a crusade against political "science" or what passes today for "history", or rather what it becomes once some "historian" begins to offer "the range of interpretations"... they are absolutely right. These two fields of human "academic" activity--and this is not my definition, many other people used it way before me and continue to use it--are the fields in which credentials are bestowed primarily upon interpretations and personal (however "justified" with sources) opinions. But in history, at least, there is some inherent knowable truth which can be found once layer upon layer of "interpretations" is peeled off, especially when it is done by professionals who know the subject which constitutes this layer. This is not the case with political "science", which for the last decades has produced a deluge of BS and failed to predict just about anything.
It is not surprising. Just take a look at the political "science" courses, say at Columbia University, and you will find there a hodgepodge collection of mostly "current events" theoretical BS which anyone with an IQ higher than room temperature can get from the media. Here is one "unit" which has some relevance to the real world: DATA ANALYSIS & STATS-POL RES.
This course examines the basic methods of data analysis and statistics that political scientists use in quantitative research that attempts to make causal inferences about how the political world works. The same methods apply to other kinds of problems about cause and effect relationships more generally. The course will provide students with extensive experience in analyzing data and in writing (and thus reading) research papers about testable theories and hypotheses. It will cover basic data analysis and statistical methods, from univariate and bivariate descriptive and inferential statistics through multivariate regression analysis. Computer applications will be emphasized. The course will focus largely on observational data used in cross-sectional statistical analysis, but it will consider issues of research design more broadly as well. It will assume that students have no mathematical background beyond high school algebra and no experience using computers for data analysis.
As you can see yourself--they give them very basic math, which later resurfaces, buried in a pile of purely story-telling topics such as "ISRAELI NATIONAL SECURITY STRATEGY, POLICY AND DECISION MAKING" and--you have guessed it--Game Theory. Among all this disjoint collection of "stories" about politics the most remarkable is this: THEORIES OF WAR AND PEACE.
In this course we undertake a comprehensive review of the literature on the causes of war and the conditions of peace, with a primary focus on interstate war. We focus primarily on theory and empirical research in political science but give some attention to work in other disciplines. We examine the leading theories, their key concepts and causal variables, the causal paths leading to war or to peace, and the conditions under which various outcomes are most likely to occur. We also give some attention to the degree of empirical support for various theories and hypotheses, and we look at some of the major empirical research programs on the origins and expansion of war. Our survey includes research utilizing qualitative methods, large-N quantitative methods, formal modeling, and experimental approaches. We also give considerable attention to methodological questions relating to epistemology and research design. Our primary focus, however, is on the logical coherence and analytic limitations of the theories and the kinds of research designs that might be useful in testing them. This course is designed primarily for graduate students who want to understand and contribute to the theoretical and empirical literature in political science on war, peace, and security. Students with different interests and students from other departments can also benefit from the seminar and are also welcome. Ideally, members of the seminar will have some familiarity with basic issues in international relations theory, philosophy of science, research design, and statistical methods.
Wow! So, as you can see yourself, it is a feeble attempt to provide some degree of legitimacy for political "science" graduates' opinions on war by skipping every single subject which constitutes the foundation of modern war and, as I am on record ad nauseam here, that is higher math, physics and fundamental engineering and military courses ranging from radio-electronics to weapon systems integration, to combat applications, tactics, operational art and research and many other things of which political "scientists" have never heard, not to mention have no clue that such subjects even exist. How about the theory of survivability of the ship or the structure of combat communication networks? Don't hold your breath. Those graduates get a glimpse of the Theory of Operations through some statistical methods and basic probability course, and then move on to study what anyone with half a brain can read up on the internet in several hours.
Yet, take three guesses who dominates in the modern West (especially in the US) the "discussion" on crucial issues of war and peace, military strategies and geopolitics? You bet, political "scientists" who, as I often put it, will not know the difference between LGBT and BTG, which is a Battalion (or Brigade) Tactical Group. These are the people who not only continue to spread mostly incompetent, sophomoric BS on warfare, they are the MAIN force shaping the discussion in the US on geopolitics and strategy, while having zero competencies in what defines humanity's main tool of group-against-group survival, which is a group's power (capability) and warfare. You can also bet your ass on the fact that this contingent of institutionalized ignoramuses, together with lawyers, constitutes the main body of the US legislature and government officials. Recall the utterly embarrassing failure of all those "scientists" in 2016. How did your political "science" and "statistics" work out, eh?
And here is the main point--modern warfare is complex. Extremely complex. By modern I mean the already highly motorized warfare of WW II, with massive mechanized armies supported by vast combat aviation fleets, and massive naval armadas equipped with radar and sonar clashing on different theaters of operations, producing not only catastrophic destruction and human losses but gigantic volumes of combat data and correlates, which not only contributed immensely to the development of tactical and operational models but accelerated the technological development of war and its deadly instruments to a breakneck speed. In 1942 a graduate of a Soviet high school or lower college could get into an accelerated artillery officer program, complete it in a few months and be sent to the front line to face the Wehrmacht and its panzers. In 1985 the study of a missile-artillery officer in an academy (officer school) would take a full 5 years (6 academic years), with a graduate degree in engineering and an undergraduate one in military science, and would involve the study of weapon systems of immense complexity. The same was and is true for naval and air force officers.
Today, the same is done at immensely complex, state-of-the-art academic facilities which unify in themselves the latest in weapon systems of immense power even with conventional explosives, and warfare today is defined by extremely complex combat networks, computers, some really mind-boggling sensors, instant propagation of information, neural networks, robotics, and materials which even 20 years ago seemed inconceivable. Ranges of even what would have been considered tactical weapons 40 years ago grew into thousands of kilometers, decision making is assisted by AI elements, and battle management systems provide not only sensor fusion but probabilistic analysis of operations. How do you fight such a war? By studying the politics of Japan and basic Game Theory with Statistics? Of course not. Political "science" is simply outclassed by several orders of magnitude by warfare and its instruments, and applying "the lessons" of history to modern war and geopolitics is a fool's errand, because at the Battle of Lepanto they didn't have to resolve the issue of uncertainties when developing a firing solution for a long-range supersonic missile salvo against a Carrier Battle Group at 500 kilometers.
So, tactics and operations take the front seats, and this is what constitutes the most important element of modern-day geopolitics as we observe it through the lens of actions by nation-states or their alliances, manifested in statements by leaders of states, their ministers of foreign affairs, parliaments and the--99% full of shit--media. It all rests on military-economic power, period. The rest is merely an addition or iteration of what is known as the Composite Index of National Capability, and if I can build a better weapon and kill you with less damage to myself--this is exactly what constitutes real national power and, as Deng Xiaoping used to play with Clausewitz's famous dictum: "Diplomacy is a continuation of war by other means." You either have a weapon or you become an object (not a subject) of history, and your only hope is that you don't become a meal for a hungry, aggressive superpower.
The late, legendary Captain Wayne Hughes understood it clearly and developed an incredible sense for both strategy and the evolution of weapons. Not surprisingly, Hughes was a graduate of the US Naval Academy in 1952, in the times of naval titans of the scale of Chester Nimitz, Arleigh Burke and, inevitably, later Elmo Zumwalt, who, unlike many of his contemporaries, recognized the changing nature of (naval) warfare and was a keen observer of Admiral Gorshkov, who built the Soviet Navy around missile weapon systems--and that changed everything. Hughes wrote extensively about it and applied his very own Salvo Model to a new paradigm of naval (and not only naval) warfare, which he presented to a wider public in his famous treatise.
Salvo Equations, unlike the Osipov-Lanchester model, deal with discrete values--that is, things which you can actually count in an exchange; they are not continuous. Take, for example, an infantry battalion under an artillery barrage, where it is possible but extremely difficult to model losses, because not only could the barrage be continuous (for an hour, with an unknown number of shells), but losses depend dramatically on the design of defensive positions capable of taking some degree of damage and thus saving lives. In the end, even the awareness and running skills of soldiers could be a factor in such an ordeal, which makes it extremely difficult to predict. Here is what Lanchester's model looks like for combat:
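The set of equations referred to here did not survive as text; a plausible reconstruction, written to match the coefficient descriptions in the next paragraph, is the generalized Osipov-Lanchester system:

$$\frac{dA}{dt} = -aA - bAB - cB + d$$

$$\frac{dB}{dt} = -eB - fAB - gA + h$$

Here the $aA$ and $eB$ terms are non-combat losses, the $bAB$ and $fAB$ terms are losses to fire on areas, the $cB$ and $gA$ terms are losses to aimed fire at the line of contact, and $d$ and $h$ are reserves.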
This is not a nice-looking set of differential equations: coefficients a and e define the rate of NON-combat losses, b and f define the rate of losses due to fire impact on areas, c and g define the rate of losses at the front line (immediate contact), and d and h are the numbers of arriving or withdrawing reserves. You see, a hot mess. And then, of course, you cannot shoot down every single artillery shell. Not so with missiles, which you can shoot down and which are discrete by their very nature, as are ships. If you have 5 ships in your task group--that is it. This is 5 ships, and that is what gives the Salvo Model an elegant and easily understood form. Not in embellished form, I underscore. Embellished Salvo Equations are a bit of a different animal and require a serious understanding of weapons, but I will touch upon those later. Here is the basic Salvo Model for two hostile forces (fleets) A and B.
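The salvo equations themselves also did not survive as text. In Hughes' standard notation, consistent with the worked example below, the basic model for forces A and B reads:

$$\Delta A = \frac{\beta B - a_3 A}{a_1}, \qquad \Delta B = \frac{\alpha A - b_3 B}{b_1}$$

where $\alpha$ and $\beta$ are the well-aimed missiles fired per ship of A and B, $a_3$ and $b_3$ are the missiles each defending ship of A and B can intercept, and $a_1$ and $b_1$ are the hits needed to put one ship of A or B out of action (staying power).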
The beauty of this model is in the fact that it is not necessarily just a naval one. Missile exchange exists not only at sea and between fleets. One can apply this model to an exchange between a defended base and the combat air component attacking it. It is a classic missile exchange between discrete forces. We also will look into that, but for now it is clear that at this level of the basic Salvo Equations one can easily "play" with them based on some assumptions and get a feel for how they work. Mathematically it is very simple, and I did present some examples before elsewhere, but let's play a bit. But to cool down the enthusiasm stirred by the seemingly simple math in this model: the math behind it is actually quite complex, the salvo model is basically the tip of the iceberg, and its application requires serious tactical and operational (and engineering) knowledge which, of course, is beyond the grasp of political "scientists".
Here is a simple illustration of the damage analysis for a ship. In any naval academy the course on Theory of Ship Design (Construction) and Survivability is a full two-year course, which involves not only a truckload of math and naval architecture but such earthly and prole things as closing damn holes in the hull under the attack of incoming, roaring water, or extinguishing fires while shit around you explodes and burns--believe me, this is not fun. It looks good only in the movies.
Here is a general solution for a salvo by a submarine:
Or here is the maneuvering board for taking a salvo position:
So, this is just a minuscule part of what is needed to fully grasp what the Salvo Model is all about, not to speak of the Embellished Salvo Equations. So, don't get cocky just yet. You will have the chance to get cocky once you follow my blog and, of course (those well-off among you), support me on Patreon.
Now to basic play. Let's assume that two forces A and B have an equal number of ships, say 5. Thus: A=5 and B=5. Let's continue with other coefficients. Say force A's $\alpha=3$ and force B's $\beta=4$ -- the number of well-aimed missiles each ship fires in a salvo. Now, the following: $a_1$ and $b_1$ are what is generally known as staying power (omega) -- the number of missile hits needed to put one ship of A or B out of action. Say $a_1=2$ and $b_1=1$ (I use deliberately whole numbers to simplify the task), and now to $a_3$, which is, basically, the effectiveness of A's air defense, that is, the number of missiles fired by B destroyed by A's air defense, per defending ship. So, say $a_3=2$, and for B we say the same effectiveness: $b_3=2$. Now we are ready to calculate. Let's start with A's losses:

$$\Delta A = \frac{\beta B - a_3 A}{a_1} = \frac{4\cdot 5 - 2\cdot 5}{2} = 5$$

As you can see yourself, A doesn't fare that well--it gets completely destroyed, and loses all 5 ships. But what about B?

$$\Delta B = \frac{\alpha A - b_3 B}{b_1} = \frac{3\cdot 5 - 2\cdot 5}{1} = 5$$

As you can see, B didn't fare much better and got its ass handed to it by A. So, the two task groups basically sank each other. Of course, this is a completely unrealistic scenario, but it showed that B, having much less "resistance" to being taken out by enemy missiles (only one hit per each B ship), couldn't capitalize on its advantage in the number of missiles it had over A. Force A ships simply could absorb more battle damage. If only B had better air defense or could absorb more damage. Should $a_1=2$ and $b_1=2$, that is, being the same, force B would have won this exchange over A and would have retained 2.5 ships afloat. We round it and it is 3 ships--this is victory, a bloody one, but victory nonetheless. So, here we are, with some example of how simple this basic model is. But, of course, as you may have guessed already, the devil is in those pesky details which define modern missile combat, and that is a hell of a topic, which I intend on discussing...
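The exchange described above is easy to script. A small helper of my own (the function name and the clamping to force size are my additions, not from the post):

```python
def salvo_losses(A, B, alpha, beta, a1, b1, a3, b3):
    """Hughes' basic salvo model: ships lost by A and B in one exchange.

    alpha, beta: well-aimed missiles fired per ship of A / B
    a3, b3:      missiles intercepted per defending ship of A / B
    a1, b1:      hits needed to put one ship of A / B out of action
    """
    loss_A = (beta * B - a3 * A) / a1
    loss_B = (alpha * A - b3 * B) / b1
    # A force cannot lose fewer than 0 ships, or more ships than it has.
    return (max(0.0, min(loss_A, A)), max(0.0, min(loss_B, B)))

# Scenario from the post: A = B = 5, alpha = 3, beta = 4, a3 = b3 = 2, a1 = 2, b1 = 1.
print(salvo_losses(5, 5, 3, 4, 2, 1, 2, 2))  # (5.0, 5.0) -- mutual destruction

# Raise B's staying power to b1 = 2: B now loses only 2.5 ships and wins.
print(salvo_losses(5, 5, 3, 4, 2, 2, 2, 2))  # (5.0, 2.5)
```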
P.S. If anyone notices some stupid mistakes in the calculations, please inform me--it is evening and even two monitors are not enough for navigating this mumbo-jumbo. Do not forget to support me on Patreon.
To Be Continued... | 2023-02-03 03:50:24 |
https://aezoo.compute.dtu.dk/doku.php?id=adminplayground | # Authenticated Encryption Zoo
## Columns and valid options
In the following we specify the meaning of each column of the table and give what we consider valid options for each column. If you feel that a valid option is missing for a particular column, we encourage you to e-mail aezoo@compute.dtu.dk with your suggestions for changes.
No doubt, opinions vary as to what, e.g., an online cipher is. With our valid options below, we try to capture all definitions, or levels to which a certain property is obtained, allowing for a good comparison of the candidates.
For candidates containing several, say K, parameter sets, and where properties differ across these parameter sets, we suggest comma-separating the properties for each set, such that the i-th option in the comma-separated lists across all columns of the table corresponds to the same parameter set of that particular candidate.
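As an illustration (a hypothetical candidate, not one from the zoo), a submission with two parameter sets K1 and K2 would be entered with aligned comma-separated lists:

```
Type:         AES-based, AES-256-based
Online (E):   Fully, Needs length
Nonce MR:     MAX online, None
```

Reading down the first entries gives K1's profile; the second entries give K2's.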
### Type
Specify the primitive(s) underlying the construction. Valid options are:
• AES-based (assumed AES-128)
• AES-K-based (when K != 128)
• AES[n]-based (where n is any number of rounds deviating from the standard # of rounds for base AES-K (see above). Examples: AES[4] is 4 rounds of AES-128 and AES-256[4] is 4 rounds of AES-256)
• AES-like (based on some modified version of AES)
• Sponge[P] (where P is a named permutation. Can either be part of the submission or existing permutation)
• FSR (based on feedback shift register(s))
• ARX (modular addition, rotation and XOR)
• LRX (logical operations, rotation and XOR)
### Parallelizable (E/D)
Specify separately whether the scheme is parallelizable in encryption (E) and decryption (D). Valid options for both cases are:
• Fully (if there is a separation of the data into b chunks, such that each of these chunks of data can be fully processed independently of the others, allowing for constant overhead)
• Partly (if there is a separation of the data into b chunks, such that each of these chunks of data can partly be processed independently of the others, allowing for constant overhead)
• No (if none of the above apply)
### Online (E/D)
Specify separately whether the scheme is online in encryption (E) and decryption (D). Valid options for both cases are:
• Fully (if the scheme can process data, and output processed data, on-the-fly, using only constant memory, and not needing to know the length of data)
• Needs length (when the above applies, except one needs to know the length of data)
• No
### Nonce MR
State the scheme's resistance towards nonce misuse. Here, the nonce is defined as the tuple consisting of private message number and public message number. Valid options are:
• MAX (leaks only whether a plaintext is repeated)
• MAX online (leaks only the LCP (longest common prefix) of plaintexts)
• LCP+X (leaks LCP and XOR of next plaintext block)
• None (when all security is lost if nonce is repeated)
### Inverse free
State whether the scheme requires the inverse of the underlying primitive. ONLY applicable for block cipher- or permutation-based modes. Valid options are:
• Yes
• No
• N/A (for when not applicable, see above) | 2020-01-26 20:13:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5832542181015015, "perplexity": 3139.154704546452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251690379.95/warc/CC-MAIN-20200126195918-20200126225918-00456.warc.gz"} |
https://egeulgen.github.io/pathfindR/reference/cluster_graph_vis.html | Graph Visualization of Clustered Enriched Terms
cluster_graph_vis(
clu_obj,
kappa_mat,
enrichment_res,
kappa_threshold = 0.35,
use_description = FALSE,
vertex.label.cex = 0.7,
vertex.size.scaling = 2.5
)
## Arguments
clu_obj
clustering result (either a matrix obtained via fuzzy_term_clustering or a vector obtained via hierarchical_term_clustering)
kappa_mat
matrix of kappa statistics (output of create_kappa_matrix)
enrichment_res
data frame of pathfindR enrichment results. Must-have columns are "Term_Description" (if use_description = TRUE) or "ID" (if use_description = FALSE), "Down_regulated", and "Up_regulated". If use_active_snw_genes = TRUE, "non_Signif_Snw_Genes" must also be provided.
kappa_threshold
threshold for kappa statistics, defining strong relation (default = 0.35)
use_description
Boolean argument to indicate whether term descriptions (in the "Term_Description" column) should be used. (default = FALSE)
vertex.label.cex
font size for vertex labels; it is interpreted as a multiplication factor of some device-dependent base font size (default = 0.7)
vertex.size.scaling
scaling factor for the node size (default = 2.5)
## Value
Plots a graph diagram of clustering results. Each node is an enriched term from enrichment_res. Size of node corresponds to -log(lowest_p). Thickness of the edges between nodes correspond to the kappa statistic between the two terms. Color of each node corresponds to distinct clusters. For fuzzy clustering, if a term is in multiple clusters, multiple colors are utilized.
## Examples
if (FALSE) {
cluster_graph_vis(clu_obj, kappa_mat, enrichment_res)
} | 2022-09-28 22:28:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5640547871589661, "perplexity": 11021.077459392345}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00468.warc.gz"} |
https://fizzbuzzer.com/divide-two-integers/ | ### Looking for good programming challenges?
# Divide two integers
Problem statement
Divide two integers without using multiplication, division and mod operator. If it overflows, return MAX_INT.
Solution
Obviously the naive approach to this problem would be to subtract the divisor from the dividend until the dividend becomes less than the divisor, while keeping track of how many times we subtracted.
This approach is not optimal. Imagine having to divide 2147483648 by 1.
This problem can be solved based on the fact that any number can be written in the following form:
$num = a_0 \times 2^0 + a_1 \times 2^1 + a_2 \times 2^2 + \dots + a_n \times 2^n$
So, we will grow our divisor by shifting it one position left at a time until it surpasses the dividend. That is, multiply it by 2, by 4, by 8, etc. We then shift it one position to the right to get the maximum divisor that still fits in our dividend and subtract that from our current dividend. We repeat the process for the rest of our dividend until it becomes smaller than our original divisor.
For example, let dividend = 14 = 1110 and divisor = 1. We start by taking our divisor and shifting it one position to the left, making it divisor = 2, which is still less than or equal to 14. We continue by shifting it another position to the left, making it divisor = 4, still less than 14. We do another shift and make it divisor = 8, which is still less than 14. Finally, we shift it once more, resulting in divisor = 16, which is greater than our dividend = 14. Shifting it one position to the right gives us the largest divisor that fits into 14, which is 8, and we subtract that from our dividend. We keep track of our multiplication factor and add it to our answer – three shifts to the left result in a factor of 8. Our answer becomes 8.
We continue with the rest of the dividend, which is 14 - 8 = 6. We start over with divisor = 1 and shift it one position to the left, making it divisor = 2. We then shift it again, making it divisor = 4, and finally another shift, resulting in divisor = 8, which is larger than 6. Shifting it one position to the right gives us the largest divisor that fits into 6, which is 4, and we subtract that from our dividend. We keep track of our multiplication factor and add it to our answer – two shifts to the left result in a factor of 4. Our answer becomes 12.
We continue with the rest of the dividend, which is 6 - 4 = 2. We start over with divisor = 1 and shift it one position to the left, making it divisor = 2, which still fits. We then shift it again, making it divisor = 4, which is larger than 2. Shifting it one position to the right gives us the largest divisor that fits into 2, which is 2, and we subtract that from our dividend. We keep track of our multiplication factor and add it to our answer – one shift to the left results in a factor of 2. Our answer becomes 14, which is also our final answer.
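The shift-and-subtract procedure just described can be sketched in Python. This is a sketch rather than the post's own code (which isn't shown here); the function name is mine, and the sign/overflow handling follows the problem statement:

```python
def divide(dividend: int, divisor: int) -> int:
    """Integer division using only subtraction and bit shifts."""
    INT_MAX = 2**31 - 1
    # Overflow (or division by zero) -> MAX_INT, per the problem statement.
    if divisor == 0 or (dividend == -2**31 and divisor == -1):
        return INT_MAX
    negative = (dividend < 0) != (divisor < 0)
    dividend, divisor = abs(dividend), abs(divisor)
    quotient = 0
    while dividend >= divisor:
        d, factor = divisor, 1
        # Double d (shift left) until one more doubling would overshoot.
        while (d << 1) <= dividend:
            d <<= 1
            factor <<= 1
        dividend -= d      # subtract the largest shifted divisor that fits
        quotient += factor
    return -quotient if negative else quotient

print(divide(14, 1))  # 14, matching the worked example above
print(divide(14, 3))  # 4
```

Note how the inner loop mirrors the worked example: for 14 / 1 it subtracts 8, then 4, then 2, accumulating 8 + 4 + 2 = 14.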
Full code | 2019-01-23 22:53:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4372958540916443, "perplexity": 255.43531451167675}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584415432.83/warc/CC-MAIN-20190123213748-20190123235748-00526.warc.gz"} |
https://www.gamedev.net/forums/topic/521616-20-to-anyone-who-can-figure-out-whats-wrong-with-this-code/ | Share on other sites
Quote:
Original post by MikeTacular
Quote:
Original post by CodaKiller
Quote:
Original post by MikeTacular
why do you have to change the center of rotation?
When rendering, are you translating before rotating, or rotating before translating? In that picture it looks like you're doing the latter when you want the former. It seems like it should be simple (but maybe I'm missing something):
1) Get the world matrix for each bone
2) Translate to the bone's position
3) Rotate according to the bone's angle
4) Render the stuff connected to that bone
However, if you can get some relative matrix (as opposed to a world matrix) you can multiply the current matrix by that relative matrix and get the new matrix.
P.S. Very nice pictures, it's helping clarify things a lot.
That's just it: it seems simple, but when I actually do it nothing happens. I know it's an error in the way I am calculating it, but I have no idea what's wrong.
EDIT: Wait no, I believe it is offsetting the center of rotation but it's way off from where it should be.
[Edited by - CodaKiller on January 20, 2009 6:14:26 PM]
"D3DXMatrixTranslation( &pos, mesh->bones[i].matrix._41, mesh->bones[i].matrix._42, mesh->bones[i].matrix._43 );"
This provides a ROW MAJOR matrix. Have you taken that into account?
Try to transpose it, just for fun.
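To illustrate the row-/column-major point, here is a hypothetical 4x4 example in Python (not the poster's actual matrices): transposing moves the translation between the fourth column and the fourth row, i.e. the _41/_42/_43 slots that D3DXMatrixTranslation-style code reads from.

```python
def transpose(m):
    """Transpose a 4x4 matrix stored as nested lists."""
    return [[m[c][r] for c in range(4)] for r in range(4)]

# Hypothetical column-major matrix: translation (10, 20, 30) in the last column.
col_major = [
    [1, 0, 0, 10],
    [0, 1, 0, 20],
    [0, 0, 1, 30],
    [0, 0, 0,  1],
]

row_major = transpose(col_major)
# After transposing, the translation sits in the fourth row,
# which is where a row-major API expects to find it.
print(row_major[3][:3])  # [10, 20, 30]
```

If the translation components look like they landed in the wrong slots, this one-line transpose is a cheap thing to try, as suggested above.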
This may help, it's basically exactly what I do in opengl for rotating around a specified origin:
glPushMatrix();
if(!translateTo.atOrigin()){
    glTranslated(translateTo.x, translateTo.y, translateTo.z);
}
if(!scaleTo.atOrigin()){
    glScaled(scaleTo.x, scaleTo.y, scaleTo.z);
}
if(!rotateTo.atOrigin()){
    glTranslated(rotateOrigin.x, rotateOrigin.y, rotateOrigin.z);
    glRotated(rotateTo.x, 1.0, 0.0, 0.0);
    glRotated(rotateTo.y, 0.0, 1.0, 0.0);
    glRotated(rotateTo.z, 0.0, 0.0, 1.0);
    glTranslated(-rotateOrigin.x, -rotateOrigin.y, -rotateOrigin.z);
}
drawthebone
glPopMatrix()
NOTE: atOrigin() is a function that basically checks to see if the vector is 0, 0, 0
See the rotate part? That's where you can specify to rotate around a specified origin.
However you do that in directx would be it.
Not sure if this is helpful. I store objects with a local translation and scale value and then also a rotation value and a rotation origin. That is what all these points relate to... It sounds like you're already doing this though and the problem lies with your shader (which I am not familiar with), so this is more an explanation to the people asking you why you're doing what you are.
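The translate, rotate, translate-back recipe above can be checked numerically. Below is a small Python sketch with hand-rolled row-major 4x4 matrices (illustrative only, not D3D or OpenGL code; all names are mine): a point at the pivot stays fixed, while nearby points swing around it.

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices (nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotation_z(degrees):
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def apply(m, point):
    x, y, z = point
    v = (x, y, z, 1.0)
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(3)]

# Rotate about a pivot: translate pivot to origin, rotate, translate back.
pivot = (5.0, 0.0, 0.0)
m = mat_mul(translation(*pivot),
            mat_mul(rotation_z(90.0),
                    translation(-pivot[0], -pivot[1], -pivot[2])))

print(apply(m, pivot))            # the pivot itself does not move
print(apply(m, (6.0, 0.0, 0.0)))  # a point 1 unit right of the pivot swings above it
```

If the pivot point itself moves under the composed matrix, the translate/rotate order (or the sign of the pivot translation) is wrong, which is exactly the symptom being discussed in this thread.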
Quote:
Original post by M2tM
This may help, it's basically exactly what I do in opengl for rotating around a specified origin: *** Source Snippet Removed *** NOTE: atOrigin() is a function that basically checks to see if the vector is 0, 0, 0. See the rotate part? That's where you can specify to rotate around a specified origin. However you do that in directx would be it. Not sure if this is helpful. I store objects with a local translation and scale value and then also a rotation value and a rotation origin. That is what all these points relate to... It sounds like you're already doing this though and the problem lies with your shader (which I am not familiar with), so this is more an explanation to the people asking you why you're doing what you are.
That's what I'm already doing: I translate it to the origin I want, then I multiply it by the rotation, and then move it back to its original position.
That's at "// I thought the code below would give the matrix the correct center of rotation but it's still at the origin.", which is actually working, but it's not moving it to the right origin, or something else is wrong that's making it the wrong origin, idk...
Ah, I see, that picture is a "desired effect", so my opengl version transcribed into DirectX would probably solve it for you. I mis-read and thought that second picture with the two bones rotated on the second bone's pivot was what you had working.
Have you made sure your rotation origin is what you think it is? Try stepping through the bugger with a debugger.
and go through my example and make sure each step is actually being done.
CodaKiller, did you see janta's post about the type of matrix D3D functions expect? I just want to point it out again to make sure that you see it. D3D expects row major matrices. However, you're getting a column major matrix.
I'm not too experienced with directx, but I am familiar with this process that you are trying to do. It looks like you only set inv_world once, instead of for each bone. If you use the same inv_world translate matrix for each bone, they will use the same point of rotation.
Here is an image to hopefully explain better:
[IMG]http://i217.photobucket.com/albums/cc94/Oot_Oot_Ima_Monk/skeletalanimation.gif[/IMG]
Quote:
Original post by MikeTacular
CodaKiller, did you see janta's post about the type of matrix D3D functions expect? I just want to point it out again to make sure that you see it. D3D expects row major matrices. However, you're getting a column major matrix.
Quote:
NOTE: While Direct3D matrices are technically row major, they have an additional semantic difference to OpenGL matrices which cancels this out; therefore, use this same code for Direct3D.
This is from the PhysX documentation; it's talking about the getColumnMajor44 command, and of course I would have made sure I had a working matrix before posting here.
Quote:
Original post by MortusMaximus
I'm not too experienced with directx, but I am familiar with this process that you are trying to do. It looks like you only set inv_world once, instead of for each bone. If you use the same inv_world translate matrix for each bone, they will use the same point of rotation. Here is an image to hopefully explain better: [IMG]http://i217.photobucket.com/albums/cc94/Oot_Oot_Ima_Monk/skeletalanimation.gif[/IMG]
inv_world is just to bring the object back out of model space; it's not possible to compute it for each bone, since then it would be the same thing as using identity on all of the matrices.
Aha! This explains exactly what I need to do! Finally found something! What's sad is I was doing something similar, but then I thought it was wrong and just deleted it.
[Edited by - CodaKiller on January 21, 2009 11:47:06 AM]
So, out of curiosity, what was your mistake?
Quote:
Original post by Grafalgar
So, out of curiosity, what was your mistake?
Pretty much all of that code was wrong, it needed to be done in a completely different way.

Who gets the $20?

Quote:
Original post by VerMan
Who gets the $20?

No one really had an effect on me finding that tutorial or any type of positive effect on the problem, so I would have to say no one.
Treat yourself ;)
Quote:
Original post by CodaKiller
What do you mean? There is nothing to debug, I just don't know how to do the math needed and I have paid someone for helping me before, it's not a big deal. I figure if I have wasted over 80 hours of my life on something it's well worth $20 to find the answer.

Debugging can allow you to see the values of variables line by line. You could see if the values are being properly updated. If not, you can track from where the improper values are coming. You might have, for example, watched the rotations and positions as they changed and seen that not only were the results not as expected, but that it was a flaw in the algorithm. Then again, as we see, the problem was deeper than originally thought. Debugging might not have helped as much as I expected, but familiarizing yourself with the basics of debuggers (marking breakpoints and hovering over variables to see their values) can be very helpful, especially for finding problems in heavily mathematical sections of code. If you're using Visual C++ (which I use ONLY for its debugger and occasional code that requires it), you can just set a breakpoint, compile, and start debugging.

And my point wasn't I don't want you to waste your money, my point is I don't want that going on here. I don't care where you waste your money as long as it's not here (though it's not like I have any authority or can stop you, nor will I comment on it anymore).

[Edited by - Splinter of Chaos on January 21, 2009 8:10:18 PM]

To Splinter: I don't actually have a problem with him offering money for a service, or help on figuring out a problem. It's not really any different from the "Help Wanted" forum. If he wants to offer somebody $20 as incentive, it's between him and whoever accepts the offer. Now, had it been an unethical offer (say, to help with homework), that's a different story. But I see no ethical compromises here.
You may feel that the good nature of the people here, offering help for free, is perhaps cheapened by someone who offers money for the same thing, or that it opens the door for other people making the same offers, but I do not consider that a problem. People do, after all, have a choice in how they want to spend their money / how they want to provide help.
Personally, I give him kudos for finding a way to get people to pay attention, even though it ended up being for naught.
Quote:
Original post by Grafalgar But I see no ethical compromises here.
I don't find it unethical, I find it uncomfortable.
Quote:
You may feel that the good nature of the people here, offering help for free, is perhaps cheapened by someone who offers money for the same thing, or that it opens the door for other people making the same offers, but I do not consider that a problem.
Reminds me of what happened in the indy crowd recently. An allegedly (haven't played it) great game called Braid, which everyone seems to love, comes out, and people can't stop bitching about it being 20 bucks. That's actually normal for an indy game, I thought, but they'd been so spoiled by 10 and 15 dollar indy games that weren't all that great that 20 was for some reason unreasonable.
(I'm not saying Braid's price was immoral, I'm just explaining the effect it had on the group psychology of that instance.)
If people got in the habit of offering 20 dollars for bugs on these forums, the experts might feel cheated when someone comes and asks for that free exchange of information. The people posting problems might feel cheated by not receiving the same level of help. It would be unfair to people who either could not afford it or didn't think it was worth it. As Kant would say, this is not a universalizable act. But, I'm not a Kantian philosopher and I don't believe this was immoral because I don't think it'll catch on. (I examine situations morally by action and environment, not intent. If it caught on, I still wouldn't consider this immoral, because the environment didn't suggest it would.)
I wasn't going to respond to that (I said I wouldn't), but then you hinted that this would be fine on a wider scale.
Quote:
People do, after all, have a choice in how they want to spend their money / how they want to provide help.
But it would not be true to say people can choose how they want to get help. You can't ask for physics help under the DX topic or for DX help in the networking thread. I feel that applies to this situation, but I can see why you do not.
• 24 | 2017-11-25 10:02:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3593268096446991, "perplexity": 1270.8996586737512}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809746.91/warc/CC-MAIN-20171125090503-20171125110503-00483.warc.gz"} |