Re: solving a system of two equations
• To: mathgroup at smc.vnet.net
• Subject: [mg102012] Re: [mg101984] solving a system of two equations
• From: Bob Hanlon <hanlonr at cox.net>
• Date: Sun, 26 Jul 2009 03:56:41 -0400 (EDT)
• Reply-to: hanlonr at cox.net
Solve is intended for linear and polynomial equations (see Help). Reduce is much more general.
Reduce[{a/(a + b) == 1/2, a*b/((a + b)^2 (a + b + 1)) == 2},
a] // ToRules
{b -> -(7/16), a -> -(7/16)}
Bob Hanlon
---- per <perfreem at gmail.com> wrote:
hi all,
i am trying to find two parameters a, b of the Beta distribution that
make its mean equal to some given constant m and its variance equal to
some given constant v. this reduces to solving a system of two
equations based on the mean/variance definitions of the beta
a/(a+b) = m
a*b/((a + b)^2 (a + b + 1)) = v
i want to solve this equation for a and b. i tried to solve this in
mathematica, as follows (for m = .5, v = 1):
Solve[{a/(a + b) == .5, a*b/((a + b)^2 (a + b + 1)) == 2}, a]
But it returns: {}
i want to get back values for a and b. does anyone know how i can do
this? also, this is subject to the constraint that a and b are
positive real numbers but i am not sure how to express that.
thank you.
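For reference, the two moment equations have a standard closed-form solution; writing $m$ for the target mean and $v$ for the target variance of a Beta$(a,b)$ distribution:
$$\frac{a}{a+b}=m,\qquad \frac{ab}{(a+b)^2(a+b+1)}=v.$$
Setting $s=a+b$, so that $a=ms$ and $b=(1-m)s$, the variance equation becomes $m(1-m)/(s+1)=v$, hence
$$s=\frac{m(1-m)}{v}-1,\qquad a=m\left(\frac{m(1-m)}{v}-1\right),\qquad b=(1-m)\left(\frac{m(1-m)}{v}-1\right).$$
Positive parameters exist only when $v<m(1-m)\leq 1/4$, so neither $v=1$ nor $v=2$ is attainable for $m=1/2$; that is why Solve returns {} and Reduce produces the negative pair shown above.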
|
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00662.html","timestamp":"2024-11-11T23:42:30Z","content_type":"text/html","content_length":"30867","record_id":"<urn:uuid:e0d65ff9-a4e4-4874-8d09-8f97175dd237>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00174.warc.gz"}
|
[libreoffice-users] Re: Missing function: Bankers Rounding
If you don't want any errors even in the 10th digit, I think it's probably a
good idea to use ROUNDUP and ROUNDDOWN instead of adding or subtracting
0.005 for the final result. So, my final function for a Bankers Round that
I'm using is:
Kind of repetitious with all the ROUND functions, but works and gives an
exact match to GnuCash results which was what I needed. Usually I'm
inserting something like A1/4 instead of just A1 where the banker's round of
A1/4 is "my personal expenses" and the difference of that and A1 is my
business expense. Without any rounding, all the data seemed to match
between GnuCash and LibreOffice Calc, but the totals would be off by a few
pennies - not acceptable because often a mismatch like that points to an
oversight on my part in entering values into Calc. To my surprise though,
simple rounding to 2 digits didn't work either. Eventually, I realized
GnuCash was using this bankers rounding method.
Does anyone know how to create a user-defined function in LibreOffice Calc
so I could just enter "BANKROUND(A1)" instead of the messy function listed above?
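As an illustration of the round-half-to-even rule itself (a minimal Python sketch, not LibreOffice Basic; the function name bank_round is just illustrative), Python's decimal module implements banker's rounding directly:

from decimal import Decimal, ROUND_HALF_EVEN

def bank_round(value, places=2):
    # Round-half-to-even ("banker's rounding") to the given number of decimal places.
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal("0.01") when places=2
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_EVEN)

print(bank_round(2.125))  # 2.12 (tie goes to the even neighbour)
print(bank_round(2.135))  # 2.14
print(bank_round(2.145))  # 2.14

A Calc or Basic user-defined function would apply the same rule; the point of the sketch is only the tie-to-even behaviour on the .5 cases.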
|
{"url":"https://listarchives.libreoffice.org/global/users/2011/msg02698.html","timestamp":"2024-11-08T02:28:00Z","content_type":"text/html","content_length":"11036","record_id":"<urn:uuid:a84e55f7-47a3-4b4a-ba53-6adcb7915e6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00037.warc.gz"}
|
To make a bakery's signature chocolate muffins, a baker needs 2.5 ounces of chocolate for each muffin
To make a bakery's signature chocolate muffins, a baker needs 2.5 ounces of chocolate for each muffin. How many pounds of chocolate are needed to make 48 signature chocolate muffins? (1 pound = 16 ounces)
A. 7.5
B. 10
C. 50.5
D. 120
The correct answer is: 7.5
We are given that 2.5 ounces of chocolate are needed for each muffin
Then the number of ounces of chocolate needed to make 48 muffins
48 × 2.5 = 120 ounces
Since 1 pound = 16 ounces,
the number of pounds equivalent to 120 ounces is 120 ÷ 16 = 7.5.
Therefore, 7.5 pounds of chocolate are required to make the 48 muffins.
Let us consider option B, If 10 pounds of chocolate were needed to make 48 muffins, then the total number of ounces of chocolate needed would be
10 × 16 = 160 ounces.
The number of ounces of chocolate per muffin would then be 160 ÷ 48 ≈ 3.3, which is not 2.5 ounces per muffin, so option B is incorrect.
Options C and D are also incorrect. Following the same procedure used to test choice B gives about 16.8 ounces of chocolate per muffin for option C and 40 ounces per muffin for option D, neither of which is 2.5 ounces per muffin. Therefore, 50.5 and 120 pounds cannot be the number of pounds needed to make 48 signature chocolate muffins.
|
{"url":"https://www.turito.com/ask-a-doubt/Maths-to-make-a-bakery-s-signature-chocolate-muffins-a-baker-needs-2-5-ounces-of-chocolate-for-each-muffin-how-ma-q08932134","timestamp":"2024-11-03T16:38:46Z","content_type":"application/xhtml+xml","content_length":"279179","record_id":"<urn:uuid:3988d236-284a-4ddf-85e2-928acd4d6881>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00281.warc.gz"}
|
Smartphone Puzzle: Can You Determine the PIN From a Math Equation?
Losing your smartphone’s security code can be quite frustrating. But some people, like Leonard in this story, use mnemonic devices to remember their codes. Even with this trick, Leonard is struggling
to recall his PIN. Can you help him out?
The Smartphone Puzzle
Leonard's smartphone PIN is a perfect square. If you subtract one from its square root and then square the result, you get a number that is 85 less than the PIN itself. With this information, can you figure out Leonard's PIN using a math equation?
Solving the Puzzle
To find the solution, we need to recall some fundamental math principles, especially the formula $(a-b)^2 = a^2 - 2ab + b^2$.
Let's denote $x$ as the square root of Leonard's PIN. Using the given information, we can set up the following equation: $(x-1)^2 = x^2 - 85$.
Next, we solve for $x$ by rearranging the equation step by step: $x^2 - (x-1)^2 = 85$.
Expanding the equation, we get: $x^2 - (x^2 - 2x + 1) = 85$.
Simplifying further: $2x - 1 = 85$.
Adding 1 to both sides: $2x = 86$.
Dividing by 2: $x = 43$.
So, Leonard's PIN is $43^2$. Calculating $43 \times 43$: $43^2 = 1849$.
Leonard’s smartphone security code is 1849.
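A quick consistency check: $\sqrt{1849} = 43$ and $(43-1)^2 = 42^2 = 1764 = 1849 - 85$, so 1849 does satisfy the stated condition.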
Why Solving Puzzles Is Beneficial
Engaging in puzzles like this isn’t just a fun way to pass the time; it’s also a great way to keep your mind sharp. Puzzles require logical thinking and problem-solving skills, which are valuable in
many aspects of life.
Did You Know?
The word “enigma” comes from the Greek “ainigma,” meaning a riddle or something hidden. One of the most famous ancient puzzles is the riddle of the Sphinx.
If you enjoyed solving this mathematical puzzle, try more on our website! Keep challenging yourself and sharpening your skills. Who knows what other intriguing puzzles you might solve next?
|
{"url":"https://tcnjsignal.net/smartphone-puzzle-can-you-determine-the-pin-from-a-math-equation/","timestamp":"2024-11-11T11:14:35Z","content_type":"text/html","content_length":"242070","record_id":"<urn:uuid:2441cff8-fcf7-46f2-9775-5fab51923ebd>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00880.warc.gz"}
|
Geometric distribution - Wikiwand
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:
• The probability distribution of the number ${\displaystyle X}$ of Bernoulli trials needed to get one success, supported on ${\displaystyle \mathbb {N} =\{1,2,3,\ldots \}}$;
• The probability distribution of the number ${\displaystyle Y=X-1}$ of failures before the first success, supported on ${\displaystyle \mathbb {N} _{0}=\{0,1,2,\ldots \}}$.
Probability mass function and cumulative distribution function (infobox plots)

| Geometric distribution | Number of trials $X$ | Number of failures $Y = X-1$ |
|---|---|---|
| Parameters | $0 < p \leq 1$ success probability (real) | $0 < p \leq 1$ success probability (real) |
| Support | $k$ trials, where $k \in \mathbb{N} = \{1,2,3,\dotsc\}$ | $k$ failures, where $k \in \mathbb{N}_0 = \{0,1,2,\dotsc\}$ |
| PMF | $(1-p)^{k-1}p$ | $(1-p)^{k}p$ |
| CDF | $1-(1-p)^{\lfloor x\rfloor}$ for $x \geq 1$; $0$ for $x < 1$ | $1-(1-p)^{\lfloor x\rfloor+1}$ for $x \geq 0$; $0$ for $x < 0$ |
| Mean | $\frac{1}{p}$ | $\frac{1-p}{p}$ |
| Median | $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil$ (not unique if $-1/\log_2(1-p)$ is an integer) | $\left\lceil \frac{-1}{\log_2(1-p)} \right\rceil - 1$ (not unique if $-1/\log_2(1-p)$ is an integer) |
| Mode | $1$ | $0$ |
| Variance | $\frac{1-p}{p^2}$ | $\frac{1-p}{p^2}$ |
| Skewness | $\frac{2-p}{\sqrt{1-p}}$ | $\frac{2-p}{\sqrt{1-p}}$ |
| Excess kurtosis | $6+\frac{p^2}{1-p}$ | $6+\frac{p^2}{1-p}$ |
| Entropy | $\tfrac{-(1-p)\log(1-p)-p\log p}{p}$ | $\tfrac{-(1-p)\log(1-p)-p\log p}{p}$ |
| MGF | $\frac{pe^t}{1-(1-p)e^t}$, for $t < -\ln(1-p)$ | $\frac{p}{1-(1-p)e^t}$, for $t < -\ln(1-p)$ |
| CF | $\frac{pe^{it}}{1-(1-p)e^{it}}$ | $\frac{p}{1-(1-p)e^{it}}$ |
| PGF | $\frac{pz}{1-(1-p)z}$ | $\frac{p}{1-(1-p)z}$ |
| Fisher information | $\frac{1}{p^2(1-p)}$ | $\frac{1}{p^2(1-p)}$ |
These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of ${\displaystyle X}$);
however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.
The geometric distribution gives the probability that the first occurrence of success requires ${\displaystyle k}$ independent trials, each with success probability ${\displaystyle p}$. If the
probability of success on each trial is ${\displaystyle p}$, then the probability that the ${\displaystyle k}$-th trial is the first success is
${\displaystyle \Pr(X=k)=(1-p)^{k-1}p}$
for ${\displaystyle k=1,2,3,4,\dots }$
The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for
modeling the number of failures until the first success:
${\displaystyle \Pr(Y=k)=\Pr(X=k+1)=(1-p)^{k}p}$
for ${\displaystyle k=0,1,2,3,\dots }$
The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.^[1]^:210
The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs.
Its probability mass function depends on its parameterization and support. When supported on ${\displaystyle \mathbb {N} }$, the probability mass function is${\displaystyle P(X=k)=(1-p)^{k-1}p}$where
${\displaystyle k=1,2,3,\dotsc }$ is the number of trials and ${\displaystyle p}$ is the probability of success in each trial.^[2]^:260–261
The support may also be ${\displaystyle \mathbb {N} _{0}}$, defining ${\displaystyle Y=X-1}$. This alters the probability mass function into ${\displaystyle P(Y=k)=(1-p)^{k}p}$ where ${\displaystyle k=0,1,2,\dotsc }$ is the number of failures before the first success.^[3]^:66
An alternative parameterization of the distribution gives the probability mass function ${\displaystyle P(Y=k)=\left({\frac {P}{Q}}\right)^{k}\left(1-{\frac {P}{Q}}\right)}$ where ${\displaystyle P={\frac {1-p}{p}}}$ and ${\displaystyle Q={\frac {1}{p}}}$.^[1]^:208–209
An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a ${\displaystyle 1/6}$ chance of success. The number of rolls needed
follows a geometric distribution with ${\displaystyle p=1/6}$.
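As a small illustration of this example (a hedged sketch; the Python function names below are ours, not from the article):

import random

def geometric_pmf(k, p):
    # P(X = k): the first success occurs on trial k, for k = 1, 2, 3, ...
    return (1 - p) ** (k - 1) * p

def rolls_until_one():
    # Simulate rolling a fair die until a "1" appears; return the number of rolls.
    rolls = 1
    while random.random() >= 1 / 6:
        rolls += 1
    return rolls

p = 1 / 6
print([round(geometric_pmf(k, p), 4) for k in range(1, 6)])  # 0.1667, 0.1389, 0.1157, ...
samples = [rolls_until_one() for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the mean 1/p = 6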
The geometric distribution is the only memoryless discrete probability distribution.^[4] It is the discrete version of the same property found in the exponential distribution.^[1]^:228 The property
asserts that the number of previously failed trials does not affect the number of future trials needed for a success.
Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables.^[5] Expressed in terms of conditional probability, the
two definitions are${\displaystyle \Pr(X>m+n\mid X>n)=\Pr(X>m),}$
and${\displaystyle \Pr(Y>m+n\mid Y\geq n)=\Pr(Y>m),}$
where ${\displaystyle m}$ and ${\displaystyle n}$ are natural numbers, ${\displaystyle X}$ is a geometrically distributed random variable defined over ${\displaystyle \mathbb {N} }$, and ${\displaystyle Y}$ is a geometrically distributed random variable defined over ${\displaystyle \mathbb {N} _{0}}$. Note that these definitions are not equivalent for discrete random variables; ${\displaystyle Y}$ does not satisfy the first equation and ${\displaystyle X}$ does not satisfy the second.
Moments and cumulants
The expected value and variance of a geometrically distributed random variable ${\displaystyle X}$ defined over ${\displaystyle \mathbb {N} }$ is^[2]^:261 ${\displaystyle \operatorname {E} (X)={\frac {1}{p}},\qquad \operatorname {var} (X)={\frac {1-p}{p^{2}}}.}$ With a geometrically distributed random variable ${\displaystyle Y}$ defined over ${\displaystyle \mathbb {N} _{0}}$, the expected value changes into ${\displaystyle \operatorname {E} (Y)={\frac {1-p}{p}},}$ while the variance stays the same.^[6]^:114–115
For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is ${\displaystyle {\frac {1}{1/6}}=6}$ and the average number of failures is ${\displaystyle {\frac {1-1/6}{1/6}}=5}$.
The moment generating function of the geometric distribution when defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively is^[7]^[6]^:114 {\displaystyle {\begin{aligned}M_{X}(t)&={\frac {pe^{t}}{1-(1-p)e^{t}}}\\M_{Y}(t)&={\frac {p}{1-(1-p)e^{t}}},\quad t<-\ln(1-p)\end{aligned}}} The moments for the number of failures before the first success are given by
{\displaystyle {\begin{aligned}\mathrm {E} (Y^{n})&{}=\sum _{k=0}^{\infty }(1-p)^{k}p\cdot k^{n}\\&{}=p\operatorname {Li} _{-n}(1-p)&({\text{for }}n\neq 0)\end{aligned}}}
where ${\displaystyle \operatorname {Li} _{-n}(1-p)}$ is the polylogarithm function.^[8]
The cumulant generating function of the geometric distribution defined over ${\displaystyle \mathbb {N} _{0}}$ is^[1]^:216 ${\displaystyle K(t)=\ln p-\ln(1-(1-p)e^{t})}$ The cumulants ${\displaystyle \kappa _{r}}$ satisfy the recursion ${\displaystyle \kappa _{r+1}=q{\frac {\delta \kappa _{r}}{\delta q}},\quad r=1,2,\dotsc }$ where ${\displaystyle q=1-p}$, when defined over ${\displaystyle \mathbb {N} _{0}}$.^[1]^:216
Proof of expected value
Consider the expected value ${\displaystyle \mathrm {E} (X)}$ of X as above, i.e. the average number of trials until a success. On the first trial, we either succeed with probability ${\displaystyle p}$, or we fail with probability ${\displaystyle 1-p}$. If we fail, the remaining mean number of trials until a success is identical to the original mean; this follows from the fact that all trials are independent. From this we get the formula:
{\displaystyle {\begin{aligned}\operatorname {\mathrm {E} } (X)&{}=p\mathrm {E} [X|X=1]+(1-p)\mathrm {E} [X|X>1]\\&{}=p\mathrm {E} [X|X=1]+(1-p)(1+\mathrm {E} [X-1|X>1])\\&{}=p\cdot 1+(1-p)\cdot (1+\mathrm {E} [X]),\end{aligned}}}
which, if solved for ${\displaystyle \mathrm {E} (X)}$, gives:
${\displaystyle \operatorname {E} (X)={\frac {1}{p}}.}$
The expected number of failures ${\displaystyle Y}$ can be found from the linearity of expectation, ${\displaystyle \mathrm {E} (Y)=\mathrm {E} (X-1)=\mathrm {E} (X)-1={\frac {1}{p}}-1={\frac {1-p}{p}}}$. It can also be shown in the following way:
{\displaystyle {\begin{aligned}\operatorname {E} (Y)&{}=\sum _{k=0}^{\infty }(1-p)^{k}p\cdot k\\&{}=p\sum _{k=0}^{\infty }(1-p)^{k}k\\&{}=p(1-p)\sum _{k=0}^{\infty }(1-p)^{k-1}\cdot k\\&{}=p(1-p)\left[{\frac {d}{dp}}\left(-\sum _{k=0}^{\infty }(1-p)^{k}\right)\right]\\&{}=p(1-p){\frac {d}{dp}}\left(-{\frac {1}{p}}\right)\\&{}={\frac {1-p}{p}}.\end{aligned}}}
The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.
Summary statistics
The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, ${\displaystyle {\frac {1}{p}}}$ or ${\displaystyle {\frac {1-p}{p}}}$ when
defined over ${\displaystyle \mathbb {N} }$ or ${\displaystyle \mathbb {N} _{0}}$ respectively.
The median of the geometric distribution is ${\displaystyle \left\lceil -{\frac {\log 2}{\log(1-p)}}\right\rceil }$ when defined over ${\displaystyle \mathbb {N} }$^[9] and ${\displaystyle \left\lfloor -{\frac {\log 2}{\log(1-p)}}\right\rfloor }$ when defined over ${\displaystyle \mathbb {N} _{0}}$.^[3]^:69
The mode of the geometric distribution is the first value in the support set. This is 1 when defined over ${\displaystyle \mathbb {N} }$ and 0 when defined over ${\displaystyle \mathbb {N} _{0}}$.
The skewness of the geometric distribution is ${\displaystyle {\frac {2-p}{\sqrt {1-p}}}}$.^[6]^:115
The kurtosis of the geometric distribution is ${\displaystyle 9+{\frac {p^{2}}{1-p}}}$.^[6]^:115 The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, ${\displaystyle 3}$.^[10]^:217 Therefore, the excess kurtosis of the geometric distribution is ${\displaystyle 6+{\frac {p^{2}}{1-p}}}$. Since ${\displaystyle {\frac {p^{2}}{1-p}}\geq 0}$, the excess kurtosis is always positive, so the distribution is leptokurtic.^[3]^:69 In other words, the tail of a geometric distribution decays more slowly than that of a Gaussian; it is heavier-tailed.
Entropy (Geometric Distribution, Failures Before Success)
Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:
${\displaystyle P(X=k)=(1-p)^{k}p,\quad k=0,1,2,\dots }$
The entropy ${\displaystyle H(X)}$ for this distribution is defined as:
{\displaystyle {\begin{aligned}H(X)&=-\sum _{k=0}^{\infty }P(X=k)\ln P(X=k)\\&=-\sum _{k=0}^{\infty }(1-p)^{k}p\ln \left((1-p)^{k}p\right)\\&=-\sum _{k=0}^{\infty }(1-p)^{k}p\left[k\ln(1-p)+\ln p\right]\\&=-\log p-{\frac {1-p}{p}}\log(1-p)\end{aligned}}}
The entropy increases as the probability ${\displaystyle p}$ decreases, reflecting greater uncertainty as success becomes rarer.
Fisher's Information (Geometric Distribution, Failures Before Success)
Fisher information measures the amount of information that an observable random variable ${\displaystyle X}$ carries about an unknown parameter ${\displaystyle p}$. For the geometric distribution
(failures before the first success), the Fisher information with respect to ${\displaystyle p}$ is given by:
${\displaystyle I(p)={\frac {1}{p^{2}(1-p)}}}$
• The Likelihood Function for a geometric random variable ${\displaystyle X}$ is:
${\displaystyle L(p;X)=(1-p)^{X}p}$
• The Log-Likelihood Function is:
${\displaystyle \ln L(p;X)=X\ln(1-p)+\ln p}$
• The Score Function (first derivative of the log-likelihood w.r.t. ${\displaystyle p}$) is:
${\displaystyle {\frac {\partial }{\partial p}}\ln L(p;X)={\frac {1}{p}}-{\frac {X}{1-p}}}$
• The second derivative of the log-likelihood function is:
${\displaystyle {\frac {\partial ^{2}}{\partial p^{2}}}\ln L(p;X)=-{\frac {1}{p^{2}}}-{\frac {X}{(1-p)^{2}}}}$
• Fisher Information is calculated as the negative expected value of the second derivative:
{\displaystyle {\begin{aligned}I(p)&=-E\left[{\frac {\partial ^{2}}{\partial p^{2}}}\ln L(p;X)\right]\\&=-\left(-{\frac {1}{p^{2}}}-{\frac {1-p}{p(1-p)^{2}}}\right)\\&={\frac {1}{p^{2}(1-p)}}\end{aligned}}}
Fisher information increases as ${\displaystyle p}$ decreases, indicating that rarer successes provide more information about the parameter ${\displaystyle p}$.
Entropy (Geometric Distribution, Trials Until Success)
For the geometric distribution modeling the number of trials until the first success, the probability mass function is:
${\displaystyle P(X=k)=(1-p)^{k-1}p,\quad k=1,2,3,\dots }$
The entropy ${\displaystyle H(X)}$ for this distribution is given by:
{\displaystyle {\begin{aligned}H(X)&=-\sum _{k=1}^{\infty }P(X=k)\ln P(X=k)\\&=-\sum _{k=1}^{\infty }(1-p)^{k-1}p\ln \left((1-p)^{k-1}p\right)\\&=-\sum _{k=1}^{\infty }(1-p)^{k-1}p\left[(k-1)\ln(1-p)+\ln p\right]\\&=-\log p-{\frac {1-p}{p}}\log(1-p)\end{aligned}}}
Entropy increases as ${\displaystyle p}$ decreases, reflecting greater uncertainty as the probability of success in each trial becomes smaller.
Fisher's Information (Geometric Distribution, Trials Until Success)
Fisher information for the geometric distribution modeling the number of trials until the first success is given by:
${\displaystyle I(p)={\frac {1}{p^{2}(1-p)}}}$
• The Likelihood Function for a geometric random variable ${\displaystyle X}$ is:
${\displaystyle L(p;X)=(1-p)^{X-1}p}$
• The Log-Likelihood Function is:
${\displaystyle \ln L(p;X)=(X-1)\ln(1-p)+\ln p}$
• The Score Function (first derivative of the log-likelihood w.r.t. ${\displaystyle p}$) is:
${\displaystyle {\frac {\partial }{\partial p}}\ln L(p;X)={\frac {1}{p}}-{\frac {X-1}{1-p}}}$
• The second derivative of the log-likelihood function is:
${\displaystyle {\frac {\partial ^{2}}{\partial p^{2}}}\ln L(p;X)=-{\frac {1}{p^{2}}}-{\frac {X-1}{(1-p)^{2}}}}$
• Fisher Information is calculated as the negative expected value of the second derivative:
{\displaystyle {\begin{aligned}I(p)&=-E\left[{\frac {\partial ^{2}}{\partial p^{2}}}\ln L(p;X)\right]\\&=-\left(-{\frac {1}{p^{2}}}-{\frac {1-p}{p(1-p)^{2}}}\right)\\&={\frac {1}{p^{2}(1-p)}}\end{aligned}}}
General properties
• The probability generating functions of geometric random variables ${\displaystyle X}$ and ${\displaystyle Y}$ defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$
are, respectively,^[6]^:114–115
{\displaystyle {\begin{aligned}G_{X}(s)&={\frac {s\,p}{1-s\,(1-p)}},\\[10pt]G_{Y}(s)&={\frac {p}{1-s\,(1-p)}},\quad |s|<(1-p)^{-1}.\end{aligned}}}
• The characteristic function ${\displaystyle \varphi (t)}$ is equal to ${\displaystyle G(e^{it})}$ so the geometric distribution's characteristic function, when defined over ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively, is^[11]^:1630 {\displaystyle {\begin{aligned}\varphi _{X}(t)&={\frac {pe^{it}}{1-(1-p)e^{it}}},\\[10pt]\varphi _{Y}(t)&={\frac {p}{1-(1-p)e^{it}}}.\end{aligned}}}
• The entropy of a geometric distribution with parameter ${\displaystyle p}$ is^[12]${\displaystyle -{\frac {p\log _{2}p+(1-p)\log _{2}(1-p)}{p}}}$
• Given a mean, the geometric distribution is the maximum entropy probability distribution of all discrete probability distributions with that mean. The corresponding continuous distribution is the exponential distribution.
• The geometric distribution defined on ${\displaystyle \mathbb {N} _{0}}$ is infinitely divisible, that is, for any positive integer ${\displaystyle n}$, there exist ${\displaystyle n}$
independent identically distributed random variables whose sum is also geometrically distributed. This is because the negative binomial distribution can be derived from a Poisson-stopped sum of
logarithmic random variables.^[11]^:606–607
• The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables. For example, the hundreds digit D has this
probability distribution:
${\displaystyle \Pr(D=d)={q^{100d} \over 1+q^{100}+q^{200}+\cdots +q^{900}},}$
where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed
random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
• The sum of ${\displaystyle r}$ independent geometric random variables with parameter ${\displaystyle p}$ is a negative binomial random variable with parameters ${\displaystyle r}$ and ${\displaystyle p}$.^[14] The geometric distribution is a special case of the negative binomial distribution, with ${\displaystyle r=1}$.
• The geometric distribution is a special case of discrete compound Poisson distribution.^[11]^:606
• The minimum of ${\displaystyle n}$ geometric random variables with parameters ${\displaystyle p_{1},\dotsc ,p_{n}}$ is also geometrically distributed, with parameter ${\displaystyle 1-\prod _{i=1}^{n}(1-p_{i})}$.
• Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable X[k] has a Poisson distribution with expected value r^k/k. Then
${\displaystyle \sum _{k=1}^{\infty }k\,X_{k}}$
has a geometric distribution taking values in ${\displaystyle \mathbb {N} _{0}}$, with expected value r/(1 − r).
• The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter ${\displaystyle \lambda }$
creates a geometric distribution with parameter ${\displaystyle p=1-e^{-\lambda }}$ defined over ${\displaystyle \mathbb {N} _{0}}$.^[3]^:74 This can be used to generate geometrically
distributed random numbers as detailed in § Random variate generation.
• If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since {\displaystyle {\begin{aligned}\Pr(X/n>a)=\Pr(X>na)&=(1-p)^{na}=\left(1-{\frac {1}{n}}\right)^{na}=\left[\left(1-{\frac {1}{n}}\right)^{n}\right]^{a}\\&\to [e^{-1}]^{a}=e^{-a}{\text{ as }}n\to \infty .\end{aligned}}}
More generally, if p = λ/n, where λ is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate λ: ${\displaystyle \Pr(X>nx)=\lim _{n\to \infty }(1-\lambda /n)^{nx}=e^{-\lambda x}}$; therefore the distribution function of X/n converges to ${\displaystyle 1-e^{-\lambda x}}$, which is that of an exponential random variable.
• The index of dispersion of the geometric distribution is ${\displaystyle {\frac {1}{p}}}$ and its coefficient of variation is ${\displaystyle {\frac {1}{\sqrt {1-p}}}}$. The distribution is overdispersed, since its index of dispersion is never less than one.
The true parameter ${\displaystyle p}$ of an unknown geometric distribution can be inferred through estimators and conjugate distributions.
Method of moments
Provided they exist, the first ${\displaystyle l}$ moments of a probability distribution can be estimated from a sample ${\displaystyle x_{1},\dotsc ,x_{n}}$ using the formula ${\displaystyle m_{i}={\frac {1}{n}}\sum _{j=1}^{n}x_{j}^{i}}$ where ${\displaystyle m_{i}}$ is the ${\displaystyle i}$th sample moment and ${\displaystyle 1\leq i\leq l}$.^[16]^:349–350 Estimating ${\displaystyle \mathrm {E} (X)}$ with ${\displaystyle m_{1}}$ gives the sample mean, denoted ${\displaystyle {\bar {x}}}$. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for ${\displaystyle p}$ gives the estimators ${\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}}$ and ${\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}}$ when supported on ${\displaystyle \mathbb {N} }$ and ${\displaystyle \mathbb {N} _{0}}$ respectively. These estimators are biased since ${\displaystyle \mathrm {E} \left({\frac {1}{\bar {x}}}\right)>{\frac {1}{\mathrm {E} ({\bar {x}})}}=p}$ as a result of Jensen's inequality.^[17]^:53–54
Maximum likelihood estimation
The maximum likelihood estimator of ${\displaystyle p}$ is the value that maximizes the likelihood function given a sample.^[16]^:308 By finding the zero of the derivative of the log-likelihood
function when the distribution is defined over ${\displaystyle \mathbb {N} }$, the maximum likelihood estimator can be found to be ${\displaystyle {\hat {p}}={\frac {1}{\bar {x}}}}$, where ${\displaystyle {\bar {x}}}$ is the sample mean.^[18] If the domain is ${\displaystyle \mathbb {N} _{0}}$, then the estimator shifts to ${\displaystyle {\hat {p}}={\frac {1}{{\bar {x}}+1}}}$. As
previously discussed in § Method of moments, these estimators are biased.
Regardless of the domain, the bias is equal to
${\displaystyle b\equiv \operatorname {E} {\bigg [}\;({\hat {p}}_{\mathrm {mle} }-p)\;{\bigg ]}={\frac {p\,(1-p)}{n}}}$
which yields the bias-corrected maximum likelihood estimator,
${\displaystyle {\hat {p\,}}_{\text{mle}}^{*}={\hat {p\,}}_{\text{mle}}-{\hat {b\,}}}$
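A minimal simulation sketch of these estimators (the code and variable names are illustrative, not from the text):

import random

def draw_trials(p):
    # One draw of X: the number of Bernoulli(p) trials up to and including the first success.
    k = 1
    while random.random() >= p:
        k += 1
    return k

p_true = 0.3
data = [draw_trials(p_true) for _ in range(50_000)]
x_bar = sum(data) / len(data)
p_mle = 1 / x_bar                                      # MLE when the support is {1, 2, 3, ...}
p_corrected = p_mle - p_mle * (1 - p_mle) / len(data)  # bias-corrected estimator described above
print(round(p_mle, 4), round(p_corrected, 4))          # both close to 0.3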
Bayesian inference
In Bayesian inference, the parameter ${\displaystyle p}$ is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples.^[17]^:167 If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution and it is called the conjugate distribution. In particular, if a ${\displaystyle \mathrm {Beta} (\alpha ,\beta )}$ prior is selected, then the posterior, after observing samples ${\displaystyle k_{1},\dotsc ,k_{n}\in \mathbb {N} }$, is^[19] ${\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\ \beta +\sum _{i=1}^{n}(k_{i}-1)\right).}$ Alternatively, if the samples are in ${\displaystyle \mathbb {N} _{0}}$, the posterior distribution is^[20] ${\displaystyle p\sim \mathrm {Beta} \left(\alpha +n,\beta +\sum _{i=1}^{n}k_{i}\right).}$ Since the expected value of a ${\displaystyle \mathrm {Beta} (\alpha ,\beta )}$ distribution is ${\displaystyle {\frac {\alpha }{\alpha +\beta }}}$,^[11]^:145 as ${\displaystyle \alpha }$ and ${\displaystyle \beta }$ approach zero, the posterior mean approaches its maximum likelihood estimate.
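For illustration, the conjugate update amounts to a one-line bookkeeping step (a hedged sketch; the helper name is ours):

def beta_posterior(alpha, beta, samples):
    # Posterior Beta parameters after observing trial counts k_1..k_n in {1, 2, 3, ...};
    # for samples in {0, 1, 2, ...} the second term would be sum(samples) instead.
    n = len(samples)
    return alpha + n, beta + sum(k - 1 for k in samples)

print(beta_posterior(1, 1, [3, 1, 5, 2, 2]))  # (6, 9): five observations, eight failures in total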
The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to ${\displaystyle p}$.
However, the number of random variables needed is also geometrically distributed and the algorithm slows as ${\displaystyle p}$ decreases.^[21]^:498
Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable ${\displaystyle E}$ can become geometrically distributed with parameter ${\displaystyle p}$ through ${\displaystyle \lceil -E/\log(1-p)\rceil }$. In turn, ${\displaystyle E}$ can be generated from a standard uniform random variable ${\displaystyle U}$ by altering the formula into ${\displaystyle \lceil \log(U)/\log(1-p)\rceil }$.^[21]^:499–500^[22]
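A hedged Python sketch of this constant-time generator (the function names are ours; the max() guard covers the measure-zero corner case E = 0):

import math
import random

def geometric_trials(p):
    # Number of trials until the first success (support 1, 2, 3, ...), by truncating an exponential.
    e = random.expovariate(1.0)                     # standard exponential random variable E
    return max(1, math.ceil(-e / math.log(1 - p)))  # ceil(-E / log(1 - p))

def geometric_failures(p):
    # Number of failures before the first success (support 0, 1, 2, ...).
    return geometric_trials(p) - 1

samples = [geometric_trials(0.25) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/p = 4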
The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution.^[23] In stochastic processes, the Yule Furry process
is geometrically distributed.^[24] The distribution also arises when modeling the lifetime of a device in discrete contexts.^[25] It has also been used to fit data including modeling patients
spreading COVID-19.^[26]
|
{"url":"https://www.wikiwand.com/en/articles/Geometric_distribution","timestamp":"2024-11-03T09:58:24Z","content_type":"text/html","content_length":"1049787","record_id":"<urn:uuid:6d51c0b4-a296-4b0e-869b-1d18f4130750>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00035.warc.gz"}
|
Optimizer Quick Start
This tutorial shows how to:
• Set up a basic NLP optimization model using Decision, Objective, and Constraint Nodes
• Define the central Optimization node using the DefineOptimization function
• Obtain solution output and status
• Specify domain types (i.e., integer, continuous, etc.) and bounds for decisions
• Combine parametric analysis with optimization
• Change initial guesses for non-convex solution spaces
Structured Optimization
Structured Optimization simplifies the process of formulating a model for optimization. The function, DefineOptimization, uses a similar structure for Linear Programs (LP), Quadratic Programs (QP)
and Nonlinear Programs (NLP). It analyzes the model and automatically selects the optimization engine most appropriate for your model. (You can still override this process if you want.)
This section includes simple NLP examples to demonstrate the roles of Decision variables, Constraints, Objectives, and Decision attributes in the Structured Optimization framework. The same basic
structure applies equally to LP and QP formulations.
The Optimum Can Example
The Optimum Can example determines the dimensions of a cylindrical object having a minimum surface area for a given volume. Admittedly, this is not a very interesting optimization problem. In fact,
the solution can be derived on paper using basic differential calculus. However, the simplicity of the example allows us to focus on the workflow and object relationships using the new Structured
Optimization framework in Analytica.
In this example, we will decide on the Radius and Height of a cylindrical vessel. We represent each of these as a Decision variable in the influence diagram. The values we define for these nodes will
be used as initial guesses for optimization types that require an initial guess (NLP or non-convex QP). Otherwise, the definitions of these inputs are not important. We use 1cm as an initial guess
for both the radius and height.
Decision Radius := 1
Decision Height := 1
Constants have no special interpretation in optimization definitions. They can be used as usual for values that stay constant in the model. In this example, we will use a Constant for the required
volume which does not vary in the model.
Constant Required_Volume := 1000
General variables are used for intermediate values as well as for the central DefineOptimization function described below. We also use a variable to define Volume of the cylinder.
Variable Volume := pi*Radius^2*Height
Constraints contain equality or inequality conditions that restrict the range of optimized results. In this example, we use a constraint object to enforce the minimum volume requirement on our can.
Constraint Volume_Constraint := (Volume >= Required_Volume)
Most optimizations have an objective value to maximize or minimize. (Some problems are only concerned with feasible solutions that meet constraints.) In this example we are minimizing the surface
area of our can. We define surface area using an Objective variable. The can has round disks at the top and base with surface area (πR2) and a tubular side with surface area (2πRH).
Objective Surface_area := 2*(pi*Radius^2) + (2*pi*Radius*Height)
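For reference, the on-paper solution mentioned earlier takes a single derivative (a sketch in our notation, with V denoting the required volume): substituting $H = V/(\pi R^2)$ into $S = 2\pi R^2 + 2\pi R H$ gives $S(R) = 2\pi R^2 + 2V/R$, and setting $S'(R) = 4\pi R - 2V/R^2 = 0$ yields
$$R = \left(\frac{V}{2\pi}\right)^{1/3} \approx 5.42\ \text{cm}, \qquad H = \frac{V}{\pi R^2} = 2R \approx 10.84\ \text{cm}$$
for $V = 1000\ \text{cm}^3$. This is the unconstrained optimum the solver should reproduce.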
The DefineOptimization() Function
The DefineOptimization() function is the key component of all Structured Optimization models. It brings all other components together, specifying the optimization to be performed. This function is
typically placed in a Variable object in the center of our influence diagram. This function includes many optional parameters, but we use only the core parameters in this example:
• Decisions: Identifier for the decision node (or a list of identifiers separated by commas if there are multiple decisions). Specify All to include all decision nodes in the model or All in module to include all desired decisions within a designated module.
• Constraints: Identifier for the constraint node (or a list of identifiers separated by commas if there are multiple constraints). Specify All to include all constraint nodes in the model or All in module to include all desired constraints within a designated module. You can also specify inequality or equality expressions directly, or omit the parameter entirely in an unconstrained optimization.
• Maximize/Minimize: Use the word “Maximize” or “Minimize” depending on the type of problem. Follow this with an expression or with the identifier for the relevant objective node.
We specify our DefineOptimization node as:
Variable Opt := DefineOptimization(
Decisions: Radius, Height,
Constraints: Volume_Constraint,
Minimize: Surface_area)
Viewing the Optimization Object
The DefineOptimization function evaluates to a special object that contains detailed information about the optimization. The object appears as a blue hyperlink that shows the type of optimization
problem you have constructed. In this case we see it is NLP . You can double-click the optimization object to open a new window revealing internal details from the optimization engine. Clicking
reference objects allows you to drill down to finer levels of detail. This information is also available by using the OptInfo function.
In this case, we have allowed Analytica to automatically determine the type of problem. Alternatively, you can specify the problem type along with the desired engine and other settings by adding
optional parameters to DefineOptimization. See Optimizer Functions for more details about the Type and Engine parameters of DefineOptimization.
Obtaining the Solution
Now that we have specified an optimization, how do we compute and view the result? You may be tempted to re-evaluate the Radius and Height decision variables to see if their values have changed. But
this is not how optimization works in Analytica. Input values always retain their original definitions. (In this case, we simply used 1 as a dummy value for Radius and Height.) To obtain the
solution, you need to create an output node defined with the OptSolution function. This function usually uses two parameters:
OptSolution(Opt, Decision)
• Opt: Identifier for the node containing DefineOptimization
• Decision: Identifier for the counterpart Decision input node
Decision Opt_Radius := OptSolution(Opt, Radius)
Decision Opt_Height := OptSolution(Opt, Height)
The Decision parameter is optional. If it is omitted, the solution will include all decisions along a local index named .DecisionVector.
Obtaining the Optimized Objective Value
To conveniently evaluate the optimized objective value (the surface area of the solution can), you can use the OptObjective function. The only parameter is the identifier for the DefineOptimization node:
Objective Opt_Surface := OptObjective(Opt)
Viewing Optimization Status
To check the status of an optimization, use the OptStatusText function. Enter the identifier for the node containing DefineOptimization.
Variable Status := OptStatusText(Opt)
This will reveal a text string describing the status of the optimization result. Status messages differ according to problem characteristics and the engine being used. In general these messages
indicate whether or not a feasible solution has been found and if so, whether or not the optimizer was able to converge to a bounded solution. In this example the status is: “Optimal solution has been found.”
Copying Optimized Results to Definitions
In some cases, you may wish to copy the optimized decision values into the definition of the original decisions. With this, the result for variables downstream of the decisions will reflect their
optimal values as well.
You can configure your model to copy optimized results into the original decisions by adding two buttons to your model. The first button solves for the optimal solution and copies the optimal values. The second button restores the original (non-optimized) definitions. Functions provided in the Structured Optimization Tools.ana library take care of the details.
To configure these buttons:
1. With the diagram in focus, select Add Library from the File menu.
2. Select Structured Optimization Tools.ana and click Open.
3. Select Embed, then click OK.
4. Drag a button from the tool bar, title it "Set to Optimal."
6. Drag a second button to the diagram, name it "Restore Definitions" and set its Script attribute to Restore_Decision_Defs(opt).
Now we’re ready to try them out.
7. Open the object window for Radius:
Changing Variable Types (Domain)
Click either Radius or Height to open the Object window for the node. You will notice a pull-down menu for Domain. This attribute specifies the variable type. It is always visible for decision nodes
if you are using the Optimizer edition.
Suppose the factory requires Radius and Height to be integer values in centimeters for tooling purposes, or because they don’t like decimals. Change the Domains of Radius and Height to Integer and
re-evaluate the solution:
The new solution finds the integer values that come closest to meeting the optimization criteria.
See Optimizer Attributes for descriptions of all available domains.
Setting Bounds on Decision Values
Suppose the cans must not exceed a 5cm radius in order to meet National Association for the Advancement of People with Small Hands (NAAPSH) guidelines. One way to set this limit would be to add
another constraint. But since this restriction applies directly to one of the decision variables, it is easier to simply set an upper bound on the variable directly.
Double-click the Radius variable and enter 5 as the upper bound. The updated solution will describe a thinner can that is 5cm in Radius and 13cm in Height.
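As a quick sanity check of that bounded solution: with $R = 5$ cm fixed, the volume constraint forces $H = 1000/(\pi \cdot 5^2) \approx 12.7$ cm, consistent with the roughly 13 cm height reported above.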
Bounds and Domains
Some Domain types are not compatible with bounds. If one of these domains is selected (i.e. Boolean), bounds attributes will not be visible.
Bounds and Feasible Solutions
It is possible to have no feasible solution within the designated bounds. For example, if you restrict Radius to 5cm while restricting Height to 10cm, it will be impossible to produce a can that
meets the minimum volume constraint. The OptStatusText() function indicates whether or not a feasible solution has been found.
Using Parametric Analysis with Optimization
Before adding optimization to existing models, it is often useful to perform a parametric analysis to see how variations in decision inputs affect the objective value. If you have done this, your
Decision and Objective variables will include parametric indexes. To demonstrate this in the Optimum Can example, we can define the Radius to be a sequence of values that vary parametrically. We then
re-define Height such that the volume of the cylinder remains constant as radius varies:
Variable Radius := Sequence(4.5, 6.5, 0.1)
Variable Height := Required_volume/(pi*Radius^2)
Now you can evaluate the objective Surface_Area to see how it is affected by Radius.
An optimization requires a scalar-valued objective. An array-valued objective usually implies an array of optimizations, each optimizing an individual element of the objective array. But parametric
indexes are an exception to this rule! If the Objective is an array over parametric indexes, the indexes are ignored by the optimization. So even though we have an array valued Objective in this
example, there is still only one optimization run.
Parametric analysis is a good way to gain insight into your model. The Structured Optimization framework is designed so that it will not be confused by this.
The Initial Guess Attribute
LP and convex QP problems do not rely on initial guesses and always yield a solution that is globally optimal. But in NLP and non-convex QP problems it is not always possible to guarantee that a
solution found by the optimizer is a global optimum. It might be merely a “local” optimum within the solution space. Optimization methods for these problems use an initial guess from which to start
the search for a solution. The particular solution the optimizer returns may depend on the starting point.
Normally, Analytica uses the defined value of the Decision variables as the initial guess. In the Optimum Can example, we initially defined Radius and Height as 1. If a decision variable is defined
using a parametric index, Analytica uses the first element of the parametric array as the initial guess.
You can change the initial guess without re-defining the decision variable using the Initial Guess attribute in the Decision node. We can demonstrate this using the Polynomial NLP.ana example where
the objective is a non-convex curve with local maxima.
The Initial Guess attribute is hidden by default. To make it visible in Decision nodes:
• Select Attributes... from the Object menu.
• Check the Initial Guess box.
The attribute will now be visible in the Object windows of all Decision variables.
The polynomial curve in this model is designed to have several critical points.
Decision X := 0 (or any value at all)
Initial Guess of X := [-4, -2, 0, 2, 4]
Objective Polynomial :=
1 + X/6-X^2/2 + X^4/24 - X^6/720 + X^8/40320 - X^10/3628800
Variable Opt := DefineOptimization(
Decision: X,
Maximize: Polynomial)
Variable X_solution := OptSolution(Opt, X)
Objective Max_Objective := OptObjective(Opt)
The array of initial guesses will cause Analytica to abstract over the index and perform multiple optimizations.
We see that the result depends on the initial guess for this non-convex NLP.
If the array of guesses were entered as a definition for the decision variable instead of as an initial guess attribute, Analytica would interpret it as a parametric index and apply only one initial
guess. (See subsection above.) Therefore, it is necessary to use the Initial Guess parameter if you want to perform multiple optimizations using an array of guesses.
Finding multiple different local extrema in this fashion can be a useful way to locate multiple solutions of interest. One often needs to combine unmodeled factors with insight and results obtained
from a model, so these other solutions may turn out to be more interesting for reasons that you have not modeled.
When your interest is in finding just the global optimum, there are additional methods for dealing with the problem of local optima. Topological search options can be utilized with gradient-based methods (see Coping with Local Optima). The “Evolutionary” and “OptQuest” engines use population-based search methods that are more robust to local optima.
Summary of Optimum Can example
This Optimum Can example demonstrates how to formulate and analyze an optimization problem. It includes input Decision variables, a Constraint, an Objective, intermediate Variables and the central
DefineOptimization function.
The DefineOptimization function recognizes the non-linear characteristics of the Optimum Can model and classifies it as an NLP. The function evaluates as a special object containing details about the optimization.
The Domain attributes in Decisions allow setup of variable type and bounds.
Structured Optimization is compatible with a decision variable defined as a parametrically varying sequence.
If the Initial Guess attribute is kept hidden or left blank, Analytica will use the defined value of the decision variable as an initial guess. Users can override this value or enter an array of
initial guesses by using the Initial Guess attribute in Decision nodes. This attribute is hidden by default but can be made visible when necessary.
See Also
|
{"url":"https://docs.analytica.com/index.php/Optimizer_Quick_Start","timestamp":"2024-11-02T05:35:34Z","content_type":"text/html","content_length":"44897","record_id":"<urn:uuid:1d0c7c19-6302-4436-ba65-d8d60bb38a5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00792.warc.gz"}
|
Limits: time – 2 s / 4 s, memory – 64 MiB. Input: input.txt or standard input. Output: output.txt or standard output.
As is customary, Ivan has shot an arrow to hit a potential bride. He did not know the local swamp was not inhabited by frogs – neither regular nor princess ones. Anyway, the arrow did not fall into
the swamp, but flew over to the other side. To retrieve it Ivan has to cross the swamp.
The swamp is a strip of width `Y` stretching indefinitely both to the east and to the west. Ivan stands at the southern border of the swamp, and wants to get to the northern border. Ivan has a map
showing `N` hummocks capable of holding his weight.
Hummocks are too far from each other to jump, so Ivan has found a plank of length `M` to use as a bridge. Hummocks are also too slippery to stand on, so Ivan decided to break the plank in two parts (of lengths `L` and `M - L`) and use them to move in the following way: he would stand on the first part, throw the second part over to the nearby hummock, step onto the second part, pick up the first
part, walk with it across the second one. Repeating this procedure as necessary, Ivan hopes to reach the other side of the swamp as fast as possible.
Ivan can choose any value of `L` such that `0 < L < M`. An important condition is that Ivan must always stand on the plank. He can, however, put both parts of the plank between the same two hummocks
and/or use the parts in any order he wants (for example, put part 1 between hummocks A and B, move to hummock B, put part 2 between hummocks B and C, step onto part 2 staying at hummock B, pick part
1 and put it between B and D, move to hummock D taking part 2 with him).
Your program will be given the swamp width, plank length and hummocks coordinates. It must calculate the shortest distance Ivan has to walk to cross the swamp or determine that he can not do so.
Distance is measured from hummock to hummock, even if plank part is longer than the distance between hummocks.
Input contains integers `Y`, `M`, `N`, followed by `N` pairs of integers `x_i y_i` – hummock coordinates. The swamp edge Ivan stands on coincides with the `y = 0` line, and the opposite edge – with the `y = Y` line.
Output must contain a single floating point number – the shortest distance Ivan has to travel with at least 3 correct decimal digits. If there is no solution, output must contain the number `-1`.
`1 ≤ N ≤ 100`, `1 ≤ Y, M ≤ 1000`, `0 ≤ x_i, y_i ≤ 1000`
Sample Input 1
Sample Input 2
An image below illustrates a solution to the first sample.
Source: D. Vikharev, ICPC NEERC Far-Eastern Subreginal, 2007
|
{"url":"https://ipc.susu.ru/210-2.html?problem=1023","timestamp":"2024-11-04T01:23:06Z","content_type":"application/xhtml+xml","content_length":"12855","record_id":"<urn:uuid:94b2b697-c90a-4db6-9aaa-39b288e132a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00278.warc.gz"}
|
Enemy Exp Formulas
This script allows you to use a formula to calculate how much exp is obtained from an enemy.
Script: download here
Place this script below Materials and above Main
Note-tag enemies with
<exp formula: FORMULA>
Where the FORMULA is any valid ruby formula that returns a number.
You can use the following variables in your formula:
exp - the default exp value set in the database for that enemy
a - this enemy
p - game party
t - game troop
v - game variables
s - game switches
If no exp formula is specified, then the default exp is given.
If you would like to use ruby statements that extend across multiple lines,
you can use the extended note-tag:
<exp formula>
if v[1]
  exp + 200
else
  exp - 200
end
</exp formula>
This may be useful if you have complex logic and you don’t want to write it as
a one-liner.
Divide exp by the number of battle members
<exp formula: exp / p.battle_members.size >
Increase exp by some multiplier based on variable 1
<exp formula: exp + (v[1] * 100) >
Give a nice bonus based on whether switch 1 is ON
<exp formula: s[1] ? exp * 10000 : exp >
8 Responses
1. Wow, thanks a ton! This should help a lot to get a NG+ Exp bonus. :3
I do have a question though: Is there a formula that lets you calculate the Exp different depending on whether or not one of two switches is active?
(For example, if Switch 1 is active, the party gains double exp, but if switch 2 is active instead, they will only gain half the exp)
Either way, thanks :3
□ Yes, you can use multiple conditional blocks
if s[1]       # switch 1 is ON
  exp * 2
elsif s[2]    # switch 2 is ON
  exp * 0.5
else          # both are OFF
  exp
end
☆ Thank you SO much! This is perfect! X3
2. Handy script 🙂 Nice to see some customisation for enemy exp.
3. Updated to support “this enemy” formula variable.
4. I just found a reason to need this script. 🙂
Once again, the proliferation of your work often precedes need!
5. its is possible to give different xp for each member of the party?
ex. actr1 is level 20. so the actr will gain 90% of the given xp.
actr2 is level 30. so the actr2 will gain 80% of the given xp.
□ Not with this script. This only determines how much exp an enemy gives, not how much a particular actor obtains.
|
{"url":"https://himeworks.com/2013/06/enemy-exp-formulas/","timestamp":"2024-11-04T12:12:54Z","content_type":"text/html","content_length":"88690","record_id":"<urn:uuid:4c9f1979-055a-4146-867a-7dd73f5c8dae>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00772.warc.gz"}
|
Introduction - Views on the Meaning and Ontology of Mathematics - Mathematics, Substance and Surmise
Mathematics, Substance and Surmise: Views on the Meaning and Ontology of Mathematics (2015)
Ernest Davis^1 ^
Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, New York, NY, USA
Ernest Davis
Email: davise@cs.nyu.edu
Mathematics discusses an enormous menagerie of mathematical objects: the number 28, the regular icosahedron, the finite field of size 169, the Gaussian distribution, and so on. It makes statements
about them: 28 is a perfect number, the Gaussian distribution is symmetric about its mean. Yet it is not at all clear what kind of entity these objects are. Mathematical objects do not seem to be
exactly like physical entities, like the Eiffel Tower; nor like fictional entities, like Hamlet; nor like socially constructed entities, like the English language or the US Senate; nor like
structures arbitrarily imposed on the world, like the constellation Orion. Do mathematicians invent mathematical objects; or posit them; or discover them? Perhaps objects emerge of themselves, from
the sea of mathematical thinking, or perhaps they “come into being as we probe” as suggested by Michael Dummett [1].
Most of us who have done mathematics have at least the strong impression that the truth of mathematical statements is independent both of human choices, unlike truths about Hamlet, and of the state
of the external world, unlike truths about the planet Venus. Though it has sometimes been argued that mathematical facts are just statements that hold by definition, that certainly doesn’t seem to be
the case; the fact that the number of primes less than N is approximately N/log N is hardly an obvious restatement of the definition of a prime number. Is mathematical knowledge fundamentally different from other kinds
of knowledge or is it simply on one end of a spectrum of certainty?
Similarly, the truth of mathematics—like science in general, but even more strongly—is traditionally viewed as independent of the quirks and flaws of human society and politics. We know, however,
that math has often been used for political purposes, often beneficent ones, but all too often to justify and enable oppression and cruelty.^1 Most scientists would view such applications of
mathematics as scientifically unwarranted; avoidable, at least in principle; and in any case irrelevant to the validity of the mathematics in its own terms. Others would argue that “the validity of
the mathematics in its own terms” is an illusion and the phrase is propaganda; and that the study of mathematics, and the placing of mathematics on a pedestal, carry inherent political baggage.
“Freedom is the freedom to say that two plus two makes four” wrote George Orwell, in a fiercely political book whose title is one of the most famous numbers in literature; was he right, or is the
statement that two plus two makes four a subtle endorsement of power and subjection?
Concomitant with these general questions are many more specific ones. Are the integer 28, the real number 28.0, the complex number 28.0 + 0i, the 1 × 1 matrix [28], and the constant function f(x)=
28 the same entity or different entities? Different programming languages have different answers. Is “the integer 28” a single entity or a collection of similar entities; the signed integer, the
whole number, the ordinal, the cardinal, and so on? Did Euler mean the same thing that we do when he wrote an integral sign? For that matter, do a contemporary measure theorist, a PDE expert, a
numerical analyst, and a statistician all mean the same thing when they use an integral sign?
Such questions have been debated among philosophers and mathematicians for at least two and a half millennia. But, though the questions are eternal, the answers may not be. The standpoint from which
we view these issues is significantly different from that of Hilbert and Poincaré, to say nothing of Newton and Leibniz, Plato and Pythagoras, reflecting the many changes the last century has brought.
Mathematics itself has changed tremendously: vast new areas, new techniques, new modes of thought have opened up, while other areas have been largely abandoned. The applications and misapplications
of mathematics to the sciences, technology, the arts, the humanities, and society have exploded. The electronic computer has arrived and has transformed the landscape. Computer technology offers a
whole collection of new opportunities, new techniques, and new challenges for mathematical research; it also brings along its own distinctive viewpoint on mathematics.
The past century has also seen enormous growth in our understanding of mathematics and mathematical concepts as a cognitive and cultural phenomenon. A great deal is now known about the psychology and
even the neuroscience of basic mathematical ability; about mathematical concepts in other cultures; about mathematical reasoning in children, in pre-verbal infants, and in animals.
Moreover the larger intellectual environment has altered, and with it, our views of truth and knowledge generally. Works such as Kuhn’s analysis of scientific progress and Foucault’s analysis of the
social aspects of knowledge have become part of the general intellectual currency. One can decide to reject them, but one cannot ignore them.
This book
The seventeen essays in this book address many different aspects of the ontology and meaning of mathematics from many different viewpoints. The authors include mathematicians, philosophers, computer
scientists, cognitive psychologists, sociologists, historians, and educators. Some attack the ontological problem head on and propose a specific answer to the question, “What is a mathematical
object?”; some attack it obliquely, by discussing related aspects of mathematical thought and practice; and some argue that the question is either useless or meaningless.
It would be both unnecessary and foolhardy to attempt to summarize the individual chapters. But it may be of interest to note some common themes:
·The history of math, mathematical philosophy, and mathematical practice. (Avigad, Bailey & Borwein, Gillies, Gray, Lützen, O’Halloran, Martin & Pease, Ross, Stillwell, Verran). Among scientists,
mathematicians are particularly aware of the history of their field; and philosophers sometimes seem to be aware of little else. Many different aspects and stages in the evolution of thinking about
mathematics and its objects are traced in these different essays.
·The real line and Euclidean geometry (Bailey & Borwein, Berlinski, E. Davis, Gillies, Gray, Lützen, Stillwell). These essays in the collection touch on many different mathematical theories and
objects, but the problems associated with the real line and with Euclidean geometry are particularly prominent.
·The role of language (Avigad, Azzouni, Gray, O’Halloran, Piantadosi, Ross, Sinclair). On the one hand, mathematics itself seems to be something like a language; Ross discusses the view that
mathematics is a universal or ideal language. On the other hand, a question like “Do mathematical objects exist?” is itself a linguistic expression; and it can be argued that difficulties of
answering it derive from illusions about the precision or scope of language. Sinclair argues that we may be using the wrong language for mathematics; rather than thinking of mathematical entities as
nouns, we should be thinking of them as verbs. Martin and Pease’s essay focuses on the related issue of communication in mathematical collaboration.
·The mathematics of the 21st century (Avigad, Bailey & Borwein, Martin & Pease, Sinclair). Several of our authors look forward to a broadening of the conceptualization and the practice of
mathematics in the coming century. The answers to the questions “What is mathematics?” and “What are mathematical objects?” may change, in directions that have recently emerged in the mathematical community.
·Applications. E. Davis considers the applications of geometry to robotics. Bailey and Borwein discuss numerous applications of mathematical simulation including space travel, planetary dynamics,
protein analysis, and snow crystals. Verran considers the (mis)applications of statistics to policy. In the opposite direction, Berlinski discusses the difficulty of making precise the sense in which
mathematics can be applied at all to physics or to any other non-mathematical domain.
·Psychology: How people think about mathematics. This is front and center in Rips, but it is just below the surface in all the essays. Arguably, that is the real subject of this book.
Why a multidisciplinary^2 collection?
In the last few decades, universities, research institutions, and funding agencies have made a large, deliberate effort to encourage interdisciplinary research. There is a good reason for this. On
the one hand, there is much important research that requires the involvement of multiple disciplines. On the other hand, overwhelmingly, the institutions of science and scholarship—departments,
academic programs, journals, conferences, and so on—are set up along disciplinary lines. As a result, it can often be hard for good interdisciplinary work to get published; for researchers to get
promoted, tenured, and recognized; and for students to get trained. Therefore, it is both necessary and highly important for the powers that be to counteract this tendency by energetically welcoming
and promoting interdisciplinary research.
However this laudable effort has often been both taken too far and trivialized. The word “multidisciplinary” and its many near synonyms^3 have often become mindless mantras, particularly among
university administrators. At times they have become terms of purely generic praise, indiscriminately applied to any research, however narrow in scope. There has been some healthy reaction against
this (e.g., [2]), but in general, the fad is still in full swing.
Since this academic trend is both so important and also so faddish and so often overhyped, it is wise to be initially both welcoming and skeptical of each new manifestation. A collection like this
one raises two natural questions in that regard. First: The existence of mathematical objects and the truth of mathematical statements are clearly within the purview of the philosophy of mathematics.
In fact, they are central questions in the philosophy of mathematics, and there is a large philosophical literature on the subject. So why should one suppose that other disciplines have anything to
contribute to the question?
Second: Each of the authors in this collection is an expert in their own discipline, and primarily publishes their work in journal articles and books addressed to other experts in their discipline.
Jody Azzouni publishes articles addressed to philosophers in Synthese, Philosophia Mathematica, etc.; Lance Rips and Steve Piantadosi publish articles addressed to psychologists in Psychological
Science, Cognition, etc.; and so on down the line. Contrary to the cult of interdisciplinarity, this kind of specialized communication to a limited audience is not regrettable; it is fruitful and
essentially inevitable, given the degree of expertise needed to read an original technical research article in any given field. What do we actually expect to accomplish by putting all these disparate
viewpoints together between the covers of a book? Will the result be anything more than nineteen specialists all talking past one another?
The essays in this book are themselves the best answer to the first question. Manifestly, each of the disciplines represented here does have its own distinctive viewpoints and approaches to the
nature of mathematical objects, mathematical truths, and mathematical knowledge, and brings to bear considerations that the other disciplines tend to discount or overlook. The relations between large
cardinal theory and real analysis that John Stillwell explains certainly bear on questions of mathematical ontology; so, in a different way, does the psychological evidence that Lance Rips discusses.
I will not say that the question of the nature of mathematics is too important to be left to the philosophers of mathematics; but it is, perhaps, too protean and too elusive.
As regards the second question, one has to keep one's expectations modest, at least in the short term. We do not expect any dramatic direct interdisciplinary cross-fertilizations to emerge here. Lance
Rips will not find that David Bailey and Jon Borwein’s computational verification of intricate identities in real analysis give him the key he needs for his studies of the cognitive development of
mathematical understanding; nor vice versa. The most that one hopes for is a slight broadening of view, a partial wakening from dogmatic slumbers. As a scientist spends years struggling to apply her
own favorite methods to her own chosen problems, it can be easy to lose sight of the fact that there are not only other answers, but even other questions. Seeing the very different way in which other
people look at your subject matter is valuable, even though their answers or their questions are only occasionally directly useful, because they shine an indirect light that leads you to your own
alternative paths.
Further in the future, though, perhaps we can look forward to a deeper integration of these points of view. In particular, as mentioned above, all of these essays engage with the question of how
people think about mathematics; and therefore it is reasonable to suppose that a complete theory of that, one that would explain how people think about mathematics, from basic counting to inaccessible
cardinals and beyond, might draw on and combine all these kinds of considerations. This collection is perhaps a small step toward that ultimate overarching vision.
There are some regrettable gaps in our collection. On the mathematical side: There is no discussion of probability, which, it seems to me, raises important and difficult questions at least of
interpretation, if not of ontology. Verran’s chapter deals incisively with the uses of statistics, but we have no discussion of the theory or the history of statistics. We are thin in algebra; Gray’s
and Lützen’s essays discuss 18th and 19th century developments, but the great accomplishments of the 20th century go almost unmentioned; we have no one to tell us in what sense the monster group
exists. On the disciplinary side: we have no one from the natural sciences and no one from the arts. It is not so easy to collect all the contributors for a book that one might wish for.
Web site
There is a web site for the book at http://www.cs.nyu.edu/faculty/davise/MathOntology/ with supplementary materials.
Order of chapters
The chapters have been ordered so as to maximize the mean similarity between consecutive chapters, subject to the constraint that the chapter by Martin and Pease came first, since that seemed like a
good starting point. Details can be found at the web site.
In the final analysis, perhaps the best claim that mathematical objects have on existence is the excitement that they provoke in their devotees. Of all the wonderful material in this book, perhaps my
favorite is a short anecdote that Bailey and Borwein tell of Paul Erdős (a variant on steroids of the well-known story of Ramanujan and cab #1729.)
Shortly before his death, Paul Erdős was shown the form
at a meeting in Kalamazoo. He was ill and lethargic, but he immediately perked up, rushed off, and returned 20 minutes later saying excitedly that he had no idea how to prove [the formula], but that
if proven it would have implications for the irrationality of ζ(3) and ζ(7). (Somehow in 20 minutes, an unwell Erdős had intuited backwards our whole discovery process.)
Similarly, the best justification for raising the question, “Do mathematical objects exist?” is this collection of fascinating and insightful responses that the question has elicited; even among
those authors who have rejected the question as meaningless. Speaking personally, few things in my professional life have given me more pleasure than editing this book with my father.^4 It was really
thrilling to open each email with a new chapter from another author, and see the wonderful stone soups that they had concocted starting with our simple-minded question. If our readers share that
pleasure and excitement, then the book is a success.
We thank all the authors for their wonderful contributions and their encouragement. Particular thanks to Jeremy Avigad, Jon Borwein, and Gary Marcus for much helpful feedback and many valuable suggestions.
M. Dummett, ‘Truth’, Proceedings of the Aristotelian Society, n.s. 69, 1959, 141–162.
J.A. Jacobs, In defense of disciplines: Interdisciplinarity and specialization in the research university, U. Chicago, 2013.
The forthcoming book Weapons of Math Destruction by Cathy O’Neil studies how modern methods of data collection and analysis can feed this kind of abuse.
This book is, strictly speaking, multidisciplinary rather than interdisciplinary. That is, it brings together multiple disciplines in a single volume, but does not reflect any very strong integration
of these.
These include “cross-disciplinary,” “extradisciplinary,” “hyperdisciplinary,” “interdisciplinary,” “metadisciplinary,” “neo-disciplinary,” “omnidisciplinary,” “pandisciplinary,” “pluridisciplinary,”
“polydisciplinary,” “postdisciplinary,” “supradisciplinary,” “superdisciplinary,” and “transdisciplinary.” The reader may Google for specific citations.
Philip Davis and Ernest Davis are father and son, in case you were wondering.
|
{"url":"https://schoolbag.info/mathematics/ontology/1.html","timestamp":"2024-11-02T11:12:59Z","content_type":"text/html","content_length":"29178","record_id":"<urn:uuid:3eddf104-ce8d-4583-b3f2-b67446d611e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00052.warc.gz"}
|
An LCR series circuit with R=100Ω is connected to a 200 V,50 Hz... | Filo
An LCR series circuit with R = 100 Ω is connected to a 200 V, 50 Hz a.c. source. When only the capacitance is removed, the current lags the voltage by a certain angle; when only the inductance is removed, the current leads the voltage by the same angle. The current in the circuit is
If the capacitance is removed, it is an L-R circuit, in which the current lags the voltage with tan φ = X_L / R.
If the inductance is removed, it is a capacitive (C-R) circuit, in which the current leads the voltage with tan φ = X_C / R. Since tan φ is the same in both cases, X_L = X_C.
This is a resonance circuit, so the impedance is Z = R and the current is I = V/R = 200 V / 100 Ω = 2 A.
|
{"url":"https://askfilo.com/physics-question-answers/an-l-c-r-series-circuit-with-r-100-omega-is-connected-to-a","timestamp":"2024-11-14T20:52:23Z","content_type":"text/html","content_length":"252392","record_id":"<urn:uuid:48519aa0-0ed4-4bf0-8aba-a562703d0d80>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00630.warc.gz"}
|
Linear Regression Analysis: Plotting Lines in R
You will learn the pivotal steps to interpret data visually with R’s linear regression plotting.
Linear regression analysis is a foundational statistical tool that models and analyzes the relationships between a dependent variable and one or more independent variables. It allows us to predict
outcomes and understand the underlying patterns in our data. By fitting a linear equation to observed data, linear regression estimates the coefficients of the equation, which are used to predict the
dependent variable from the independent variables.
The importance of visual representation in statistical analysis cannot be overstated. Graphs and plots provide an immediate way to see patterns, trends, outliers, and the potential relationship
between variables. In R, plotting is an integral part of the exploratory data analysis process, helping to understand complex relationships in an accessible and informative way.
The scatter plot above, created from a dataset simulating the relationship between body mass and height, is a perfect starting point for linear regression analysis. It provides a visual foundation
for applying a linear model and extracting insights, exemplifying how visual tools are essential for practical statistical analysis. Visualizing our data allows us to communicate results better,
share insights, and make informed decisions.
• Discover how R’s ‘lm()’ function calculates precise linear models.
• Visualize data relationships with custom plots in R.
• Master the interpretation of R’s regression output for applied analysis.
• Learn to enhance plots with R’s advanced graphical packages.
• Gain insights into R’s ‘abline()’ function for regression line representation.
Conceptual Foundation
Linear regression is finding the linear relationship between the dependent variable and one or more independent variables. The core concept behind linear regression is determining the best-fitting
straight line through the data points. The regression equation represents this line:
y = β0 + β1x1 + β2x2 + … + βnxn + ϵ
where y is the dependent variable, β0 is the y-intercept, β1, …, βn are the coefficients, x1, …, xn are the independent variables, and ϵ represents the error term.
The importance of the relationship between dependent and independent variables in linear regression cannot be understated. The dependent variable, also known as the response or the predicted
variable, is what we aim to predict or explain. The independent variables, also known as predictors or explanatory variables, are the inputs we use for prediction. The strength and form of the
relationship are determined by the coefficients β1, …, βn, which signify how a unit change in the independent variable affects the dependent variable.
Understanding this relationship is critical because it forms the basis of the insights we can draw from the model. For instance, if we analyze the relationship between body mass (independent
variable) and height (dependent variable), the coefficient tells us how much we would expect height to change, on average, with each additional kilogram of body mass.
In data analysis and science, these concepts are not just mathematical abstractions. They represent the profound interconnectivity of variables in natural phenomena and human-centric research. By
unveiling these connections through linear regression analysis, we contribute to a body of knowledge that reflects the orderly and systematic nature of the universe, aligning with our pursuit of what
is authentic and meaningful.
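To make the link between this equation and the estimated coefficients concrete, the simple one-predictor case can be computed by hand in R. The tiny data vectors below are illustrative values only and are not taken from any dataset in this article:
# Least-squares estimates for a single predictor, computed directly:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x)
x <- c(50, 60, 70, 80, 90)        # illustrative body-mass values (kg)
y <- c(155, 163, 170, 179, 186)   # illustrative heights (cm)
slope <- cov(x, y) / var(x)
intercept <- mean(y) - slope * mean(x)
c(intercept = intercept, slope = slope)
# coef(lm(y ~ x)) returns the same two numbers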
Setting up the Environment
Before delving into the analysis, setting up a proper environment in R is crucial for efficient and effective data plotting. Here’s a step-by-step guide to getting your R environment ready for linear
regression analysis and plotting:
1. Install R and RStudio:
• Download and install R from the Comprehensive R Archive Network (CRAN).
• Optionally, download and install RStudio, a powerful and user-friendly interface for R.
2. Open RStudio and Set Your Working Directory:
• Use ‘setwd(“your_directory_path”)’ to set your working environment where your data and scripts will be stored.
3. Update R and Install Packages:
• Update R to the latest version using ‘update.packages(ask=FALSE)’.
• Install the necessary packages using ‘install.packages()’. For linear regression plotting, install ‘dplyr’ and ‘tidyr’ for data manipulation and ‘ggplot2’ for advanced plotting
4. Load the Packages:
• Load the installed packages into the library with ‘library(package_name)’.
5. Check for Updates Regularly:
• Regularly check and update your packages to ensure compatibility and access to the latest features.
# Setting up the Working Directory
# Replace 'your_directory_path' with the path where you want to store your data and scripts
setwd("your_directory_path")
# Updating R packages
update.packages(ask = FALSE)
# Installing necessary packages for linear regression plotting
# ggplot2 for plotting, dplyr and tidyr for data manipulation
install.packages(c("ggplot2", "dplyr", "tidyr"))
# Loading the packages into R
library(ggplot2)
library(dplyr)
library(tidyr)
# Check for updates regularly - This is just a reminder, as you'll run this when needed
# update.packages(ask = FALSE)
Data Preparation
Data preparation is a critical stage in linear regression analysis, where data is collected, cleaned, and transformed into a suitable format for analysis. This process often involves several steps to
ensure the data’s integrity and relevance to the research question.
1. Data Collection:
• Collect data from reliable sources to ensure its accuracy and validity.
• Ensure the data collected is relevant to the variables of interest in the linear regression model.
2. Data Cleaning:
• Identify and handle missing values appropriately, whether by imputation or removal.
• Detect and correct errors or outliers that may skew the analysis.
3. Data Transformation:
• Convert data into the correct format for analysis, such as changing data types or normalizing scales.
• Create dummy variables for categorical data to be used in the regression model.
4. Data Exploration:
• Conduct exploratory data analysis (EDA) to understand the data’s distribution and identify patterns or anomalies.
• Use visualizations to spot trends, clusters, and outliers that may affect the regression model.
5. Data Splitting:
• If applicable, split the data into training and testing sets to validate the model’s predictive performance.
For our dataset, we consider the relationship between body mass (independent variable) and height (dependent variable). The data set comprises body mass measurements in kilograms and height in
centimeters for a sample population. This dataset is ideal for demonstrating linear regression because it likely exhibits a linear relationship, as body mass and height are typically correlated in
biological studies.
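The article describes this dataset but does not include code to load it. A minimal sketch that simulates a comparable ‘dataset’ data frame is shown below; the object name ‘dataset’ and the column names ‘body_mass’ and ‘height’ are chosen to match the plotting code that follows, and the specific relationship and noise level are illustrative assumptions only.
# Simulate a small body-mass / height dataset for the following examples
set.seed(42)                                        # reproducible random numbers
n <- 100                                            # sample size
body_mass <- runif(n, min = 45, max = 95)           # body mass in kg
height <- 110 + 0.9 * body_mass + rnorm(n, sd = 5)  # height in cm, with noise
dataset <- data.frame(body_mass = body_mass, height = height)
head(dataset)                                       # quick look at the first rows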
Plotting with R
Plotting in R combines art and science, offering tools to represent data for analysis and communication visually. Using R’s base plotting system, ggplot2, or other visualization packages, you can
create informative and aesthetically pleasing plots. Let’s explore the basic plotting techniques in R and how to customize these plots effectively.
1. Base R Plotting:
Base R provides simple plotting functions quite powerful. The ‘plot()’ function is one of the most commonly used:
# Basic scatter plot with R's base plotting system
plot(x = dataset$body_mass, y = dataset$height,
main = "Scatter Plot of Body Mass vs. Height",
xlab = "Body Mass (kg)", ylab = "Height (cm)",
pch = 19, col = "blue")
Here, ‘x‘ and ‘y‘ are the variables to be plotted, ‘main‘ is the plot’s title, ‘xlab‘ and ‘ylab‘ are labels for the x and y-axes, ‘pch‘ sets the type of point to use, and ‘col‘ determines the color
of the points.
2. Customizing Plots
Customization involves changing default settings to make the plot convey information more effectively and to make it more visually appealing.
# Customizing the plot with additional arguments
plot(x = dataset$body_mass, y = dataset$height,
main = "Scatter Plot of Body Mass vs. Height",
xlab = "Body Mass (kg)", ylab = "Height (cm)",
pch = 19, col = "blue", cex = 1.5,
xlim = c(40, 100), ylim = c(140, 200))
Here, ‘cex‘ controls the size of the points, while ‘xlim‘ and ‘ylim‘ set the limits of the x and y axes, respectively.
3. Advanced Plotting with ‘ggplot2‘
‘ggplot2’ is a powerful system for creating graphics that provides more control over the plot’s aesthetics.
# Advanced plotting with ggplot2
ggplot(data = dataset, aes(x = body_mass, y = height)) +
geom_point(color = "blue") +
ggtitle("Scatter Plot of Body Mass vs. Height") +
xlab("Body Mass (kg)") +
ylab("Height (cm)") +
In this ‘ggplot‘ syntax, ‘aes‘ defines the aesthetic mappings, ‘geom_point‘ adds the scatter plot layer, ‘ggtitle‘, ‘xlab‘, and ‘ylab‘ provide titles and labels, and ‘theme_minimal()‘ applies a
minimalistic theme to the plot.
Linear Regression Computation
The computation of a linear regression model in R is primarily conducted using the ‘lm()’ function, which stands for ‘linear model’. The ‘lm()‘ function fits a linear model to a data set by
estimating the coefficients that result in the best fit, minimizing the sum of the squared residuals.
Here is how the ‘lm()‘ function is generally used:
# Fit a linear model to the data
linear_model <- lm(height ~ body_mass, data = dataset)
# Summarize the model to view the coefficients
summary(linear_model)
In the ‘lm()‘ function, ‘height ~ body_mass‘ specifies the model with ‘height‘ as the dependent variable and ‘body_mass‘ as the independent variable. The ‘data = dataset‘ argument tells R which data
frame to use for the variables.
The ‘summary()’ function then provides a detailed output, including the estimated coefficients (intercept and slope), critical for understanding the regression equation. The output also includes
statistical measures such as the R-squared value, which indicates the proportion of variance in the dependent variable that can be predicted from the independent variable.
Interpreting the coefficients is straightforward:
• Intercept (β0): This is the expected mean ‘height‘ value when ‘body_mass‘ is zero. It’s where the regression line crosses the Y-axis.
• Slope (β1): This represents the estimated change in ‘height‘ for a one-unit change in ‘body_mass‘. If ‘β1‘ is positive, it means that as ‘body_mass‘ increases, ‘height‘ tends to increase.
Understanding the regression equation is essential as it allows us to make predictions and understand the relationship between variables. For instance, if ‘β0‘ is 100 and ‘β1‘ is 0.5, the regression
equation would be ‘height = 100 + 0.5 * body_mass’. For each additional kilogram of body mass, height is expected to increase by half a centimeter.
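As an illustration of how the fitted equation is used in practice, the model object can be passed to R’s predict() function; the 70 kg input below is just an arbitrary example value:
# Predict height for a new body-mass value using the fitted model
new_obs <- data.frame(body_mass = 70)              # hypothetical new observation (kg)
predict(linear_model, newdata = new_obs)           # point prediction of height (cm)
# A 95% prediction interval can be requested as well
predict(linear_model, newdata = new_obs, interval = "prediction")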
Visualizing the Regression Line
Visualizing the regression line is a crucial step in understanding the relationship your linear model represents. The regression line visually represents the linear equation fitted to your data.
Here’s how you can add a regression line to your plots in R:
1. Using the abline() Function:
The ‘abline()’ function is a convenient tool in R’s base plotting system that allows you to add straight lines to a plot. After fitting a linear model using the ‘lm()’ function, add a regression line
using the model’s intercept and slope.
# Assuming linear_model is your lm object from fitting the data
linear_model <- lm(height ~ body_mass, data = dataset)
# Basic scatter plot
plot(dataset$body_mass, dataset$height,
main = "Scatter Plot with Regression Line",
xlab = "Body Mass (kg)", ylab = "Height (cm)",
pch = 19, col = "blue")
# Add the regression line
abline(linear_model, col = "red")
In this code, ‘abline(linear_model, col = “red”)’ automatically extracts the intercept and slope from your ‘linear_model’ object and adds a red regression line to your plot.
2. Using the lm() Directly with abline():
Alternatively, you can skip creating a linear model object and directly input the formula and dataset into ‘abline()’.
# Directly adding a regression line without storing the lm object
abline(lm(height ~ body_mass, data = dataset), col = "red")
This line of code performs the linear regression computation. It adds the regression line to the existing plot in one step.
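If you also want to show the uncertainty around the fitted line in base R, one common approach (not covered in this article, sketched here as an optional extra) is to overlay confidence bounds computed with predict() on the existing scatter plot:
# Overlay approximate 95% confidence bands for the fitted line
mass_grid <- data.frame(body_mass = seq(min(dataset$body_mass),
                                        max(dataset$body_mass), length.out = 100))
ci <- predict(linear_model, newdata = mass_grid, interval = "confidence")
lines(mass_grid$body_mass, ci[, "lwr"], col = "red", lty = 2)  # lower bound
lines(mass_grid$body_mass, ci[, "upr"], col = "red", lty = 2)  # upper bound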
Advanced Visualization Techniques
Enhancing your data visualizations goes beyond the basic plots. It involves leveraging the power of additional R packages and interactive plotting capabilities. These advanced techniques can
significantly improve the engagement and interpretability of your data visualizations.
1. Utilizing ‘ggplot2’ for Advanced Customization:
‘ggplot2’ is a versatile package that allows for intricate and customizable plotting in R. With its layer-based approach, you can build plots piece by piece, adding aesthetic elements and statistical
# Start with the basic plot
ggplot(dataset, aes(x = body_mass, y = height)) +
geom_point() + # Add points
geom_smooth(method = "lm", se = FALSE, color = "red") + # Add a linear regression line
theme_bw() + # Use a minimalistic theme
labs(title = "Body Mass vs. Height with Regression Line",
x = "Body Mass (kg)", y = "Height (cm)") +
scale_color_manual(values = c("Points" = "blue", "Line" = "red"))
In this example, ‘geom_smooth(method = “lm”)’ adds a linear regression line directly to the plot, and ‘theme_bw()’ applies a minimalistic theme. ‘labs()’ labels the plot and axes, enhancing clarity
and readability.
2. Creating Interactive Plots with ‘plotly’:
For a more engaging experience, especially in web-based environments, ‘plotly’ offers interactive plotting capabilities where users can hover over data points, zoom in/out, and pan across plots.
# Convert ggplot2 to plotly
library(plotly)
p <- ggplot(dataset, aes(x = body_mass, y = height)) +
geom_point() +
geom_smooth(method = "lm", se = FALSE, color = "red") +
labs(title = "Interactive Plot of Body Mass vs. Height",
x = "Body Mass (kg)", y = "Height (cm)")
# Convert to plotly object
ggplotly(p)
Converting a ‘ggplot2’ object to a ‘plotly’ object is straightforward and retains the layers and customizations added in ‘ggplot2’. The resulting interactive plot allows users to explore the data
more dynamically, making the visualization a presentation tool and an exploratory device.
3. Enhancing Plots with ‘gganimate’ for Dynamic Visualizations:
‘gganimate’ extends ‘ggplot2’ by adding animation capabilities, making it possible to illustrate changes in data over time or conditions dynamically and compellingly.
# Assuming 'time' is a variable in your dataset
library(gganimate)
p <- ggplot(dataset, aes(x = body_mass, y = height)) +
geom_line() +
transition_reveal(time)   # reveal the line progressively along 'time'
# Render the animation (requires the gifski package)
animate(p, renderer = gifski_renderer())
This code snippet demonstrates creating a line plot that reveals itself over ‘time’, captivatingly showing progression, trends, or patterns evolving.
Interpreting Results
Interpreting the output from R, particularly from linear regression analysis, requires understanding the statistical summaries provided by functions like ‘summary()’ when applied to an ‘lm’ object.
This output includes several vital components illuminating the relationship between variables and the model’s overall fit.
1. Coefficients:
• Intercept (β0): Represents the expected value of the dependent variable when all independent variables are zero. It’s the point where the regression line intersects the Y-axis.
• Slope (β1, β2, …): Each coefficient associated with an independent variable represents the expected change in the dependent variable for a one-unit change in that independent variable, holding
all other variables constant.
2. Significance Levels:
• The stars or p-values next to the coefficients indicate their significance levels. A lower p-value (< 0.05) suggests that the corresponding variable significantly predicts the dependent variable.
3. R-squared (R²):
• This value indicates the proportion of variance in the dependent variable that is predictable from the independent variables. It ranges from 0 to 1, with higher values indicating a better fit of
the model to the data.
4. F-statistic:
• This test evaluates the regression model’s overall significance and assesses whether at least one predictor variable has a non-zero coefficient.
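If you prefer to extract these quantities programmatically rather than read them off the printed output, they are available from the summary object; this sketch assumes the ‘linear_model’ object fitted earlier:
# Pull key results out of the model summary
model_summary <- summary(linear_model)
coef(model_summary)             # coefficient table: estimates, std. errors, t values, p-values
model_summary$r.squared         # R-squared
model_summary$adj.r.squared     # adjusted R-squared
model_summary$fstatistic        # F-statistic and its degrees of freedom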
Real-world Implications:
Understanding these results allows researchers and analysts to make informed decisions and predictions based on the model. For example, in a study examining the relationship between body mass and
• A significant positive coefficient for body mass suggests that height is also expected to increase as body mass increases, reflecting a direct relationship between these variables.
• A high R-squared value would indicate that a large proportion of the variability in height can be explained by variations in body mass, suggesting body mass is a good predictor of height.
• The overall model’s significance, as indicated by the F-statistic, supports using body mass to predict height in the studied population.
The interpretation extends beyond the numbers to consider the model’s applicability in real-world contexts. For instance, understanding the relationship between body mass and height can be crucial in
health and nutrition, where such insights inform guidelines and interventions. However, it’s essential to consider the model’s limitations and the assumptions of linear regression, ensuring the
findings are applied appropriately and thoughtfully in practice and policy-making.
In summary, interpreting the results from R’s linear regression analysis involves:
• A careful examination of the statistical output.
• Understanding the meaning and implications of coefficients.
• Significance levels.
• Model fit measures.
As we conclude our exploration of linear regression analysis and plotting lines in R, several vital takeaways reinforce best data analysis and representation practices. This journey through the
statistical landscape has equipped us with technical skills and deepened our appreciation for the meticulous art of data science.
Firstly, the power of linear regression as a statistical tool is undeniable. It offers a window into the underlying patterns of our data, allowing us to predict outcomes and discern relationships
between variables with precision. This technique, grounded in the principles of simplicity and clarity, mirrors our quest for understanding complex phenomena in a manner that is both accessible and
Plotting in R, whether through base graphics or advanced packages like ‘ggplot2’, elevates our analysis from mere numbers to compelling narratives. These visual representations serve as analytical
tools and bridges connecting data insights to real-world applications. They enable us to see beyond the surface, uncovering patterns and trends that might remain obscured.
The ‘lm()’ function, a cornerstone of linear modeling in R, embodies the elegance of statistical computation. Distilling complex relationships into simple equations reaffirms our belief in the
pursuit of accurate and meaningful knowledge. Interpreting its output — coefficients, R-squared values, and p-values — guides us in making informed predictions and decisions rooted in a deep
understanding of the data.
Advanced visualization techniques, including interactive plots and animations, push the boundaries of conventional data presentation. They invite engagement and curiosity, transforming passive
observation into an active exploration. This dynamic approach to data visualization not only enhances understanding but also aligns with our commitment to fostering a deeper connection with the
In interpreting the results from our linear models, we are reminded of the importance of context and critical thinking. The statistical significance and predictive power of our models must be weighed
against real-world relevance and practical applicability. This balance between statistical rigor and real-world impact ensures that our analyses remain both credible and genuinely useful.
Recommended Articles
Explore deeper into data analysis—read our curated selection of articles on Linear Regression and R programming for more expert insights!
Frequently Asked Questions (FAQs)
Q1: What is linear regression analysis in R? It’s a statistical method for modeling the relationship between a scalar response and one or more explanatory variables.
Q2: How do I plot a regression line in R? Use the abline() function after computing a linear model with lm() to add a regression line to your plot.
Q3: What does the lm() function in R do? The lm() function fits linear models, calculating coefficients that represent the regression line equation.
Q4: Can R handle multiple regression analysis? R can perform multiple regression using lm(), allowing for several explanatory variables.
Q5: How do I interpret the coefficients in a linear model? Coefficients in a linear model indicate how much the dependent variable changes for a one-unit change in an independent variable.
Q6: What are some advanced plotting techniques in R? Advanced techniques include interactive plots with ggplot2 and plotly, and customizing plots with additional R packages.
Q7: Why is data visualization important in regression analysis? Visualization helps in understanding data trends, patterns, and the strength of relationships between variables.
Q8: What is the importance of the intercept in a regression line? The intercept is the expected mean value of Y when all X variables are zero. It’s the starting point of the regression line on the Y-axis.
Q9: How can I customize plots in R? Use arguments within the plot function like pch, cex, and col to change the appearance of points, their size, and color.
Q10: What is the best practice for preparing data for linear regression in R? Ensure data quality by cleaning, normalizing, and exploring data to understand its structure before applying regression models.
|
{"url":"https://statisticseasily.com/linear-regression-analysis-plotting-lines-in-r/","timestamp":"2024-11-12T22:13:47Z","content_type":"text/html","content_length":"219498","record_id":"<urn:uuid:f91ace5e-9280-4af7-a69a-615505ab215b>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00520.warc.gz"}
|
Game Theory | Yevgeny Tsodikovich
I'm a Postdoctoral Researcher at the Department of Economics at Bar-Ilan University, Israel. Prior to joining Bar-Ilan, I was a PostDoc at the Aix-Marseille School of Economics.
I completed my Ph.D. in mathematics, specializing in Game Theory, at Tel Aviv University under the supervision of Prof. Ehud Lehrer.
My main research interest is opinion dynamics and learning in networks, but I enjoy exploring other realms of game theory and economics. My Job Market Paper deals with the minimal contagious set for
growing networks with a converging degree distribution, such as the common scale-free network.
I taught undergraduate probability and statistics courses for engineers at Tel Aviv University as a lecturer and a T.A. In addition, I worked at the Open University where I taught Game Theory and
Linear Algebra in the Mathematics Department.
|
{"url":"https://www.tsodikovich.com/copy-of-home","timestamp":"2024-11-10T17:42:26Z","content_type":"text/html","content_length":"352081","record_id":"<urn:uuid:00c839a6-cfc2-4415-87b5-a304b68ab02a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00567.warc.gz"}
|
Scientific Learning
The Institute of Computational Science (ICS) at the Università della Svizzera italiana (USI) announces six seminars about Machine Learning, Neural Networks, and Physics-Informed Neural Networks, addressed to prospective/current/enrolled master students in the fields of modern mathematics and computational science.
│ Date    │ Time (UTC+2:00, Zurich time)               │ Speaker                 │
│ May 19  │ Lecture 5:00-6:00 pm, Seminar 6:00-7:00 pm │ Prof. Paris Perdikaris  │
│ June 2  │ Lecture 5:00-6:00 pm, Seminar 6:00-7:00 pm │ Prof. Jonathan Siegel   │
│ June 16 │ Lecture 5:00-6:00 pm, Seminar 6:00-7:00 pm │ Dr. Eric Cyr            │
│ June 23 │ Lecture 5:00-6:00 pm, Seminar 6:00-7:00 pm │ Prof. Siddhartha Mishra │
List of Speakers
Prof. Paris Perdikaris
assistant professor at the Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania.
Part A:Lecture
Making neural networks physics-informed
Leveraging advances in automatic differentiation, physics-informed neural networks are introducing a new paradigm in tackling forward and inverse problems in computational mechanics. Under this
emerging paradigm, unknown quantities of interest are typically parametrized by deep neural networks, and a multi-task learning problem is posed with the dual goal of fitting observational data and
approximately satisfying a given physical law, mathematically expressed via systems of partial differential equations (PDEs). PINNs have demonstrated remarkable flexibility across diverse
applications, but, despite some empirical success, a concrete mathematical understanding of the mechanisms that render such constrained neural network models effective is still lacking. In fact, more
often than not, PINNs are notoriously hard to train, especially for forward problems exhibiting high-frequency or multi-scale behavior. In this talk we will discuss the basic principles of making
neural networks physics informed with an emphasis on the caveats one should be aware of and how those can be addressed in practice.
Part B: Seminar
Learning the solution operator of parametric partial differential equations with physics-informed DeepONets
Deep operator networks (DeepONets) are receiving increased attention thanks to their demonstrated capability to approximate nonlinear operators between infinite-dimensional Banach spaces. However,
despite their remarkable early promise, they typically require large training data-sets consisting of paired input-output observations which may be expensive to obtain, while their predictions may
not be consistent with the underlying physical principles that generated the observed data. In this work, we propose a novel model class coined as physics-informed DeepONets, which introduces an
effective regularization mechanism for biasing the outputs of DeepOnet models towards ensuring physical consistency. This is accomplished by leveraging automatic differentiation to impose the
underlying physical laws via soft penalty constraints during model training. We demonstrate that this simple, yet remarkably effective extension can not only yield a significant improvement in the
predictive accuracy of DeepOnets, but also greatly reduce the need for large training data-sets. To this end, a remarkable observation is that physics-informed DeepONets are capable of solving
parametric partial differential equations (PDEs) without any paired input-output observations, except for a set of given initial or boundary conditions. We illustrate the effectiveness of the
proposed framework through a series of comprehensive numerical studies across various types of PDEs. Strikingly, a trained physics informed DeepOnet model can predict the solution of O(1000)
time-dependent PDEs in a fraction of a second -- up to three orders of magnitude faster compared to a conventional PDE solver.
Prof. Jonathan Siegel
assistant professor at the Department of Mathematics, Penn State.
Part A:Lecture
The Approximation Theory of Neural Networks
An important component in understanding the potential and limitations of neural networks is a study of how efficiently neural networks can approximate a given target function class. Classical results
which we will cover include the universal approximation property of neural networks and the theory of Barron and Jones which gives quantitative approximation rates for shallow neural networks. We
will also discuss approximation rates for deep ReLU networks and their relationship with finite elements methods. Finally, we will introduce the notions of metric entropy and n-widths which are
fundamental in approximation theory and permit a comparison between neural networks and traditional methods of approximation.
Part B: Seminar
The Metric Entropy and n-widths of Shallow Neural Networks
The metric entropy and n-widths are fundamental quantities in approximation theory which control the fundamental limits of linear and non-linear approximation and statistical estimation for a given
class of functions. In this talk, we will derive the spaces of functions which can be efficiently approximated by shallow neural networks for a wide variety of activation functions, and we will
calculate the metric entropy and n-widths for these spaces. Consequences of these results include: the optimal approximation rates which can be attained for shallow neural networks, that shallow
neural networks dramatically outperform linear methods of approximation, and indeed that shallow neural networks outperform all stable methods of approximation on these spaces. Finally, we will
discuss insights into how neural networks break the curse of dimensionality.
Dr. Eric Cyr
Part A:Lecture
An Adaptive Basis Perspective to Improve Initialization and Accelerate Training of DNNs
Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs. The adoption of an
adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least squares/gradient descent optimizer. We provide analysis of these techniques and illustrate via numerical examples
dramatic increases in accuracy and convergence rate for benchmarks characterizing scientific applications where DNNs are currently used, including regression problems and physics-informed neural
networks for the solution of partial differential equations. In addition, we present a partition of unity (POU) architecture capable of achieving hp like convergence. This methodology, introduces
traditional polynomial approximations on partitions learned by deep neural networks. The architecture is designed to play to the strengths of each approach, while still achieving good convergence
rates as demonstrated in the results.
Part B: Seminar
A Layer-Parallel Approach for Training Deep Neural Networks
Deep neural networks are a powerful machine learning tool with the capacity to “learn” complex nonlinear relationships described by large data sets. Despite their success training these models
remains a challenging and computationally intensive undertaking. In this talk we will present a layer-parallel training algorithm that exploits a multigrid scheme to accelerate both forward and
backward propagation. Introducing a parallel decomposition between layers requires inexact propagation of the neural network. The multigrid method used in this approach stitches these subdomains
together with sufficient accuracy to ensure rapid convergence. We demonstrate an order of magnitude wall-clock time speedup over the serial approach, opening a new avenue for parallelism that is
complementary to existing approaches. In a more recent development, we discuss applying the layer-parallel methodology to recurrent neural networks. In particular, we study the generalized recurrent
unit (GRU) architecture. We demonstrate its relation to a simple ODE formulation that facilitates application of the layer-parallel approach. Results are demonstrating performance improvements on a
human activity recognition (HAR) data set are presented.
Prof. Siddhartha Mishra
professor at the Department of Mathematics, ETH Zurich.
Part A:Lecture
Deep Learning and Computations of PDEs.
Abstract: In this talk, we will review some recent results on computing PDEs with deep neural networks. The focus will be on the design of neural networks as fast and efficient surrogates for PDEs.
We will start with parametric PDEs and talk about novel modifications that enable standard deep neural networks to provide efficient PDE surrogates and apply them for prediction, uncertainty
quantification and PDE constrained optimization. We will also cover very recent results on operator regression using novel architectures such as DeepOnets and Fourier Neural Operators, and their
application to PDEs.
Part B: Seminar
On Physics-Informed Neural Networks (PINNs) for computing PDEs.
Abstract: We will describe PINNs and illustrate several examples for using PINNs for solving PDEs. Our aim would be to elucidate mechanisms that underpin the success of PINNs in approximating
classical solutions to PDEs by deriving bounds on the resulting error. Examples of a variety of linear and nonlinear PDEs will be provided, including PDEs in high dimensions and inverse problems for PDEs.
You can download the content of the lectures here
pwd is pinn
Classes will take place online
Please send an email to
Rolf Krause, Maria Nestola, Patrick Zulian, Alena Kopaničáková, Simone Pezzuto, Luca Gambardella
|
{"url":"https://fomics.usi.ch/index.php/workshops/15-ics/306-pinn","timestamp":"2024-11-11T16:51:19Z","content_type":"text/html","content_length":"45932","record_id":"<urn:uuid:dd106986-a69c-4f0d-a250-0f0de7c71d4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00084.warc.gz"}
|
Sudoku puzzles
Distribution of start numbers
This table shows the spread of puzzles mapped to the numbers a puzzle start with. Clicking on the number will give you a random puzzle with that start number count. I've put a sum-line at 30 since
some people think a real sudoku should have less than or equal to 30 start numbers.
Starting numbers | Puzzles in db
Sum | 15579
Distribution of type
This table show how many puzzles there are to each puzzle-type
Type | Count
Symmetrical Sudoku | 15579
|
{"url":"https://www.menneske.no/wordoku/2x3/eng/showstartcount.html","timestamp":"2024-11-12T21:27:46Z","content_type":"text/html","content_length":"9830","record_id":"<urn:uuid:17d24358-701d-4857-b008-2547c88f1998>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00169.warc.gz"}
|
DukeSpace :: Browsing by Subject "cond-mat.soft"
Browsing by Subject "cond-mat.soft"
Now showing 1 - 20 of 31
• A local view on the role of friction and shape
(EPJ Web of Conferences, 2017) Schröter, Matthias
Leibniz said "Naturam cognosci per analogiam": nature is understood by making analogies. This statement describes a seminal epistemological principle. But one has to be aware of its limitations:
quantum mechanics for example at some point had to push Bohr's model of the atom aside to make progress. This article claims that the physics of granular packings has to move beyond the analogy
of frictionless spheres, towards local models of contact formation.
• Assembly of hard spheres in a cylinder: a computational and experimental study
(2017-03-10) Fu, Lin; Bian, Ce; Shields, C Wyatt; Cruz, Daniela F; López, Gabriel P; Charbonneau, Patrick
Hard spheres are an important benchmark of our understanding of natural and synthetic systems. In this work, colloidal experiments and Monte Carlo simulations examine the equilibrium and
out-of-equilibrium assembly of hard spheres of diameter $\sigma$ within cylinders of diameter $\sigma\leq D\leq 2.82\sigma$. Although in such a system phase transitions formally do not exist,
marked structural crossovers are observed. In simulations, we find that the resulting pressure-diameter structural diagram echoes the densest packing sequence obtained at infinite pressure in
this range of $D$. We also observe that the out-of-equilibrium self-assembly depends on the compression rate. Slow compression approximates equilibrium results, while fast compression can skip
intermediate structures. Crossovers for which no continuous line-slip exists are found to be dynamically unfavorable, which is the source of this difference. Results from colloidal sedimentation
experiments at high P\'eclet number are found to be consistent with the results of fast compressions, as long as appropriate boundary conditions are used. The similitude between compression and
sedimentation results suggests that the assembly pathway does not here sensitively depend on the nature of the out-of-equilibrium dynamics.
• Breaking the glass ceiling: Configurational entropy measurements in extremely supercooled liquids
(2017-06-01) Berthier, Ludovic; Charbonneau, Patrick; Coslovich, Daniele; Ninarello, Andrea; Ozawa, M; Yaida, Sho
Liquids relax extremely slowly on approaching the glass state. One explanation is that an entropy crisis, due to the rarefaction of available states, makes it increasingly arduous to reach
equilibrium in that regime. Validating this scenario is challenging, because experiments offer limited resolution, while numerical studies lag more than eight orders of magnitude behind
experimentally-relevant timescales. In this work we not only close the colossal gap between experiments and simulations but manage to create in-silico configurations that have no experimental
analog yet. Deploying a range of computational tools, we obtain four estimates of their configurational entropy. These measurements consistently confirm that the steep entropy decrease observed
in experiments is found also in simulations even beyond the experimental glass transition. Our numerical results thus open a new observational window into the physics of glasses and reinforce the
relevance of an entropy crisis for understanding their formation.
• Bypassing sluggishness: SWAP algorithm and glassiness in high dimensions
Berthier, Ludovic; Charbonneau, Patrick; Kundu, Joyjit
The recent implementation of a swap Monte Carlo algorithm (SWAP) for polydisperse mixtures fully bypasses computational sluggishness and closes the gap between experimental and simulation
timescales in physical dimensions $d=2$ and $3$. Here, we consider suitably optimized systems in $d=2, 3,\dots, 8$, to obtain insights into the performance and underlying physics of SWAP. We show
that the speedup obtained decays rapidly with increasing the dimension. SWAP nonetheless delays systematically the onset of the activated dynamics by an amount that remains finite in the limit $d
\to \infty$. This shows that the glassy dynamics in high dimensions $d>3$ is now computationally accessible using SWAP, thus opening the door for the systematic consideration of
finite-dimensional deviations from the mean-field description.
• Characterization and efficient Monte Carlo sampling of disordered microphases.
(The Journal of chemical physics, 2021-06) Zheng, Mingyuan; Charbonneau, Patrick
The disordered microphases that develop in the high-temperature phase of systems with competing short-range attractive and long-range repulsive (SALR) interactions result in a rich array of
distinct morphologies, such as cluster, void cluster, and percolated (gel-like) fluids. These different structural regimes exhibit complex relaxation dynamics with marked heterogeneity and
slowdown. The overall relationship between these structures and configurational sampling schemes, however, remains largely uncharted. Here, the disordered microphases of a schematic SALR model
are thoroughly characterized, and structural relaxation functions adapted to each regime are devised. The sampling efficiency of various advanced Monte Carlo sampling schemes-Virtual-Move (VMMC),
Aggregation-Volume-Bias (AVBMC), and Event-Chain (ECMC)-is then assessed. A combination of VMMC and AVBMC is found to be computationally most efficient for cluster fluids and ECMC to become
relatively more efficient as density increases. These results offer a complete description of the equilibrium disordered phase of a simple microphase former as well as dynamical benchmarks for
other sampling schemes.
• Clustering and assembly dynamics of a one-dimensional microphase former.
(Soft matter, 2018-03-26) Hu, Yi; Charbonneau, Patrick
Both ordered and disordered microphases ubiquitously form in suspensions of particles that interact through competing short-range attraction and long-range repulsion (SALR). While ordered
microphases are more appealing materials targets, understanding the rich structural and dynamical properties of their disordered counterparts is essential to controlling their mesoscale assembly.
Here, we study the disordered regime of a one-dimensional (1D) SALR model, whose simplicity enables detailed analysis by transfer matrices and Monte Carlo simulations. We first characterize the
signature of the clustering process on macroscopic observables, and then assess the equilibration dynamics of various simulation algorithms. We notably find that cluster moves markedly accelerate
the mixing time, but that event chains are of limited help in the clustering regime. These insights will inspire further study of three-dimensional microphase formers.
• Comment on "kosterlitz-Thouless-type caging-uncaging transition in a quasi-one-dimensional hard disk system"
(Physical Review Research, 2021-09-01) Hu, Y; Charbonneau, P
Huerta et al. [Phys. Rev. Research 2, 033351 (2020)] report a power-law decay of positional order in numerical simulations of hard disks confined within hard
parallel walls, which they interpret as a Kosterlitz-Thouless (KT)-type caging-uncaging transition. The proposed existence of such a transition in a quasi-one-dimensional system, however,
contradicts long-held physical expectations. To clarify if the proposed ordering persists in the thermodynamic limit, we introduce an exact transfer matrix approach to expeditiously generate
configurations of very large subsystems that are typical of equilibrium thermodynamic (infinite-size) systems. The power-law decay of positional order is found to extend only over finite
distances. We conclude that the numerical simulation results reported are associated with a crossover unrelated to KT-type physics, and not with a proper thermodynamic phase transition.
• Correlation lengths in quasi-one-dimensional systems via transfer matrices
(Molecular Physics, 2018-06) Hu, Y; Fu, L; Charbonneau, P
Using transfer matrices up to next-nearest-neighbour interactions, we examine the structural correlations of quasi-one-dimensional
systems of hard disks confined by two parallel lines and hard spheres confined in cylinders. Simulations have shown that the non-monotonic and non-smooth growth of the correlation length in these
systems accompanies structural crossovers [Fu et al., Soft Matter 13, 3296 (2017)]. Here, we identify the theoretical basis for these behaviours. In particular, we associate kinks in the growth
of correlation lengths with eigenvalue crossing and splitting. Understanding the origin of such structural crossovers answers questions raised by earlier studies, and thus bridges the gap between
theory and simulations for these reference models.
• Finite Dimensional Vestige of Spinodal Criticality above the Dynamical Glass Transition.
(Physical review letters, 2020-09) Berthier, Ludovic; Charbonneau, Patrick; Kundu, Joyjit
Finite dimensional signatures of spinodal criticality are notoriously difficult to come by. The dynamical transition of glass-forming liquids, first described by mode-coupling theory, is a
spinodal instability preempted by thermally activated processes that also limit how close the instability can be approached. We combine numerical tools to directly observe vestiges of the
spinodal criticality in finite dimensional glass formers. We use the swap Monte Carlo algorithm to efficiently thermalize configurations beyond the mode-coupling crossover, and analyze their
dynamics using a scheme to screen out activated processes, in spatial dimensions ranging from d=3 to d=10. We observe a strong softening of the mean-field square-root singularity in d=3 that is
progressively restored as d increases above d=8, in surprisingly good agreement with perturbation theory.
• Finite-size effects in the microscopic critical properties of jammed configurations: A comprehensive study of the effects of different types of disorder.
(Physical review. E, 2021-07) Charbonneau, Patrick; Corwin, Eric I; Dennis, R Cameron; Díaz Hernández Rojas, Rafael; Ikeda, Harukuni; Parisi, Giorgio; Ricci-Tersenghi, Federico
Jamming criticality defines a universality class that includes systems as diverse as glasses, colloids, foams, amorphous solids, constraint satisfaction problems, neural networks, etc. A
particularly interesting feature of this class is that small interparticle forces (f) and gaps (h) are distributed according to nontrivial power laws. A recently developed mean-field (MF) theory
predicts the characteristic exponents of these distributions in the limit of very high spatial dimension, d→∞ and, remarkably, their values seemingly agree with numerical estimates in physically
relevant dimensions, d=2 and 3. These exponents are further connected through a pair of inequalities derived from stability conditions, and both theoretical predictions and previous numerical
investigations suggest that these inequalities are saturated. Systems at the jamming point are thus only marginally stable. Despite the key physical role played by these exponents, their
systematic evaluation has yet to be attempted. Here, we carefully test their value by analyzing the finite-size scaling of the distributions of f and h for various particle-based models for
jamming. Both dimension and the direction of approach to the jamming point are also considered. We show that, in all models, finite-size effects are much more pronounced in the distribution of h
than in that of f. We thus conclude that gaps are correlated over considerably longer scales than forces. Additionally, remarkable agreement with MF predictions is obtained in all but one model,
namely near-crystalline packings. Our results thus help to better delineate the domain of the jamming universality class. We furthermore uncover a secondary linear regime in the distribution
tails of both f and h. This surprisingly robust feature is understood to follow from the (near) isostaticity of our configurations.
• Gardner Phenomenology in Minimally Polydisperse Crystalline Systems
Charbonneau, Patrick; Corwin, Eric I; Fu, Lin; Tsekenis, Georgios; van der Naald, Michael
We study the structure and dynamics of crystals of minimally polydisperse hard spheres at high pressures. Structurally, they exhibit a power-law scaling in their probability distribution of weak
forces and small interparticle gaps as well as a flat density of vibrational states. Dynamically, they display anomalous aging beyond a characteristic pressure. Although essentially crystalline,
these solids thus display features reminiscent of the Gardner phase observed in certain amorphous solids. Because preparing these materials is fast and facile, they are ideal for testing a theory
of amorphous materials. They are also amenable to experimental realizations in commercially-available particulate systems.
• Granular Impact Dynamics: Acoustics and Fluctuations
Clark, Abram H; Behringer, RP
In the corresponding fluid dynamics video, created for the APS DFD 2012 Gallery of Fluid Motion, we show high-speed videos of 2D granular impact experiments, where an intruder strikes a
collection of bidisperse photoelastic disks from above. We discuss the force beneath the intruder, which is strongly fluctuating in space and time. These fluctuations correspond to acoustic
pulses which propagate into the medium. Analysis shows that this process, in our experiments, is dominated by collisions with grain clusters. The energy from these collisions is carried into the
granular medium along networks of grains, where it is dissipated.
• How to create equilibrium vapor-deposited glasses
(2017-08-23) Berthier, Ludovic; Charbonneau, Patrick; Flenner, E; Zamponi, Francesco
Glass films created by vapor-depositing molecules onto a substrate can exhibit properties similar to those of ordinary glasses aged for thousands of years. It is believed that enhanced surface
mobility is the mechanism that allows vapor deposition to create such exceptional glasses, but it is unclear how this effect is related to the final state of the film. Here we use molecular
dynamics simulations to model vapor deposition and an efficient Monte Carlo algorithm to determine the deposition rate needed to create equilibrium glassy films. We obtain a scaling relation that
quantitatively captures the efficiency gain of vapor deposition over bulk annealing, and demonstrates that surface relaxation plays the same role in the formation of vapor-deposited glasses as
bulk relaxation in ordinary glass formation.
• Local dynamical heterogeneity in glass formers
(2021-09-24) Biroli, Giulio; Charbonneau, Patrick; Folena, Giampaolo; Hu, Yi; Zamponi, Francesco
We study the local dynamical fluctuations in glass-forming models of particles embedded in $d$-dimensional space, in the mean-field limit of $d\to\infty$. Our analytical calculation reveals that
single-particle observables, such as squared particle displacements, display divergent fluctuations around the dynamical (or mode-coupling) transition, due to the emergence of nontrivial
correlations between displacements along different directions. This effect notably gives rise to a divergent non-Gaussian parameter, $\alpha_2$. The $d\to\infty$ local dynamics therefore becomes
quite rich upon approaching the glass transition. The finite-$d$ remnant of this phenomenon further provides a long sought-after, first-principle explanation for the growth of $\alpha_2$ around
the glass transition that is \emph{not based on multi-particle correlations}.
• MC-DEM: a novel simulation scheme for modeling dense granular media
Behringer, Robert P; Brodu, N; Dijksman, JA
This article presents a new force model for performing quantitative simulations of dense granular materials. Interactions between multiple contacts (MC) on the same grain are explicitly taken
into account. Our readily applicable method retains all the advantages of discrete element method (DEM) simulations and does not require the use of costly finite element methods. The new model
closely reproduces our recent experimental measurements, including contact force distributions in full 3D, at all compression levels up to the experimental maximum limit of 13%. Comparisons with
traditional non-deformable spheres approach are provided, as well as with alternative models for interactions between multiple contacts. The success of our model compared to these alternatives
demonstrates that interactions between multiple contacts on each grain must be included for dense granular packings.
• Mean-Field Caging in a Random Lorentz Gas.
(The journal of physical chemistry. B, 2021-06-07) Biroli, Giulio; Charbonneau, Patrick; Hu, Yi; Ikeda, Harukuni; Szamel, Grzegorz; Zamponi, Francesco
The random Lorentz gas (RLG) is a minimal model of both percolation and glassiness, which leads to a paradox in the infinite-dimensional, d → ∞ limit: the localization transition is then expected
to be continuous for the former and discontinuous for the latter. As a putative resolution, we have recently suggested that, as d increases, the behavior of the RLG converges to the glassy
description and that percolation physics is recovered thanks to finite-d perturbative and nonperturbative (instantonic) corrections [Biroli et al. Phys. Rev. E 2021, 103, L030104]. Here, we
expand on the d → ∞ physics by considering a simpler static solution as well as the dynamical solution of the RLG. Comparing the 1/d correction of this solution with numerical results reveals
that even perturbative corrections fall out of reach of existing theoretical descriptions. Comparing the dynamical solution with the mode-coupling theory (MCT) results further reveals that,
although key quantitative features of MCT are far off the mark, it does properly capture the discontinuous nature of the d → ∞ RLG. These insights help chart a path toward a complete description
of finite-dimensional glasses.
• Memory Formation in Jammed Hard Spheres.
(Physical review letters, 2021-02) Charbonneau, Patrick; Morse, Peter K
Liquids equilibrated below an onset condition share similar inherent states, while those above that onset have inherent states that markedly differ. Although this type of materials memory was
first reported in simulations over 20 years ago, its physical origin remains controversial. Its absence from mean-field descriptions, in particular, has long cast doubt on its thermodynamic
relevance. Motivated by a recent theoretical proposal, we reassess the onset phenomenology in simulations using a fast hard sphere jamming algorithm and find it to be both thermodynamically and
dimensionally robust. Remarkably, we also uncover a second type of memory associated with a Gardner-like regime of the jamming algorithm.
• Microphase Equilibrium and Assembly Dynamics
(2017-08-23) Zhuang, Yuan; Charbonneau, Patrick
Despite many attempts, ordered equilibrium microphases have yet to be obtained in experimental colloidal suspensions. The recent computation of the equilibrium phase diagram of a microscopic,
particle-based microphase former [Zhuang et al., Phys. Rev. Lett. 116, 098301 (2016)] has nonetheless found such mesoscale assemblies to be thermodynamically stable. Here, we consider their
equilibrium and assembly dynamics. At intermediate densities above the order-disorder transition, we identify four different dynamical regimes and the structural changes that underlie the
dynamical crossovers from one disordered regime to the next. Below the order-disorder transition, we also find that periodic lamellae are the most dynamically accessible of the periodic
microphases. Our analysis thus offers a comprehensive view of the disordered microphase dynamics and a route to the assembly of periodic microphases in a putative well-controlled, experimental
• Numerical transfer matrix study of frustrated next-nearest-neighbor Ising models on square lattices
(Physical Review B, 2021-10-01) Hu, Y; Charbonneau, P
Ising models with frustrated next-nearest-neighbor interactions present a rich array of modulated phases. These phases, however, assemble and relax slowly, which hinders their computational
study. In two dimensions, strong fluctuations further hamper determining their equilibrium phase behavior from theoretical approximations. The exact numerical transfer matrix (TM) method, which
bypasses both difficulties, can serve as a benchmark method once its own numerical challenges are surmounted. Building on our recent study [Hu and Charbonneau, Phys. Rev. B 103, 094441 (2021)],
in which we evaluated the two-dimensional axial next-nearest-neighbor Ising model with transfer matrices, we here extend the effective usage of the TM
method to Ising models with biaxial, diagonal, and third-nearest-neighbor frustration models. The high-accuracy TM numerics help resolve various physical ambiguities about these reference models,
thus providing a clearer overview of modulated phase formation in two dimensions.
• Origin of Ultrastability in Vapor-Deposited Glasses.
(Physical review letters, 2017-11) Berthier, Ludovic; Charbonneau, Patrick; Flenner, Elijah; Zamponi, Francesco
Glass films created by vapor-depositing molecules onto a substrate can exhibit properties similar to those of ordinary glasses aged for thousands of years. It is believed that enhanced surface
mobility is the mechanism that allows vapor deposition to create such exceptional glasses, but it is unclear how this effect is related to the final state of the film. Here we use molecular
dynamics simulations to model vapor deposition and an efficient Monte Carlo algorithm to determine the deposition rate needed to create ultrastable glassy films. We obtain a scaling relation that
quantitatively captures the efficiency gain of vapor deposition over bulk annealing, and demonstrates that surface relaxation plays the same role in the formation of vapor-deposited glasses as
bulk relaxation does in ordinary glass formation.
|
{"url":"https://dukespace.lib.duke.edu/browse/subject?value=cond-mat.soft","timestamp":"2024-11-12T16:45:03Z","content_type":"text/html","content_length":"749597","record_id":"<urn:uuid:cfaafaca-736a-42c1-b8a0-48488fdbb4cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00755.warc.gz"}
|
While Loop in JavaScript
In this tutorial, we will learn about while loops in JavaScript. We will cover the basics of iterative execution using while loops.
What is a While Loop
A while loop is a control flow statement that allows code to be executed repeatedly based on a given Boolean condition. The loop runs as long as the condition evaluates to true.
The syntax for the while loop in JavaScript is:
while (condition) {
  // Code block to be executed
}
The while loop evaluates the condition before executing the loop's body. If the condition is true, the code block inside the loop is executed. This process repeats until the condition becomes false.
Example 1: Printing Numbers from 1 to 5
1. Declare a variable i and initialize it to 1.
2. Use a while loop to print numbers from 1 to 5.
JavaScript Program
let i = 1;
while (i <= 5) {
  console.log(i);
  i++;
}
Example 2: Calculating the Sum of First N Natural Numbers
1. Declare variables n and sum.
2. Assign a value to n.
3. Initialize sum to 0.
4. Use a while loop to calculate the sum of the first n natural numbers.
5. Print the sum.
JavaScript Program
let n = 10;
let sum = 0;
let i = 1;
while (i <= n) {
  sum += i;
  i++;
}
console.log(`Sum of first ${n} natural numbers is ${sum}`);
Sum of first 10 natural numbers is 55
Example 3: Reversing a Number
1. Declare variables num and rev.
2. Assign a value to num.
3. Initialize rev to 0.
4. Use a while loop to reverse the digits of num.
5. Print the reversed number.
JavaScript Program
let num = 12345;
let rev = 0;
while (num !== 0) {
  let digit = num % 10;
  rev = rev * 10 + digit;
  num = Math.floor(num / 10);
}
console.log(`Reversed number is ${rev}`);
Reversed number is 54321
|
{"url":"https://pythonexamples.org/javascript/while-loop","timestamp":"2024-11-03T12:08:40Z","content_type":"text/html","content_length":"34187","record_id":"<urn:uuid:521790e5-405e-4ac9-8a4a-245a0e6701bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00034.warc.gz"}
|
Translation to Internal Representation | Optimizer Process | Teradata Vantage - Analytics Database
Parse Tree Representations of an SQL Request
Like most cost-based optimizers, the Optimizer represents SQL requests internally as parse trees. The foundation node for any parse tree is referred to as the root. Each node of the tree has 0 or
more subtrees.
The elements of a parse tree are text strings, mathematical operators, delimiters, and other tokens that can be used to form valid expressions. These elements are the building blocks for SQL
requests, and they form the nodes and leafs of the parse tree.
Parse trees are shown here with the root at the top and the leaves at the bottom, so the term push down means that an expression is evaluated earlier in the process than it would have otherwise
been. Each node in the tree represents a database operation and its subtrees are the operands of this operation. The subtree operands are typically expressions of some kind, but might also be various
database objects such as tables.
The following graphic illustrates a minimal parse tree having one node and two leaves:
Tables Used for the Examples
Consider how part of the following query could be represented and optimized using a parse tree. The tables used for the request example are defined as follows:
│ part_num │ description │ mfg_name │ mfg_part_num │
│ PK │ │ FK │ │
│ PI │ │ │ │
│ part_num │ mfg_name │ mfg_address │ mfg_city │
│ PK │ │ │ │
│ PI │ │ │ │
│ cust_num │ cust_name │ cust_address │ cust_city │
│ PK │ │ │ │
│ PI │ │ │ │
│ order_num │ cust_name │ mfg_part_num │ order_date │
│ PK │ FK │ │ │
│ PI │ │ │ │
Example SQL Request
Consider the following example SQL request:
SELECT part_num, description, mfg_name, mfg_part_num, cust_name,
cust_address, cust_num, order_date
FROM order INNER JOIN customer INNER JOIN parts
WHERE customer.cust_num = order.cust_num
AND parts.mfg_part_num = order.mfg_part_num
AND order_date < DATE '2001-01-01';
The initial translation of the request into a parse tree is performed by the Syntaxer (see Syntaxer) after it finishes checking the request text for syntax errors. Query Rewrite receives a processed
parse tree as input from the Resolver, then produces a rewritten, but semantically identical, parse tree as output to the Optimizer. This request tree is just a parse tree representation of the
original request text.
The Optimizer further transforms the tree by determining an optimal plan, and, when appropriate, determining table join orders and join plans before passing the resulting parse tree on to the
At this point, the original request tree has been discarded and replaced by an entirely new parse tree that contains instructions for performing the request. The parse tree is now an operation tree.
It is a textual form of this tree, also referred to as a white tree, that the Optimizer uses to produce EXPLAIN text when you explain a request. Note that a separate subsystem adds additional costing
information about operations the Optimizer does not cost to the white tree before any EXPLAIN text is produced for output.
Assume the Optimizer is passed the following simplified parse tree by Query Rewrite. This tree is actually an example of a simple SynTree, but an annotated ResTree would needlessly complicate the
explanation without adding anything useful to the description.
The Cartesian product operator is represented by the symbol X in the illustration.
The first step in the optimization is to marshal the predicates (which, algebraically, function as relational select, or restrict, operations) and push all three of them as far down the tree as
possible. The objective is to perform all SELECTION and PROJECTION operations as early in the retrieval process as possible. Remember that the relational SELECT operator is an analog of the WHERE
clause in SQL because it restricts the rows in the answer set, while the SELECT clause of the SQL SELECT statement is an analog of the algebraic PROJECT operator because it limits the number of
columns represented in the answer set.
The process involved in pushing down these predicates is indicated by the following process enumeration. Some of the rewrite operations are justified by invoking various rules of logic. You need not
be concerned with the details of these rules: the important thing to understand from the presentation is the general overall process, not the formal details of how the process can be performed.
The first set of processes is performed by Query Rewrite.
1. Split the compound ANDed condition into separate predicates. The result is the following pair of SELECT operations:
SELECT customer.cust_num = order.cust_num
SELECT parts.mfg_part_num = order.mfg_part_num
2. By commutativity, the SELECTION order.order_date < 2001-01-01 can be pushed the furthest down the tree, and it is pushed after the PROJECTION of the order and customer tables.
This particular series of algebraic transformations, which is possible because order_date is an attribute of the order table only, is as follows:
a. Begin with the following predicate:
SELECT order_date < 2001-01-01((order X customer) X parts)
b. Transform it to the following form:
SELECT (order_date < 2001-01-01(order X customer)) X parts
c. Transform it further to the following form:
SELECT ((order_date < 2001-01-01(order)) X customer) X parts
This is as far as the predicate can be transformed, and it has moved as far down the parse tree as it can be pushed.
3. The Optimizer examines the following SELECT operation to see if it can be pushed further down the parse tree:
SELECT parts.mfg_part_num = order.mfg_part_num
Because this SELECTION contains one column from the parts table and another column from a different table (order), it cannot be pushed down the tree any further than the position it already
4. The Optimizer examines the following SELECT operation to determine if it can be pushed any further down the parse tree:
SELECT customer.cust_num = order.cust_num
This expression can be moved down to apply to the product: order_date < 2001-01-01 (order) X customer.
order.cust_num is an attribute in SELECT date < 2001-01-01(order) because the result of a selection accumulates its attributes from the expression on which it is applied.
5. The Optimizer combines the 2 PROJECTION operations in the original parse tree into the single PROJECTION part_num.
The structure of the parse tree after this combination is reflected in the following illustration:
6. This intermediate stage of the parse tree can be further optimized by applying the rules of commutation for SELECT and PROJECT operations and replacing PROJECT part_num and SELECT customer.cust_num = order.cust_num by the following series of operations:
PROJECT parts.part_num
SELECT parts.mfg_part_num = order.mfg_part_num
PROJECT parts.part_num, parts.mfg_part_num, order.mfg_part_num
7. Using the rules of commutation of a PROJECTION with a Cartesian product, replace the last PROJECTION in Stage 6 with the following PROJECTION:
PROJECT part_num, parts.mfg_part_num
8. Similarly, apply PROJECT order.mfg_part_num to the left operand of the higher of the 2 Cartesian products. This projection further interacts with the subsequent SELECT operation (customer.cust_num = order.cust_num) to produce the following series of algebraic operations:
PROJECT order.mfg_part_num
SELECT customer.cust_num = order.cust_num
PROJECT order.mfg_part_num, customer.cust_num, order.cust_num
9. The last expression from Stage 8 bypasses the Cartesian product by commutation of PROJECTION with a UNION operation and partially bypasses the SELECTION operation SELECT order.order_date < 2001-01-01 by commutation.
10. The Optimizer sees that the resulting expression PROJECT order.mfg_part_num, order.cust_num, order_date is redundant with respect to its PROJECTION term, so it is removed from further consideration
in the query.
The transformed parse tree that results from all these transformations is shown in the following illustration:
The 2 Cartesian product operations are equivalent to equijoins when they are paired with their higher selections (where "higher" indicates that the operations are performed later in the execution of
the query). Note that operators are grouped in the graphic, illustrated by boundary lines. Each bounded segment of the parse tree corresponds very roughly to an AMP worker task.
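The predicate-pushdown idea in this walkthrough can also be sketched outside Teradata with a toy expression tree. The short Python model below is a hypothetical illustration rather than the Optimizer's actual code: every name in it (columns, push_selections, the tuple node shapes) is invented for the example. It simply moves a SELECTION below a Cartesian product whenever the predicate mentions columns from only one side, which is the transformation applied to order_date < 2001-01-01 in Stage 2.
# Minimal, hypothetical model of "push a SELECTION below a product" (not Teradata code).
# Node shapes: ("table", name, columns), ("product", left, right), ("select", pred_columns, child).

def columns(node):
    kind = node[0]
    if kind == "table":
        return set(node[2])
    if kind == "product":
        return columns(node[1]) | columns(node[2])
    return columns(node[2])               # a "select" exposes its child's columns

def push_selections(node):
    kind = node[0]
    if kind == "table":
        return node
    if kind == "product":
        return ("product", push_selections(node[1]), push_selections(node[2]))
    pred_cols, child = set(node[1]), push_selections(node[2])
    if child[0] == "product":
        left, right = child[1], child[2]
        if pred_cols <= columns(left):    # predicate touches only the left operand
            return ("product", push_selections(("select", node[1], left)), right)
        if pred_cols <= columns(right):   # ... or only the right operand
            return ("product", left, push_selections(("select", node[1], right)))
    return ("select", node[1], child)     # cannot be pushed any further

# SELECT order_date < DATE '2001-01-01' applied to (order X customer) X parts
tree = ("select", ["order_date"],
        ("product",
         ("product",
          ("table", "order", ["order_num", "cust_name", "mfg_part_num", "order_date"]),
          ("table", "customer", ["cust_num", "cust_name", "cust_address", "cust_city"])),
         ("table", "parts", ["part_num", "description", "mfg_name", "mfg_part_num"])))

print(push_selections(tree))
Running it prints a tree in which the SELECT node sits directly above the order table, mirroring Stage 2 above, while the two products remain for the later join-planning phase.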
|
{"url":"https://docs.teradata.com/r/Enterprise_IntelliFlex_VMware/SQL-Request-and-Transaction-Processing/Query-Rewrite-Statistics-and-Optimization/Translation-to-Internal-Representation","timestamp":"2024-11-08T10:40:02Z","content_type":"text/html","content_length":"183836","record_id":"<urn:uuid:13353526-c948-46f5-8ffd-063c9420f1dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00103.warc.gz"}
|
IF and VLOOKUP Nested Function in Excel: 5 Practical Uses and 2 Ways of Handling Errors - ExcelDemy
Practical Uses
1. How to Match VLOOKUP Output with a Specific Value
Let’s say we want to determine how much inventory we have for a particular product.
• Select C17 and enter the following formula:
=IF(VLOOKUP(C16,$C$5:$D$14,2,FALSE)>0,"Yes","No")
Formula Breakdown:
• VLOOKUP in C16 identifies Name as the search keyword.
• $C$5:$D$14 identifies the search range; the 2 means we are looking for matching criteria in the second column (Quantity), while FALSE means we have an exact match.
• The formula VLOOKUP(C16,$C$5:$D$14,2, FALSE) calculates the Quantity of the product assigned to that name.
• With the addition of the IF function, i.e. depending on whether the result is greater than zero, C17 indicates either Yes (the product is in stock) or No (the product is not currently in stock).
Note that if the product in C16 indicates a quantity greater than zero in D16, the result appears as Yes.
Note that if the product in C16 indicates a quantity equal to zero in D16, the result appears as No.
We now know that the Apple iPhone X is not in stock.
2. How to Use the IF & VLOOKUP Nested Function With Two Lookup Values
Let’s say we want to locate the price of a particular product in a particular market.
• Select C18 and enter the following formula:
=IF(C17="Market 1",VLOOKUP(C16,B5:E14,3,FALSE),VLOOKUP(C16,B5:E14,4,FALSE))
• Select C16 and enter the product’s ID.
• Select C17 and enter Market 1.
• Press Enter.
Formula Breakdown:
• IF(C17=”Market 1″) determines that our initial interest is in Apple iPhone X’s Market 1 price.
• VLOOKUP(C16,B5:E14,3,FALSE) identifies the search range, i.e. the third column (Market 1).
• The final argument, VLOOKUP(C16,B5:E14,4,FALSE), means that if the market entered in C17 is not Market 1, the search moves on to the fourth column (Market 2).
• When the Apple iPhone X’s ID is entered in C16 and “Market 1” in C17, the price will appear in C18.
We now know that the Market 1 price for the Apple iPhone X is $1,150.00.
3. How to Match Lookup Returns with Another Cell Using the MAX Function
Let’s say we want to compare unit prices across products to see which is the highest.
• Select C17 and enter the following formula:
=IF(VLOOKUP(C16,$B$5:$G$14,4)>=F16,"Yes","No")
Formula Breakdown:
• VLOOKUP(C16,$B$5:$G$14,4) compares the Apple iPhone 11 Pro’s Unit Price with that of the highest price (calculated in Example 3).
• IF >=F16,”Yes”,”No” determines whether that price is greater than or equal to the price in F16.
• IF(VLOOKUP(C16,$B$5:$G$14,4) >=F16,”Yes”,”No” compares these prices, then indicates Yes or No in C17.
We now know that the most expensive product we sell is the Apple iPhone 11 Pro.
4. How to Use the IF & VLOOKUP Nested Function to Lookup Values from a Shorter List
Let’s say we want to find out whether a particular product has been delivered.
• Select G5 and enter the following formula:
=IF(ISNA(VLOOKUP(C5,$I$5:$I$10,1,FALSE)),"Not Delivered","Delivered")
Formula Breakdown:
• IF establishes the delivery status of each product as Delivered or Not delivered
• ISNA sets the criterion as TRUE (if delivered) or FALSE (if not).
• VLOOKUP(C5,$I$5:$I$10,1, FALSE) checks the Name of each product and, if it matches TRUE, adds it to the Delivered Project List (column I), then indicates Delivered or Not delivered in G5.
To duplicate the formula, click and drag the Fill Handle down the targeted range.
We now know that six of the ten products have been delivered.
5. How to Use the IF & VLOOKUP Nested Function to Perform Different Calculations
Let’s say we want to find out whether a) with a discount of 20%, the unit price is greater than $800 or b) with a discount of 15%, it’s lower than $800.
• Select C17 and enter the following formula:
=IF(VLOOKUP(C16,$B$5:$F$14,4,FALSE )>800, VLOOKUP(C16,$B$5:$F$14,4,FALSE)*15%, VLOOKUP(C16,$B$5:$F$14,4,FALSE)*20%)
Formula Breakdown:
• IF establishes that the Unit Price is either over or under 800.
• VLOOKUP(C16,$B$5:$F$14,4,FALSE )>800 checks whether the product ID entered in C16 has a Unit Price greater than 800.
• =IF(VLOOKUP(C16,$B$5:$F$14,4,FALSE )>800,VLOOKUP(C16,$B$5:$F$14,4,FALSE)*15%,VLOOKUP(C16,$B$5:$F$14,4,FALSE)*20%) ensures that the product's unit price is correctly multiplied by 15% (if greater than 800) or 20% (if 800 or less), then indicates the Discount in C17.
We now know the discounted price for the Apple iPhone 11 Pro is $180 less than its unit price.
Handling Errors
Sometimes there’s no match to your lookup, so you might get #N/A or 0.
1. How to Use the ISNA Function with the IF & VLOOKUP Nested Function to Hide #N/A Errors
• Select C17 and enter the following formula:
=IF(ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)),"Not found",VLOOKUP(C16,$B$5:$F$14,4,FALSE))
Formula Breakdown:
• IF establishes that each product in the dataset may or may not have a unit price.
• VLOOKUP(C16,$B$5:$F$14,4,FALSE) searches Unit Price (column E) for the product ID entered in C16.
• ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)) checks whether or not the product has a unit price.
• =IF(ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)),”Not found”,VLOOKUP(C16,$B$5:$F$14,4,FALSE)) ensures C17 will indicate either the unit price (if the product has one) or “Not found” (if it doesn’t).
2. How to Use the ISNA Function with the IF & VLOOKUP Nested Function to Represent Missing Data as 0
• Select C17 and enter the following formula:
=IF(ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)),0,VLOOKUP(C16,$B$5:$F$14,4,FALSE))
Formula Breakdown:
• IF establishes that each product may or may not have a unit price.
• ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)) searches Unit Price (column E) for the product ID entered in C16.
• =IF(ISNA(VLOOKUP(C16,$B$5:$F$14,4,FALSE)),0,VLOOKUP(C16,$B$5:$F$14,4,FALSE)) ensures C17 indicates either the unit price (if the product has one) or 0 (if it doesn’t).
Things to Remember
#N/A errors typically appear because:
• The lookup value does not exist in the table.
• The lookup value is misspelled or contains extra space.
• The table range was not entered correctly.
• You are copying VLOOKUP across several cells without first locking the table reference.
When cells are formatted as currency, there will be a dashed line (-) instead of 0.
2 Comments
1. I need help.
I have two criteria data points to reference, to pull a third.
If A and B match on sheet 1 & 2, I need to pull in the third column’s data. How do I accomplish this? I believe I am overthinking this formula nesting.
□ Hello APRIL, We already have an article written based on your problem. I hope, you will find this helpful. Follow this link below-
Try the methods mentioned in this article and let us know the outcome. Thank you!
|
{"url":"https://www.exceldemy.com/if-and-vlookup-nested-function/","timestamp":"2024-11-07T21:36:45Z","content_type":"text/html","content_length":"205627","record_id":"<urn:uuid:6b63df26-b9fd-4947-9b8d-2cbb1bc4d223>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00093.warc.gz"}
|
Combine Columns in Excel
Here, I’ll show you how you can concatenate columns with or without a separator.
We are going to use the following example.
Combine multiple columns into one
In order to concatenate multiple columns into a single column, you have to use the CONCATENATE function.
=CONCATENATE(A2,B2,C2,D2)
Alternatively, you can use this simple formula, where every cell is merged via the & operator.
=A2&B2&C2&D2
Both formulas are going to return the same result.
The Concatenated column is not the most readable one.
Combine multiple columns into one (with separator)
In order to make the example cleaner, we are going to add a separator (comma + space) between cell values.
=CONCATENATE(A2,", ",B2,", ",C2,", ",D2)
This time we can also drop the CONCATENATE function and write it as the following formula.
=A2&", "&B2&", "&C2&", "&D2
Combine multiple columns with date
You can also combine columns with a date, but it gets trickier here. A date in Excel is just a number formatted as a date.
If you use the following formula, the result won’t be what we expected.
=CONCATENATE(A2," ",B2,".")
And the same situation occurs with the second method.
In order to achieve the desired result, we have to first format the number as a date and then combine the cells.
The first method.
=CONCATENATE(A2," ",TEXT(B2,"mm/dd/yyyy"),".")
The second method.
=A2&" "&TEXT(B2,"mm/dd/yyyy")&"."
The result is what we expected.
|
{"url":"https://officetuts.net/excel/formulas/concatenate-columns/","timestamp":"2024-11-13T08:44:56Z","content_type":"text/html","content_length":"155492","record_id":"<urn:uuid:b1405157-ce7f-441e-817b-31c82b6f4054>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00044.warc.gz"}
|
Sufficiently high observation density justifies a sequential modeling approach of PKPD and dropout data
Paul Matthias Diderichsen (1), Sandeep Dutta (2)
(1) Abbott Laboratories A/S, Emdrupvej 28C, DK-2100 København Ø, Denmark, (2) Abbott Laboratories, Abbott Park, Illinois, USA
Objectives: A common approach to modeling the exposure-dependent efficacy or safety outcome of a clinical trial is to first develop a model describing the pharmacokinetics (PK) of the drug, and
subsequently explaining the observed efficacy using the mean or individually predicted PK as an independent variable in a pharmacodynamic (PD) model. A similar sequential approach may be used in the
construction of hazard models for describing observed dropout, where the predicted PKPD is used to drive the hazard model. Unless the hazard is described using observed data only, the sequential
approach to modeling the hazard is theoretically less preferable to a simultaneous approach where PKPD and hazard model parameters are estimated jointly. In this work, we investigate if sequential
and simultaneous approaches result in similar parameter estimates for six simulated study scenarios with varying density of PKPD data.
Methods: The data for this study was simulated using a one-compartment PK model and an inhibitory PD model describing the effect (EFF) using parameter IC50. Dropout was simulated using a hazard
proportional to the efficacy: HAZ=A*EFF. Six scenarios with increasing number of PD observations (from 2 to 24) were simulated. The hazard of dropping out was modeled using a random dropout model (RD
[2]) based on observed data only, and an informed dropout model (ID [2]), that used the PD model to explain the hazard. The ID model was fit sequentially (SEQ-ID) and simultaneously (SIM-ID) with the
PD data. PD and hazard model parameters were estimated for the 3 models using NONMEM.
The joint likelihood for observing the pain intensity data (Y(0)) and dropout data (T) is given by [2]:
P(Y(0),T) = ∫ P(T|Y(0),η) P(Y(0)|η) P(η) dη
The conditional likelihood for the dropout data depends on the random effect, η, only in the ID models, which should therefore be estimated simultaneously with the PD data.
Results: The deviation from the true parameter value was estimated for A, IC50, the CV on IC50, and the error on EFF. The deviation in IC50, CV(IC50) and A decreased when the density of observed data
was increased. While the hazard proportionality factor, A, was well estimated for both the SEQ-ID and SIM-ID methods in all six scenarios, IC50 was accurately estimated in sparse data scenarios only
when the SIM-ID model was used.
Conclusions: The hazard model parameter was well described in all six scenarios with either of the SIM-ID and SEQ-ID approaches. The benefit of the joint analysis was a reduction in deviation of PD
model parameter in sparse scenarios where the true effect had considerable fluctuations between observations. The benefit of a sequential analysis was a simplification of models and datasets and
decreased model runtime. While the conclusion that sufficient density in the observed PD data allows for a sequential analysis holds for the present simulated dataset, other datasets require
individual consideration as to whether sequential or joint analysis should be used.
[1] Diderichsen et al.: Modeling “Pain Memory” is Central to Characterizing the Hazard of Dropping Out in Acute Pain Studies, ACOP 2009 (poster)
[2] Hu and Sale: A Joint Model for Nonlinear Longitudinal Data with Informative Dropout, J Pharmacokinetics and Pharmacodynamics, 30, 2003
Reference: PAGE 19 (2010) Abstr 1836 [www.page-meeting.org/?abstract=1836]
Poster: Methodology- Model evaluation
|
{"url":"https://www.page-meeting.org/?abstract=1836","timestamp":"2024-11-09T07:59:29Z","content_type":"text/html","content_length":"21905","record_id":"<urn:uuid:80791d83-695e-4799-a1de-f1877f074aa9>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00555.warc.gz"}
|
Quorum placement in networks to minimize access delays
A quorum system is a family of sets (themselves called quorums), each pair of which intersect. In many distributed algorithms, the basic unit accessed by a client is a quorum of nodes. Such
algorithms are used for applications such as mutual exclusion, data replication, and dissemination of information. However, accessing spread-out quorums causes access delays that we would like to
minimize. Furthermore, every member of the quorum incurs processing load to handle quorum accesses by clients. In this paper we study the problem of placing quorums in a physical network so as to
minimize the delay that clients incur by accessing quorums, and while respecting each physical node's capacity (in terms of the load of client requests it can handle). We provide approximation
algorithms for this problem for two natural measures of delay (the max-delay and total-delay). All our algorithms ensure that each node's load is within a constant factor of its capacity, and
minimize delay to within a constant factor of the optimal delay for all capacity-respecting solutions. We also provide better approximations for several well-known quorum systems.
Publication series: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing, Volume 24
Conference: 24th Annual ACM Symposium on Principles of Distributed Computing, PODC 2005, Las Vegas, NV, United States, 7/17/05 to 7/20/05
• Approximation Algorithms
• LP Rounding
• Location Problems
• Quorum Systems
ASJC Scopus subject areas
• Software
• Hardware and Architecture
• Computer Networks and Communications
|
{"url":"https://nyuscholars.nyu.edu/en/publications/quorum-placement-in-networks-to-minimize-access-delays","timestamp":"2024-11-09T20:00:11Z","content_type":"text/html","content_length":"55457","record_id":"<urn:uuid:a7b47c42-0c35-4f26-85c0-db29da38eb8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00484.warc.gz"}
|
Trigonometric Form of Ceva's Theorem
Ceva's theorem provides a unifying concept for several apparently unrelated results. The theorem states that, in \(\Delta ABC,\) three Cevians \(AD,\) \(BE,\) and \(CF\) are concurrent iff the
following identity holds:
\(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1. \)
The theorem has a less known trigonometric form
\(\displaystyle \frac{\mbox{sin}(\angle ABE)}{\mbox{sin}(\angle CBE)} \cdot \frac{\mbox{sin}(\angle BCF)}{\mbox{sin}(\angle ACF)} \cdot \frac{\mbox{sin}(\angle CAD)}{\mbox{sin}(\angle BAD)} = 1, \)
\( \mbox{sin}(\angle ABE) \cdot \mbox{sin}(\angle BCF) \cdot \mbox{sin}(\angle CAD) = \mbox{sin}(\angle CBE) \cdot \mbox{sin}(\angle ACF) \cdot \mbox{sin}(\angle BAD). \)
The latter may serve as a source of a great many trigonometric identities - some obvious, some much less so. Some unexpected identities can be obtained by placing the six points \(A,\) \(B,\) \(C,\) \(D,\) \(E,\) \(F\) at
the vertices of a regular polygon. If the Cevians are extended to intersect the circumscribed circle, they become three diagonals (or sides) of the regular polygon. We are of course concerned with
the case where a regular polygon has three concurrent diagonals.
For example, the original configuration in the applet below, suggests the following identity:
\( \mbox{sin}(20^{\circ})\cdot \mbox{sin}(50^{\circ})\cdot \mbox{sin}(70^{\circ}) = \mbox{sin}(30^{\circ})\cdot \mbox{sin}(30^{\circ})\cdot \mbox{sin}(80^{\circ}), \)
which can be easily verified. Play with the applet to find more such identities. The number of sides of the polygon can be modified by clicking on the number - originally \(18\) - in the lower left
corner of the applet.
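If the applet is not at hand, the identity is also easy to check numerically; the short Python snippet below is not part of the original page and only evaluates the two products.
import math

def sin_product(degrees):
    p = 1.0
    for a in degrees:
        p *= math.sin(math.radians(a))
    return p

lhs = sin_product([20, 50, 70])   # sin(20°) · sin(50°) · sin(70°)
rhs = sin_product([30, 30, 80])   # sin(30°) · sin(30°) · sin(80°)
print(lhs, rhs)                   # both are approximately 0.2462019
print(math.isclose(lhs, rhs))     # True
Analytically the agreement is exact, since \(\mbox{sin}(20^{\circ})\,\mbox{sin}(50^{\circ})\,\mbox{sin}(70^{\circ}) = \mbox{sin}(20^{\circ})\,\mbox{cos}(20^{\circ})\,\mbox{cos}(40^{\circ}) = \frac{1}{2}\,\mbox{sin}(40^{\circ})\,\mbox{cos}(40^{\circ}) = \frac{1}{4}\,\mbox{sin}(80^{\circ}),\) which equals the right-hand side \(\mbox{sin}^{2}(30^{\circ})\,\mbox{sin}(80^{\circ}).\)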
The applet also lends itself to discovery of problems of a different kind. Return to the \(18-\mbox{gon}.\) You may observe that the configuration now is reminiscent of a very popular problem.
Namely, in an isosceles \(\Delta ABC,\) angle \(B\) equals \(20^{\circ}.\) (The same triangle appears in a different configuration inside regular \(18-\mbox{gon}.\)) Two lines \(AD\) and \(CE\) are
drawn such that \(\angle CAD = 60^{\circ},\) whereas \(\angle ACE = 50^{\circ}.\) Find \(\angle ADE.\) (Check a more extensive discussion of this problem and a relevant solution.)
From the diagram it is immediate that the answer is \(30^{\circ}.\)
In a similar vein consider another problem. In an isosceles \(\Delta ABC,\) \(\angle ABC = 80^{\circ}.\) A point \(M\) is selected so that \(\angle MAC = 30^{\circ}\) and \(\angle MCA = 10^{\circ}.\) Find \(\angle BMC.\)
Finally, see what you can make of the diagram below:
Proof of Ceva's Trigonometric Identity
\(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA} = 1 \)
is equivalent to
\(\displaystyle \frac{\mbox{Area}(\Delta AFK)}{\mbox{Area}(\Delta BFK)} \cdot \frac{\mbox{Area}(\Delta BKD)}{\mbox{Area}(\Delta CKD)} \cdot \frac{\mbox{Area}(\Delta CKE)}{\mbox{Area}(\Delta AKE)} = 1. \)
This is because triangles \(AFK\) and \(BFK\) have the same altitude from the vertex \(K,\) and similarly for the other two pairs of triangles. On the other hand, \(\mbox{Area}(\Delta AFK) = AF\cdot AK\cdot \mbox{sin}(\angle BAD)/2\), etc., which leads to
\(\displaystyle \frac{\mbox{sin}(\angle ABE)}{\mbox{sin}(\angle CBE)} \cdot \frac{\mbox{sin}(\angle BCF)}{\mbox{sin}(\angle ACF)} \cdot \frac{\mbox{sin}(\angle CAD)}{\mbox{sin}(\angle BAD)} = \frac{FB}{AF} \cdot \frac{DC}{BD} \cdot \frac{EA}{CE} = 1, \)
and the theorem follows.
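One way to spell out the "which leads to" step: writing each of the six areas with that sine formula, the product of the three area ratios becomes
\(\displaystyle \frac{AF\cdot AK\cdot \mbox{sin}(\angle BAD)}{FB\cdot BK\cdot \mbox{sin}(\angle ABE)} \cdot \frac{BD\cdot BK\cdot \mbox{sin}(\angle CBE)}{DC\cdot CK\cdot \mbox{sin}(\angle BCF)} \cdot \frac{CE\cdot CK\cdot \mbox{sin}(\angle ACF)}{EA\cdot AK\cdot \mbox{sin}(\angle CAD)}. \)
The segments \(AK,\) \(BK,\) \(CK\) cancel, leaving \(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA}\) times \(\displaystyle \frac{\mbox{sin}(\angle BAD)\,\mbox{sin}(\angle CBE)\,\mbox{sin}(\angle ACF)}{\mbox{sin}(\angle ABE)\,\mbox{sin}(\angle BCF)\,\mbox{sin}(\angle CAD)}.\) Since the same product of area ratios also equals \(\displaystyle \frac{AF}{FB} \cdot \frac{BD}{DC} \cdot \frac{CE}{EA}\) directly, the sine factor must equal \(1,\) which is exactly the trigonometric form stated at the top.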
In an isosceles \(\Delta ABC,\) \(\angle ABC = 80^{\circ}.\) A point \(M\) is selected so that \(\angle MAC = 30^{\circ}\) and \(\angle MCA = 10^{\circ}.\) Find \(\angle BMC.\)
Now, it's obvious that \(\angle BMC = 70^{\circ}.\)
The following solution was found by S. T. Thompson, Tacoma, Washington (see Honsberger, pp 16-18).
With the change of notations, in \(\Delta A_{14}OA_{15},\) two lines \(A_{15}X\) and \(A_{14}Y\) are drawn such that \(\angle A_{14}A_{15}X = 60^{\circ}\) and \(\angle A_{15}A_{14}Y = 50^{\circ}.\)
The question in the above configuration is to determine \(\angle A_{15}XY.\)
Draw a circle with center \(O\) and radius \(OA_{14}.\) The chord \(A_{14}A_{15}\) subtends a \(20^{\circ}\) arc, so that \(A_{14}A_{15}\) is a side of the regular \(18-\mbox{gon}\) inscribed into
that circle. I numbered the vertices of that \(18-\mbox{gon}\) as shown in the diagram above.
Two observations are important for the proof:
1. \(A_{14}A_{2}\) passes through \(Y.\)
2. \(A_{10}A_{16}\) passes through both \(X\) and \(Y.\)
Indeed, \(A_{10}A_{16} = A_{14}A_{2},\) as chords subtending equal arcs. Furthermore, they are symmetric with respect to radius \(OA_{15}.\) Therefore, they intersect on that radius. In the isosceles
triangle \(OA_{14}A_{2},\) the angle at \(O\) is obviously equal \(120^{\circ}.\) Therefore, \(\angle OA_{14}A_{2} = 30^{\circ}.\) We see that \(A_{14}A_{2}\) passes through \(Y.\)
Further, \(A_{13}\) is the middle of the arc \(A_{10}A_{16}.\) Therefore, \(A_{10}A_{16} \perp OA_{13}.\)
Let's for the moment denote the point of intersection of \(A_{10}A_{16}\) with \(OA_{14}\) as \(X'.\) Since every point on \(A_{10}A_{16}\) is equidistant from \(O\) and \(A_{13},\) so is \(X':\) \(OX' = X'A_{13}.\) In the isosceles triangle \(OX'A_{13},\) \(\angle OA_{13}X' = \angle A_{13}OX' = 20^{\circ}.\) Therefore, \(X' = X,\) which proves the second of the two observations.
Now, as we've seen, in the isosceles \(OXA_{13},\) \(A_{10}A_{16}\) is the height to side \(OA_{13}.\) It then bisects \(\angle OXA_{13},\) which implies \(\angle OXA_{10} = 70^{\circ}.\) But then
also \(\angle A_{14}XA_{16} = 70^{\circ}.\) On the other hand, \(\angle A_{14}XA_{15} = 180^{\circ} - \angle OA_{14}A_{15} - \angle XA_{15}A_{14}.\) \(\angle A_{14}XA_{15} = 180^{\circ} - 80^{\circ}
- 60^{\circ} = 40^{\circ}.\) Finally, \(\angle A_{15}XY = \angle A_{14}XA_{16} - \angle A_{14}XA_{15} = 70^{\circ} - 40^{\circ} = 30^{\circ}.\)
1. R. Honsberger, Mathematical Gems, II, MAA, 1976
Menelaus and Ceva
|
{"url":"https://www.cut-the-knot.org/triangle/TrigCeva.shtml","timestamp":"2024-11-03T13:24:20Z","content_type":"text/html","content_length":"23859","record_id":"<urn:uuid:e9aa7545-441a-4dc2-a888-2a13f74cae0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00645.warc.gz"}
|
Physics - Online Tutor, Practice Problems & Exam Prep
We've got that. We're going to use calculus to calculate the gravitational force for non-spherical distributions of mass. So let's recap everything that we've learned about Newton's law of gravity.
Whenever we had point masses like m₁ and m₂, and we had the distance involved, the gravitational force was just given by Newton's law of gravity. Now, even when we had planets, like in the second
example over here, we kinda just pretended that these planets or these objects could be treated as points, and all the mass was just concentrated at the center. So that's big M, and this guy over
here is sort of just like little m. And as long as we had the center of mass distance, which is that little r, it basically worked the exact same way. So instead of m₁s and m₂s, we just had big M's
and little m's. So we just used Newton's law of gravity, and we just replaced it with the appropriate variables. The difference is that when we have a non-spherical mass distribution, we can't just
imagine that all the masses are concentrated at the center here. It doesn't work that way. So, we need a new technique to solve these problems. And one of the things that we can do is we can sort of
pretend that this rod of mass M, big M that I have here, can be thought of or broken up into tiny little pieces of rectangles or mass right here. And the smaller they become, they become a
differential mass. So I'm going to call that dm, and we can treat that dm as a point mass. So that means that this point mass dm generates a tiny amount of force in this direction, and that's going
to be df, and it's due to that center of mass distance of little r right here: df = G·m·dm/r². But remember that this is only one tiny piece of the force from one tiny piece of the mass. So, to solve
for this total force, we have to basically go along the rod, and we have to add up all of the tiny amounts of forces that are generated from all these tiny amounts of masses. So, think back to
calculus. What do we call it when we add up a bunch of tiny infinitesimal things? We call that an integral. So the way that we're going to solve these problems is we have to integrate along whatever
mass that we're given. So that means that the total amount of force is going to be given as the integral of df, which is just going to be the integral of G·m·dm/r². But one of the things that we can
do with this equation is we can actually pull out the big g and the little m as constants to the outside of the integral. Right? Because those things don't change. So the more useful form that we're
always going to use here at Clutch when we solve these problems is we're going to use F = G·m·∫ dm/r². So we're always going to start from this equation when we're solving these problems.
Alright, guys. That's basically it. So we actually have a list of steps that we're going to need to solve any one of these mass distribution problems. But rather than telling you, I actually want to
go ahead and show you. So we're going to work out this example together. This is actually a very classic common type of problem that you'll see if you're doing this topic right here. So we've got a
hollow ring. It's got a mass m. It's got some distances over that radius. We have a distance which a little mass over here is sitting and we need to find out what the gravitational force is. So, the
first thing that we do in all of these problems is we write out the equation for f, which is right over here. So we have f is equal to, and then we have G·m·∫ dm/r². So that's the first step. Okay?
So the second step is we have to pick Let's see. It says pick 2 DMs and we have to write an expression for r. So I've got this DM that's going to be over here and then I'm going to pick another DM
like this over here and then we know that these DM's act like point masses and they produce forces on this mass over here. They're going to point in this direction like that. That's going to be 1 df
and then this guy is going to be in this direction and that's going to be another df. Okay? So, now we actually have to figure out what the r's are, the center of mass distance is. So, that is this
right here, these pieces right there, and then I've got one over there. So, this is going to be r. Okay. So, we have to write an expression for r using the problem's geometry. That means that
we're just going to use the length variables that are given to us, r and d. Now, this little r, if you think about it, is actually just the hypotenuse of this triangle that we've made here. So we can
actually use the Pythagorean theorem to come up with that expression for little r. It's actually going to be the square roots of r² plus d². Okay? Now, we're just going to save this for later. We're
not actually going to start plugging it in yet because what happens is if we start plugging this in, we're going to have to write this a bunch of times. It's going to get really annoying. Okay? So we
have this second step right here, so we have the expression for r. Now, let's take a look at this third step here. Step 3 says we have to break the integral into its x and y components. So in other
words, what happens is that f which is the integral of df actually gets split into 2 things. The f x component is going to be the integral of all the DFXs, and the f y component is going to be the
integral of all the DFY's. Now, where do these components actually come from? Well, remember that these D F's actually are 2-dimensional vectors. Right? They point in opposite directions or different
directions. So, we have to use vector decomposition to break them up. So what I'm going to do is I'm going to, sort of, draw an angle right here relative to the x axis. That's my angle theta. And
what happens is now I can break this df into its components. So I've got dfx and then I've got dfy over here. And by the way, the same exact thing happens for this df vector. So I have another one of
these components that's going to be here and then another one of these components that's going to be over here, so dfy. So what we can see here is that my dfx components are always going to be sort
of pointing to the left, and they're always going to be together, and they're going to be adding together. Whereas these dfy's here are always going to be equal and opposite. So, because these things
are equal and opposite, what happens is that their components of dfy will always end up canceling out. So, for any mass here, for any little point, I can think of the mirror opposite point on the
opposite side of the ring as directly symmetrical and those y components will always end up canceling out. So that means that this integral just goes away, and I don't have to deal with it anymore.
Now remember that if I have this angle right here, I can write the dfx. So now that I've actually sort of split this and canceled out the components, I have to expand this into sines and cosines. So
remember that this dfx component can be written with this angle as the integral. Actually, I'm going to have G·m·∫ dm/r² times a cosine, and this is actually going to be the cosine of that angle right here. So,
now what I have to do is I have to expand this into sine and cosine, which I've done. Now I have to rewrite this cosine in terms of the side lengths that are given because what happens is I'm
integrating dm and I've got this r variable here, so I don't want this cosine sitting in here. So what I have to do is I have to relate it using the triangle. Now, remember that this cosine angle
here is always the adjacent over the hypotenuse. So, given this triangle right here, in which I have the d as the adjacent side, this and I've got the r as my hypotenuse, this is actually just going
to be d / r. So we're actually just going to replace that cosine of θ in there with d / r. So that means that for step 4, my equation becomes Fx — or we could just write F — is
equal to the integral, which is going to be F = G·m·∫ (d·dm)/r³. Okay? So that is step 4. So we're done with that. So now what we have to do is for step 5, we have to plug in the expression for little r
from step 2, and then pull all the constants out of the integral. Okay. So we've got F = G·m·∫ (d·dm)/r³, and then what happens is we're going to plug in our expression for little r. Now remember that
expression for r is this guy over here. So what this r³ becomes is (R² + d²) raised to the 3/2 power, because the square root is really the one-half power, and cubing it turns that into 3/2. And then we
have this d. Now, what we have to do is we have to pull all of these letters that are constants outside of the
integral. But if you take a look at this, remember that the d in the integral is just a constant. Right? This is just a constant horizontal distance up here. So that just gets pulled out. And
then this r² and then d², those are both constants as well because they are capital letters. Remember that this radius of the ring never changes, and the d, the distance, also never changes. So, in
fact, everything out of here is just a constant. All of these things here are constants and they get pulled out. So these are constants. Alright. Now, we're done with that, so we pulled all the
constants out of the integral, and then we're going to rewrite this. So this actually ends up being: F is equal to G·m·d/(R² + d²)^(3/2), and then we've got to just write that last
little remaining integral piece. So you have the integral of DM. Okay. So now what happens is if we look at step 6 a, if we're only left with DM, the integral of dm, then the integral of dm here is
just m. Right? So if we're just integrating our differential dm, that's just basically the whole entire mass. So then what happens is that we're done. Right? So we just replace that with big M and
we're done here. So that means that the force is F = G·M·m·d/(R² + d²)^(3/2), and that is actually our force. So we don't have to do the integral because we already did it in this step. Okay, guys? So
let me know if you guys have any questions, and I'll see you guys in the next one.
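As a quick numerical sanity check of this result, here is a short Python sketch (the numbers G, M, m, R and d are made up for illustration; they are not from the video). It approximates the ring by many point masses, sums the dF vectors directly, and compares the axial component against the closed-form F = G·M·m·d/(R² + d²)^(3/2). The x and y components cancel by symmetry, exactly as argued above.

import numpy as np

# Illustrative values only (not from the video)
G = 6.674e-11
M, m, R, d = 5.0, 2.0, 0.40, 0.30   # ring mass, point mass, ring radius, axial distance
N = 200_000                          # number of point masses approximating the ring

theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ring = np.stack([R * np.cos(theta), R * np.sin(theta), np.zeros(N)], axis=1)
target = np.array([0.0, 0.0, d])     # location of m on the ring's axis

dm = M / N
vec = ring - target                               # points from m toward each dm
dist = np.linalg.norm(vec, axis=1, keepdims=True)
F = (G * m * dm * vec / dist**3).sum(axis=0)      # sum of all the dF vectors

F_closed = G * M * m * d / (R**2 + d**2) ** 1.5
print(F)         # x and y components are ~0; only the axial component survives
print(F_closed)  # matches the magnitude of the axial component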
|
{"url":"https://www.pearson.com/channels/physics/learn/patrick/centripetal-forces-gravitation/mass-distribution-with-calculus?chapterId=8fc5c6a5","timestamp":"2024-11-12T03:58:39Z","content_type":"text/html","content_length":"533834","record_id":"<urn:uuid:0a976f7c-2514-465a-a9b6-c696b4f38fd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00162.warc.gz"}
|
The Stacks project
Lemma 50.21.1. There is a unique rule which assigns to every quasi-compact and quasi-separated scheme $X$ a total Chern class
\[ c^{dR} : K_0(\textit{Vect}(X)) \longrightarrow \prod \nolimits _{i \geq 0} H^{2i}_{dR}(X/\mathbf{Z}) \]
with the following properties
1. we have $c^{dR}(\alpha + \beta ) = c^{dR}(\alpha ) c^{dR}(\beta )$ for $\alpha , \beta \in K_0(\textit{Vect}(X))$,
2. if $f : X \to X'$ is a morphism of quasi-compact and quasi-separated schemes, then $c^{dR}(f^*\alpha ) = f^*c^{dR}(\alpha )$,
3. given $\mathcal{L} \in \mathop{\mathrm{Pic}}\nolimits (X)$ we have $c^{dR}([\mathcal{L}]) = 1 + c_1^{dR}(\mathcal{L})$
|
{"url":"https://stacks.math.columbia.edu/tag/0FW9","timestamp":"2024-11-09T00:08:28Z","content_type":"text/html","content_length":"16867","record_id":"<urn:uuid:cc76fe98-6fda-4642-b7d3-05498002f0fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00747.warc.gz"}
|
SFC64 Small Fast Chaotic PRNG
SFC64 Small Fast Chaotic PRNG¶
class numpy.random.SFC64(seed=None)¶
BitGenerator for Chris Doty-Humphrey’s Small Fast Chaotic PRNG.
seed : {None, int, array_like[ints], ISeedSequence}, optional
A seed to initialize the BitGenerator. If None, then fresh, unpredictable entropy will be pulled from the OS. If an int or array_like[ints] is passed, then it will be passed to
SeedSequence to derive the initial BitGenerator state. One may also pass in an implementor of the ISeedSequence interface like SeedSequence.
SFC64 is a 256-bit implementation of Chris Doty-Humphrey’s Small Fast Chaotic PRNG ([1]). SFC64 has a few different cycles that one might be on, depending on the seed; the expected period will be
about 2^255 ([2]). SFC64 incorporates a 64-bit counter which means that the absolute minimum cycle length is 2^64.
SFC64 provides a capsule containing function pointers that produce doubles, and unsigned 32 and 64- bit integers. These are not directly consumable in Python and must be consumed by a Generator
or similar object that supports low-level access.
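A minimal usage sketch (assuming NumPy 1.17 or later, where Generator and SFC64 both live in numpy.random): wrap the bit generator in a Generator to draw values from it. The seed below is arbitrary.

from numpy.random import Generator, SFC64

rng = Generator(SFC64(seed=12345))   # seed chosen arbitrarily for the example
print(rng.standard_normal(3))        # three standard normal draws
print(rng.integers(0, 10, size=5))   # five integers in [0, 10)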
State and Seeding
The SFC64 state vector consists of 4 unsigned 64-bit values. The last is a 64-bit counter that increments by 1 each iteration.
The input seed is processed by SeedSequence to generate the first 3 values, then the SFC64 algorithm is iterated a small number of times to mix.
Compatibility Guarantee
SFC64 makes a guarantee that a fixed seed will always produce the same random integer stream.
[2] “Random Invertible Mapping Statistics”
state — Get or set the PRNG state
cffi — CFFI interface
ctypes — ctypes interface
|
{"url":"https://numpy.org/doc/1.17/reference/random/bit_generators/sfc64.html","timestamp":"2024-11-01T19:48:17Z","content_type":"text/html","content_length":"13659","record_id":"<urn:uuid:84608465-5334-4afa-ace0-9bb76a3c2e9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00327.warc.gz"}
|
Calculating Weighted Average in Excel (Using Formulas)
When we calculate a simple average of a given set of values, the assumption is that all the values carry an equal weight or importance.
For example, if you appear for exams and all the exams carry a similar weight, then the average of your total marks would also be the weighted average of your scores.
However, in real life, this is hardly the case.
Some tasks are always more important than the others. Some exams are more important than the others.
And that’s where Weighted Average comes into the picture.
Here is the textbook definition of Weighted Average:
Now let’s see how to calculate the Weighted Average in Excel.
Calculating Weighted Average in Excel
In this tutorial, you’ll learn how to calculate the weighted average in Excel:
• Using the SUMPRODUCT function.
• Using the SUM function.
So let’s get started.
Calculating Weighted Average in Excel – SUMPRODUCT Function
There could be various scenarios where you need to calculate the weighted average. Below are three different situations where you can use the SUMPRODUCT function to calculate weighted average in Excel.
Example 1 – When the Weights Add Up to 100%
Suppose you have a dataset with marks scored by a student in different exams along with the weights in percentages (as shown below):
In the above data, a student gets marks in different evaluations, but in the end, needs to be given a final score or grade. A simple average cannot be calculated here, as the importance of the different
evaluations varies.
For example, a quiz, with a weight of 10% carries twice the weight as compared with an assignment, but one-fourth the weight as compared with the Exam.
In such a case, you can use the SUMPRODUCT function to get the weighted average of the score.
Here is the formula that will give you the weighted average in Excel:
Here is how this formula works: Excel SUMPRODUCT function multiplies the first element of the first array with the first element of the second array. Then it multiplies the second element of the
first array with the second element of the second array. And so on..
And finally, it adds all these values.
Here is an illustration to make it clear.
Also read: How to Calculate Percentile in Excel
Example 2 – When Weights Don’t Add Up to 100%
In the above case, the weights were assigned in such a way that the total added up to 100%. But in real life scenarios, it may not always be the case.
Let’s have a look at the same example with different weights.
In the above case, the weights add up to 200%.
If I use the same SUMPRODUCT formula, it will give me the wrong result.
In the above result, I have doubled all the weights, and it returns the weighted average value as 153.2. Now we know a student can’t get more than 100 out of 100, no matter how brilliant he/she is.
The reason for this is that the weights don’t add up to 100%.
Here is the formula that will get this sorted:
In the above formula, the SUMPRODUCT result is divided by the sum of all the weights. Hence, no matter what, the weights would always add up to 100%.
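If you want to sanity-check the arithmetic outside Excel, a short Python sketch does the same thing (the scores and weights below are illustrative, not the article's exact values). numpy.average divides by the sum of the weights automatically, which is exactly what dividing the SUMPRODUCT result by the SUM of the weights does:

import numpy as np

scores = np.array([91, 65, 80, 73])           # illustrative values
weights = np.array([0.20, 0.50, 0.60, 0.70])  # need not add up to 1 (or 100%)

weighted_avg = np.average(scores, weights=weights)  # sum(w * x) / sum(w)
print(weighted_avg)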
One practical example of different weights is when businesses calculate the
weighted average cost of capital
. For example, if a company has raised capital using debt, equity, and preferred stock, then these will be serviced at a different cost. The company’s accounting team then calculates the weighted
average cost of capital that represents the cost of capital for the entire company.
Also read: How to Calculate Percentage Increase in Excel
Example 3 – When the Weights Need to be Calculated
In the example covered so far, the weights were specified. However, there may be cases, where the weights are not directly available, and you need to calculate the weights first and then calculate
the weighted average.
Suppose you are selling three different types of products as mentioned below:
You can calculate the weighted average price per product by using the SUMPRODUCT function. Here is the formula you can use:
Dividing the SUMPRODUCT result with the SUM of quantities makes sure that the weights (in this case, quantities) add up to 100%.
Also read: How to Calculate Ratios in Excel?
Calculating Weighted Average in Excel – SUM Function
While the SUMPRODUCT function is the best way to calculate the weighted average in Excel, you can also use the SUM function.
To calculate the weighted average using the SUM function, you need to multiply each element, with its assigned importance in percentage.
Using the same dataset:
Here the formula that will give you the right result:
This method is alright to use when you have a couple of items. But when you have many items and weights, this method could be cumbersome and error-prone. There is shorter and better way of doing this
using the SUM function.
Continuing with the same data set, here is the short formula that will give you the weighted average using the SUM function:
The trick while using this formula is to use Control + Shift + Enter, instead of just using Enter. Since SUM function can not handle arrays, you need to use Control + Shift + Enter.
When you hit Control + Shift + Enter, you would see curly brackets appear automatically at the beginning and the end of the formula (see the formula bar in the above image).
Again, make sure the weights add up to 100%. If it does not, you need to divide the result by the sum of the weights (as shown below, taking the product example):
You May Also Like the Following Excel Tutorials:
5 thoughts on “Calculating Weighted Average in Excel (Using Formulas)”
1. i want silver multification
2.145 weight
44.55 touch
exel automatically last 5 come add one number for multification(*) how to incress last 8 number come Add one number
2. I’m trying to weight 4 different column rankings for cumulative of 100%. Trying to figure formula would be 20%, 50%, 15%, 15%.
3. u should also mention frequency related weighted average
4. Excellent explanation Sumit! I like the modification to the formula when the %s add up to more than 100. I’m going to share this!
□ Thanks for commenting Kevin.. Glad you found this useful 🙂
Leave a Comment
|
{"url":"https://trumpexcel.com/weighted-average-in-excel/","timestamp":"2024-11-12T17:08:42Z","content_type":"text/html","content_length":"411084","record_id":"<urn:uuid:cdb9a6a9-8cc0-4b4d-9a44-4485f535be1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00005.warc.gz"}
|
Mattern et al. (2018) – SEDIGISM: The kinematics of ATLASGAL filaments
Analysing the kinematics of filamentary molecular clouds is a crucial step towards understanding their role in the star formation process. Therefore, we study the kinematics of 283 filament
candidates in the inner Galaxy, that were previously identified in the ATLASGAL dust continuum data. The $^{13}$CO(2 – 1) and C$^{18}$O(2 – 1) data of the SEDIGISM survey (Structure, Excitation, and
Dynamics of the Inner Galactic Inter Stellar Medium) allows us to analyse the kinematics of these targets and to determine their physical properties at a resolution of 30 arcsec and 0.25 km/s. To do
so, we developed an automated algorithm to identify all velocity components along the line-of-sight correlated with the ATLASGAL dust emission, and derive size, mass, and kinematic properties for all
velocity components. We find that two-thirds of the filament candidates are coherent structures in position-position-velocity space. The remaining candidates appear to be the result of a superposition of
two or three filamentary structures along the line-of-sight. At the resolution of the data, on average the filaments are in agreement with Plummer-like radial density profiles with a power-law
exponent of p = 1.5 +- 0.5, indicating that they are typically embedded in a molecular cloud and do not have a well-defined outer radius. Also, we find a correlation between the observed mass per
unit length and the velocity dispersion of the filament of $m \sim \sigma_v^2$. We show that this relation can be explained by a virial balance between self-gravity and pressure. Another possible
explanation could be radial collapse of the filament, where we can exclude infall motions close to the free-fall velocity.
Mattern, M.; Kauffmann, J.; Csengeri, T.; Urquhart, J. S.; Leurini, S.; Wyrowski, F.; Giannetti, A.; Barnes, P. J.; Beuther, H.; Bronfman, L.; Duarte-Cabral, A.; Henning, T.; Kainulainen, J.; Menten,
K. M.; Schisano, E.; Schuller, F.
2018, ArXiv e-prints, 1808, arXiv:1808.07499
|
{"url":"https://gabi.hyperstars.fr/2018/09/03/mattern-et-al-2018-sedigism-the-kinematics-of-atlasgal-filaments/","timestamp":"2024-11-04T20:11:06Z","content_type":"text/html","content_length":"23553","record_id":"<urn:uuid:fec0c63f-02df-4038-87a6-7c0e9fedac87>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00078.warc.gz"}
|
[Tutorial] Searching Binary Indexed Tree in O(log(N)) using Binary Lifting - Codeforces
NOTE : Knowledge of Binary Indexed Trees is a prerequisite.
Problem Statement
Assume we need to solve the following problem. We have an array, A of length N with only non-negative values. We want to perform the following operations on this array:
1. Update value at a given position
2. Compute prefix sum of A upto i,i≤N
3. Search for a prefix sum (something like a lower_bound in the prefix sums array of A)
Basic Solution
Seeing such a problem we might think of using a Binary Indexed Tree (BIT) and implementing a binary search for type 3 operation. Its easy to see that binary search is possible here because prefix
sums array is monotonic (only non-negative values in A).
The only issue with this is that binary search in a BIT has time complexity of O(log^2(N)) (other operations can be done in O(log(N))). Even though this is naive,
most of the time it would be fast enough (because of the small constant factor of the above technique). But if the time limit is very tight, we will need something faster. Also we must note that there are other
techniques like segment trees, policy based data structures, treaps, etc. which can perform operation 3 in O(log(N)). But they are harder to implement and have a high constant factor associated with
their time complexity due to which they might be even slower than O(log^2(N)) of BIT.
Hence we need an efficient searching method in BIT itself.
Efficient Solution
We will make use of binary lifting to achieve O(log(N)) (well I actually do not know if this technique has a name but I am calling it binary lifting because the algorithm is similar to binary lifting
in sparse tables).
What is binary lifting?
In binary lifting, a value is increased (or lifted) by powers of 2, starting with the highest possible power of 2, 2^ceil(log(N)), down to the lowest power, 2^0.
So, we initialize the target position, pos = 0, and also maintain the corresponding prefix sum. We increase (or lift) pos by the current power of 2 whenever the prefix sum at the candidate position is still strictly less than the value v we are searching for.
Implementation :
// This is equivalent to calculating lower_bound on prefix sums array
// LOGN = log(N)
int bit[N]; // BIT array
int bit_search(int v)
{
    int sum = 0;
    int pos = 0;
    for (int i = LOGN; i >= 0; i--)
    {
        if (pos + (1 << i) < N and sum + bit[pos + (1 << i)] < v)
        {
            sum += bit[pos + (1 << i)];
            pos += (1 << i);
        }
    }
    return pos + 1;
}
Taking this forward
You must have noted that the proof of correctness of this approach relies on the property of the prefix sums array that it is monotonic. This means that this approach can be used with any operation that
maintains the monotonicity of the prefix array, like multiplication of positive numbers, etc.
|
{"url":"https://mirror.codeforces.com/topic/61709/en5","timestamp":"2024-11-12T22:40:14Z","content_type":"text/html","content_length":"93319","record_id":"<urn:uuid:8d076f5c-2741-4ce8-b06f-d04b79c19a7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00238.warc.gz"}
|
Overview of capabilities · Bifurcation Analysis in Julia
• Newton-Krylov solver with generic linear / eigen preconditioned solver. Idem for the arc-length continuation.
• Newton-Krylov solver with nonlinear deflation and preconditioner. It can be used for branch switching for example. It is used for deflated continuation.
• Continuation written as an iterator
• Monitoring user functions along curves computed by continuation, see events
• Continuation methods: PALC, Moore Penrose, Multiple, Polynomial, Deflated continuation, ANM, ...
• Bifurcation points / events located with bisection
• Compatible with GPU
• Detection of Branch, Fold, Hopf bifurcation points of stationary solutions and computation of their normal form.
• Automatic branch switching at branch points (whatever the dimension of the kernel) to equilibria
• Automatic computation of bifurcation diagrams of equilibria
• Fold / Hopf continuation based on Minimally Augmented formulation, with Matrix Free / Sparse / Dense Jacobian.
• Detection of all codim 2 bifurcations of equilibria and computation of the normal forms of Bogdanov-Takens, Bautin, Cusp, Zero-Hopf. (Hopf-Hopf normal form not implemented)
• Branching from Bogdanov-Takens / Zero-Hopf / Hopf-Hopf points to Fold / Hopf curve
• continuation of fixed points of maps
• computation of normal form of Period-doubling, Neimark-Sacker, Branch point bifurcations.
Note that you can combine most solvers, like use Deflation for Periodic orbit computation or Fold of periodic orbits family.
Custom state means, you can use something else than AbstractArray, for example your own struct.
| Features | Matrix Free | Custom state | Tutorial | GPU |
| --- | --- | --- | --- | --- |
| (Deflated) Krylov-Newton | Yes | Yes | link | Yes |
| Continuation PALC (Natural, Secant, Tangent, Polynomial) | Yes | Yes | All | Yes |
| Deflated Continuation | Yes | Yes | link | Yes |
| Bifurcation / Fold / Hopf point detection | Yes | Yes | All / All / link | Yes |
| Fold Point continuation | Yes | Yes | PDE, PDE, ODE | Yes |
| Hopf Point continuation | Yes | AbstractArray | ODE | |
| Branch point / Fold / Hopf normal form | Yes | Yes | | Yes |
| Branch switching at Branch points | Yes | AbstractArray | link | Yes |
| Automatic bifurcation diagram computation of equilibria | Yes | AbstractArray | link | |
| Bogdanov-Takens / Bautin / Cusp / Zero-Hopf / Hopf-Hopf point detection | Yes | Yes | ODE | |
| Bogdanov-Takens / Bautin / Cusp normal forms | Yes | AbstractArray | ODE | Yes |
| Branching from Bogdanov-Takens / Zero-Hopf / Hopf-Hopf to Fold / Hopf curve | Yes | AbstractArray | ODE | |
• PO computation and continuation using parallel (Standard or Poincaré) Shooting, Finite Differences or Orthogonal Collocation (mesh adaptive).
• Automatic branch switching from simple Hopf points to PO
• Automatic branch switching from simple Period-Doubling points to PO
• Assisted branch switching from simple Branch points to PO
• Detection of Branch, Fold, Neimark-Sacker (NS), Period Doubling (PD) bifurcation points of PO.
• Fold / PD / NS continuation based on Minimally Augmented formulation (for shooting and collocation). Trapezoid method only allows continuing Fold of PO.
• Detection of all codim 2 bifurcations of PO (R1, R2, R3, R4, GPD, NS-NS, Chenciner, Fold-Flip, Fold-NS, PD-NS)
• Computation of the normal forms of PD, NS (for shooting and collocation) using the method based on Poincaré return map or the Iooss normal form (more precise).
• automatic branching from Bautin to curve of Fold of PO
• automatic branching from Zero-Hopf to curve of NS of PO
• automatic branching from Hopf-Hopf to curve of NS of PO
Legend for the table: Standard shooting (SS), Poincaré shooting (PS), Orthogonal collocation (OC), trapezoid (T).
| Features | Method | Matrix Free | Custom state | Tutorial | GPU |
| --- | --- | --- | --- | --- | --- |
| Branch switching at Hopf points | SS/PS/OC/T | See each | | ODE | |
| Newton / continuation | T | Yes | AbstractVector | PDE, PDE | Yes |
| Newton / continuation | OC | | AbstractVector | ODE | |
| Newton / continuation | SS | Yes | AbstractArray | ODE | Yes |
| Newton / continuation | PS | Yes | AbstractArray | PDE | Yes |
| Fold, Neimark-Sacker, Period doubling detection | SS/PS/OC/T | See each | AbstractVector | link | |
| Branch switching at Branch point | SS/PS/OC/T | See each | | ODE | |
| Branch switching at PD point | SS/PS/OC/T | See each | | ODE | |
| Continuation of Fold points | SS/PS/OC/T | See each | AbstractVector | ODE, PDE | Yes |
| Continuation of Period-doubling points | SS/OC | | AbstractVector | ODE | |
| Continuation of Neimark-Sacker points | SS/OC | | AbstractVector | ODE | |
| detection of codim 2 bifurcations of periodic orbits | SS/OC | | AbstractVector | ODE | |
| Branch switching at Bautin point to curve of Fold of periodic orbits | SS/OC | | AbstractVector | ODE | |
| Branch switching at ZH/HH point to curve of NS of periodic orbits | SS/OC | | AbstractVector | ODE | |
This is available through the plugin HclinicBifurcationKit.jl. Please see the specific docs for more information.
• compute Homoclinic to Hyperbolic Saddle Orbits (HomHS) using Orthogonal collocation or Standard shooting
• compute bifurcation of HomHS
• start HomHS from a direct simulation
• automatic branch switching to HomHS from Bogdanov-Takes bifurcation point
A left-to-right arrow in the following graph from $E_1$ to $E_2$ means that $E_2$ can be detected when continuing an object of type $E_1$.
A right-to-left arrow from $E_2$ to $E_1$ means that we can start the computation of object of type $E_1$ from $E_2$.
Each object of codim 0 (resp. 1) can be continued with 1 (resp. 2) parameters.
graph LR
S[ ]
C[ Equilibrium ]
PO[ Periodic orbit ]
BP[ Fold/simple branch point ]
H[ Hopf \n :hopf]
BT[ Bogdanov-Takens \n :bt ]
ZH[Zero-Hopf \n :zh]
GH[Bautin \n :gh]
HH[Hopf-Hopf \n :hh]
FPO[ Fold Periodic orbit ]
NS[ Neimark-Sacker \n :ns]
PD[ Period Doubling \n :pd ]
CH[Chenciner \n :ch]
GPD[Generalized period doubling \n :gpd]
BPC[Branch point PO]
R1[1:1 resonance point\n :R1]
R2[1:2 resonance point\n :R2]
R3[1:3 resonance point\n :R3]
R4[1:4 resonance point\n :R4]
S --> C
S --> PO
C --> nBP[ non simple\n branch point ]
C --> BP
C --> H
BP --> CP
BP <--> BT
PO --> H
PO --> FPO
PO --> NS
PO --> PD
FPO <--> GH
FPO <--> BPC
FPO --> R1
NS --> R1
NS --> R3
NS --> R4
NS --> CH
NS --> LPNS
NS --> NSNS
NS --> R2
NS --> PDNS
PD --> PDNS
PD --> R2
PD --> LPPD
PD --> GPD
H <--> BT
H <--> ZH
BP <--> ZH
H <--> HH
H <--> GH
NS <--> ZH
PO <--> BPC
NS <--> HH
FPO --> LPNS
FPO --> LPPD
_ --> Codim0 --> Codim1 --> Codim2
|
{"url":"https://bifurcationkit.github.io/BifurcationKitDocs.jl/dev/capabilities/","timestamp":"2024-11-07T13:05:22Z","content_type":"text/html","content_length":"36014","record_id":"<urn:uuid:b8283cc4-d9ee-4fb7-a7e7-06d9023e5a27>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00641.warc.gz"}
|
How does radioactive dating rocks work?
How does radioactive dating rocks work?
The basic logic behind radiometric dating is that if you compare the presence of a radioactive isotope within a sample to its known abundance on Earth, and its known half-life (its rate of decay),
you can calculate the age of the sample.
What rock is used for radioactive dating?
Answer and Explanation: Igneous rocks are the type of rock that is used for radiometric dating. Radioactive dating is the use of isotope ratios to determine the age of a substance. It can only be
used for igneous rocks because they have a single date of origination.
Which decay series is used to date rocks?
Another important atomic clock used for dating purposes is based on the radioactive decay of the isotope carbon-14, which has a half-life of 5,730 years.
How does the process of radioactive decay allow us to determine the age of a rock or a fossil?
The age of rocks is determined by radiometric dating, which looks at the proportion of two different isotopes in a sample. Radioactive isotopes break down in a predictable amount of time, enabling
geologists to determine the age of a sample using equipment like this thermal ionization mass spectrometer.
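As a rough sketch of the arithmetic behind this (the isotope amounts below are made up, and real measurements involve more corrections than this): if every daughter atom in the sample came from a decayed parent atom, the remaining parent fraction fixes the number of elapsed half-lives.

import math

half_life = 5730.0   # years, e.g. carbon-14
parent = 1.0         # measured parent isotope (arbitrary units, illustrative)
daughter = 7.0       # measured daughter isotope (illustrative)

remaining_fraction = parent / (parent + daughter)          # 1/8 here
elapsed_half_lives = math.log2(1.0 / remaining_fraction)   # 3 half-lives
age = half_life * elapsed_half_lives
print(age)           # about 17,190 years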
How do geologists use radioactive decay to find the age of a rock?
Why can radioactive elements be used to determine the ages of rocks quizlet?
Because radioactive isotopes decay at a constant rate, they can be used like clocks to measure the age of the material that contains them. This process is called radiometric dating. Scientists
measure the amount of parent isotope and daughter isotope in a sample of the material they want to date.
What is radioactive dating and how it can determine the age of the material?
The technique of comparing the abundance ratio of a radioactive isotope to a reference isotope to determine the age of a material is called radioactive dating. Many isotopes have been studied,
probing a wide range of time scales.
Why are radioactive isotopes useful for determining the ages of rocks?
The isotopes will decay into a stable isotope over time. Scientists can tell how old the rock was from looking at the radioactive isotope’s half-life, which tells them how long it would take for
there to be half the radioactive isotope and half the stable isotope.
How does radioactive decay relate to radiometric dating?
Radiometric dating is a method used to date rocks and other objects based on the known decay rate of radioactive isotopes. The decay rate is referring to radioactive decay, which is the process by
which an unstable atomic nucleus loses energy by releasing radiation.
Why can radioactive elements be used to determine the ages of rocks?
The nuclear decay of radioactive isotopes is a process that behaves in a clock-like fashion and is thus a useful tool for determining the absolute age of rocks. Radioactive decay is the process by
which a “parent” isotope changes into a “daughter” isotope.
Why are radioactive materials used to date rocks?
Explanation. Radioactive dating allows scientists to determine an absolute age of fossils by determining how much of certain radioactive substances has decayed and how long it would have taken for
that amount of decay to occur. All organisms have a mix of different types of elements in them while they are living.
Why can radioactive decay be used to determine the age of a rock or fossil?
The mass spectrometer is able to give information about the type and amount of isotopes found in the rock. Scientists find the ratio of parent isotope to daughter isotope. By comparing this ratio to
the half-life logarithmic scale of the parent isotope, they are able to find the age of the rock or fossil in question.
How is radioactive decay used to date archaeological remains and fossils?
Radioactive Dating of Fossils: Scientists find the ratio of parent isotope to daughter isotope. By comparing this ratio to the half-life logarithmic scale of the parent isotope, they are able to find
the age of the rock or fossil in question.
How do scientists use radioactive decay to date fossils and artifacts quizlet?
radioactive isotopes decay at a constant rate, they can be used like clocks to measure the age of material that contains them. Scientists measure the amount of parent isotope and daughter isotope in
a sample they want to date.
How is the radioactive decay of an element used to determine the age of a rock layer quizlet?
Because radioactive isotopes decay at a constant rate, they can be used like clocks to measure the age of the material that contains them. In this process, called radiometric dating, scientists
measure the amount of parent isotope and daughter isotope in a sample of the material they want to date.
How do scientists use radioactive elements to determine the actual age of fossils?
|
{"url":"https://www.evanewyork.net/how-does-radioactive-dating-rocks-work/","timestamp":"2024-11-03T07:28:26Z","content_type":"text/html","content_length":"45185","record_id":"<urn:uuid:8934b92e-3672-45ec-9311-3a965167ee15>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00370.warc.gz"}
|
[Solved] Label all the bars in the truss of Figure | SolutionInn
Label all the bars in the truss of Figure 4.13 that are unstressed when the 60-kip load acts.
Figure 4.13
Transcribed image: a truss with labeled joints and applied loads of 60 kips, 180 kips, and 120 kips.
Step by Step Answer:
Although the two cases discussed in this section apply to many of the bar…
|
{"url":"https://www.solutioninn.com/study-help/fundamentals-of-structural-analysis/label-all-the-bars-in-the-truss-of-figure-413-1251672","timestamp":"2024-11-08T19:11:40Z","content_type":"text/html","content_length":"79550","record_id":"<urn:uuid:d3b897b9-d13b-4473-8a33-0ec57f6d3a30>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00379.warc.gz"}
|
Inelastic collision followed by circular motion
• Thread starter Sal Coombs
• Start date
In summary, an inelastic collision followed by circular motion is when two objects collide and stick together, resulting in a change in their velocities and causing them to move in a circular path.
This is different from an elastic collision, where the objects bounce off each other and the total kinetic energy is conserved. Factors such as the masses and initial velocities of the objects, as
well as external forces, can affect the outcome of an inelastic collision followed by circular motion. Momentum is conserved in this type of collision because the total momentum before and after the
collision is the same. Real-life examples include car crashes and games of pool.
Homework Statement
A 3.0-kg mass is sliding on a horizontal frictionless surface with a speed of 3.0 m/s when it collides with a 1.0-kg mass initially at the bottom of a circular track. The masses stick together
and slide up a frictionless circular track of radius 0.40 m. To what maximum height, h, above the horizontal surface (the original height of the masses) will the masses slide?
Relevant Equations
mv = mv Momentum
1/2mv^2 Kinetic Energy
mgh Potential Energy
(mv^2)/r Centripetal Force
Found the speed at which the masses will travel after their collision: 2.25m/s
Not sure what to do next...
Sal Coombs said:
Found the speed at which the masses will travel after their collision: 2.25m/s
Not sure what to do next...
What happens when the masses follow the circular track?
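Following that hint, here is a minimal numerical sketch of the standard two-step solution (using the numbers from the problem statement, a perfectly inelastic collision, and g ≈ 9.8 m/s²; the 0.40 m radius only matters for checking that the masses stay on the circular track):

m1, v1 = 3.0, 3.0   # sliding mass (kg) and its speed (m/s)
m2 = 1.0            # mass initially at rest (kg)
g = 9.8

v_after = m1 * v1 / (m1 + m2)   # momentum conservation: 2.25 m/s
h_max = v_after**2 / (2 * g)    # kinetic energy converts to gravitational PE
print(v_after, h_max)           # about 0.26 m, less than the 0.40 m radius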
FAQ: Inelastic collision followed by circular motion
1. What is an inelastic collision?
An inelastic collision is a type of collision in which the kinetic energy of the system is not conserved. This means that the total kinetic energy of the objects before and after the collision is not
the same. In an inelastic collision, some of the kinetic energy is converted into other forms of energy, such as heat or sound.
2. How does an inelastic collision affect circular motion?
An inelastic collision can affect circular motion by changing the velocity and direction of the object in motion. In circular motion, the object moves in a circular path at a constant speed. However,
if an inelastic collision occurs, the object's velocity and direction may change, causing it to deviate from its circular path.
3. What factors influence the outcome of an inelastic collision?
The outcome of an inelastic collision is influenced by factors such as the masses, velocities, and angles of the objects involved. The type of materials and the forces acting on the objects can also
affect the outcome of the collision.
4. How is momentum conserved in an inelastic collision?
In an inelastic collision, momentum is conserved even though kinetic energy is not. This means that the total momentum of the system before and after the collision remains the same. This can be seen
in the change in velocities and directions of the objects involved in the collision.
5. Can an inelastic collision followed by circular motion result in a perfectly circular path?
No, an inelastic collision followed by circular motion cannot result in a perfectly circular path. This is because inelastic collisions involve some loss of kinetic energy, which affects the object's
velocity and direction. The object may still move in a circular path, but it will not be a perfect circle due to the change in velocity and direction caused by the collision.
|
{"url":"https://www.physicsforums.com/threads/inelastic-collision-followed-by-circular-motion.1047289/","timestamp":"2024-11-14T01:17:02Z","content_type":"text/html","content_length":"86186","record_id":"<urn:uuid:4aae28ea-a50f-43f3-80b4-7bb258b1752b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00745.warc.gz"}
|
CQF: Magma package for quadratic forms theory
Przemysław Koprowski
CQF: Magma package for quadratic forms theory
Work in progress: Anything that comes for free comes with no warranty.
Current version (last update August 9, 2024):
CQF Magma package CQF manual
CQF is released under standard
MIT license
A test suite to benchmark the speed of the command is available; it works only with version 0.5.9 of CQF.
> _ := PolynomialRing(Rationals());
> K := NumberField(x^4 - 5*x^2 + 25);
> time C := SolveSumOfSquares(theta + 1); C;
Time: 0.290
1/2*(theta + 2),
1/4*(-theta^2 + 5)
> &+ [ c^2 : c in C ];
theta + 1
What is new?
Version 0.5.9:
• New algorithm for constructing isotropic vectors. At present implemented over number field. Global function field will come next.
Version 0.5:
• CQF can now costruct isotropic vector (see function IsotropicVector).
• More new functions: RandomDQF, Subform, Evaluate.
Version 0.4.1:
Fixed a bug in that in some cases returned false positives for (local) isotropy of a form over a number field.
Version 0.4:
Added a new function SolveSumOfSquares that decomposes any totally positive element into a sum of squares of minimal length.
Version 0.3:
• Completely new function AnisotropicPart that computes an anisotropic part of a given quadratic form.
• More new functions: Find, OrderingSeparation, Coefficients, RundomShuffle, AreSimilar, RealRootBound, RealRootIntervals.
• The internal folder structure and file naming convention has been changed.
• Some bugs have been removed.
Version 0.2.2:
• Corrected bug in AnisotropicDimension of quadratic forms over rational function fields over finite fields.
Version 0.2.1:
• Corrected bug that returned wrong Pythagoras element for some formally real number fields of even degree over QQ.
Version 0.2:
• One can create the Witt ring as a Magma's object and perform basic operations on elements of Witt rings.
• As a consequence of the previous point, one can test Witt equivalence for any combination of fields supported by CQF. For example, CQF will recognize that the field of complex numbers is Witt
equivalent to the finite field of two elements.
• Relevant primes are now cached for every diagonal quadratic form over a global field. This speeds up repeated computations on the same form.
Past versions:
|
{"url":"http://www.pkoprowski.eu/cqf/","timestamp":"2024-11-08T11:36:50Z","content_type":"application/xhtml+xml","content_length":"4174","record_id":"<urn:uuid:1b76a679-1932-4796-9c51-3ccaa5cd3085>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00687.warc.gz"}
|
The Perimeter of a Parallelogram: Understanding and Calculating - Advantage Biz Marketing
The Perimeter of a Parallelogram: Understanding and Calculating
A parallelogram is a four-sided polygon with opposite sides that are parallel and equal in length. Understanding the perimeter of a parallelogram is essential in various fields, including
mathematics, engineering, and architecture. In this article, we will explore the concept of the perimeter of a parallelogram, its formula, and how to calculate it. We will also provide real-life
examples and practical applications to help you grasp the importance of this mathematical concept.
What is the Perimeter of a Parallelogram?
The perimeter of a parallelogram refers to the total length of its boundary. It is the sum of all the sides of the parallelogram. Since a parallelogram has two pairs of parallel sides, the opposite
sides are equal in length. Therefore, to calculate the perimeter, we can simply add the lengths of all four sides.
Formula for Calculating the Perimeter of a Parallelogram
The formula for calculating the perimeter of a parallelogram is:
Perimeter = 2 × (Length + Width)
Here, the length refers to the longer side of the parallelogram, while the width refers to the shorter side. Since opposite sides of a parallelogram are equal in length, we can use either pair of
opposite sides to calculate the perimeter.
Example Calculation
Let’s consider an example to understand how to calculate the perimeter of a parallelogram. Suppose we have a parallelogram with a length of 8 units and a width of 5 units. Using the formula mentioned
above, we can calculate the perimeter as follows:
Perimeter = 2 × (8 + 5) = 2 × 13 = 26 units
Therefore, the perimeter of the given parallelogram is 26 units.
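The same calculation is a one-liner in code; here is a small Python sketch of the formula above:

def parallelogram_perimeter(length, width):
    return 2 * (length + width)

print(parallelogram_perimeter(8, 5))  # 26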
Real-Life Applications
The concept of the perimeter of a parallelogram finds practical applications in various fields. Here are a few examples:
1. Architecture and Construction
In architecture and construction, the perimeter of a parallelogram is crucial for determining the amount of material required to build structures. By calculating the perimeter, architects and
engineers can estimate the quantity of materials such as bricks, tiles, or fencing needed for a given area.
2. Land Surveying
Land surveyors often use the concept of the perimeter of a parallelogram to measure and map out land boundaries. By calculating the perimeter, they can accurately determine the length of the boundary
lines and mark the corners of the land.
3. Carpentry and Woodworking
Carpenters and woodworkers frequently use the perimeter of a parallelogram to calculate the length of wooden boards or moldings required for a project. By accurately measuring the perimeter, they can
minimize waste and ensure they have enough material for the job.
Frequently Asked Questions (FAQs)
Q1: Can the perimeter of a parallelogram be negative?
No, the perimeter of a parallelogram cannot be negative. The perimeter represents the total length of the boundary, which is always a positive value.
Q2: Can the perimeter of a parallelogram be zero?
No, the perimeter of a parallelogram cannot be zero. A parallelogram, by definition, has four sides, and the sum of these sides will always be greater than zero.
Q3: Can the perimeter of a parallelogram be infinite?
No, the perimeter of a parallelogram cannot be infinite. A parallelogram is a finite shape with a finite perimeter.
Q4: Can the perimeter of a parallelogram be equal to its area?
No, the perimeter of a parallelogram and its area are two different measurements. The perimeter represents the length of the boundary, while the area represents the space enclosed by the parallelogram.
Q5: Can the perimeter of a parallelogram be greater than its area?
Yes, it is possible for the perimeter of a parallelogram to be greater than its area. The perimeter depends on the lengths of the sides, while the area depends on the base and height of the
parallelogram. In certain cases, the perimeter may be larger than the area.
The perimeter of a parallelogram is the total length of its boundary. It can be calculated by adding the lengths of all four sides. The formula for calculating the perimeter is 2 × (Length + Width).
The concept of the perimeter of a parallelogram has practical applications in various fields, including architecture, construction, land surveying, and carpentry. By understanding and calculating the
perimeter, professionals in these fields can make accurate measurements and estimates for their projects.
Remember, the perimeter of a parallelogram cannot be negative or zero, and it is different from the area of the parallelogram. By mastering the concept of the perimeter, you can enhance your
mathematical skills and apply them to real-life situations.
|
{"url":"https://advantagebizmarketing.com/perimeter-of-a-parallelogram/","timestamp":"2024-11-07T18:25:12Z","content_type":"text/html","content_length":"70728","record_id":"<urn:uuid:061d8903-8add-4364-bc98-5944472fe8df>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00260.warc.gz"}
|
TSPSG: TSP Solver and Generator
Copyright © 2007-2010 Lёppa <contacts[at]oleksii[dot]name>
Homepage: tspsg.info
TSPSG is intended to generate and solve Travelling Salesman Problem (TSP) tasks. It uses Branch and Bound method for solving. Its input is a number of cities and a matrix of city-to-city travel
costs. The matrix can be populated with random values in a given range (which is useful for generating tasks). The result is an optimal route, its price, step-by-step matrices of solving and a
solving graph. The task can be saved in an internal binary format and opened later. The result can be printed or saved as PDF, HTML, or ODF.
TSPSG may be useful for teachers to generate test tasks or just for regular users to solve TSPs. Also, it may be used as an example of using Branch and Bound method to solve a particular task.
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
{"url":"https://tspsg.com/docs/0.1.4/html/","timestamp":"2024-11-04T03:48:26Z","content_type":"application/xhtml+xml","content_length":"9733","record_id":"<urn:uuid:474a9e85-31af-4ef9-a3c0-f4164b8a8dff>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00373.warc.gz"}
|
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I would like to thank the creator for preparing such a tremendous piece of software. It has made algebra simple by providing expert assistance with fractions and equations.
Melissa Jordan, WA
If anybody needs algebra help, I highly recommend 'Algebrator'. My son had used it and he has shown tremendous improvement in this subject.
Jeff Ply, CO
As a mother of a son with a learning disability, I was astounded to see his progress with your software. Hes struggled for years with algebra but the step-by-step instructions made it easy for him to
understand. Hes doing much better now.
Dana White, IL
Search phrases used on 2013-11-23:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• saxon algebra 2 answers problem set 78
• Fraction percent and decimal worksheet
• 6th derivative calculator
• get free answers on 7th grade math homework right now
• TI-84 emulator
• real life example of a "Cubic Function"
• glencoe pre algebra answers
• algebra equations using variables-grade 9
• online ks3 maths papers
• free dividing polynomials long division solver
• elementary algebra ppt
• the hardest math problems in the world
• exponents and square roots
• free 8th grade printable worksheets
• fractions for third grade worksheets
• maths worksheets on rotation
• Radical number quiz
• radicals symplifying solver
• how do you write expressions for triangles?
• what is the value of pie in math
• algebra homework textbook help
• free ebooks on aptitude questions
• transformation tutorial numeracy
• how to find the square root of fractions
• solving 1 variable equations worksheets
• prentice hall mathematics algebra 1 chapter 10
• simplifying exponents worksheet
• gcse card tricks probability
• suare root calc
• changing mixed numbers to decimal
• printable algebra cards
• like terms worksheet
• Inequalities Algebra Solver
• dividing polynomials online calculator
• school activities for greatest common factor with three numbers
• T1-84 online calculator
• factorial permutation vba
• online maths test year 7
• How do you find a quadratic equation if you are only given the solution?
• ti-84 complex numbers program
• 4th grade lesson plan on ordering pairs on coordinate plane
• how to do system of equations by the substitution method with no key
• find zeroes from the vertex form
• regents ninth grade algebra sample tests
• binomial equations
• example of math trivia and answers
• math rules cheat sheet
• 5th grade online mathbook
• free kumon worksheets online
• boolean algebra test
• prentice hall math books free online
• trivia about geometry math
• fun with fractions.com
• binomial expansion + quadratic
• extracting a square root
• combinations and permutations quiz
• printable homework sheets for 3rd grade teachers
• calculator for integration by parts
• factor grouping calculator
• algebraic inequalities fifth grade
• Printable Worksheets for 3 graders with answer sheet
• Adding, Subtracting, multiply, and dividing integers online
• How do quadratic equations help solve real life problems?
• math 7 distributive property equations worksheet
• simplify radical worksheet
• simplifying difference of two squares under square root
• best algebra textbook
• holt math powerpoint
• Quiz Review for Moving Straight Ahead Investigation 2 7th grade
• like term equations
• balancing equations solver
• ti 89 log 10
• maximum y-intercept of a hyperbola
• Complete Factoring Calculator
• free download for algebrator
• common denominator calculator
• free online scientific calculator with fractions
• higher order polynomials word problems
• ti 84 quadratic formula solver
• fun game learning rules of exponents free
• fun math problems for algebra 2
• free aptitude test papers
• third root calculator
• percentage formulas
• algebraic fractions equations worksheet
• printable worksheets on rate, ratio
|
{"url":"https://softmath.com/math-book-answers/sum-of-cubes/subtracting-radical.html","timestamp":"2024-11-07T19:59:58Z","content_type":"text/html","content_length":"35013","record_id":"<urn:uuid:4a8e8e77-fb35-41c6-82f1-de11a43c9c6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00140.warc.gz"}
|
HSC English 1st Paper Question Solution 2024 - (১০০% নির্ভুল)
HSC English 1st Paper Question Solution 2024 – All PDF
HSC English 1st Paper Question Solution 2024 is now available on our website. Those of you who are looking for HSC English First Paper Question Solution 2024. They can now see the solution of the
question through our website. Because after completing the exam, we are experienced teachers who prepare the solutions to the questions. The solutions of the questions that we are going to give you
are completely accurate and correct. So you must see the solution of the question from here. HSC English first paper exam was held today all over Bangladesh. A total of 21 lakh candidates
participated in the SSC examination. And the most difficult subject for these 21 lakh candidates is English. So after completing the exam all the candidates will need to solve this question.
HSC English 1st Paper Question Solution 2024
Have you participated in the HSC exam of 2024? And after completing your English test today, you are solving the questions. Then today’s post will be very important for you. Because through today’s
post, I have revealed to you the solutions of HSC English first paper. Many candidates do not know where and how to see the solution of HSC exam questions. So for them today after the end of the
exam, the experienced teachers have prepared the answers to the questions. And we are delivering the answers to you through this BD web result website. If you want to solve English first paper then
read this post carefully.
HSC English 1st Paper Question Solved
HSC English First Paper Question Answers are now available on our website. Today 4th of July HSC English first paper exam was held all over Bangladesh. About 21 lakh candidates of Bangladesh
participated in this exam. Candidates participating in English first paper exam are worried a lot. From where they see the solutions of the questions after completing the test. Accordingly, we are
now publishing the solutions to the questions through our website. Because we publish the solutions of the questions by the experienced teachers after the end of the exam. Many people think that the
solutions of the questions that we publish may be wrong. But I inform you that the solutions of the questions that we publish are completely accurate and correct. If you want, you can check the
answers with the question paper.
Dhaka Board HSC English 1st Paper Question Solved
Dear friends, have you participated in HSC exam from Dhaka board. So you must have participated in HSC English first exam today. I would like to inform all the students participating in the exam that
you will get the solution of the question from here after completing the exam. Because after the end of your exam, I have collected the question papers from Dhaka Board and are publishing the answers
to the questions. English is a very difficult subject for all students. So every candidate told us their questions need to be solved. I have many students told us that this year’s English first paper
has been made very difficult. So they worry whether the answers to the questions they have written are correct. Below is the solution of the question for you.
Last Word
Through today’s post, I have revealed to you the HSC English first paper question solutions of all boards. Those who were looking for the solution of the question through online will now get the
solution of the question from us here. Many candidates think that the solutions of the questions that I publish may be wrong. I inform you that the solutions of the questions that I am revealing to
you are completely correct and reliable. I also write educational posts on the website about the solution of such questions. So I hope you will find any educational information you need on my
website. Thank you very much for reading today’s post carefully.
|
{"url":"https://bdwebresult.com/hsc-english-1st-paper-question-solution/","timestamp":"2024-11-11T00:43:44Z","content_type":"text/html","content_length":"147462","record_id":"<urn:uuid:513d565d-8ec4-4de6-b5a2-d14040943676>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00194.warc.gz"}
|
How to group gene dataset and run t.test for each row for every gene in R?
Hi all,
I have the following gene dataset:
I would like to group the samples into the following please:
Malignant cell samples are: AOCS1, G33, G164
Fibroblasts are: G342, G351, G369
After I grouped them into the two sample categories, Malignant and Fibroblast, I would like to do the test test for each row of the genes,
For example,
I am using R studio
I am new to this kind of analysis so any help will be greatly appreciated.
Many Thanks,
If you are getting this data from raw RNA-Seq data your best bet is to use a well established method like DESeq2 or edgeR or limma.
Even if you do not have the raw data it is a much better solution to use limma via its trend functionality! Those values you post are most likely not normally distributed, so you should NOT use a plain t-test on them.
Thanks very much for your help Shawn, very much appreciated for the clear explanation
Hi, I applied this line pValues <- apply(df, 1, function(x) t.test(x[2:4],x[5:7])$p.value)
But I got the following error Error in if (stderr < 10 * .Machine$double.eps * max(abs(mx), abs(my))) stop("data are essentially constant") : missing value where TRUE/FALSE needed
Can you shed some light on this? Many Thanks Chris
Here is my my data https://drive.google.com/open?id=1LiJD7T6oR5MtABwYqkhUrJFfo7XRxJ_z
Hi Chris, can you try the following please?
pValues <- apply(df, 1, function(x) t.test(x[1:3],x[4:6])$p.value)
I think you are referring to the wrong column numbers. Your data has 6 columns; columns 1-3 are group 1 and columns 4-6 are group 2.
|
{"url":"https://www.biostars.org/p/380884/","timestamp":"2024-11-08T19:33:22Z","content_type":"text/html","content_length":"31839","record_id":"<urn:uuid:35bb8622-21e1-4a46-9308-8ee850157576>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00210.warc.gz"}
|
Writing functions in bc | Computing Apéry’s constant
Functions in bc
The previous post discussed how you would plan an attempt to set a record in computing ζ(3), also known as Apéry’s constant. Specifically that post looked at how to choose your algorithm and how to
anticipate the number of terms to use.
Now suppose you wanted to actually do the calculation. This post will carry out a calculation using the Unix utility bc. I often use bc for extended precision calculation, but mostly for simple
things [1].
Calculating Apéry’s constant will require a function to compute binomial coefficients, something not built into bc, and so this post will illustrate how to write custom functions in bc.
(If you wanted to make a serious attempt at setting a record computing Apéry’s constant, or anything else, you would probably use an extended precision library written in C such as MPFR, or write something even lower level; you would not use bc.)
Apéry’s series
The series we want to evaluate is
To compute this we need to compute the binomial coefficients in the denominator, and to do that we need to compute factorials.
From calculations in the previous post we estimate that summing n terms of this series will give us 2n bits of precision, or 2n/3 decimal places. So if we carry out our calculations to n decimal
places, that gives us more precision than we need: truncation error is much greater than numerical error.
bc code
Here’s the code, which I saved in the file zeta3.b. I went for simplicity over efficiency. See [2] for a way to make the code much more efficient.
# inefficient but simple factorial
define fact(x) {
    if (x <= 1)
        return (1);
    return (x*fact(x-1));
}

# binomial coefficient n choose k
define binom(n, k) {
    return (fact(n) / (fact(k)*fact(n-k)));
}

define term(n) {
    return ((-1)^(n-1)/(n^3 * binom(2*n, n)))
}

define zeta3(n) {
    scale = n
    sum = 0
    for (i = 1; i <= n; ++i)
        sum += term(i);
    return (2.5*sum);
}
Now say we want 100 decimal places of ζ(3). Then we should need to sum about 150 terms of the series above. Let’s sum 160 terms just to be safe. I run the code above as follows [3].
$ bc -lq zeta3.b
zeta3(160)
This returns
How well did we do?
I tested this by computing ζ(3) to 120 decimals in Mathematica with
N[RiemannZeta[3], 120]
and subtracting the value returned by bc. This shows that the error in our calculation above is approximately 10^−102. We wanted at least 100 decimal places of precision and we got 102.
[1] I like bc because it’s simple. It’s a little too simple, but given that almost all software errs on the side of being too complicated, I’m OK with bc being a little too simple. See this post
where I used (coined?) the phrase controversially simple.
[2] Not only is the recursive implementation of factorial inefficient, computing factorials from scratch each time, even by a more efficient algorithm, is not optimal. The more efficient thing to do
is compute each new coefficient by starting with the previous one. For example, once we’ve already computed the binomial coefficient (200, 100), then we can multiply by 202*201/101² in order to get
the binomial coefficient (202, 101).
Along the same lines, computing (−1)^(n−1) is wasteful. When bootstrapping each binomial coefficient from the previous one, multiply by −1 as well.
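As a rough sketch of that bootstrapping idea (my own code, not from the post; the name zeta3_fast and the variable names are made up), carrying both the binomial coefficient and the sign from one iteration to the next:

define zeta3_fast(n) {
    auto s, b, sgn, i
    scale = n
    s = 0
    b = 2            # binom(2, 1)
    sgn = 1
    for (i = 1; i <= n; ++i) {
        s += sgn / (i^3 * b)
        # binom(2i+2, i+1) = binom(2i, i) * (2i+1)*(2i+2) / (i+1)^2, an exact division
        b = b * (2*i + 1) * (2*i + 2) / ((i + 1)^2)
        sgn = -sgn
    }
    return (2.5*s)
}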
[3] Why the options -lq? The -l option does two things: it loads the math library and it sets the precision variable scale to 20. I always use the -l flag because the default scale is zero, which
lops off the decimal part of all floating point numbers. Strange default behavior! I also often need the math library. Turns out -l wasn’t needed here because we explicitly set scale in the function
zeta3, and we don’t use the math library.
I also use the -q flag out of habit. It starts bc in quiet mode, suppressing the version and copyright announcement.
One thought on “Functions in bc”
1. Another option is to use Python’s “decimal” module. The translation is straight-forward, with the main new complexity being “with localcontext() as cxt:” / “cxt.prec = n” instead of “scale = n”,
and using a Decimal(“2.5”) and Decimal(i).
Though it looks like Python’s “prec” is more like bc’s “length” than “scale”, so make that “cxt.prec = n+1”.
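For what it's worth, here is a sketch of that translation (my own code, not the commenter's), using math.comb for the binomial coefficient and prec = n+1 as the comment suggests:

    from decimal import Decimal, localcontext
    from math import comb

    def zeta3(n):
        with localcontext() as ctx:
            ctx.prec = n + 1                 # significant digits, roughly bc's scale = n
            total = Decimal(0)
            for i in range(1, n + 1):
                sign = 1 if i % 2 == 1 else -1
                total += Decimal(sign) / (Decimal(i) ** 3 * comb(2 * i, i))
            return Decimal("2.5") * total

    print(zeta3(160))   # roughly 100 correct decimal places, as in the bc run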
|
{"url":"https://www.johndcook.com/blog/2021/09/09/functions-in-bc/","timestamp":"2024-11-05T17:23:45Z","content_type":"text/html","content_length":"54405","record_id":"<urn:uuid:42b92c81-8551-4dd4-9b32-661ae6af1224>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00511.warc.gz"}
|
How should I troubleshoot
1989 coupe.
I had a 'new' radio and four new speakers installed a month ago. Since last week, the passenger door speaker gives out about 25% less volume than the other front speaker or when all four are balanced
50/50. Last Sunday, the passenger rear speaker, when turning on the radio, will give a 1/2 second of what sounds like a needle skipping over a record (a scratch). I do not have another radio to
replace it with but was wondering where to troubleshoot first. The radio (maybe a bad connection directed towards the passenger side) or the speakers? Anyone have/had a similar problem?
Guest Mc_Reatta
Suggest you get your hearing checked first. :cool:
Sounds like bad connection. Pick your poison, get to the radio module, or start inside the door.
No matter where you start, it will be in the other location. :mad:
Edited by Mc_Reatta (see edit history)
Just a thought, speakers have to be wired "in phase", kind of like polarity.....John
Guest Corvanti
i'd start with speaker wiring problems. sounds like it may be a dirty/loose electrical connection on the right side. did you go with original speakers or others that need a "Metro" or similar
Guest crazytrain2
I know that with home stereo equipment you can have similar problems. One potential solution is to pull the volume, balance, fader and tuning knobs and clean the between the adjustment legs. Vacuum
or a pressurized can of air spray like that which you use on a keyboard is recommended. Just a thought.
Thanks all. I think I will start with the wiring (its a simple place to start and maybe a connection is loose).
I can't pull off the knobs to clean them because the CRT won't let me.
Ervin, did they snip off the factory connectors at the speaker ends and solder on new terminals? If so, I'd check those first. (Is why I always do my own stereo work, and like to use proper reverse
connectors to plug into stock harnesses. Most car stereo places won't spend the extra $5 for them.)
Also check to see if a wire got accidentally pinched.
Guest Reatta Bob
Sounds like you had someone install it for you. If you did I would go back and see them.
|
{"url":"https://forums.aaca.org/topic/217250-how-should-i-troubleshoot/","timestamp":"2024-11-04T01:39:36Z","content_type":"text/html","content_length":"130524","record_id":"<urn:uuid:0c20befb-47b6-4ec0-8ab8-792860f38e77>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00851.warc.gz"}
|
Elementary Statistics Sixth Edition Bluman Pdf Download
FREE BOOK Elementary Statistics Sixth Edition Bluman PDF Book is the book you are looking for, by download PDF Elementary Statistics Sixth Edition Bluman book you are also motivated to search from
other sources
Elementary Statistics 8th Edition BlumanStep By Step Approach (Eighth Edition ©2012) And Elementary Statistics: A Brief Version (Fifth Edition ©2010), Al Is A Co-author On A Liberal Arts ...
Elementary Statistics 8th Edition Solutions Are Available For This Textbook. Page 5/9. Read Free Elementary Statistics 8th Edition Bluman 1th, 2024Answers To Elementary Statistics 8th Edition
BlumanApril 28th, 2018 - Access Elementary Statistics 8th Edition Solutions Now Our Solutions Are Written By Chegg Experts So You Can Be Assured Of The Highest Quality''elementary Statistics Bluman
8th Edition International ... 'elementary Statistics 8th Eighth Edition Bybluman May 2nd, 2018 - Buy Elementary Statistics 8th Eighth Edition Bybluman On ... 2th, 2024Elementary Statistics Bluman 9th
Edition - EduGeneralIn Addition To Elementary Statistics: A Step By Step Approach (Eighth Edition ©2012) And Elementary Statistics: A Brief Version (Fifth Edition ©2010), Al Is A Co-author On A
Liberal Arts Mathematics Text Published By McGraw-Hill, Math In Our World (2nd Edition ©2011). 4th, 2024.
Elementary Statistics Bluman 8th EditionEighth Edition Elementary Statistics Bluman Mc Graw Hill ... In Addition To Elementary Statistics: A Step By Step Approach (Eighth Edition ©2012) And ...
McGraw-Hill Companies, The. Elementary Statistics 8th Edition Solutions Are Available For This Textbook. Need More Help With Elementary Statistics ASAP? Elementary Statistics: A Brief Version 4th,
2024Elementary Statistics Bluman 5th EditionElementary Statistics Bluman 5th Edition Solutions Manual ... Student Solution's Manual For Elementary Statistics: ... (Eighth Edition ©2012) And
Elementary Statistics: A Brief Version (Fifth Edition ©2010), Al Is A Co-author On A Liberal Arts Mathematics Text Published By McGraw-Hill, Math In Our World (2nd Edition ©2011). Al Also The ...
2th, 2024Elementary Statistics Bluman 6th Edition Solutions ManualMacroeconomics For Today-Irvin B. Tucker 2016-01-01 A Unique Textual And Visual Learning System, Colorful Graphs, And Causation
Chains Clarify Concepts. The Book Presents And Reinforces Core Concepts, Then Give 2th, 2024.
Elementary Statistics 7th Edition Bluman PdfElementary Statistics-Neil A. Weiss 2012 Understanding Child Development-Rosalind Charlesworth 2016-01-01 UNDERSTANDING CHILD DEVELOPMENT, 10th Edition,
Introduces Pre-service And Inservice Teachers To The Unique Qualities Of Young Children From Infants To A 3th, 2024Elementary Statistics Bluman 8th Edition SolutionsElementary Statistics-Neil A.
Weiss 2011-11-21 This Is The EBook Of The Printed Book And May Not Include Any Media, Website Access Codes, Or Print Supplements That May Come Packaged With The Bound Book. Weiss’s Elementary
Statistics, Eighth Edition Is The Ideal Textbook For Introductory 4th, 2024Elementary Statistics 8th Edition Bluman Answer Keys ...Elementary Statistics-Neil A. Weiss 2011-11-21 This Is The EBook Of
The Printed Book And May Not Include Any Media, Website Access Codes, Or Print Supplements That May Come Packaged With The Bound Book. Weiss’s Elementary Statistics 2th, 2024.
Elementary Statistics Bluman 8th Edition Manuals Solutions…Elementary Statistics, MyLab Revision Plus MyLab Statistics (9th Edition) By Neil A. Weiss Test Bank And Solutions Manual. Elementary
Statistics: Picturing The World (7th Edition) By Ron Larson And Betsy Farber Test Bank And Solutions Manual . Elements Of The Nature And Properties Of 2th, 2024Bluman Elementary Statistics 8th
Edition Bionominal ...Elementary Statistics - Mario F. Triola - 1998-01-01 Elementary Statistics - Mario F. Triola - 1998-01-01 Elementary Statistics: A Step By Step Approach With Data CD And Formula
Card - Allan Bluman - 2011-01-06 ELEMENTARY STATISTICS: A STEP BY STEP APPROACH Is For Introductory Stati 2th, 2024Elementary Statistics 9th Edition Bluman Solution
ManualElementary-statistics-9th-edition-bluman-solution-manual 3/3 Downloaded From Live.regisjesuit.com On October 18, 2021 By Guest Revision Plus MyLab Statistics (9th Edition) By Neil A. Weiss Test
Bank And Solutions Manual. Elementary Statistics: Picturing The World (7th Edition) By R 4th, 2024.
Elementary Statistics Bluman 10th Edition SolutionsAnnual Interest Rate, To The Nearest Tenth Of A Percent, ... Teachers Edition Elementary And Intermediate Algebra Third Edition Dugopolski; Math For
.... Jul 4, 2021 — Edition Bluman Solution ManualBluman Statistics - Travelusandcanada.orgElementary 2th, 2024Elementary Statistics By Bluman 7th EditionYeah, Reviewing A Books Elementary Statistics
By Bluman 7th Edition Could Grow Your Near Connections Listings. This Is Just One Of The Solutions For You To Be Successful. ... Handbook Of Univariate And Multivariate Data Analysis With IBM
SPSS-Robert Ho 2013-10-25 Using The Same Accessible, H 4th, 2024Elementary Statistics Bluman 8th Edition Solutions PdfThe 8th Edition Of Bluman Provides A Significant Leap Forward In Terms Of Online
Course Management With McGraw-Hill's New Homework Platform, Connect Statistics - Hosted By ALEKS. Statistic Instructors Served As Digital Contributors To Choose The Prob 4th, 2024.
Elementary Statistics Bluman Problems Solutions ManualSolutions Manual For Use With Books By Allan G. Bluman. Elementary Statistics: A Step By Step Approach, 8th Edition ... Instructor Solution
Manual: Elementary ... 2007 New Titles BLUMAN Elementary Statistics: World Behaviors And Apply Mathematical Concepts To The Solution Of Elementary Statistics, 3th,
2024Elementary-statistics-student-solution-manual-bluman-5th 1 ...Elementary Statistics Student Solution Manual Bluman 5th Student's Solutions Manual To Accompany Elementary Statistics Using The
Graphing Calculator For The TI-83/84 Plus-Mario F. Triola 2004-04 Student Solutions Manual To Accompany Elementary Statistics-Allan G. Bluman 2014 2th, 2024Math 227 – Elementary Statistics: A Brief
Version, 5/e BlumanE. Classification Of Children In A Day-care Center (infant, Toddler, Preschool). Qualitative . F. Weights Of Fish Caught In Lake George. Quantitative . G. Marital Status Of Faculty
Members In A Large University. Qualitative . 9. Classify Each Variable As Discrete Or Continuous. A. Number Of Doughnuts Sold Each Day By Doughnut Heaven. Discrete . B. 3th, 2024.
Bluman Elementary Statistics PowerpointAnd Data Analysis" Of Devore 4th Edition. Chapter And PowerPoint Focus On Central Trend Measures Including, Media, Mode, Variability, Standard Deviation,
Z-score, ChebyshPage 13This Bingo Game Comes In The Format Of A Power Point. It Covers Basic Statistics, Including Median, Mode, Min, Max, Range 1th, 2024Elementary Statistics Bluman Solution
ManualSolution Elementary 3rd Edition Pdf.pdf - Free Download Solution Elementary 3rd Edition Pdf Third Edition Solution Elementary Workbook Key Elementary Statistics 13th Edition Triola Solution
Elementary Linear Algebra 11th Edition Solution Elementary Statistics Bluman 10th Edition Solution Solution Pdf Elementary Linear Algebra, 11th 2th, 2024Elementary Probability And Statistics Sixth
EditionFree 6th Grade Math Worksheets Given Random Variables,, …, That Are Defined On A Probability Space, The Joint Probability Distribution For ,, … Is A Probability Distribution That Gives The
Probability That Each Of ,, … Falls In Any Part 4th, 2024.
Elementary Statistics - Picturing The World Mp Elementary ...Elementary Statistics Elementary Statistics Is A Lively, Contemporary And Engaging Text With More Real Data Sets, Contemporary
Applications, And Real-world Examples Than Any Other Introductory Statistics Text. Student's Solutions Manual - Elementary Statistics, Seventh Edition Elementary Statistics - 2th, 2024Elementary
Statistics - Picturing The World Elementary ...Elementary Statistics - Picturing The World Every Aspect Of Elementary Statistics Has Been Carefully Crafted To Help Readers Learn Statistics. The Third
Edition Features Many Updates And Revisions That Place Increased Emphasis On Interpretation Of Results And Critical Thinki 1th, 2024Elementary Statistics Books A La Carte Edition 9th
EditionElementary Statistics Using Excel- 2015 Elementary Statistics-Neil A. Weiss 2012 Elementary Statistics-Ron Larson 2014-01-14 This Edition Features The Same Content As The Traditional Text In A
Convenient, Three-hole-punched, Loose-leaf Version. Books A La Carte Also Offer A Great Value– 2th, 2024.
Business Statistics A First Course Sixth EditionBusiness Statistics: A First Course, 3rd Edition, By Sharpe, De Veaux, And Velleman, Narrows The Gap Between Theory And Practice — Relevant Page 4/26.
Download File PDF Business Statistics A First Course Sixth Editionstatistical Methods Empower Business Students To Make Effective, Data-informed Decisions. Business Statistics: A First Course | 7th
Edition | Pearson Business Statistics, A ... 4th, 2024
|
{"url":"https://forms.asm.apeejay.edu/books/elementary-statistics-sixth-edition-bluman.html","timestamp":"2024-11-12T16:57:32Z","content_type":"text/html","content_length":"24109","record_id":"<urn:uuid:06beb13f-f310-4910-83f7-fb388dd24062>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00616.warc.gz"}
|
Factor a Trinomial with Leading Coefficient = 1
Learning Outcomes
• Factor a trinomial with leading coefficient [latex]= 1[/latex]
Trinomials are polynomials with three terms. We are going to show you a method for factoring a trinomial whose leading coefficient is [latex]1[/latex]. Although we should always begin by looking for
a GCF, pulling out the GCF is not the only way that trinomials can be factored. The trinomial [latex]{x}^{2}+5x+6[/latex] has a GCF of [latex]1[/latex], but it can be written as the product of the
factors [latex]\left(x+2\right)[/latex] and [latex]\left(x+3\right)[/latex].
Recall how to use the distributive property to multiply two binomials:
[latex]\left(x+2\right)\left(x+3\right) = x^2+3x+2x+6=x^2+5x+6[/latex]
We can reverse the distributive property and return [latex]x^2+5x+6\text{ to }\left(x+2\right)\left(x+3\right) [/latex] by finding two numbers with a product of [latex]6[/latex] and a sum of [latex]5[/latex].
Factoring a Trinomial with Leading Coefficient 1
In general, for a trinomial of the form [latex]{x}^{2}+bx+c[/latex], you can factor a trinomial with leading coefficient [latex]1[/latex] by finding two numbers, [latex]p[/latex] and [latex]q[/latex]
whose product is c and whose sum is b.
Let us put this idea to practice with the following example.
Factor [latex]{x}^{2}+2x - 15[/latex].
Show Solution
In the following video, we present two more examples of factoring a trinomial with a leading coefficient of 1.
To summarize our process, consider the following steps:
How To: Given a trinomial in the form [latex]{x}^{2}+bx+c[/latex], factor it
1. List factors of [latex]c[/latex].
2. Find [latex]p[/latex] and [latex]q[/latex], a pair of factors of [latex]c[/latex] with a sum of [latex]b[/latex].
3. Write the factored expression [latex]\left(x+p\right)\left(x+q\right)[/latex].
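To see these steps in action, apply them to the trinomial [latex]{x}^{2}+2x - 15[/latex] from the example above:

1. The factor pairs of [latex]-15[/latex] are [latex]1, -15[/latex]; [latex]-1, 15[/latex]; [latex]3, -5[/latex]; and [latex]-3, 5[/latex].
2. The pair with a sum of [latex]2[/latex] is [latex]p=5[/latex] and [latex]q=-3[/latex].
3. So [latex]{x}^{2}+2x - 15=\left(x+5\right)\left(x-3\right)[/latex].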
We will now show an example where the trinomial has a negative c term. Pay attention to the signs of the numbers that are considered for p and q.
In our next example, we show that when c is negative, either p or q will be negative.
Factor [latex]x^{2}+x–12[/latex].
Show Solution
Think About It
Which property of multiplication can be used to describe why [latex]\left(x+4\right)\left(x-3\right) =\left(x-3\right)\left(x+4\right)[/latex]. Use the textbox below to write down your ideas before
you look at the answer.
Show Solution
In our last example, we will show how to factor a trinomial whose b term is negative.
Factor [latex]{x}^{2}-7x+6[/latex].
Show Solution
In the last example, the b term was negative and the c term was positive. This will always mean that if it can be factored, p and q will both be negative.
Think About It
Can every trinomial be factored as a product of binomials?
Mathematicians often use a counterexample to prove or disprove a question. A counterexample means you provide an example where a proposed rule or definition is not true. Can you create a trinomial
with leading coefficient [latex]1[/latex] that cannot be factored as a product of binomials?
Use the textbox below to write your ideas.
Show Solution
|
{"url":"https://courses.lumenlearning.com/intermediatealgebra/chapter/read-factor-a-trinomial-with-leading-coefficient-1/","timestamp":"2024-11-02T05:14:48Z","content_type":"text/html","content_length":"54739","record_id":"<urn:uuid:a05251f8-2dac-47a3-bc53-755e2bb80063>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00222.warc.gz"}
|
Distributed Computing with R
distcomp {distcomp} R Documentation
Distributed Computing with R
distcomp is a collection of methods to fit models to data that may be distributed at various sites. The package arose as a way of addressing the issues regarding data aggregation; by allowing sites
to have control over local data and transmitting only summaries, some privacy controls can be maintained. Even when participants have no objections in principle to data aggregation, it may still be
useful to keep data local and expose just the computations. For further details, please see the reference cited below.
The initial implementation consists of a stratified Cox model fit with distributed survival data and a Singular Value Decomposition of a distributed matrix. General Linear Models will soon be added.
Although some sanity checks and balances are present, many more are needed to make this truly robust. We also hope that other methods will be added by users.
We make the following assumptions in the implementation: (a) the aggregate data is logically a stacking of data at each site, i.e., the full data is row-partitioned into sites where the rows are
observations; (b) Each site has the package distcomp installed and a workspace setup for (writeable) use by the opencpu server (see distcompSetup(); and (c) each site is exposing distcomp via an
opencpu server.
The main computation happens via a master process, a script of R code, that makes calls to distcomp functions at worker sites via opencpu. The use of opencpu allows developers to prototype their
distributed implementations on a local machine using the opencpu package that runs such a server locally using localhost ports.
Note that distcomp computations are not intended for speed/efficiency; indeed, they are orders of magnitude slower. However, the models that are fit are not meant to be recomputed often. These and
other details are discussed in the paper mentioned above.
The current implementation, particularly the Stratified Cox Model, makes direct use of code from survival::coxph(). That is, the underlying Cox model code is derived from that in the R survival package.
For an understanding of how this package is meant to be used, please see the documented examples and the reference.
Software for Distributed Computation on Medical Databases: A Demonstration Project. Journal of Statistical Software, 77(13), 1-22. doi:10.18637/jss.v077.i13
Appendix E of Modeling Survival Data: Extending the Cox Model by Terry M. Therneau and Patricia Grambsch. Springer Verlag, 2000.
See Also
The examples in system.file("doc", "examples.html", package="distcomp")
The source for the examples: system.file("doc_src", "examples.Rmd", package="distcomp").
version 1.3-3
|
{"url":"https://search.r-project.org/CRAN/refmans/distcomp/html/distcomp.html","timestamp":"2024-11-01T22:39:16Z","content_type":"text/html","content_length":"4634","record_id":"<urn:uuid:32cceaaa-8ffe-4198-a63b-ca2685e1e157>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00441.warc.gz"}
|
Version 1.14 of ai12s/ai12-0208-1.txt
!standard A.5.5(0) 18-12-02 AI12-0208-1/06
!standard A.5.6(0)
!standard A.5.7(0)
!standard A.5.8(0)
!class Amendment 16-12-19
!status work item 16-12-19
!status received 16-09-27
!priority Low
!difficulty Medium
!subject Predefined Big numbers support
Define Big Numbers packages to support arbitrary precision mathematics.
Some applications need larger numbers than Standard.Integer. All Ada compilers have this capability in order to implement static expressions; shouldn't some such package be available to Ada users as
well? (Yes.)
(See Summary.)
A.5.5 Big Numbers
Support is provided for integer arithmetic involving values larger than than those supported by the target machine, and for arbitrary-precision rationals.
The package Ada.Numerics.Big_Numbers has the following declaration:
package Ada.Numerics.Big_Numbers
   with Pure, Nonblocking
is
   subtype Field is Integer range 0 .. implementation-defined;
   subtype Number_Base is Integer range 2 .. 16;
end Ada.Numerics.Big_Numbers;
A.5.6 Big Integers
The package Ada.Numerics.Big_Numbers.Big_Integers has the following definition:
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Integers
   with Preelaborate, Nonblocking
is
   type Big_Integer is private with
      Default_Initial_Condition => not Is_Valid (Big_Integer),
      Integer_Literal => From_String,
      Put_Image => Put_Image;

   function Is_Valid (Arg : Big_Integer) return Boolean;

   subtype Valid_Big_Integer is Big_Integer with
      Dynamic_Predicate => Is_Valid (Valid_Big_Integer),
      Predicate_Failure => (raise Constraint_Error);

   function Invalid return Big_Integer with
      Post => not Is_Valid (Invalid'Result);

   function "="  (L, R : Valid_Big_Integer) return Boolean;
   function "<"  (L, R : Valid_Big_Integer) return Boolean;
   function "<=" (L, R : Valid_Big_Integer) return Boolean;
   function ">"  (L, R : Valid_Big_Integer) return Boolean;
   function ">=" (L, R : Valid_Big_Integer) return Boolean;

   function "+" (Arg : Integer) return Valid_Big_Integer;
   function To_Big_Integer (Arg : Integer) return Valid_Big_Integer renames "+";

   subtype Big_Positive is Valid_Big_Integer with
      Dynamic_Predicate => Big_Positive > 0,
      Predicate_Failure => (raise Constraint_Error);

   subtype Big_Natural is Valid_Big_Integer with
      Dynamic_Predicate => Big_Natural >= 0,
      Predicate_Failure => (raise Constraint_Error);

   function In_Range (Arg, Low, High : Valid_Big_Integer) return Boolean is
      ((Low <= Arg) and (Arg <= High));

   function To_Integer (Arg : Valid_Big_Integer) return Integer with
      Pre => In_Range (Arg, Low  => +Integer'First,
                            High => +Integer'Last)
             or else (raise Constraint_Error);

   generic
      type Int is range <>;
   package Signed_Conversions is
      function To_Big_Integer (Arg : Int) return Valid_Big_Integer;
      function From_Big_Integer (Arg : Valid_Big_Integer) return Int with
         Pre => In_Range (Arg, Low  => To_Big_Integer (Int'First),
                               High => To_Big_Integer (Int'Last))
                or else (raise Constraint_Error);
   end Signed_Conversions;

   generic
      type Int is mod <>;
   package Unsigned_Conversions is
      function To_Big_Integer (Arg : Int) return Valid_Big_Integer;
      function From_Big_Integer (Arg : Valid_Big_Integer) return Int with
         Pre => In_Range (Arg, Low  => To_Big_Integer (Int'First),
                               High => To_Big_Integer (Int'Last))
                or else (raise Constraint_Error);
   end Unsigned_Conversions;

   function To_String (Arg : Valid_Big_Integer;
                       Width : Field := 0;
                       Base  : Number_Base := 10) return String with
      Post => To_String'Result'First = 1;
   function From_String (Arg : String; Width : Field := 0) return Valid_Big_Integer;

   procedure Put_Image (Arg : Valid_Big_Integer;
                        Stream : not null access Ada.Streams.Root_Stream_Type'Class);

   function "-"   (L : Valid_Big_Integer) return Valid_Big_Integer;
   function "abs" (L : Valid_Big_Integer) return Valid_Big_Integer;
   function "+"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "-"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "*"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "/"   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "mod" (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "rem" (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function "**"  (L : Valid_Big_Integer; R : Natural) return Valid_Big_Integer;
   function Min   (L, R : Valid_Big_Integer) return Valid_Big_Integer;
   function Max   (L, R : Valid_Big_Integer) return Valid_Big_Integer;

   function Greatest_Common_Divisor (L, R : Valid_Big_Integer) return Big_Positive with
      Pre => (L /= 0 and R /= 0) or else (raise Constraint_Error);

private
   ... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Integers;
To_String and From_String behave analogously to the Put and Get procedures defined in Text_IO.Integer_IO (in particular, with respect to the interpretation of the Width and Base parameters) except
that Constraint_Error, not Data_Error, is propagated in error cases and the result of a call To_String with a Width parameter of 0 and a nonnegative Arg parameter does not include a leading blank.
Put_Image calls To_String (passing in the default values for the Width and Base parameters), prepends a leading blank if the argument is nonnegative, converts that String to a Wide_Wide_String using
To_Wide_Wide_String, and writes the resulting value to the stream using Wide_Wide_String'Write.
The other functions have their usual mathematical meanings.
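As an illustration only (this is not part of the proposed wording, and it assumes some implementation of the spec above exists), a client computing 50! might look like:

with Ada.Text_IO;
with Ada.Numerics.Big_Numbers.Big_Integers;
use  Ada.Numerics.Big_Numbers.Big_Integers;
procedure Factorial_Demo is
   F : Valid_Big_Integer := To_Big_Integer (1);
begin
   for I in 2 .. 50 loop
      F := F * To_Big_Integer (I);   -- "*" and To_Big_Integer from the spec above
   end loop;
   Ada.Text_IO.Put_Line (To_String (F));   -- 50! has 65 decimal digits
end Factorial_Demo;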
Implementation Requirements
No storage associated with a Big_Integer object shall be lost upon assignment or scope exit.
For purposes of determining whether predicate checks are performed as part of default initialization, the type Big_Integer shall be considered to have a subcomponent that has a default_expression.
AARM Note: This means that the elaboration of
Default_Initialized_Object : Valid_Big_Integer;
is required to propagate Assertion_Error.
A.5.7 Bounded Big Integers
An instance of the language-defined generic package Numerics.Big_Numbers.Bounded_Big_Integers provides a Big_Integer type and operations corresponding to those declared in
Numerics.Big_Numbers.Big_Integers, but with the difference that the maximum storage (and, consequently, the set of representable values) is bounded.
The declaration of the generic library package Big_Numbers.Bounded_Big_Integers has the same contents and semantics as Big_Numbers.Big_Integers except:
- Bounded_Big_Integers is a generic package and takes a generic formal:
Capacity : Natural;
- two additional visible expression functions are declared:
function Last return Valid_Big_Integer is ((+256) ** Capacity);
function First return Valid_Big_Integer is (-Last);
- the partial view of Bounded_Big_Integers.Big_Integer includes
a type invariant specification,
Type_Invariant => (if Is_Valid (Bounded_Big_Integer) then In_Range (Bounded_Big_Integer, First, Last) or else (raise Constraint_Error))
Each subprogram of an instance of Bounded_Big_Integers.Bounded_Big_Integer behaves like the corresponding Big_Numbers.Big_Integers subprogram except that type invariant checks are performed as
described in 7.3.2.
Implementation Requirements
For each instance of Bounded_Big_Integers, the output generated by Big_Integer'Output or Big_Integer'Write shall be readable by the corresponding Input or Read operations of the Big_Integer type
declared in either another instance of Bounded_Big_Integers or in the package Numerics.Big_Numbers.Big_Integers. [This is subject to the preceding requirement that type invariant checks are
performed, so that Constraint_Error may be raised in some cases.]
Implementation Advice
The implementation of (an instance of) Bounded_Big_Integers should not make use of controlled types or dynamic allocation.
[end of Implementation Advice]
The generic unit Ada.Numerics.Big_Numbers.Conversions provides operations for converting between the Big_Integer types declared in Big_Numbers.Big_Integers and in an instance of Big_Numbers.Bounded_Big_Integers.
The generic package Ada.Numerics.Big_Numbers.Conversions has the following definition:
with Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Numerics.Big_Numbers.Bounded_Big_Integers;
generic
   with package Bounded is new Bounded_Big_Integers (<>);
package Ada.Numerics.Big_Numbers.Conversions
   with Preelaborate, Nonblocking
is
   function To_Bounded (Arg : Big_Integers.Valid_Big_Integer)
      return Bounded.Valid_Big_Integer;
   function From_Bounded (Arg : Bounded.Valid_Big_Integer)
      return Big_Integers.Valid_Big_Integer;
end Ada.Numerics.Big_Numbers.Conversions;
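Again as an illustration only (not proposed wording; it assumes an implementation of the two generics above), instantiating the bounded form and the conversion generic might look like:

with Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Numerics.Big_Numbers.Bounded_Big_Integers;
with Ada.Numerics.Big_Numbers.Conversions;
procedure Bounded_Demo is
   package Bounded_16 is new
     Ada.Numerics.Big_Numbers.Bounded_Big_Integers (Capacity => 16);
   package Convert is new
     Ada.Numerics.Big_Numbers.Conversions (Bounded => Bounded_16);

   X : constant Bounded_16.Valid_Big_Integer := Bounded_16.To_Big_Integer (12_345);
   -- Move the value to the unbounded type, e.g. before an operation that
   -- could exceed Bounded_16.Last = (+256) ** 16.
   Y : constant Ada.Numerics.Big_Numbers.Big_Integers.Valid_Big_Integer :=
     Convert.From_Bounded (X);
begin
   null;   -- further arithmetic on X and Y as needed
end Bounded_Demo;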
A.5.8 Big Rationals
The package Ada.Numerics.Big_Numbers.Big_Rationals has the following definition:
with Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Streams;
package Ada.Numerics.Big_Numbers.Big_Rationals
   with Preelaborate, Nonblocking
is
   use Big_Integers;

   type Big_Rational is private with
      Default_Initial_Condition => not Is_Valid (Big_Rational),
      Real_Literal => From_String,
      Put_Image => Put_Image,
      Type_Invariant =>
         (not Is_Valid (Big_Rational))
         or else (Big_Rational = 0.0)
         or else (Greatest_Common_Divisor
                    (Numerator (Big_Rational),
                     Denominator (Big_Rational)) = 1);

   function Is_Valid (Arg : Big_Rational) return Boolean;

   function Invalid return Big_Rational with
      Post => not Is_Valid (Invalid'Result);

   subtype Valid_Big_Rational is Big_Rational with
      Dynamic_Predicate => Is_Valid (Valid_Big_Rational),
      Predicate_Failure => (raise Constraint_Error);

   function "/" (Num, Den : Valid_Big_Integer) return Valid_Big_Rational with
      Pre => (Den /= 0) or else (raise Constraint_Error);

   function Numerator (Arg : Valid_Big_Rational) return Valid_Big_Integer;
   function Denominator (Arg : Valid_Big_Rational) return Big_Positive;

   function "+" (Arg : Integer) return Valid_Big_Rational is ((+Arg) / 1);
   function To_Big_Rational (Arg : Integer) return Valid_Big_Rational renames "+";

   function "="  (L, R : Valid_Big_Rational) return Boolean;
   function "<"  (L, R : Valid_Big_Rational) return Boolean;
   function "<=" (L, R : Valid_Big_Rational) return Boolean;
   function ">"  (L, R : Valid_Big_Rational) return Boolean;
   function ">=" (L, R : Valid_Big_Rational) return Boolean;

   function In_Range (Arg, Low, High : Valid_Big_Rational) return Boolean is
      ((Low <= Arg) and (Arg <= High));

   generic
      type Num is digits <>;
   package Float_Conversions is
      function To_Big_Rational (Arg : Num) return Valid_Big_Rational;
      function From_Big_Rational (Arg : Valid_Big_Rational) return Num with
         Pre => In_Range (Arg, Low  => To_Big_Rational (Num'First),
                               High => To_Big_Rational (Num'Last))
                or else (raise Constraint_Error);
   end Float_Conversions;

   function To_String (Arg : Valid_Big_Rational;
                       Fore : Field := 2;
                       Aft  : Field := 3;
                       Exp  : Field := 0) return String with
      Post => To_String'Result'First = 1;
   function From_String (Arg : String; Width : Field := 0) return Valid_Big_Rational;

   function To_Quotient_String (Arg : Valid_Big_Rational) return String is
      (To_String (Numerator (Arg)) & " /" & To_String (Denominator (Arg)));
   function From_Quotient_String (Arg : String) return Valid_Big_Rational;

   procedure Put_Image (Arg : Valid_Big_Rational;
                        Stream : not null access Ada.Streams.Root_Stream_Type'Class);

   function "-"   (L : Valid_Big_Rational) return Valid_Big_Rational;
   function "abs" (L : Valid_Big_Rational) return Valid_Big_Rational;
   function "+"   (L, R : Valid_Big_Rational) return Valid_Big_Rational;
   function "-"   (L, R : Valid_Big_Rational) return Valid_Big_Rational;
   function "*"   (L, R : Valid_Big_Rational) return Valid_Big_Rational;
   function "/"   (L, R : Valid_Big_Rational) return Valid_Big_Rational;
   function "**"  (L : Valid_Big_Rational; R : Integer) return Valid_Big_Rational;
   function Min   (L, R : Valid_Big_Rational) return Valid_Big_Rational;
   function Max   (L, R : Valid_Big_Rational) return Valid_Big_Rational;

private
   ... -- not specified by the language
end Ada.Numerics.Big_Numbers.Big_Rationals;
To_String and From_String behave analogously to the Put and Get procedures defined in Text_IO.Float_IO (in particular, with respect to the interpretation of the Fore, Aft, Exp, and Width parameters),
except that Constraint_Error (not Data_Error) is propagated in error cases. From_Quotient_String implements the inverse function of To_Quotient_String; Constraint_Error is propagated in error cases.
Put_Image calls To_String, converts that String to a Wide_Wide_String using To_Wide_Wide_String, and writes the resulting value to the stream using Wide_Wide_String'Write.
To_Big_Rational is exact (i.e., the result represents exactly the same mathematical value as the argument). From_Big_Rational is subject to the same precision rules as a type conversion of a value of
type T to the target type Num, where T is a hypothetical floating point type whose model numbers include all of the model numbers of Num as well as the exact mathematical value of the argument.
[TBD: Is Constraint_Error the exception we want on the Predicate_Failure aspect specs for Valid_Big_Integer and Valid_Big_Rational?] [TBD: do we want a Fixed_Conversions generic package analogous to
Float_Conversions?] [TBD: the range check on From_Big_Rational is slightly too tight. For example,
X : IEEE_Float32 := IEEE_Float32 (IEEE_Float64'Succ (IEEE_Float64 (IEEE_Float32'Last)));
does not overflow but the corresponding conversion using From_Big_Rational would fail the range check. Do we care?]
The other functions have their usual mathematical meanings.
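For illustration only (not proposed wording; again assuming an implementation of the spec above), exact rational arithmetic might be exercised like this:

with Ada.Text_IO;
with Ada.Numerics.Big_Numbers.Big_Integers;  use Ada.Numerics.Big_Numbers.Big_Integers;
with Ada.Numerics.Big_Numbers.Big_Rationals; use Ada.Numerics.Big_Numbers.Big_Rationals;
procedure Rational_Demo is
   Third : constant Valid_Big_Rational := To_Big_Integer (1) / To_Big_Integer (3);
   Sixth : constant Valid_Big_Rational := To_Big_Integer (1) / To_Big_Integer (6);
   Sum   : constant Valid_Big_Rational := Third + Sixth;   -- exactly 1/2
begin
   -- The type invariant keeps the value normalized:
   -- Numerator (Sum) = 1 and Denominator (Sum) = 2.
   Ada.Text_IO.Put_Line (To_Quotient_String (Sum));
end Rational_Demo;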
Implementation Requirements
No storage associated with a Big_Rational object shall be lost upon assignment or scope exit.
For purposes of determining whether predicate checks are performed as part of default initialization, the type Big_Rational shall be considered to have a subcomponent that has a default_expression.
AARM Note: This means that the elaboration of
Default_Initialized_Object : Valid_Big_Rational;
is required to propagate Assertion_Error.
AARM Note: No Bounded_Big_Rationals generic package is provided.
This section, or at least the AARM note, is intended to follow the structure of the analogous wording for AI12-0112-1 (contracts for containers).
Add after 11.5(23):
Perform the checks associated with Pre, Static_Predicate, Dynamic_Predicate, or Type_Invariant aspect specifications occurring in the visible part of package Ada.Big_Numbers or of any of its descendants.
[TBD: Include Static_Predicate in this list just for completeness, even though it happens that there are no Static_Predicate specifications in these units?]
AARM Reason: One could use the Assertion_Policy to eliminate such checks, but that would require recompiling the Ada.Big_Numbers packages (the assertion policy that determines whether the checks are
made is that used to compile the unit). In addition, we do not want to specify the behavior of the Ada.Big_Numbers operations if a precondition or predicate fails; that is different than the usual
behavior of Assertion_Policy. By using Suppress for this purpose, we make it clear that suppressing a check that would have failed results in erroneous execution.
** None yet.
No ASIS effect (assuming this is ONLY a library).
!ACATS test
An ACATS C-Test is needed to check that the new capabilities are supported.
From: Steve Baird
Sent: Tuesday, September 27, 2016 4:09 PM
professor at U. of Utah:
Regehr says:
In most programming languages, the default integer type should be a
bignum: an arbitrary-precision integer that allocates more space when
needed. Efficient bignum libraries exist and most integers never end
up needing more than one machine word anyway, except in domains like
Nobody is suggesting changing how Standard.Integer works for Ada, but a
language-defined Bignum package (presumably supporting Rationals as well as
Integers) would be a step in the right direction.
It seems like the same arguments which were used (correctly, IMO) to justify
adding predefined container packages to the language also apply here. As Tuck
phrased it in a private message: portability and more capability "out of the
Does some de facto standard already exist?
From: Bob Duff
Sent: Tuesday, September 27, 2016 4:32 PM
> Nobody is suggesting changing how Standard.Integer works
But somebody might suggest that things like "type T is range 1..10**100;"
should be supported by all Ada compilers.
> It seems like the same arguments which were used (correctly, IMO) to
> justify adding predefined container packages to the language also
> apply here. As Tuck phrased it in a private message:
> portability and more capability "out of the box."
Plus the fact that all Ada compilers have to support that functionality at
compile time, but can't provide it to their users in a portable way at run time.
> Does some de facto standard already exist?
For C and C++, yes. For Ada, no.
For Common Lisp, Java, C#, and many others, a de jure standard exists.
From: Randy Brukardt
Sent: Wednesday, September 28, 2016 12:49 PM
> Does some de facto standard already exist?
No. I could be convinced to contribute RR's Univmath package as a starting point
for discussion.
From: Jean-Pierre Rosen
Sent: Thursday, September 29, 2016 12:22 AM
There are several packages available, see http://bignumber.chez.com/index.html
From: Randy Brukardt
Sent: Thursday, September 29, 2016 12:28 PM
Surely, like containers there are as many Bignum packages as there are Ada
programmers (much like containers - everybody has one). But is someone putting
them into RM format?? That's what it means to "contribute" a package here.
From: John Barnes
Sent: Thursday, September 29, 2016 2:05 PM
I see there has been chatter on big number packages.
I wrote such a package many years ago. I was intending to write a book called
Fun with Ada using big examples of Ada 83 programs. But it got overtaken by
events such as having to write the book on Ada 95.
But I kept the package, used some child stuff from Ada 95 but otherwise left it
alone, I still use it for dabbling with large prime numbers and so on. I think
it is based on base 10,000 which will run on a 16 bit machine and is easy for
conversion for printing.
But I fear that agreeing on something might be tricky.
From: Florian Schanda
Sent: Friday, September 30, 2016 2:35 AM
> But I kept the package, used some child stuff from Ada 95 but
> otherwise left it alone, I still use it for dabbling with large prime
> numbers and so on. I think it is based on base 10,000 which will run
> on a 16 bit machine and is easy for conversion for printing.
Generally, these days, you would probably want to stick to largest power-of- two
as printing these is not a massive concern but performance is. :)
Anyway, I think whatever we come up with, it should be possible to implement it
via a binding to GMP [https://gmplib.org] which is more or less the gold
standard for arbitrary precision arithmetic. Of course, some runtime may wish to
have a more verifiable implementation... So, I think there are two requirements
we should make sure to fulfil:
1. the api should be amenable to static analysis and formal verification
2. the api should make it easy to bind to gmp
(Not saying this list is exhaustive.)
I just want to avoid people starting from various in-house and private projects;
its probably a good idea instead to start from established libraries.
From: Steve Baird
Sent: Friday, September 30, 2016 12:27 PM
> So, I think there are two
> requirements we should make sure to fulfil:
> 1. the api should be amenable to static analysis and formal verification
> 2. the api should make it easy to bind to gmp
It is also at least possible that we'll want something similar to what we have
with the containers, where we have one version for use in situations where
controlled types and dynamic storage allocation are ok and another for use in
other situations.
From: Jean-Pierre Rosen
Sent: Friday, September 30, 2016 2:44 PM
Hmmm... bounded and unbounded bignums?
From: Tucker Taft
Sent: Friday, September 30, 2016 3:52 PM
Perhaps: "big enough nums, already..."
From: Steve Baird
Sent: Tuesday, December 12, 2017 7:20 PM
I thought I'd take a look at how Java and C++ do bignums to see if there are any
ideas there worth incorporating.
My going-in idea is to have two packages with similar specs; one has "Capacity"
discriminants and the other is implemented using dynamic storage allocation of
some sort (e.g., controlled types and allocators). Like the bounded/unbounded
versions of the containers.
C++ doesn't really have a standard for bignums, but the GCC/GMP stuff
looks pretty similar to what I expected.
Java, however, surprised me (note that I am far from a Java expert so it could
be that I am just confused here).
The Java big-real spec doesn't have Numerator and Denominator functions which
yield big-ints.
The Java type seems to be named BigDecimal.
BigDecimal is implemented as a single big-int value accompanied by two ints
(Scale and Precision), at least according to
Which leads to my question:
If Ada defined a spec where the intended implementation for bigreal
is clearly two bigints (one for numerator, one for denominator),
would this result in lots of "I coded up the same algorithm in Ada
and Java and performance was a lot worse in Ada" horror stories?
Apparently BigDecimal lets you have, in effect, a lot of decimal digits but the
value "one third" still cannot be represented exactly.
Why did the Java folks do it that way? It seems like you lose a lot of value if
you can't exactly represent, for example, one third.
But perhaps most folks don't care about that functionality and the
performance/functionality tradeoff chosen by Java is closer to what most folks
Opinions? Opinions about Java are of some interest, but what I really want is
opinions about what we should do in Ada.
p.s. Note that the current plan for this AI is to add one or more new predefined
packages but no changes to language rules. In particular, numeric literals for a
non-numeric type is the topic of another AI.
From: Tucker Taft
Sent: Wednesday, December 13, 2017 9:15 AM
We want rational, not decimal, computations, I believe. So I would ignore
Java's BigDecimal.
A different and interesting capability is true "real" arithmetic, which works
for transcendentals, etc. It is intriguing, but probably not what people really
I'll send the PDF for an article by Hans Boehm about "real" arithmetic
separately, since it will probably not make it through the SPAM filter!
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 10:57 AM
> We want rational, not decimal, computations, I believe. So I would
> ignore Java's BigDecimal.
Isn't that Steve's question? Ada compiler vendors use rational computations
since that is required by the ACATS (it's necessary that 1/3 /=
0.33333333333333333333333333333). But is that the best choice for the Ada
community? I don't know.
> A different and interesting capability is true "real"
> arithmetic, which works for transcendentals, etc. It is intriguing,
> but probably not what people really want.
> I'll send the PDF for an article by Hans Boehm about "real"
> arithmetic separately, since it will probably not make it through the
> SPAM filter!
It might be too large for the list as well. If so, I can post it in the Grab Bag
if you send it directly to me.
From: Edmond Schonberg
Sent: Wednesday, December 13, 2017 1:36 PM
> I'll send the PDF for an article by Hans Boehm about "real" arithmetic
> separately, since it will probably not make it through the SPAM filter!
Both the rational representation and Boehm’s approach require arbitrary
precision integer arithmetic, so the spec of that new package is
straightforward. The papers describing the implementation of Boehm’s approach
claim that it is much more efficient than working on rationals, where numerator
and denominator grow very rapidly, while the other method only computes required
bits. I have no idea whether numerical analysts use this method, From the
literature it seems to be of interest to number theorists.
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 4:49 PM
> > It might be too large for the list as well. If so, I can post it in
> > the Grab Bag if you send it directly to me.
> It was too big.
By about 4 Megabytes. :-)
Since the article is copyrighted, I put it in the private part of the website.
Find it at:
Ed suggested privately:
> Additional details on the underlying model and its implementation in:
> http://keithbriggs.info/documents/xr-paper2.pdf
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 5:10 PM
> Both the rational representation and Boehm's approach require
> arbitrary precision integer arithmetic, so the spec of that new
> package is straightforward.
> The papers describing the implementation of Boehm's approach claim
> that it is much more efficient than working on rationals, where
> numerator and denominator grow very rapidly, while the other method
> only computes required bits. I have no idea whether numerical analysts
> use this method, From the literature it seems to be of interest to
> number theorists.
The problem with the Boehm method is that it requires specifying those "required
bits", which seems problematic in a programming environment. One could do it
with a form of type declaration (Ada's "digits" seems to be the right general
idea), but that doesn't make much sense in a library form. Boehm's actual use
gets those on the fly (by interacting with the user), and they also use a
rational representation as a backup. So it seems that a rational representation
is going to show up somewhere.
I've repeatedly had the fever dream of a class-wide bignum base type, something
package Root_Bignum is
type Root_Bignum_Type is abstract tagged null record;
function Bignum (Val : in Long_Float) return Root_Bignum_Type is abstract;
-- To get the effect of literals and conversions.
function "+" (Left, Right : in Root_Bignum_Type) return Root_Bignum_Type is abstract;
-- And all of the rest.
function Expected_Digits return Natural is abstract;
-- The number of digits supported by this type; 0 is returned
-- if the number is essentially infinite.
-- And probably some other queries.
end Root_Bignum;
And then there could be multiple specific implementations with different
performance characteristics, everything from Long_Float itself thru infinite
rational representations.
This would allow one to create most of the interesting algorithms as class-wide
operations (with implementations that could adjust to the characteristics of the
underlying type), for instance:
function Sqrt (Val : in Root_Bignum_Type'Class;
Required_Digits : Natural := 0) return Root_Bignum_Type'Class;
Here with "Required_Digits" specifies how many digits of result are needed.
If 0, the value would be retrieved from the underlying representation.
(Probably have to raise an exception if that gives "infinite".)
Such a layout would also allow easy changing of representations, which probably
would be needed for tuning purposes (most of these maths being slow, at least by
conventional standards).
This would have the clear advantage of avoiding being locked into a single form
of Bignum math, when clearly there are other choices out there useful for
particular purposes.
From: Randy Brukardt
Sent: Wednesday, December 13, 2017 5:25 PM
I said:
> The problem with the Boehm method is that it requires specifying those
> "required bits", which seems problematic in a programming environment.
but then also noted:
> function Sqrt (Val : in Root_Bignum_Type'Class;
> Required_Digits : Natural := 0) return
> Root_Bignum_Type'Class;
essentially, recognizing that many non-terminating algorithms have to have some
sort of termination criteria.
For ease of use purposes, one would prefer to only specify numbers of digits if
they're really needed (as in Sqrt or PI, etc.). But if there are going to be a
lot of such operations, one would want to be able to specify that once. Hope
that explains my thinking here.
Also, a Bignum library needs a corresponding Text_IO library. And probably a
custom version of GEF. (The Janus/Ada compiler library has most of these
features, and they are used extensively.)
From: Steve Baird
Sent: Friday, January 19, 2018 2:36 PM
We have agreed that we want bignum support in the form of one or more predefined
packages with no other language extensions (e.g., no new rules for numeric
literals) as part of this AI.
The general approach seems fairly clear, although there are a lot of details to
decide (not the least of which are the choices for names). I think we want two
forms, "vanilla" and "bounded" (analogous to, for example,
Ada.Containers.Vectors and Ada.Containers.Bounded_Vectors). In one form, the two
"big" numeric types (tentatively named Big_Integer and Big_Rational) are defined
as undiscriminated types. In the second form, these types are discriminated with
some sort of a capacity discriminant. The idea is that the first form is allowed
to use dynamic storage allocation and controlled types in its implementation
while the second form is not; the discriminant somehow indicates the set of
representable values via some mapping (should this mapping be implementation
At a high level, we might have something like
package Ada.Big_Numbers is
-- empty spec like Ada.Containers package
package Ada.Big_Numbers.Big_Integers is
type Big_Integer is private;
function GCD (Left, Right : Big_Integer) return Integer;
function "+" (Arg : Some_Concrete_Integer_Type_TBD)
return Big_Integer;
... ops for Big_Integer ...
end Ada.Big_Numbers.Big_Integers.
with Ada.Big_Numbers.Big_Integers;
package Ada.Big_Numbers.Big_Rationals is
use type Big_Integers.Big_Integer;
type Big_Rational is private with
Type_Invariant =>
Big_Rational = +0 or else
(Big_Integers.Numerator (Big_Rational),
Big_Integers.Denominator (Big_Rational)) = +1;
function Numerator (Arg : Big_Rational) return Big_Integer;
function Denominator (Arg : Big_Rational) return Big_Integer;
function "/" (Num, Den : Big_Integer) return Big_Rational
with Pre => Den /= +0;
... other ops for Big_Rational ...
end Ada.Big_Numbers.Big_Rationals;
package Ada.Big_Numbers.Bounded_Big_Integers is ... end;
package Ada.Big_Numbers.Bounded_Big_Rationals is ... end;
Questions/observations include:
1) Do we declare deferred constants, parameterless functions, or neither
for things like Zero, One, and Two?
2) Which ops do we include? It seems obvious that we define at least
the arithmetic and relational ops that are defined for any
predefined integer (respectively float) type for Big_Integer
(respectively, Big_Rational).
What Pre/Postconditions are specified for these ops?
These might involve subtype predicates.
For example (suggested by Bob), do we want
subtype Nonzero_Integer is Big_Integer with
Predicate => Nonzero_Integer /= Zero;
function "/"
(X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
-- similar for "mod", "rem".
What other operations should be provided?
- Conversion between Big_Int and what concrete integer types?
I'd say define a type with range Min_Int .. Max_Int
and provide conversion functions for that type. Also provide
two generic conversion functions that take a generic formal
signed/modular type.
- Conversion between Big_Rational and what concrete integer or
float types? Same idea. Conversion between a maximal
floating point type and then a pair of conversion generics
with formal float/fixed parameters.
- What shortcuts do we provide (i.e., ops that can easily be
built out of other ops)? Assignment procedures like
Add (X, Y); -- X := X + Y
or mixed-type operators whose only purpose is to spare users
from having to write explicit conversion?
3) It seems clear that we don't want the bounded form of either
package to "with" the unbounded form but we do want conversion
functions for going between corresponding bounded and unbounded
types. Perhaps these go in child units of the two bounded packages
(those child units could then "with" the corresponding unbounded
packages). Should streaming of the two forms be compatible as with
vectors and bounded vectors?
4) We need an Assign procedure. In the unbounded case it can be just
a wrapper for predefined assignment, but in the bounded case it
has to deal with the case where the two arguments have different
capacities. It's fairly obvious what to do in most cases, but what
about assigning a Big_Rational value which cannot be represented
exactly given the capacity of the target. Raise an exception or
round? In either case, we probably want to provide a Round function
that deterministically finds an approximation to a given
value which can be represented as a value having a given
capacity. This can be useful in the unbounded case just to save
storage. Should this Round function be implementation-dependent?
If not, then we might end up talking about convergents and
semi-convergents in the Ada RM (or at least in the AARM),
which would be somewhat odd (see
). I do not think we want to define Succ/Pred functions which take
a Big_Rational and a capacity value.
5) We want to be sure that a binding to GNU/GMP is straightforward in
the unbounded case. [Fortunately, that does not require using the
same identifiers used in GNU/GMP (mpz_t and mpq_t).]
See gmplib.org/manual for the GNU/GMP interfaces.
6) Do we want functions to describe the mapping between Capacity
discriminant values and the associated set of representable values?
For example, a function from a value (Big_Integer or Big_Rational)
to the smallest capacity value that could be used to represent it.
For Big_Integer there could presumably be Min and Max functions
that take a capacity argument. For Big_Rational, it's not so clear.
We could require, for example, that a given capacity value allows
representing a given Big_Rational value if it is >= the sum of
the capacity requirements of the Numerator and the Denominator.
7) Bob feels (and I agree) that the ARG should not formally approve any
changes until we have experience with an implementation. At this
point the ARG should be focused on providing informal guidance on
this topic.
From: Randy Brukardt
Sent: Friday, January 19, 2018 10:18 PM
> Questions/observations include:
0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
If so, then we can use prefix notation on functions like Numerator and
Denominator. We could also consider deriving both versions (usual and bounded)
from an abstract ancestor.
> 1) Do we declare deferred constants, parameterless functions,
> or neither for things like Zero, One, and Two?
If tagged, I'll finally get an excuse to show why what I called "tag
propagation" is necessary to implement the dispatching rules in 3.9.2. :-) (One
has to consider a set of calls, not a single call, for determining the static or
dynamic tag for dispatching. That's demonstratably necessary to process tagged
expressions with constants or literals.)
Anyway, the answer to this depends on whether there is a sufficiently short
constructor -- and that really depends on whether Tucker invents a useful
"literals for private type" AI. So I don't think this can be answered until we
find out about that.
> 2) Which ops do we include? It seems obvious that we define at least
> the arithmetic and relational ops that are defined for any
> predefined integer (respectively float) type for Big_Integer
> (respectively, Big_Rational).
> What Pre/Postconditions are specified for these ops?
> These might involve subtype predicates.
> For example (suggested by Bob), do we want
> subtype Nonzero_Integer is Big_Integer with
> Predicate => Nonzero_Integer /= Zero;
> function "/"
> (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
> -- similar for "mod", "rem".
> ?
Shouldn't this predicate raise Constraint_Error rather than defaulting to
Assertion_Error, to be more like the other numeric operations? Otherwise, I'm
all in favor of this formulation. Note, however, that since the underlying type
is likely to be controlled and thus tagged, this would require some changes to
other rules; there is already an AI about that (AI12-0243-1).
> What other operations should be provided?
> - Conversion between Big_Int and what concrete integer types?
> I'd say define a type with range Min_Int .. Max_Int
> and provide conversion functions for that type. Also provide
> two generic conversion functions that take a generic formal
> signed/modular type.
Sounds OK.
> - Conversion between Big_Rational and what concrete integer or
> float types? Same idea. Conversion between a maximal
> floating point type and then a pair of conversion generics
> with formal float/fixed parameters.
Sounds OK again.
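[Editorial sketch - one possible shape for the generic conversions
discussed just above, assuming they are declared inside the unbounded
Big_Integers package; all names here are invented for illustration and
are not proposed wording:
   generic
      type Int is range <>;
   package Signed_Conversions is
      function To_Big_Integer (Arg : Int) return Big_Integer;
      function From_Big_Integer (Arg : Big_Integer) return Int;
   end Signed_Conversions;
   generic
      type Int is mod <>;
   package Unsigned_Conversions is
      function To_Big_Integer (Arg : Int) return Big_Integer;
      function From_Big_Integer (Arg : Big_Integer) return Int;
   end Unsigned_Conversions;
Similar generics taking a formal floating point or fixed point type
would cover the Big_Rational case mentioned just above. - Editor.]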
> - What shortcuts do we provide (i.e., ops that can easily be
> built out of other ops)? Assignment procedures like
> Add (X, Y); -- X := X + Y
> or mixed-type operators whose only purpose is to spare users
> from having to write explicit conversion?
The only reason for mixed type operators is to make literals available. But if
one does those, then we can't add literals properly in the future
(Ada.Strings.Unbounded is damaged by this). So I say no.
I wouldn't bother with any other routines until at least such time as Bob
:-) has built some ACATS tests.
> 3) It seems clear that we don't want the bounded form of either
> package to "with" the unbounded form but we do want conversion
> functions for going between corresponding bounded and unbounded
> types. Perhaps these go in child units of the two bounded packages
> (those child units could then "with" the corresponding unbounded
> packages).
Alternatively, both could be derived from an abstract type, and a class-wide
conversion provided. That would get rid of the empty package in your proposal.
> Should streaming of the two forms be compatible as with
> vectors and bounded vectors?
> 4) We need an Assign procedure. In the unbounded case it can be just
> a wrapper for predefined assignment, but in the bounded case it
> has to deal with the case where the two arguments have different
> capacities. It's fairly obvious what to do in most cases, but what
> about assigning a Big_Rational value which cannot be represented
> exactly given the capacity of the target. Raise an exception or
> round?
I think I'd raise Capacity_Error. (Isn't that what the containers do?) Having
exact math be silently non-exact seems like exactly (pun) the wrong thing to do.
> In either case, we probably want to provide a Round function
> that deterministically finds an approximation to a given
> value which can be represented as a value having a given
> capacity. This can be useful in the unbounded case just to save
> storage. Should this Round function be implementation-dependent?
> If not, then we might end up talking about convergents and
> semi-convergents in the Ada RM (or at least in the AARM),
> which would be somewhat odd (see
> shreevatsa.wordpress.com/2011/01/10/not-all-best-rational-appr
> oximations-are-the-convergents-of-the-continued-fraction
> ). I do not think we want to define Succ/Pred functions which take
> a Big_Rational and a capacity value.
I don't think Round (or any other operation) ought to be
implementation-dependent, so I think it would need a real definition. Hopefully
with "semi-convergents" or other terms that no one has heard of. ;-)
> 5) We want to be sure that a binding to GNU/GMP is straightforward in
> the unbounded case. [Fortunately, that does not require using the
> same identifiers used in GNU/GMP (mpz_t and mpq_t).]
> See gmplib.org/manual for the GNU/GMP interfaces.
Makes sense.
> 6) Do we want functions to describe the mapping between Capacity
> discriminant values and the associated set of representable values?
> For example, a function from a value (Big_Integer or Big_Rational)
> to the smallest capacity value that could be used to represent it.
> For Big_Integer there could presumably be Min and Max functions
> that take a capacity argument. For Big_Rational, it's not so clear.
> We could require, for example, that a given capacity value allows
> representing a given Big_Rational value if it is >= the sum of
> the capacity requirements of the Numerator and the Denominator.
It seems that the Capacity needs to mean something to the end user, not just the
compiler. So such functions seem necessary, but KISS for those!!
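[Editorial sketch of the kind of simple capacity queries being
discussed; the names and profiles below are invented and are not part
of any proposal:
   function Capacity_Needed (Arg : Big_Integer) return Natural;
   --  Smallest Capacity discriminant value sufficient to hold Arg.
   function Min (Capacity : Natural) return Big_Integer;
   function Max (Capacity : Natural) return Big_Integer;
   --  Smallest and largest values representable with a given Capacity.
- Editor.]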
> 7) Bob feels (and I agree) that the ARG should not formally approve any
> changes until we have experience with an implementation. At this
> point the ARG should be focused on providing informal guidance on
> this topic.
I agree that Bob should prototype these packages, including writing ACATS-style
tests for them, so that we can put them into the Ada 2020 Standard. I'll put it
on his action item list. ;-)
Seriously, we already have an ARG rule that all Amendment AIs are supposed to
include (some) ACATS tests, and we really should have a similar rule that
proposed packages are prototyped as well. This is the assumed responsibility of
an AI author, so if you can't get Bob to help, you're pretty much stuck, and
need to do that before the AI could be assumed complete.
OTOH, we haven't required that from any other AI author, so why start now??
(We really ought to, I don't have a very big budget to write Ada 2020 ACATS
tests. Topic to discuss during the call?)
From: Jean-Pierre Rosen
Sent: Saturday, January 20, 2018 12:22 AM
> Questions/observations include:
> [...]
I'd add:
8) IOs
Should an IO package be associated to each of these bignums?
Note that the issue of IO may influence the representation
of bignums: I once knew an implementation where each super-digit
was limited to 1_000_000_000 (instead of the natural 2_147_483_647),
just to avoid terribly inefficient IOs.
From: Tucker Taft
Sent: Saturday, January 20, 2018 11:08 AM
> ...
>> 1) Do we declare deferred constants, parameterless functions,
>> or neither for things like Zero, One, and Two?
> If tagged, I'll finally get an excuse to show why what I called "tag
> propagation" is necessary to implement the dispatching rules in 3.9.2.
> :-) (One has to consider a set of calls, not a single call, for
> determining the static or dynamic tag for dispatching. That's
> demonstrably necessary to process tagged expressions with constants
> or literals.)
I agree that you have to do "tag propagation" to properly handle tag
indeterminate calls. Has anyone claimed otherwise?
> Anyway, the answer to this depends on whether there is a sufficiently
> short constructor -- and that really depends on whether Tucker invents
> a useful "literals for private type" AI. So I don't think this can be
> answered until we find out about that.
I'm on it. ;-)
From: Randy Brukardt
Sent: Saturday, January 20, 2018 7:29 PM
> I agree that you have to do "tag propagation" to properly handle tag
> indeterminate calls. Has anyone claimed otherwise?
Not that I know of, but based on my compiler surveys, no one implements it other
than Janus/Ada. Admittedly, I haven't checked this recently.
I've long had a tagged Bignum-like package on my ACATS test to-construct list
(because one needs usage-orientation for such tests) in order to test this rule.
So far as I can tell, the ACATS doesn't currently test cases like those that
arise in Bignum:
procedure Something (Val : in out Num'Class) is
   Org : Num'Class := Val + (- One); -- Org and One get the tag of Val.
begin
   Val := + Zero; -- Zero gets the tag of Val, propagated through "+".
end Something;
I'll probably come up with more realistic-looking expressions for this test, but
the idea should be obvious. (I'll have to test both static and dynamic binding,
as well as tag indeterminate cases.)
From: John Barnes
Sent: Monday, January 22, 2018 5:49 AM
I wrote a bignum package in Ada 83 some 30 years ago. I did make some updates to
use Ada 95, mainly child packages. I still use it for numerical stuff for
courses at Oxford.
Notable points perhaps.
I did use a power of 10 for the base to ease IO. It was originally on a 16 bit
machine. (386 perhaps). It still works on this horrid Windows 10. Not much
faster than on my old XP laptop. I don't know what Windows 10 is doing.
Obviously playing with itself - ridiculous.
I provided constants Zero and One. I didn't think any others were necessary.
Others were provided by eg
Two: Number := Make_Number(2);
I provided a package for subprograms Add, Sub, Mul, Div, Neg, Compare, Length,
To_Number, To_Text, To_Integer.
And a package for functions +, -, abs, *, /, rem, mod, <, <=, >, >=, =
And other packages for I/O.
Long time ago. Certainly very useful.
From: Steve Baird
Sent: Monday, January 22, 2018 12:33 PM
> I'd add:
> 8) IOs
> Should an IO package be associated to each of these bignums?
Good question.
If we provide conversion functions to and from String then would any further IO
support be needed?
From: Steve Baird
Sent: Monday, January 22, 2018 1:24 PM
> ...
>> Questions/observations include:
> 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
> If so, then we can use prefix notation on functions like Numerator and
> Denominator. We could also consider deriving both versions (usual and
> bounded) from an abstract ancestor.
If we go this way, then should this common ancestor be an interface type? I'd
say yes.
Does it then get all the same ops, so that the non-abstract ops declared for the
Bounded and Unbounded types would all be overriding?
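[Editorial sketch of the sort of common ancestor being discussed; the
names are invented here and nothing below should be read as proposed
wording:
   package Ada.Big_Numbers is
      type Big_Integer is interface;
      function "+"   (L, R : Big_Integer) return Big_Integer is abstract;
      function "-"   (L, R : Big_Integer) return Big_Integer is abstract;
      function "<"   (L, R : Big_Integer) return Boolean is abstract;
      function Image (Arg : Big_Integer) return String is abstract;
   end Ada.Big_Numbers;
with the unbounded and bounded packages each declaring a concrete type
that implements this interface, so that class-wide algorithms can be
written against Big_Integer'Class. - Editor.]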
Would this make the AI12-0243-ish issues any worse (consider the proposed
Nonzero_Integer parameter subtype mentioned earlier)? I know these problems are
bad enough already, but my question is whether this would make matters any worse.
>> 2) Which ops do we include? It seems obvious that we define at least
>> the arithmetic and relational ops that are defined for any
>> predefined integer (respectively float) type for Big_Integer
>> (respectively, Big_Rational).
>> What Pre/Postconditions are specified for these ops?
>> These might involve subtype predicates.
>> For example (suggested by Bob), do we want
>> subtype Nonzero_Integer is Big_Integer with
>> Predicate => Nonzero_Integer /= Zero;
>> function "/"
>> (X: Big_Integer; Y: Nonzero_Integer) return Big_Integer;
>> -- similar for "mod", "rem".
>> ?
> Shouldn't this predicate raise Constraint_Error rather than defaulting
> to Assertion_Error, to be more like the other numeric operations?
Good point; I agree.
>> 3) It seems clear that we don't want the bounded form of either
>> package to "with" the unbounded form but we do want conversion
>> functions for going between corresponding bounded and unbounded
>> types. Perhaps these go in child units of the two bounded packages
>> (those child units could then "with" the corresponding unbounded
>> packages).
> Alternatively, both could be derived from an abstract type, and a
> class-wide conversion provided. That would get rid of the empty
> package in your proposal. :-)
Could you provide a more detailed spec? I don't see how this would work, but I
suspect that I'm misunderstanding your proposal.
>> 4) We need an Assign procedure. In the unbounded case it can be just
>> a wrapper for predefined assignment, but in the bounded case it
>> has to deal with the case where the two arguments have different
>> capacities. It's fairly obvious what to do in most cases, but what
>> about assigning a Big_Rational value which cannot be represented
>> exactly given the capacity of the target. Raise an exception or
>> round?
> I think I'd raise Capacity_Error. (Isn't that what the containers do?)
> Having exact math be silently non-exact seems like exactly (pun) the
> wrong thing to do.
Is it that simple? Suppose somebody wants large rationals (e.g., 2048-bit
numerators and denominators) with rounding. It's not that they require exact
arithmetic - they just want a lot more range/precision than what you get from
Ada's numeric types. It may be that this is an unimportant corner case and you
are right to dismiss it; I don't know.
>> 6) Do we want functions to describe the mapping between Capacity
>> discriminant values and the associated set of representable values?
>> For example, a function from a value (Big_Integer or Big_Rational)
>> to the smallest capacity value that could be used to represent it.
>> For Big_Integer there could presumably be Min and Max functions
>> that take a capacity argument. For Big_Rational, it's not so clear.
>> We could require, for example, that a given capacity value allows
>> representing a given Big_Rational value if it is >= the sum of
>> the capacity requirements of the Numerator and the Denominator.
> It seems that the Capacity needs to mean something to the end user,
> not just the compiler. So such functions seem necessary, but KISS for those!!
Am I right in guessing that you'd like these functions to be portable (as
opposed to being implementation-defined)?
From: Randy Brukardt
Sent: Monday, January 22, 2018 3:41 PM
> > I'd add:
> > 8) IOs
> > Should an IO package be associated to each of these bignums?
> Good question.
> If we provide conversion functions to and from String then would any
> further IO support be needed?
We currently have Text_IO nested packages or children for pretty much any type
for which it makes sense to have text input-output, despite the fact that every
such type has an Image function or the equivalent (To_String for unbounded strings).
So I'd rather expect an Ada.Text_IO.BigNum_IO package. If we don't define it now,
we will the next time around.
(The Janus/Ada UnivMath package has a complete set of Text_IO packages, and they
are heavily used. I believe they can output both rational and decimal
representation for the universal_real type.)
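[Editorial sketch of what such a child might look like, by analogy with
Integer_IO; the unit names below either are invented or anticipate
naming that appears later in this thread, and nothing here is proposed
wording:
   with Ada.Numerics.Big_Numbers.Big_Integers;
   use  Ada.Numerics.Big_Numbers.Big_Integers;
   package Ada.Text_IO.BigNum_IO is
      Default_Width : Field := 0;
      Default_Base  : Number_Base := 10;
      procedure Put (Item  : Big_Integer;
                     Width : Field := Default_Width;
                     Base  : Number_Base := Default_Base);
      procedure Get (Item : out Big_Integer; Width : Field := 0);
   end Ada.Text_IO.BigNum_IO;
- Editor.]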
From: Randy Brukardt
Sent: Monday, January 22, 2018 3:36 PM
> > Steve Baird writes:
> > ...
> >> Questions/observations include:
> >
> > 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
> >
> > If so, then we can use prefix notation on functions like Numerator
> > and Denominator. We could also consider deriving both versions
> > (usual and
> > bounded) from an abstract ancestor.
> If we go this way, then should this common ancestor be an interface
> type? I'd say yes.
I suggested making it abstract so it could have some concrete operations if
those made sense. But perhaps they don't make sense.
> Does it then get all the same ops, so that the non-abstract ops
> declared for the Bounded and Unbounded types would all be overriding?
I would expect that the vast majority of operations are in the interface, so
dispatching can be used, and one can write class-wide algorithms that work with
any Bignum representation. Probably the capacity-specific operations would be
left out.
> Would this make the AI12-0243-ish issues any worse (consider the
> proposed Nonzero_Integer parameter subtype mentioned earlier)? I know
> these problems are bad enough already, but my question is whether this
> would make matters any worse.
It just makes a solution more urgent, but it doesn't change the issues any.
> >> 3) It seems clear that we don't want the bounded form of either
> >> package to "with" the unbounded form but we do want conversion
> >> functions for going between corresponding bounded and unbounded
> >> types. Perhaps these go in child units of the two bounded packages
> >> (those child units could then "with" the corresponding unbounded
> >> packages).
> >
> > Alternatively, both could be derived from an abstract type, and a
> > class-wide conversion provided. That would get rid of the empty
> > package in your proposal. :-)
> Could you provide a more detailed spec? I don't see how this would
> work, but I suspect that I'm misunderstanding your proposal.
I was thinking about including cross-cut operations in the spec, something like:
type BigNum is abstract tagged private;
function Convert (Val : in Bignum'Class) return Bignum;
but thinking about it now, I can't figure out how one would implement one of these.
You'd probably have to have a concrete universal representation to make that work:
function Convert (Val : in Bignum) return Universal_Big;
function Convert (Val : in Universal_Big) return BigNum;
but of course that would bring in the memory allocation/finalization issues
that you are trying to avoid.
So at this moment I'm thinking that direct conversions would have to be left
out; you could generally do it through intermediary types like Max_Integer
using Numerator/Denominator.
> >> 4) We need an Assign procedure. In the unbounded case it can be just
> >> a wrapper for predefined assignment, but in the bounded case it
> >> has to deal with the case where the two arguments have different
> >> capacities. It's fairly obvious what to do in most cases, but what
> >> about assigning a Big_Rational value which cannot be represented
> >> exactly given the capacity of the target. Raise an exception or
> >> round?
> >
> > I think I'd raise Capacity_Error. (Isn't that what the containers
> > do?) Having exact math be silently non-exact seems like exactly
> > (pun) the wrong thing to do.
> Is it that simple? Suppose somebody wants large rationals (e.g.,
> 2048-bit numerators and denominators) with rounding.
> It's not that they require exact arithmetic - they just want a lot
> more range/precision than what you get from Ada's numeric types.
> It may be that this is an unimportant corner case and you are right to
> dismiss it; I don't know.
We're not trying to be all things to all people. I'd consider these "exact"
math packages and treat them accordingly. If there is an abstract root, one
can "easily" make a clone version that uses rounding if someone needs that.
(Defining the rounding is hard, as you noted elsewhere.)
> >> 6) Do we want functions to describe the mapping between Capacity
> >> discriminant values and the associated set of representable values?
> >> For example, a function from a value (Big_Integer or Big_Rational)
> >> to the smallest capacity value that could be used to represent it.
> >> For Big_Integer there could presumably be Min and Max functions
> >> that take a capacity argument. For Big_Rational, it's not so clear.
> >> We could require, for example, that a given capacity value allows
> >> representing a given Big_Rational value if it is >= the sum of
> >> the capacity requirements of the Numerator and the Denominator.
> >
> > It seems that the Capacity needs to mean something to the end user,
> > not just the compiler. So such functions seem necessary, but KISS
> > for those!!
> Am I right in guessing that you'd like these functions to be portable
> (as opposed to being implementation-defined)?
I think so; otherwise it rather defeats the purpose of language-defined packages
(to provide the ultimate in portability).
From: Bob Duff
Sent: Sunday, January 28, 2018 11:29 AM
> Steve Baird writes:
> ...
> > Questions/observations include:
> 0) Should Big_Integer and (especially) Big_Rational be visibly tagged?
Surely not.  I think we want to be competitive (efficiency-wise) with all
sorts of other languages, and taggedness will destroy that.
Let's not have another "tampering" fiasco.
> If so, then we can use prefix notation on functions like Numerator and
> Denominator.
I'm not a big fan of that feature, but if we want it, we should figure out
how to do it for untagged types.
>... We could also consider deriving both versions (usual and
> bounded) from an abstract ancestor.
Consider, ..., and reject. ;-)
From: Jeff Cousins
Sent: Sunday, January 28, 2018 12:21 PM
John Barnes wrote:
I wrote a bignum package in Ada 83 some 30 years ago
Would you be able to let us see the spec for this?
From: Randy Brukardt
Sent: Sunday, January 28, 2018 9:13 PM
> > 0) Should Big_Integer and (especially) Big_Rational be
> visibly tagged?
> Surely not.  I think we want to be competitive
> (efficiency-wise) with all sorts of other languages, and taggedness
> will destroy that.
??? Tags (as opposed to controlled types) add almost no overhead, especially
in a case like unbounded Bignum which probably will have to be controlled
anyway. (The only overhead of a tagged type is initializing the tag in the
object.) So long as one uses a single specific type, everything is statically
bound and the cost is essentially the same as an untagged type (again,
especially as the underlying specific type most likely will be tagged and
certainly will be large).
I wasn't suggesting that we define any class-wide operations other than
representation conversion (which should be rarely used in any case).
Class-wide operations are the only operations that add overhead.
> Let's not have another "tampering" fiasco.
I'm still waiting for an example program showing this supposed "fiasco". No
one has ever submitted one to the ARG. We've essentially been asked to believe
this issue by repeated assertion. (And most tampering checks can be done at
compile-time, with sufficient will.)
If there was a fiasco here, it was that the goals of the containers did not
include making them particularly fast. If they are then misused for
high-performance code, one is going to get the expected disappointment.
Perhaps we started with the wrong set of goals.
> > If so, then we can use prefix notation on functions like Numerator
> > and Denominator.
> I'm not a big fan of that feature, but if we want it, we should figure
> out how to do it for untagged types.
We've already discussed that in a different e-mail thread. It seems dangerous.
> >... We could also consider deriving both versions (usual and
> > bounded) from an abstract ancestor.
> Consider, ..., and reject. ;-)
Again, why? We have a request for a "universal" numeric type, and the only
sane way to provide that is with dispatching. Probably, we'll just forget
that request, but it seems worth spending a bit of time to see if it makes
From: John Barnes
Sent: Tuesday, January 30, 2018 4:22 AM
I am feverishly giving lectures on numbers at Oxford at the moment but I am
trying to keep an eye on what the ARG is up to.
Did you know that a new Mersenne prime was discovered on Boxing Day (26
December) 2017. It is 2**77232917 - 1 and has only 23,249,425 digits. Will
the Bignum package cope with it?
From: Tucker Taft
Sent: Tuesday, January 30, 2018 3:02 PM
Yes, I noticed that new Mersenne prime as well. And 23 mega-digit is nothing
for a modern iPhone. ;-) Just be sure to set aside a bit of extra time to
print it out using the Image function. Except in base 2, of course, which
I could do right now. Ready: 1111111111 [... 77,232,900 1's] 1111111!
From: Jeff Cousins
Sent: Wednesday, January 31, 2018 6:31 AM
[This is John Barnes' Bignum package and some test programs - Editor.]
-- file books\fun\progs\numbers.ada
-- Restructured using children
-- Types and No_Of_Places in parent package
-- 20-10-06
package Numbers is
Max_Index: constant := 1000;
subtype Index is Integer range 0 .. Max_Index;
type Number(Max_Digits: Index := 1) is private;
Zero, One: constant Number;
Number_Error : exception;
private
Base_Exp: constant := 4;
Base: constant := 10 ** Base_Exp;
type Base_Digit is range -Base .. 2 * Base - 1;
type Base_Digit_Array is
array(Index range <>) of Base_Digit;
type Number(Max_Digits: Index := 1) is
record
Sign: Integer := +1;
Length: Index := 0;
D: Base_Digit_Array(1..Max_Digits);
end record;
Zero: constant Number := (0, +1, 0, (others => 0));
One: constant Number := (1, +1, 1, (1 => 1));
function No_Of_Places(N: Number) return Integer;
end Numbers;
package body Numbers is
function No_Of_Places(N: Number) return Integer is
begin
if N.Length = 0 then
return 1;
else
return N.Length * Base_Exp;
end if;
end No_Of_Places;
end Numbers;
package Numbers.Proc is
subtype Index is Numbers.Index;
subtype Number is Numbers.Number;
Zero: Number renames Numbers.Zero;
One: Number renames Numbers.One;
Number_Error: exception renames Numbers.Number_Error;
procedure Add(X, Y: Number; Z: out Number);
procedure Sub(X, Y: Number; Z: out Number);
procedure Mul(X, Y: Number; Z: out Number);
procedure Div(X, Y: Number; Quotient,
Remainder: out Number);
procedure Neg(X: in out Number);
function Compare(X, Y: Number) return Integer;
function Length(N: Number) return Index;
procedure To_Number(S: String; N: out Number);
procedure To_Number(I: Integer; N: out Number);
procedure To_Text(N: Number; S: out String);
procedure To_Integer(N: Number; I: out Integer);
end Numbers.Proc;
package Numbers.IO is
Default_Width: Natural := 0;
procedure Put(Item: Number;
Width: Natural := Default_Width);
procedure Get(Item: out Number);
end Numbers.IO;
package Numbers.Func is
subtype Number is Numbers.Number;
Zero: Number renames Numbers.Zero;
One: Number renames Numbers.One;
Number_Error: exception renames Numbers.Number_Error;
function "+" (X: Number) return Number;
function "-" (X: Number) return Number;
function "abs" (X: Number) return Number;
function "+" (X, Y: Number) return Number;
function "-" (X, Y: Number) return Number;
function "*" (X, Y: Number) return Number;
function "/" (X, Y: Number) return Number;
function "rem" (X, Y: Number) return Number;
function "mod" (X, Y: Number) return Number;
function "**" (X: Number; N: Natural) return Number;
function "<" (X, Y: Number) return Boolean;
function "<=" (X, Y: Number) return Boolean;
function ">" (X, Y: Number) return Boolean;
function ">=" (X, Y: Number) return Boolean;
function "=" (X, Y: Number) return Boolean;
function Make_Number(S: String) return Number;
function Make_Number(I: Integer) return Number;
function String_Of(N: Number) return String;
function Integer_Of(N: Number) return Integer;
end Numbers.Func;
package body Numbers.Proc is
Base_Squared: constant := Base**2;
subtype Single is Base_Digit;
type Double is range -Base_Squared .. 2*Base_Squared - 1;
function Unsigned_Compare(X, Y: Number) return Integer is
-- ignoring signs
-- returns +1, 0 or -1 according as X >, = or < Y
begin
if X.Length > Y.Length then return +1; end if;
if X.Length < Y.Length then return -1; end if;
for I in reverse 1 .. X.Length loop
if X.D(I) > Y.D(I) then return +1; end if;
if X.D(I) < Y.D(I) then return -1; end if;
end loop;
return 0; -- the numbers are equal
end Unsigned_Compare;
function Compare(X, Y: Number) return Integer is
-- returns +1, 0 or -1 according as X >, = or < Y
begin
if X.Sign /= Y.Sign then return X.Sign; end if;
return Unsigned_Compare(X, Y) * X.Sign;
end Compare;
procedure Raw_Add(X, Y: Number; Z: out Number) is
-- assumes X not smaller than Y
Carry: Single := 0;
Digit: Single;
ZL: Index := X.Length; -- length of answer
begin
if Z.Max_Digits < ZL then
raise Number_Error; -- Z not big enough to hold X
end if;
for I in 1 .. ZL loop
Digit := X.D(I) + Carry;
if I <= Y.Length then
Digit := Digit + Y.D(I);
end if;
if Digit >= Base then
Carry := 1; Digit := Digit - Base;
else
Carry := 0;
end if;
Z.D(I) := Digit;
end loop;
if Carry /= 0 then
if ZL = Z.Max_Digits then
raise Number_Error; -- too big to fit in Z
end if;
ZL := ZL + 1;
Z.D(ZL) := Carry;
end if;
Z.Length := ZL;
end Raw_Add;
procedure Raw_Sub(X, Y: Number; Z: out Number) is
-- assumes X not smaller than Y
Carry: Single := 0;
Digit: Single;
ZL: Index := X.Length; -- length of answer
begin
if Z.Max_Digits < ZL then
raise Number_Error; -- Z not big enough to hold X
end if;
for I in 1 .. ZL loop
Digit := X.D(I) - Carry;
if I <= Y.Length then
Digit := Digit - Y.D(I);
end if;
if Digit < 0 then
Carry := 1; Digit := Digit + Base;
else
Carry := 0;
end if;
Z.D(I) := Digit;
end loop;
while Z.D(ZL) = 0 loop -- SHOULD THIS NOT FAIL???
ZL := ZL - 1; -- remove leading zeroes
end loop;
Z.Length := ZL;
end Raw_Sub;
procedure Add(X, Y: Number; Z: out Number) is
UCMPXY: Integer := Unsigned_Compare(X, Y);
begin
if X.Sign = Y.Sign then
Z.Sign := X.Sign;
if UCMPXY >= 0 then
Raw_Add(X, Y, Z);
else
Raw_Add(Y, X, Z); -- reverse if Y larger
end if;
else
if UCMPXY > 0 then
Raw_Sub(X, Y, Z);
Z.Sign := X.Sign;
elsif UCMPXY < 0 then
Raw_Sub(Y, X, Z);
Z.Sign := -X.Sign;
else -- answer is zero
Z.Sign := +1; Z.Length := 0;
end if;
end if;
end Add;
procedure Sub(X, Y: Number; Z: out Number) is
UCMPXY: Integer := Unsigned_Compare(X, Y);
begin
if X.Sign /= Y.Sign then
Z.Sign := X.Sign;
if UCMPXY >= 0 then
Raw_Add(X, Y, Z);
else
Raw_Add(Y, X, Z); -- reverse if Y larger
end if;
else
if UCMPXY > 0 then
Raw_Sub(X, Y, Z);
Z.Sign := X.Sign;
elsif UCMPXY < 0 then
Raw_Sub(Y, X, Z);
Z.Sign := -X.Sign;
else -- answer is zero
Z.Sign := +1; Z.Length := 0;
end if;
end if;
end Sub;
procedure Neg(X: in out Number) is
-- do nothing in zero case
begin
if X.Length = 0 then return; end if;
X.Sign := -X.Sign;
end Neg;
function Length(N: Number) return Index is
begin
return N.Length;
end Length;
procedure Mul(X, Y: Number; Z: out Number) is
Carry: Double;
Digit: Double;
ZL: Index;
begin
if Z.Max_Digits < X.Length + Y.Length then
raise Number_Error;
end if;
if X.Length = 0 or Y.Length = 0 then -- zero case
Z.Sign := +1; Z.Length := 0;
return;
end if;
ZL := X.Length + Y.Length - 1;
-- lower possible length of answer
-- copy X to top of Z; so X and Z can be same array
for I in reverse 1 .. X.Length loop
Z.D(I + Y.Length) := X.D(I);
end loop;
declare -- initialise limits and length of cycle
Z_Index: Index;
Y_Index: Index;
Initial_Z_Index: Index := Y.Length + 1;
Initial_Y_Index: Index := 1;
Cycle_Length: Index := 1;
begin
Carry := 0;
for I in 1 .. ZL loop
Digit := Carry;
Carry := 0;
Z_Index := Initial_Z_Index;
Y_Index := Initial_Y_Index;
for J in 1 .. Cycle_Length loop
if Digit > Base_Squared then
Digit := Digit - Base_Squared;
Carry := Carry + Base;
end if;
Digit := Digit + Double(Z.D(Z_Index))
* Double(Y.D(Y_Index));
Z_Index := Z_Index + 1;
Y_Index := Y_Index - 1;
end loop;
-- now adjust limits and length of cycle
if I < Y.Length then
Cycle_Length := Cycle_Length + 1;
Initial_Y_Index := Initial_Y_Index + 1;
Initial_Z_Index := Initial_Z_Index + 1;
end if;
if I < X.Length then
Cycle_Length := Cycle_Length + 1;
end if;
Cycle_Length := Cycle_length - 1;
Carry := Carry + Digit / Base;
Z.D(I) := Single(Digit mod Base);
end loop;
end;
if Carry /= 0 then -- one more digit in answer
ZL := ZL + 1;
Z.D(ZL) := Single(Carry);
end if;
Z.Length := ZL;
Z.Sign := X.Sign * Y.Sign;
end Mul;
procedure Div(X, Y: Number; Quotient,
Remainder: out Number) is
U: Number renames Quotient;
V: Number renames Remainder;
Digit, Scale, Carry: Double;
U0, U1, U2: Double;
V1, V2: Double;
QD: Double;
LOQ: constant Index := Y.Length;
HIQ: constant Index := X.Length;
QL: Index;
RL: Index;
QStart: Index;
J : Index;
begin
if Y.Length = 0 then
raise Number_Error;
end if;
if Quotient.Max_Digits < X.Length or
Remainder.Max_Digits < Y.Length then
raise Number_Error;
end if;
if X.Length < Y.Length then -- Quotient is definitely zero
Quotient.Sign := +1;
Quotient.Length := 0;
Remainder.Sign := X.Sign;
Remainder.Length := X.Length;
for I in 1 .. X.Length loop
Remainder.D(I) := X.D(I);
end loop;
return;
end if;
QL := X.Length - Y.Length + 1;
RL := Y.Length;
QStart := QL;
-- compute normalizing factor
Scale := Base/Double(Y.D(Y.Length)+1);
-- scale X and copy to U
Carry := 0;
for I in 1 .. X.Length loop
Digit := Double(X.D(I)) * Scale + Carry;
Carry := Digit / Base;
U.D(I) := Single(Digit mod Base);
end loop;
U0 := Carry; -- leading digit of dividend
-- scale Y and copy to V
Carry := 0;
for I in 1 .. Y.Length loop
Digit := Double(Y.D(I)) * Scale + Carry;
Carry := Digit / Base;
V.D(I) := Single(Digit mod Base);
end loop;
-- no further carry
-- set V1 and V2 to first two digits of divisor
V1 := Double(V.D(Y.Length));
if Y.Length > 1 then
V2 := Double(V.D(Y.Length-1));
else
V2 := 0;
end if;
-- now iterate over digits in answer
-- with U0, U1 and U2 being first three digits of dividend
for I in reverse LOQ .. HIQ loop
U1 := Double(U.D(I));
if Y.Length > 1 then
U2 := Double(U.D(I-1));
else
U2 := 0;
end if;
-- now set initial estimate of digit in quotient
if U0 = V1 then
QD := Base - 1;
else
QD := (U0 * Base + U1) / V1;
end if;
-- now refine estimate by considering U2 also
while V2*QD > (U0*Base+U1-QD*V1)*Base + U2 loop
QD := QD - 1;
end loop;
-- QD is now correct digit or possibly one too big
-- subtract QD times V from U
Carry := 0;
J := QStart;
for I in 1 .. Y.Length loop
Digit := Double(U.D(J)) - Carry
- QD * Double(V.D(I));
if Digit < 0 then
Carry := (-1-Digit) / Base + 1;
Digit := Digit + Carry * Base;
else
Carry := 0;
end if;
U.D(J) := Single(Digit);
J := J + 1;
end loop;
if Carry > U0 then -- estimate was too large
declare
Carry, Digit: Single;
begin
QD := QD - 1;
Carry := 0; J := QStart;
for I in 1 .. Y.Length loop
Digit := U.D(J) + Carry + V.D(I);
if Digit >= Base then
Carry := 1;
Digit := Digit - Base;
else
Carry := 0;
end if;
U.D(J) := Digit;
J := J + 1;
end loop;
end;
end if;
-- QD is now the required digit
U0 := Double(U.D(I)); U.D(I) := Single(QD);
QStart := QStart - 1;
end loop;
-- delete possible leading zero in quotient
if U.D(HIQ) = 0 then
QL := QL - 1;
end if;
-- copy remainder into place and scale
-- top digit is in U0 still
Digit := U0;
for I in reverse 2 .. RL loop
Remainder.D(I) := Single(Digit/Scale);
Carry := Digit mod Scale;
Digit := Double(U.D(I-1)) + Carry * Base;
end loop;
Remainder.D(1) := Single(Digit/Scale);
-- delete leading zeroes in remainder
while RL > 0 and then Remainder.D(RL) = 0 loop
RL := RL - 1;
end loop;
Remainder.Length := RL;
if Remainder.Length = 0 then
Remainder.Sign := +1;
else
Remainder.Sign := X.Sign;
end if;
-- slide quotient into place
-- Quotient.D(1 .. QL) := U.D(LOQ .. HIQ);
for I in 1 .. QL loop
Quotient.D(I) := U.D(I + LOQ - 1);
end loop;
Quotient.Length := QL;
if Quotient.Length = 0 then
Quotient.Sign := +1;
else
Quotient.Sign := X.Sign * Y.Sign;
end if;
end Div;
procedure To_Number(S: String; N: out Number) is
NL: Index := 0;
Place: Integer := 0;
Is_A_Number: Boolean := False;
Digit: Single := 0;
Ch: Character;
Last_I: Positive;
Dig_Of: constant array (Character range '0' .. '9') of
Single := (0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
begin
N.Sign := +1; -- set default sign
-- scan string from end
for I in reverse S'Range loop
Last_I := I; -- note how far we have got
Ch := S(I);
case Ch is
when '0' .. '9' =>
-- add digit to number so far
if Place = 0 then
NL := NL + 1;
if NL > N.Max_Digits then
raise Number_Error;
end if;
end if;
Digit := Digit + Dig_Of(Ch) * 10**Place;
Place := Place + 1;
if Place = Base_Exp then
N.D(NL) := Digit;
Digit := 0;
Place := 0;
end if;
Is_A_Number := True;
when '_' =>
-- underscore must be embedded in digits
if not Is_A_Number then
raise Number_Error;
end if;
Is_A_Number := False;
when '+' | '-' | ' ' =>
-- lump so far must be a valid number
if not Is_A_Number then
raise Number_Error;
end if;
if Ch ='-' then N.Sign := -1; end if;
exit; -- leave loop
when others =>
raise Number_Error;
end case;
end loop;
-- check we had a number
if not Is_A_Number then
raise Number_Error;
end if;
-- add the last digit if necessary
if Place /= 0 then
N.D(NL) := Digit;
end if;
-- check that any other characters are leading spaces
for I in S'First .. Last_I - 1 loop
if S(I) /= ' ' then
raise Number_Error;
end if;
end loop;
-- remove leading zeroes if any, beware zero case
while NL > 0 and then N.D(NL) = 0 loop
NL := NL - 1;
end loop;
N.Length := NL;
end To_Number;
procedure To_Number(I: Integer; N: out Number) is
NL: Index := 0;
II: Integer;
begin
if I = 0 then
N.Sign := +1; N.Length := 0;
return;
end if;
if I > 0 then
II := I; N.Sign := +1;
else
II := -I; N.Sign := -1;
end if;
while II /= 0 loop
NL := NL + 1;
if NL > N.Max_Digits then
raise Number_Error;
end if;
N.D(NL) := Single(II mod Base);
II := II / Base;
end loop;
N.Length := NL;
end To_Number;
procedure To_Text(N: Number; S: out String) is
SI: Natural := S'Last;
Digit: Single;
Char_Of: constant array (Single range 0 .. 9) of
Character := "0123456789";
begin
if N.Length = 0 then -- zero case
if SI < 2 then
raise Number_Error;
end if;
S(SI) := Char_Of(0);
S(SI-1) := '+';
for I in 1 .. SI-2 loop
S(I) := ' ';
end loop;
return;
end if;
if SI < Base_Exp * N.Length + 1 then
raise Number_Error;
end if;
for I in 1 .. N.Length loop
Digit := N.D(I);
for J in 1 .. Base_Exp loop
S(SI) := Char_Of(Digit mod 10);
Digit := Digit / 10;
SI := SI - 1;
end loop;
end loop;
while S(SI + 1) = '0' loop
SI := SI + 1; -- delete leading zeroes
end loop;
if N.Sign = +1 then
S(SI) := '+';
else
S(SI) := '-';
end if;
for I in 1 .. SI - 1 loop
S(I) := ' ';
end loop;
end To_Text;
procedure To_Integer(N: Number; I: out Integer) is
II: Integer := 0;
begin
for I in reverse 1 .. N.Length loop
II := II * Base + Integer(N.D(I));
end loop;
if N.Sign = -1 then II := -II; end if;
I := II;
end To_Integer;
end Numbers.Proc;
with Ada.Text_IO; use Ada;
with Numbers.Proc;
package body Numbers.IO is
use Proc;
procedure Put(Item: Number;
Width: Natural := Default_Width) is
Block_Size: constant := 3;
Places: Integer := No_Of_Places(Item);
S: String(1 .. Places + 1);
SP: Positive := 1;
Before_Break: Integer;
begin
To_Text(Item, S);
-- allow for leading spaces in S
while S(SP) = ' ' loop
SP := SP + 1; Places := Places - 1;
end loop;
-- now output leading spaces for padding if any
for I in 1 .. Width -
(Places + 1 + (Places - 1) / Block_Size) loop
Text_IO.Put(' ');
end loop;
if S(SP) = '+' then S(SP) := ' '; end if;
Text_IO.Put(S(SP)); -- output minus or space
-- output digits with underscores every "Blocksize"
Before_Break := (Places - 1) rem Block_Size + 1;
for I in SP + 1 .. S'Last loop
if Before_Break = 0 then
Before_Break := Block_Size;
end if;
Before_Break := Before_Break - 1;
end loop;
end Put;
procedure Get(Item: out Number) is
-- declare string large enough to hold maximum value
-- allows every other character to be an underscore!
S: String(1 .. Base_Exp * Max_Index * 2);
SP: Positive := 1;
Places: Integer := 0;
Ch: Character;
EOL: Boolean; -- end of line
-- loop for first digit or sign, skipping spaces
case Ch is
when ' ' =>
when '+'| '-' =>
S(SP) := Ch; SP := SP + 1;
when '0' .. '9' =>
S(SP) := Ch; SP := SP + 1; Places := 1;
when others =>
raise Number_Error;
end case;
end loop;
-- now accept only digits and underscores
-- count the digits in Places
-- stop on end of line or other character
Text_IO.Look_Ahead(Ch, EOL);
exit when EOL;
case Ch is
when '0' .. '9' =>
S(SP) := Ch; SP := SP + 1;
Places := Places + 1;
when '_' =>
S(SP) := Ch; SP := SP + 1;
when others =>
end case;
end loop;
-- now declare a Number big enough
-- note Item assumed unconstrained
Result: Number((Places - 1)/Base_Exp + 1);
To_Number(S(1 .. SP - 1), Result);
Item := Result;
end Get;
end Numbers.IO;
with Numbers.Proc;
package body Numbers.Func is
use Proc;
function "+" (X: Number) return Number is
return X;
end "+";
function "-" (X: Number) return Number is
N: Number(X.Max_Digits);
begin
N := X; Neg(N);
return N;
end "-";
function "abs" (X: Number) return Number is
if X < Zero then return -X; else return X; end if;
end "abs";
function "+" (X, Y: Number) return Number is
Z: Number(Index'Max(Length(X), Length(Y)) + 1);
begin
Add(X, Y, Z);
return Z;
end "+";
function "-" (X, Y: Number) return Number is
Z: Number(Index'Max(Length(X), Length(Y)) + 1);
begin
Sub(X, Y, Z);
return Z;
end "-";
function "*" (X, Y: Number) return Number is
Z: Number(Length(X) + Length(Y));
begin
Mul(X, Y, Z);
return Z;
end "*";
function "/" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
return Q;
end "/";
function "rem" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
return R;
end "rem";
function "mod" (X, Y: Number) return Number is
Q: Number(Length(X));
R: Number(Length(Y));
begin
Div(X, Y, Q, R);
if (X < Zero and Y > Zero) or (X > Zero and Y < Zero) then
R := R + Y;
end if;
return R;
end "mod";
function "**" (X: Number; N: Natural) return Number is
Result: Number := One;
Term: Number := X;
M: Natural := N;
begin
loop
if M rem 2 /= 0 then
Result := Term * Result;
end if;
M := M / 2;
exit when M = 0;
Term := Term * Term;
end loop;
return Result;
end "**";
function "<" (X, Y: Number) return Boolean is
return Compare(X, Y) < 0;
end "<";
function "<=" (X, Y: Number) return Boolean is
return Compare(X, Y) <= 0;
end "<=";
function ">" (X, Y: Number) return Boolean is
return Compare(X, Y) > 0;
end ">";
function ">=" (X, Y: Number) return Boolean is
return Compare(X, Y) >= 0;
end ">=";
function "=" (X, Y: Number) return Boolean is
return Compare(X, Y) = 0;
end "=";
function Make_Number(S: String) return Number is
Result: Number((S'Length - 1) / Base_Exp + 1);
begin
To_Number(S, Result);
return Result;
end Make_Number;
function Make_Number(I: Integer) return Number is
Base_Digits: Index := 0;
II: Integer := abs I;
Base: constant := 10 ** Base_Exp;
begin
-- loop to determine discriminant for result
while II /= 0 loop
Base_Digits := Base_Digits + 1;
II := II / Base;
end loop;
declare
Result: Number(Base_Digits);
begin
To_Number(I, Result);
return Result;
end;
end Make_Number;
function String_Of(N: Number) return String is
Places: Integer := No_Of_Places(N);
S: String(1 .. Places + 1);
SP: Positive := 1;
begin
To_Text(N, S);
-- allow for leading spaces in S
while S(SP) = ' ' loop
SP := SP + 1;
end loop;
return S(SP .. S'Last);
end String_Of;
function Integer_Of(N: Number) return Integer is
Result: Integer;
begin
To_Integer(N, Result);
return Result;
end Integer_Of;
end Numbers.Func;
--- Numbers calculator ---------------------------------------------
--with ID, Numbers.Func; use Numbers.Func;
--package Numbers_ID is new ID(Number, Zero, One);
--with ID.IO, Numbers.IO; use Numbers.IO;
--package Numbers_ID.The_IO is new Numbers_ID.IO;
--with Numbers_ID.The_IO;
--with Numbers.Func;
--with Calculator;
--procedure Numcalc is
-- new Calculator.Run(Numbers_ID,
-- Numbers_ID.The_IO,
-- Numbers.Func."+");
--- test 1 - powers of 11 and 99 -----------------------------------
with Numbers.Proc; use Numbers.Proc;
with Ada.Text_IO; use Ada;
procedure Test_11_99 is
U: Number(2);
procedure P(X: Number) is
S: String(1 .. 150);
begin
To_Text(X, S);
Text_IO.Put(S); Text_IO.New_Line;
end P;
procedure Power(U: Number; V: Integer) is
W: Number(50);
begin
To_Number(1, W);
for I in 1 .. V loop
Mul(W, U, W);
end loop;
end Power;
begin
To_Number(11, U);
Power(U, 50);
To_Number(99, U);
Power(U, 50);
end Test_11_99;
-- test 2 - Mersenne using procedural forms ------------------------
with Ada.Calendar; use Ada.Calendar;
with Numbers.Proc; use Numbers.Proc;
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure Test2 is
Nop: constant := 30;
Loop_Start, Loop_End: Integer;
T_Start, T_End: Time;
Is_Prime: Boolean;
MM: Number(50);
Primes: array(1 .. Nop) of Integer :=
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
procedure Lucas_Lehmer (P: Integer;
Mersenne: out Number;
Is_Prime: out Boolean) is
Two: Number(1);
M: Number(Mersenne.Max_Digits);
L, W, Quotient: Number(M.Max_Digits*2);
begin
To_Number(2, Two);
To_Number(4, L);
To_Number(1, M);
for I in 1 .. P loop
Mul(M, Two, M);
end loop;
Sub(M, One, M);
for I in 1 .. P-2 loop
Mul(L, L, W);
Sub(W, Two, W);
Div(W, M, Quotient, L);
-- L := (L**2 - Two) mod M;
end loop;
Is_Prime := Compare(L, Zero) = 0;
Mersenne := M;
end Lucas_Lehmer;
procedure Put(X: Number) is
S: String(1 .. 45);
begin
To_Text(X, S);
end Put;
begin
Put_Line("Start loop? "); Get(Loop_Start);
Put_Line("End loop? "); Get(Loop_End);
Put_Line("Mersenne Primes");
Put_Line(" Time P 2**P-1");
for I in Loop_Start .. Loop_End loop
T_Start := Clock;
Lucas_Lehmer(Primes(I), MM, Is_Prime);
T_End := Clock;
New_Line; Put(T_End - T_Start, 2, 1);
Put(Primes(I), 4); Put(" : ");
if Is_Prime then
Put(" is prime");
Put(" is not prime");
end if;
end loop;
end Test2;
--- test 5 - Mersenne using functional forms -----------------------
with Ada.Calendar; use Ada.Calendar;
with Numbers.Func; use Numbers.Func;
with Numbers.IO; use Numbers.IO;
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
procedure Test5 is
Nop: constant := 30;
Loop_Start, Loop_End: Integer;
T_Start, T_End: Time;
Is_Prime: Boolean;
LL, MM: Number;
Primes: array(1 .. Nop) of Integer :=
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
procedure Lucas_Lehmer (Q: Integer;
Mersenne: out Number;
Lout: out Number;
Is_Prime: out Boolean) is
Two: constant Number := Make_Number(2);
M: constant Number := Two**Q - One;
L: Number := Make_Number(4);
begin
for I in 1 .. Q-2 loop
L := (L**2 - Two); -- mod M; -- mod M here is optional;
Put(L); New_line(2);
end loop;
Is_Prime := L mod M = Zero;
Lout := l;
Mersenne := M;
end Lucas_Lehmer;
begin
Put_Line("Start loop? "); Get(Loop_Start);
Put_Line("End loop? "); Get(Loop_End);
Put_Line("Mersenne Primes");
Put_Line(" Time P 2**P-1");
for I in Loop_Start .. Loop_End loop
T_Start := Clock;
Lucas_Lehmer(Primes(I), MM, LL, Is_Prime);
T_End := Clock;
New_Line; Put(T_End - T_Start, 2, 1);
Put(Primes(I), 4); Put(" : ");
Put(MM, 20);
if Is_Prime then
Put(" is prime");
put((MM+One)/Make_number(2)*MM, 30); Put(" is perfect");
else
Put(" is not prime");
end if;
-- comment out next four lines to avoid detail
Put("L is "); Put(LL); Put(" equals "); Put(MM); Put(" times "); Put(LL/MM);
if not Is_prime then
New_Line; Put(" remainder = "); Put(LL mod MM);
end if;
-- end of comment
end loop;
end Test5;
--- test 6 - GCD and Mersenne --------------------------------------
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
with Numbers.Proc; use Numbers.Proc;
procedure Test6 is
subtype CNumber is Number(50);
M1, M2, M3: CNumber;
P1, P2, P3: Integer;
G, H: CNumber;
XX, YY, QQ, ZZ: CNumber;
Two: CNumber;
N: Cnumber;
Start, Stop: Integer;
procedure GCD(X, Y: CNumber; Z: out CNumber) is
begin
XX := X; YY := Y;
while Compare(YY, Zero) /= 0 loop
Div(XX, YY, QQ, ZZ);
XX := YY;
YY := ZZ;
end loop;
Z := XX;
end GCD;
procedure Mersenne(P: Integer; M: in out CNumber) is
begin
To_Number(2, Two);
To_Number(1, N);
for I in 1 .. P loop
Mul(N, Two, N);
end loop;
Sub(N, One, N);
M := N;
end Mersenne;
begin
To_Number(2, Two);
Put_Line("Start? "); Get(Start);
Put_Line("Stop? "); Get(Stop);
for I in Start .. Stop loop
P1 := 2*I + 1;
Mersenne(P1, M1);
for J in 1 .. I -1 loop
P2 := 2*J + 1;
Mersenne(P2, M2);
GCD(M1, M2, G);
for K in 1 .. I loop
P3 := 2*K + 1;
Mersenne(P3, M3);
if Compare(M3, G) > 0 then
end if;
if Compare(M3, G) =0 then
Put(P1); Put(P2); Put(P3);
end if;
end loop;
end loop;
end loop;
end Test6;
------- test 7 multiplication
with Ada.Text_IO, Ada.Integer_Text_IO;
use Ada.Text_IO, Ada.Integer_Text_IO;
with Numbers.Proc; use Numbers.Proc;
with Numbers.IO; use Numbers.IO;
procedure Test7 is
X: Number;
Y: Number;
Z: Number(50);
begin
Put_line("Multiplier test");
Put("X = "); Get(X);
Put("Y = "); Get(Y);
Mul(X, Y, Z);
Put_Line("product is ");
end Test7;
From: Jeff Cousins
Sent: Wednesday, January 31, 2018 6:17 AM
[These are some additional test programs for John Barnes' Bignum package; the
Numbers package was duplicated at the front, which I removed. - Editor.]
--package Monitor is
-- C1, C2, C3, C4: Integer;
package Primes is
Max: constant := 20000;
Prime: array (1..Max) of Integer;
pragma Elaborate_Body;
end Primes;
package body Primes is
N: Integer:= 2;
Index: Integer := Prime'First;
Found: Boolean := False;
begin
-- initialization part to build prime table
Prime(Prime'First) := 2;
loop
N := N+1; Found := True;
for I in Prime'First .. Index loop
if n rem Prime(I) = 0 then -- divides by existing prime
Found := False; exit;
end if;
end loop;
if Found then
-- found a new prime
Index := Index+1; Prime(Index) := n;
exit when Index = Max;
end if;
end loop;
end Primes;
Package Squares is
Square_Digits: Integer := 4;
Square_Ten_Power: Integer := 10**Square_Digits;
Poss_Last_Digits: array(0..Square_Ten_Power-1) of Boolean := (others => False);
pragma Elaborate_Body;
end Squares;
package body Squares is
begin
-- make Poss_Last_Digits array
for I in 1 .. Square_Ten_Power loop
Poss_Last_Digits(I*I mod Square_Ten_Power) := True;
end loop;
end Squares;
with Numbers.Func; use Numbers.Func;
-- with Monitor;
procedure Square_root(XX: in Number; Try: Number; X: out Number; R: out Number) is
-- X is the largest X such that X*X + R equals XX with R >= zero
-- Try is initial guess
K, KK: Number;
DK: Number;
Three: Number := Make_Number(3);
begin
k := Try;
loop
KK := K*K;
DK := (XX - KK)/(K+K);
-- iterate until nearly there
if abs DK < Three then -- nearly there
if XX < KK then
K := XX/K; -- ensure K*K is less than XX
end if;
exit;
end if;
-- do another iterate
K := K+DK;
-- Monitor.c2 := Monitor.C2 + 1;
end loop;
-- now loop from below
loop
KK := K*K;
-- Monitor.C2 := Monitor.C2 + 1;
if KK >= XX then
if KK = XX then
X := K; R := Zero; return;
end if;
X := K - One; R := XX - X*X; return;
end if;
K := K+One;
end loop;
end Square_Root;
with Square_Root;
with Numbers.Func; use Numbers.Func;
-- with Monitor;
with Squares; use Squares;
procedure Fermat_Long(N: in Number; Min_Prime: in Number; P, Q: out Number) is
-- we know that factors up to Min_Prime have been removed, so max square to try is
-- roughly (N/Min_Prime + Min_Prime)/2; we add 2 for luck
Two: Number := Make_Number(2);
Number_Square_Digits: Number := Make_Number(Square_Digits);
Number_Square_Ten_Power: Number := Make_Number(Square_Ten_Power);
X: Number;
Y: Number;
R: Number;
K: Number;
DK: Number;
N2, X2, K2: Integer;
Last_Digits: Integer;
Try: Number := One;
Max_square : Number := (N/Min_Prime + Min_Prime)/ Two + Two;
begin
Square_Root(N, One, X, R);
if R = Zero then
-- N was a perfect square
P := X; Q := X;
return;
end if;
-- Monitor.C1 := Monitor.C2;
-- Monitor.C2 := 0;
K := X*X-n;
DK := X+X+One;
N2 := Integer_Of(N rem Number_Square_Ten_Power);
X2 := Integer_Of(X rem Number_Square_Ten_Power);
loop
-- Monitor.C3 := Monitor.C3 + 1;
X := X + One;
if X > Max_Square then
-- must be prime
P := N; Q := One;
return;
end if;
X2 := (X2 + 1) rem Square_Ten_Power;
K := K + DK; -- omit if DK not used
DK := DK + Two;
-- K := X*X-N; -- omit if DK used
K2 := (X2*X2-N2) mod Square_Ten_Power;
-- Last_Digits := Integer_Of(K rem Number_Square_Ten_Power);
Last_Digits := K2;
if Poss_Last_Digits(Last_Digits) then
-- Monitor.C4 := Monitor.C4 + 1;
Square_Root(K, Try, Y, R);
if R = Zero then
-- X*X-N was a perfect square
P := X+Y; Q := X-Y;
return;
end if;
Try := Y;
end if;
end loop;
exception
when others => p := Zero; Q := Zero;
end Fermat_long;
with Numbers.IO; use Numbers.IO;
with Numbers.Func; use Numbers.Func;
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
-- with Monitor;
with Primes; use Primes;
with Fermat_Long;
with Ada.Calendar; use Ada.Calendar;
procedure Do_Fermat_Long is
NN, PP, QQ: Number;
T_Start, T_End: Time;
package Duration_IO is new Fixed_IO(Duration);
use Duration_IO;
begin
Put_Line("Welcome to Fermat's method (multilength)");
Put("Insert number N = "); Get(NN);
when others =>
Put_Line("Not a number or too big"); Skip_Line;
goto Again;
exit when NN = Zero;
-- check to see if N divides by a known prime
for i in Prime'Range loop
if NN rem make_Number(Prime(I)) = Zero then
Put("N divides by "); Put(Prime(I), 0); Put_Line(" so removing factor");
NN := NN / Make_Number(Prime(I));
end if;
end loop;
end loop;
if nN rem Make_number(4) = Make_Number(2) then
Put_Line("Algorithm fails on odd multiples of 2, so halving N");
NN := NN/Make_Number(2);
Put("New N = "); Put(NN, 0); New_Line;
end if;
if NN = One then
Put_Line("all factors removed");
-- Monitor.C1 := 0;
-- Monitor.C2 := 0;
-- Monitor.C3 := 0;
-- Monitor.C4 := 0;
T_Start := Clock;
Fermat_Long(NN, Make_Number(Prime(Max)), PP, QQ);
T_End := Clock;
if PP = Zero and QQ = zero then
Put_Line("Failed internally");
goto Again;
end if;
Put("Two factors are "); Put(PP, 0); Put(" "); Put(QQ, 0); New_Line;
Put(T_End-T_Start, 2, 1); New_Line;
-- Put(Monitor.C1, 9); Put(Monitor.C2, 9); Put(Monitor.C3, 9); Put(Monitor.C4, 9); New_Line;
end if;
end loop;
Put_line("Goodbye"); Skip_Line(2);
end Do_Fermat_Long;
From: Randy Brukardt
Sent: Wednesday, January 31, 2018 5:03 PM
>Sending in three parts as the original message was too big for the mail
Just so the rest of you know, "Part 3" was an executable Windows program which
we've decided not to distribute at all (it's too big for the list, and
executables for specific computers seem out-of-bounds for this list anyway,
given all of us know how to operate our favorite Ada compiler and probably
several non-favorite compilers as well).
So don't look for the non-existent part 3.
From: Steve Baird
Sent: Wednesday, February 28, 2018 8:03 PM
I'm attaching some preliminary specs that reflect the feedback from the last
discussion of this set of predefined packages. [This is version /01 of the
AI - ED.]
There is still some polishing to do with respect to, for example, pre/post
conditions for the various operations, but this should give us a more concrete
framework for discussion.
I gave up on making Bounded_Integer a discriminated type and instead defined
the enclosing package Bounded_Integers as a generic package. That eliminates
questions such as "what is the discriminant of the result of adding together
two values whose discriminants differ?".
It is intended that the bounded and unbounded Big_Integer types should be
streaming-compatible as with Vectors and Bounded_Vectors
As discussed last meeting, there is no Bounded_Big_Rationals package (or
generic package).
The types are tagged now and descended from a common interface type.
This means that most parameter subtypes have to be first named subtypes.
This should not be considered an endorsement of this idea; we might (as Bob
suggests) want these types to be untagged. It's just easier to see what this
looks like and then have to imagine what the untagged version might be than
the other way around. I agree with Randy's comments that taggedness by itself
doesn't imply performance problems.
Currently no declaration of a GCD function. Do we want that? If so, then in
which package? If we declare it in Big_Reals then it is not a primitive and so
parameters can be Big_Positives instead of Big_Integers. If we really want it
in Big_Integers package(s) then it would either need Big_Integer parameter
subtypes or it would need to be declared in a nested package to avoid
primitive status.
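[Editorial sketch of the nested-package workaround mentioned above,
which lets the profile use Big_Positive without making the function a
primitive operation of Big_Integer; the names below are invented:
   --  inside package Big_Integers:
   package Misc is
      function Greatest_Common_Divisor
        (L, R : Big_Positive) return Big_Positive;
   end Misc;
- Editor.]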
From: Randy Brukardt
Sent: Thursday, March 1, 2018 9:49 PM
> I'm attaching some preliminary specs that reflect the feedback from
> the last discussion of this set of predefined packages.
> Comments?
Shouldn't this be some set of children of Ada.Numerics? These kinda seem like
numbers. ;-)
You don't have the numeric literal definitions (see AI12-0249-1) -- that seems
necessary for usability. (One wonders if the literals should be defined on the
interface, which suggests that AI12-0249-1 needs a bit of extension.)
Otherwise, I didn't notice anything that I would change.
From: Steve Baird
Sent: Sunday, March 4, 2018 1:00 AM
> Shouldn't this be some set of children of Ada.Numerics? These kinda
> seem like numbers.;-)
Good point.
So the root package for this stuff becomes Ada.Numerics.Big_Numbers.
> You don't have the numeric literal definitions (see AI12-0249-1) --
> that seems necessary for usability. (One wonders if the literals
> should be defined on the interface, which suggests that AI12-0249-1
> needs a bit of
> extension.)
We decided earlier that we didn't want that inter-AI dependency.
But I agree that if we are willing to introduce that dependency then of
course support for literals would make sense.
Should it be conditional, as in "if AI12-0249 is approved, then this AI also
includes blah, blah"?
From: Randy Brukardt
Sent: Sunday, March 4, 2018 1:16 AM
Yes, I'd write it that way. I'd probably stay away from other AI dependencies,
but literals are pretty fundamental - supporting them makes the package way
more usable.
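[Editorial sketch of how the AI12-0249-1 literal aspect might be used
here, assuming that AI is approved; the function name is invented:
   type Big_Integer is private
      with Integer_Literal => From_Universal_Image;
   function From_Universal_Image (Arg : String) return Big_Integer;
- Editor.]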
From: Steve Baird
Sent: Wednesday, March 28, 2018 6:50 PM
Attached is proposed wording for this AI. [This is version /02 of this
AI - ED].
There are some TBDs interspersed.
From: Randy Brukardt
Sent: Thursday, March 29, 2018 7:48 PM
> Attached is proposed wording for this AI.
> There are some TBDs interspersed.
Here's a few thoughts:
>[TBD: aspects specified for this package? Pure, Nonblocking, others?
>Same question applies to other packages declared in later sections.
>Would these aspects constrain implementations in undesirable ways?]
All of the packages should be nonblocking. I don't think any reasonable
implementation would need access to delay statements. ;-)
The interface package should be Pure (why not, it doesn't have any
implementation). The bounded package also should be pure (we do that for all
of the bounded forms elsewhere).
The others probably should be preelaborated (and the types having
preelaborable_initialization), lest we make it too hard to use the needed
dynamic allocation.
>[TBD: It would be nice to use subtypes in parameter profiles (e.g.,
>a Nonzero_Number subtype for second argument of "/", but this requires
>AI12-0243 and the future of that AI is very uncertain.]
You can always use a Pre'Class as an alternative to a subtype. It's not
quite as convenient, but it makes the same check, and presuming that
AI12-0112-1 stays as currently envisioned, that check would be suppressible
with "pragma Suppress (Numerics_Check);".
>[TBD: Remove Integer_Literal aspect spec if AI12-0249-1 not approved.
>If Default_Initial_Condition AI12-0265-1 is approved and Integer_Literal AI
>not then replace "0" with "+0" in the condition and as needed in
>subsequent conditions.]
I put the AI number in here for Default_Initial_Condition.
>[TBD: In_Range formal parameter names. "Lo & Hi" vs. "Low & High"?]
Ada usually doesn't use abbreviations, and saving one or two characters this
way isn't appealing. Use Low and High.
> A.5.5.1.1 Bounded Big Integers
Umm, please, no 5 level subclauses. Since there are currently no four level
items in the RM, we need to discuss that explicitly. I had to add a fourth
level for ASIS, but Ada only uses three levels. And the ACATS only uses two
levels in the annexes, which is already a problem for the containers (there
being only one set of sequence numbers for all of the containers tests).
>AARM Note: Roughly speaking, behavior is as if the type invariant for
> Bounded_Big_Integer is
>   In_Range (Bounded_Big_Integer, First, Last) or else (raise
>     Constraint_Error)
>although that is not specified explicitly because that would
>require introducing some awkward code in order to avoid infinite
>recursion.
Awkward code? Please explain. Type invariants are explicitly not enforced on
'in' parameters of functions specifically to avoid infinite recursion in the
type invariant expression. You'd probably need a function for this purpose
(to avoid the functions First and Last -- is that what you meant??), say:
In_Base_Range (Bounded_Big_Integer) or else (raise Constraint_Error)
where In_Base_Range is equivalent to In_Range (Bounded_Big_Integer, First,
Last) with the expressions of First and Last substituted. One could also
make those expressions the defaults for Low and High.
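[Editor's note: a sketch of the explicit invariant being described here, for
illustration only and not proposed wording:

   type Bounded_Big_Integer is private
      with Type_Invariant =>
         In_Base_Range (Bounded_Big_Integer)
            or else (raise Constraint_Error);

   function In_Base_Range (Arg : Bounded_Big_Integer) return Boolean;
   -- As described above: equivalent to In_Range (Arg, First, Last) with
   -- the expressions of First and Last substituted.

Since invariant checks are not performed on 'in' parameters of functions,
the call to In_Base_Range inside the invariant does not recurse.]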
>[TBD: This could be done differently by using a formal instance instead
>of declaring the Conversions package as a child of Bounded_Big_Integers.
>Would there be any advantage to this approach? The advantage of the
>proposed approach is visibility of the private part, but it does seem
>awkward to have a generic with no generic formals and no local state.]
Well, it would be easier to implement in Janus/Ada, where we never got
sprouting to work. But that's hardly a reason. I suspect it would be more
obvious what's going on than a generic child -- as a data point, all of the
extra operations of Ada.Strings.Bounded take formal packages rather than
being children -- but that may have been driven by other considerations.
One argument for making it a formal package is that this conversion package
really belongs to both big number packages -- it's somewhat artificial to
make it live in the hierarchy of one or the other.
>Any Big_Rational result R returned by any of these functions satisifies the
> (R = 0.0) or else
> (Greatest_Common_Denominator (Numerator (R), Denominator (R)) = 1).
Arguably, that should be a postcondition, since the implementation isn't
required to check it (it is required to *pass* it). Then a separate rule
isn't needed. You'd probably want to declare a function with this meaning,
'cause duplicating the above 2 dozen times would be annoying.
>AARM Note: No Bounded_Big_Rationals generic package is provided.
We've discussed why, but there needs to be a version of that discussion
either in this note or in the (sadly empty) !discussion section. Future
readers will be puzzled otherwise (including, most likely, us).
From: Steve Baird
Sent: Tuesday, June 5, 2018 3:08 PM
>> Attached is proposed wording for this AI.
>> There are some TBDs interspersed.
> Here's a few thoughts:
>> [TBD: aspects specified for this package? Pure, Nonblocking, others?
>> Same question applies to other packages declared in later sections.
>> Would these aspects constrain implementations in undesirable ways?]
> All of the packages should be nonblocking. I don't think any reasonable
> implementation would need access to delay statements. ;-)
Sounds good.
> The interface package should be Pure (why not, it doesn't have any
> implementation). The bounded package also should be pure (we do that for all
> of the bounded forms elsewhere.
> The others probably should be preelaborated (and the types having
> preelaborable_initialization), lest we make it too hard to use the needed
> dynamic allocation.
Also sounds good.
>> [TBD: It would be nice to use subtypes in parameter profiles (e.g.,
>> a Nonzero_Number subtype for second argument of "/", but this requires
>> AI12-0243 and the future of that AI is very uncertain.]
> You can always use a Pre'Class as an alternative to a subtype. It's not
> quite as convenient, but it makes the same check, and presuming that
> AI12-0112-1 stays as currently envisioned, that check would be suppressible
> with "pragma Suppress (Numerics_Check);".
Let's leave things as I originally proposed for now, with a possible
revision if AI12-0243 is approved.
>> [TBD: Remove Integer_Literal aspect spec if AI12-0249-1 not approved.
>> If Default_Initial_Condition AI12-0265-1 is approved and Integer_Literal AI
> is
>> not then replace "0" with "+0" in the condition and as needed in
>> subsequent conditions.]
> I put the AI number in here for Default_Initial_Condition.
>> [TBD: In_Range formal parameter names. "Lo & Hi" vs. "Low & High"?]
> Ada usually doesn't use abbreviations, and saving one or two characters this
> way isn't appealing. Use Low and High.
You convinced me. Sounds good.
>> A.5.5.1.1 Bounded Big Integers
> Umm, please, no 5 level subclauses. Since there are currently no four level
> items in the RM, we need to discuss that explicitly. I had to add a fourth
> level for ASIS, but Ada only uses three levels. And the ACATS only uses two
> levels in the annexes, which is already a problem for the containers (there
> being only one set of sequence numbers for all of the containers tests).
What would you suggest instead?
>> AARM Note: Roughly speaking, behavior is as if the type invariant for
>> Bounded_Big_Integer is
>> In_Range (Bounded_Big_Integer, First, Last) or else (raise
> Constraint_Error)
>> although that is not specified explicitly because that would
>> require introducing some awkward code in order to avoid infinite
>> recursion.
> Awkward code? Please explain. Type invariants are explicitly not enforced on
> 'in' parameters of functions specifically to avoid infinite recursion in the
> type invariant expression. You'd probably need a function for this purpose
> (to avoid the functions First and Last -- is that what you meant??), say:
> In_Base_Range (Bounded_Big_Integer) or else (raise Constraint_Error)
> where In_Base_Range is equivalent to In_Range (Bounded_Big_Integer, First,
> Last) with the expressions of First and Last substituted. One could also
> make those expressions the defaults for Low and High.
Ok, we can make this type invariant explicit.
>> [TBD: This could be done differently by using a formal instance instead
>> of declaring the Conversions package as a child of Bounded_Big_Integers.
>> Would there be any advantage to this approach? The advantage of the
>> proposed approach is visibility of the private part, but it does seem
>> awkward to have a generic with no generic formals and no local state.]
> Well, it would be easier to implement in Janus/Ada, where we never got
> sprouting to work. But that's hardly a reason. I suspect it would be more
> obvious what's going on than a generic child -- as a data point, all of the
> extra operations of Ada.Strings.Bounded take formal packages rather than
> being children -- but that may have been driven by other considerations.
> One argument for making it a formal package is that this conversion package
> really belongs to both big number packages -- it's somewhat artificial to
> make it live in the hierarchy of one or the other.
I don't see any real strong arguments one way or the other here
(it would probably be ok to settle this one with a coin-flip)
but I agree that it does seem odd to put it in one hierarchy or
the other. So let's do it with a formal instance.
>> Any Big_Rational result R returned by any of these functions satisfies the
>> condition
>> (R = 0.0) or else
>> (Greatest_Common_Denominator (Numerator (R), Denominator (R)) = 1).
> Arguably, that should be a postcondition, since the implementation isn't
> required to check it (it is required to *pass* it). Then a separate rule
> isn't needed. You'd probably want to declare a function with this meaning,
> 'cause duplicating the above 2 dozen times would be annoying.
Ok, let's make the postconditions explicit.
>> AARM Note: No Bounded_Big_Rationals generic package is provided.
> We've discussed why, but there needs to be a version of that discussion
> either in this note or in the (sadly empty) !discussion section. Future
> readers will be puzzled otherwise (including, most likely, us).
Agreed, more should be said about this in the !discussion section.
Thanks, as always, for the feedback.
Do we want revised wording incorporating what we have
discussed here for Lisbon?
From: Randy Brukardt
Sent: Tuesday, June 5, 2018 3:28 PM
> >> A.5.5.1.1 Bounded Big Integers
> >
> > Umm, please, no 5 level subclauses. Since there are currently no
> > four level items in the RM, we need to discuss that explicitly. I
> > had to add a fourth level for ASIS, but Ada only uses three levels.
> > And the ACATS only uses two levels in the annexes, which is already
> > a problem for the containers (there being only one set of sequence
> > numbers for all of the containers tests).
> What would you suggest instead?
There aren't any great answers here.
If you want to keep these in with the other numerics packages, then I think
you need a flat organization: A.5.5 Big Integer Interface, A.5.6 Unbounded
Big Integers A.5.7 Bounded Big Integers etc. That's how the queues are
organized, after all, they have the same issue.
Alternatively, given that this is a substantial subsystem unto itself, you
could just give them their own subclause in Annex A:
A.20 Big Numbers A.20.1 Big Integer Interface, A.20.2 Unbounded Big Integers
A.20.3 Bounded Big Integers etc.
> Do we want revised wording incorporating what we have discussed here
> for Lisbon?
Always best to have the latest into the actual AI. Otherwise, we end up saying
stuff like "ignore that part of the AI, we decided to change it in e-mail" and
people get all confused. And end up asking questions about "that part of the
AI" anyway.
From: Jeff Cousins
Sent: Tuesday, June 5, 2018 4:23 PM
> Do we want revised wording incorporating what we have discussed here
> for Lisbon?
Yes please, the more that can be tidied up beforehand the better.
From: John Barnes
Sent: Monday, June 11, 2018 2:00 AM
A thought I had recently is that it would be handy to have a subprogram that
did square root. Given input parameter x and two out parameters. For positive
x, It would return the largest integer whose square is not greater than the
parameter and a separate remainder.
Could be a procedure or a function with the remainder as an out parameter.
It would be useful for some number theory stuff.
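[Editor's note: for illustration, the operation being suggested might be
declared along these lines (not proposed wording; integer literals are
assumed to be available for the type):

   procedure Sqrt (X         : in  Big_Integer;
                   Root      : out Big_Integer;
                   Remainder : out Big_Integer)
      with Pre  => X >= 0,
           Post => Root * Root <= X
                     and then X < (Root + 1) * (Root + 1)
                     and then Remainder = X - Root * Root;

That is, Root is the largest integer whose square is not greater than X,
and Remainder is X - Root**2.]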
From: Randy Brukardt
Sent: Monday, June 11, 2018 9:23 PM
Is there any particular reason that ought to be in the BigNum library rather
than a package implemented using BigNum? (I recall we have a bunch test
programs for the Janus/Ada UnivMath that do things like calculate square
roots and E [until you run out of memory!].) There doesn't appear to be any
implementation that would take special advantage of the implementation
details of a BigNum.
One could imagine having the entire set of GEF operations available, but
those surely would want to be in a separate package to keep things
manageable. And defining that might be just too much work for Ada 2020
(only 9 months to go!)
From: John Barnes
Sent: Tuesday, June 12, 2018 3:52 AM
When I implemented sqrt for big integers I did indeed just use the normal
visible operations in the number package. However, it was slow and I just
wondered whether it could be speeded up by knowing the implementation details
and taking short cuts. However, I suppose it might apply to other maths ops as
well. Best solution might be to put such things in a child. We could leave
that to 2028 I suppose (I might not be around). I could ask the maths people
(NAG) at Oxford. One of the old Ada blokes from Alsys in Henley works for NAG.
See you in Lisbon.
From: Steve Baird
Sent: Wednesday, June 13, 2018 6:37 PM
>> Do we want revised wording incorporating what we have discussed here
>> for Lisbon?
> Always best to have the latest into the actual AI.
I think the attached is just the result of incorporating what we discussed.
[This is version /03 of the AI - Editor.]
The one exception is that I used a type invariant to express the idea that the
GCD of the numerator and denominator of a non-zero big real is always one.
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:13 PM
> I think the attached is just the result of incorporating what we
> discussed.
Looks good. I fixed a couple of typos (missing "-1" on AI numbers, missing
";" at the end of your new type invariant).
Do we still need the text:
Any Big_Rational result R returned by any of these functions satisifies the condition
(R = 0.0) or else
(Greatest_Common_Divisor (Numerator (R), Denominator (R)) = 1).
since it is essentially just restating the Type_Invariant. (The spelling of
"satisfies" is interesting, too. ;-)
> The one exception is that I used a type invariant to express the idea
> that the GCD of the numerator and denominator of a non-zero big real
> is always one.
You would use the least used of the contracts. ;-) I note that the new
ObjectAda doesn't include Type_Invariants (and I don't have any plans to
implement them in Janus/Ada, either). But that's hardly a good reason to not
use something when it is appropriate. (I'd probably turn it into a
postcondition for Janus/Ada.)
From: Steve Baird
Sent: Thursday, June 14, 2018 7:29 PM
> Do we still need the text:
> Any Big_Rational result R returned by any of these functions
> satisifies the condition
> (R = 0.0) or else
> (Greatest_Common_Divisor (Numerator (R), Denominator (R)) = 1).
> since it is essentially just restating the Type_Invariant. (The
> spelling of "satisfies" is interesting, too.;-)
Good point. I agree that we no longer need that text.
> You would use the least used of the contracts.
You should be thanking me for not using the "stable properties" stuff.
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:37 PM
> > You would use the least used of the contracts.
> You should be thanking me for not using the "stable properties" stuff.
I was trying to figure out how to implement your type invariant with a
stable property function, but that wouldn't work on the function results.
Anyway, stable properties is extensively used in the containers contracts
(AI12-0112-1), so I wouldn't mind in the least if you used it.
From: Randy Brukardt
Sent: Thursday, June 14, 2018 7:39 PM
> Good point. I agree that we no longer need that text.
OK, I deleted it (making a new AI version).
From: Steve Baird
Sent: Wednesday, November 21, 2018 7:23 PM
The attached is intended to incorporate the directions I received from the
group in Lisbon. [This is version /05 of the AI - Editor.]
The big numeric types are no longer tagged.
They default to an invalid value instead of to zero.
Conversions to and from String look a lot more like the corresponding Text_IO
operations (with Width, Base, Fore, Aft, and Exp parameters).
Each big number type gets a visible Put_Image aspect spec.
Some name changes (To_Unbounded => From_Bounded, etc.).
Also (not directed by group) the old To_String and From_String that
generated/consumed strings of the form
<numerator image> / <denominator image> are still around, but are now named
To_Quotient_String and From_Quotient_String These names, like any names,
are subject to discussion.
Finally, a Big_Number_Check argument for pragma Suppress.
As usual, thanks to Randy for preliminary review.
Hopefully, all the right rules are in place for deciding when you get a
leading blank on the image of a positive integer.
From: Jeff Cousins
Sent: Friday, November 23, 2018 1:06 PM
Thanks Steve. A few comments:
Big_Integers function To_String is missing the ; after the first parameter.
I think you need to define deferred constants for zero and unity.
Big_Integers spurious space after Wide_Wide_String’Write.
Big_Rationals – I think you need to define a Rationals version of In_Range for
use in Float_Conversions.
Big_Rationals From_String should have an Arg of String not Valid_Big_Rational,
and should return Valid_Big_Rational.
Text_Io -> Text_IO (two places).
From: Randy Brukardt
Sent: Monday, November 26, 2018 9:19 PM
>I think you need to define deferred constants for zero and unity.
Why? We have numeric literals in these packages (thanks to that new
mechanism), so just using 0 and 1 are many times more likely. I would expect
very few people would even think to use such constants. If you had to write:
V := Ada.Numerics.Big_Numbers.Big_Integers.From_String ("0"); or
V := Ada.Numerics.Big_Numbers.Big_Integers.From_Integer (0);
I could see that. Even with use clauses, these would be annoying. But with
V := 0;
is obvious. I don't see anyone wanting to write:
V := Ada.Numerics.Big_Numbers.Big_Integers.Zero;
instead. (Especially for those of us in the "use type" only camp.)
From: Jeff Cousins
Sent: Tuesday, November 27, 2018 6:51 AM
Ok, I suppose I’d never really digested what AI-0249 was for.
But without AI-0249, wouldn’t deferred constants have been needed to get
from universal type 0 and 1 to private type 0 and 1 for use in the contracts?
More justification for AI-0249 I suppose.
PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born
From: Tucker Taft
Sent: Tuesday, November 27, 2018 9:00 AM
Good point. In fact, there is a bit of a delicate dance going on here in the
contracts. We are using the literals 0 and 1, as you point out. Name
resolution on aspect specifications isn't performed until the word "private."
The question is whether things are done in the right order. In particular the
aspect specification on Big_Integer saying "Integer_Literal => From_String"
needs to be processed to some degree before we try to interpret the aspect
specification "Dynamic_Predicate => Big_Positive > 0" since the implicit
conversion from Universal-integer "0" to Big_Integer depends on it. Perhaps
the "Integer_Literal =>" part can have an immediate effect, while the name
resolution of the "From_String" part can be postponed until the word
Steve? Others?
>PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born
Great news, Gramps!
From: Randy Brukardt
Sent: Tuesday, November 27 2018 6:01 PM
I've thought about this some this afternoon, and I don't think there is a real
problem, but there definitely is a dance.
It's known as soon as the aspect is used that (integer, in this case) literals
can be used. (That sort of property seems like it would be useful for many
aspects, for instance generalized indexing.) That's clear because the name of
aspect is given in the type_declaration; that can't be altered by later
declarations. But we don't need to know the precise meaning of those literals
until an expression containing one is *evaluated*.
Remember that preconditions and default expressions and many other expressions
are *not* evaluated when they are seen. So the compiler can allow the use of
literals in those expressions without having to know how they are interpreted
in the future. Moreover, an expression that is evaluated (such as the
initializer of an object) freezes the type. And freezing the type evaluates
the aspects, so the details of evaluation are known before the evaluation
is actually done.
Combined with the rules that prevent a literal from having a different meaning
for the full type, means to me that there is no problem allowing the use of
literals as soon as aspect Integer_Literal is seen, even though the exact
meaning of the aspect is not yet known. If the aspect turns out to have
been illegal for some reason, then the compilation unit will be rejected
before anyone tries to evaluate the literals -- and otherwise, we'll know in time.
Whether the wording in 13.1.1 and elsewhere actually imply this model, or
whether we need some extra wording to get it, I'll leave to the
hair-splitters of the ARG. (Gotta get back to finishing the minutes in any case.)
>>PS. (Chit chat) I became a grandad yesterday, Charlie John Cousins was born ??
>Great news, Gramps!
Second that!
From: Steve Baird
Sent: Sunday, December 2, 2018 1:08 PM
Thanks as always for the careful reading.
The attached [version /06 - Editor.] is intended to include the corrections
you suggested except for the stuff about deferred constants for zero and one,
which I think was determined to be ok as it is in subsequent discussions. We
may or may not (I haven't tried to determine this yet) need a separate AI to
support Randy's argument that it is ok to use user-defined literals in aspect
specifications as is done in this spec; I believe Randy's argument is correct
- it is just a question of whether it is supported by existing RM wording.
Incidentally, identifying that issue was a good catch!
Upon rereading, I wonder whether we should delete the Width parameters from
the From_String functions. Do those make sense for a function that is passed
a String (as opposed to reading characters from a file)?
If we decide to do this, then we should also delete the mention of Width in
the accompanying text for Big_Rationals (but not in the corresponding text
for Big_Integers) because with this change there will no longer be any
subprograms in the Big_Rationals spec that take a Width parameter.
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:36 PM
There's one necessary correction that no one suggested: the 11.5(23) text calls
the package "Ada.Big_Numbers", but it obviously is called
"Ada.Numerics.Big_Numbers". There are three occurrences. I've fixed them all.
(There also was a stray blank in here.)
Also the AI number in the preceding note is missing the '2': AI12-0112-1, not
"AI12-011". Also fixed.
From: Jeff Cousins
Sent: Monday, December 3, 2018 2:22 PM
Looks good to me.
From: Tucker Taft
Sent: Sunday, December 2, 2018 1:32 PM
> ...
> Upon rereading, I wonder whether we should delete the Width parameters
> from the From_String functions. Do those make sense for a function
> that is passed a String (as opposed to reading characters from a
> file)?
If you look at the "Get" functions in Text_IO that get from a string, they
do omit the Width parameters, so it would seem we would be justified in
omitting them here as well. In general, the length of the string provides
the value of the Width parameter for operations that operate on strings.
However, if the operation *returns* a String rather than fills in a String provided
as an OUT parameter, there is no Length. For conversion *to* a string,
some sort of Width might be useful, but note that 'Image doesn't provide that.
> If we decide to do this, then we should also delete the mention of
> Width in the accompanying text for Big_Rationals (but not in the
> corresponding text for Big_Integers) because with this change there
> will no longer be any subprograms in the Big_Rationals spec that take
> a Width parameter.
Makes sense.
From: John Barnes
Sent: Monday, December 3, 2018 2:09 PM
I should read this one carefully. But what should I read? The AI on the
database seems a bit old.
From: Jeff Cousins
Sent: Monday, December 3, 2018 2:25 PM
Look at the v4 attached to Steve’s last e-mail.
Hopefully I’ve attached it.
From: John Barnes
Sent: Tuesday, December 4, 2018 1:56 AM
OK Got it. I assumed the AI had been updated. This is the text for the
uncluttered AARM
From: Jean-Pierre Rosen
Sent: Monday, December 3, 2018 11:36 PM
> function "+" (Arg : Integer) return Valid_Big_Integer;
> function To_Big_Integer (Arg : Integer) return Valid_Big_Integer
> renames "+";
Wouldn't it be more logical to have these the other way round? i.e.:
function To_Big_Integer (Arg : Integer) return Valid_Big_Integer;
function "+" (Arg : Integer) return Valid_Big_Integer
renames To_Big_Integer;
Better have "+" as a shorthand for To_Big_Integer rather than To_Big_Integer
as a "longhand" for "+"...
From: Bob Duff
Sent: Tuesday, December 4, 2018 6:14 AM
I agree. It doesn't make any difference to the compiler, but it might make
a difference to tools such as debuggers, and it does seem more logical.
From: Jeff Cousins
Sent: Tuesday, December 4, 2018 9:22 AM
That seems a good point to me, thanks JP.
From: Tucker Taft
Sent: Tuesday, December 4, 2018 6:43 AM
Good point, JP.
From: Steve Baird
Sent: Tuesday, December 4, 2018 11:11 AM
Sounds good to me.
From: John Barnes
Sent: Tuesday, December 4, 2018 11:33 AM
And to me.
From: John Barnes
Sent: Wednesday, December 5, 2018 4:30 AM
I just started to read this in detail. Gosh Ada has lots of things now that
I don't remember.
But a minor point first.
A5.5 says The package Ada.Numerics.Big_Numbers has the following declaration:
But A5.6 says The package Ada.Numerics.Big_Numbers.Big_Integers has the
following definition:
Why definition and not declaration?
Same in A.5.8 for Big_Integers.
From: Randy Brukardt
Sent: Wednesday, December 5, 2018 1:33 PM
The RM is not consistent about the wording introducing language-defined packages.
I found 58 "has the following declaration" and 33 "language-defined package
exists". There's also some that don't use any words at all or aren't
OTOH, I didn't find any "has the following definition", except for the ones
I added last night. So that must have been a mis-read on my part. And clearly,
the ones in AI12-0208-1 are also wrong - should be "declaration".
From: Bob Duff
Sent: Thursday, December 6, 2018 2:05 PM
> The attached is intended to include the corrections you suggested
> except for the stuff about deferred
Thanks, Steve.
Comments on the attached AI12-0208.v4.txt: [Editor's note: This attachment
is /06 of the AI, despite the name.]
> package Ada.Numerics.Big_Numbers.Big_Integers
> with Preelaborate, Nonblocking
> is
> type Big_Integer is private with
> Default_Initial_Condition => not Is_Valid (Big_Integer),
> Integer_Literal => From_String,
> Put_Image => Put_Image;
> function Is_Valid (Arg : Big_Integer) return Boolean;
> subtype Valid_Big_Integer is Big_Integer
> with Dynamic_Predicate => Is_Valid (Valid_Big_Integer),
> Predicate_Failure => (raise Constraint_Error);
I expect Valid_Big_Integer will be used far more commonly than Big_Integer.
(That's true in the code in the package spec, and for client code.)
Furthermore, an invalid Big_Integer is not an integer at all. Therefore
I suggest name changes:
Big_Integer --> Optional_Big_Integer
Valid_Big_Integer --> Big_Integer
We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems inconsistent
and a bit weird.
Same for Big_Rats.
> Implementation Requirements
> No storage associated with a Big_Integer object shall be lost upon
> assignment or scope exit.
The CodePeer implementation of big ints doesn't do that
-- it leaks very slowly in practice, and that has proven to work well. I fear
it's the only efficient way to do it, unless you have garbage collection.
> - Bounded_Big_Integers is a generic package and takes a generic formal:
> Capacity : Natural;
I question making Capacity a generic formal. I think it should be a
discriminant. It seems to me the primary use of Bounded_Big_Integers will be
to implement Big_Integers, and that only works if you can have various-sized
Bounded_Big_Integers all of the same type.
Yes, I know that means assignment statements won't work "right". Too bad.
We can provide a Copy procedure.
We made this mistake with Bounded_Strings, and they're a huge pain to use
because of that.
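[Editor's note: a sketch of the discriminated alternative being argued for,
for illustration only (not wording from the AI):

   type Bounded_Big_Integer (Capacity : Natural) is private;

   procedure Copy (Target : in out Bounded_Big_Integer;
                   Source : in     Bounded_Big_Integer);
   -- Copies the value of Source into Target; Target.Capacity need not
   -- equal Source.Capacity, so differently sized objects are all of the
   -- same type, unlike with a Capacity generic formal.

Assignment between objects with different Capacity values would raise
Constraint_Error, which is why a Copy procedure is suggested instead.]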
> with Ada.Numerics.Big_Numbers.Big_Integers;
> with Ada.Numerics.Big_Numbers.Bounded_Big_Integers;
> generic
> with package Bounded is new Bounded_Big_Integers (<>);
This seems pretty heavy, syntactically. See above about discrims.
> package Ada.Numerics.Big_Numbers.Conversions
> function "+" (Arg : Integer) return Valid_Big_Rational is
> ((+Arg) / 1);
Seems like you want a conversion from Big_Int to Big_Rat.
But probably not called "+".
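[Editor's note: e.g., something along the lines of
   function To_Big_Rational (Arg : Big_Integers.Valid_Big_Integer)
      return Valid_Big_Rational;
with the name purely illustrative.]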
> generic
> type Num is digits <>;
> package Float_Conversions is
> function To_Big_Rational (Arg : Num) return Valid_Big_Rational;
> function From_Big_Rational (Arg : Valid_Big_Rational) return Num
> with Pre => In_Range (Arg,
> Low => To_Big_Rational (Num'First),
> High => To_Big_Rational (Num'Last))
> or else (raise Constraint_Error);
> end Float_Conversions;
Should we have conversions to/from fixed point?
> function To_String (Arg : Valid_Big_Rational;
> Fore : Field := 2;
> Aft : Field := 3;
> Exp : Field := 0) return String
> with Post => To_String'Result'First = 1;
> function From_String (Arg : String;
> Width : Field := 0) return Valid_Big_Rational;
> function To_Quotient_String (Arg : Valid_Big_Rational) return String is
> (To_String (Numerator (Arg)) & " /" & To_String (Denominator (Arg)));
Why is there an extra blank before "/" but not after?
It says above that To_String for Big_Int doesn't put the annoying blank, by default.
> function "**" (L : Valid_Big_Rational; R : Integer)
> return Valid_Big_Rational;
How about another "**" that takes a Big_Int?
> [TBD: Is Constraint_Error the exception we want on the
> Predicate_Failure aspect specs for Valid_Big_Integer and
> Valid_Big_Rational?]
OK with me. I don't much care.
> [TBD: do we want a Fixed_Conversions generic package analogous to
> Float_Conversions?]
Ah, I asked that above. I'd say probably yes.
> [TBD: the range check on From_Big_Rational is slightly too tight.
> For example,
> X : IEEE_Float32 :=
> IEEE_Float32 (IEEE_Float64'Succ (IEEE_Float64
> (IEEE_Float32'Last))); does not overflow but the corresponding
> conversion using From_Big_Rational would fail the range check. Do we
> care?]
I don't know.
> This section, or at least the AARM note, is intended to follow the
> structure of the analogous wording for AI12-011 (contracts for
> containers).
> Add after 11.5(23):
> Big_Number_Check
> Perform the checks associated with Pre, Static_Predicate,
> Dynamic_Predicate, or Type_Invariant aspect specifications occuring in
> the visible part of package Ada.Big_Numbers or of any of its descendants.
> [TBD: Include Static_Predicate in this list just for completeness,
> even though it happens that there are no Static_Predicate
> specifications in these units?]
Either way is OK with me.
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:21 PM
> We don't say "Valid_Big_Positive" etc, so Valid_Big_Integer seems
> inconsistent and a bit weird.
I note that we discussed these names in Lexington and decided on these
Tucker suggests that the names would be Big_Integer and Valid_Big_Integer. This
seems consistent with existing languages. Bob says he can live with that (and
he will complain about it).
I suppose this comment qualifies. :-)
> Same for Big_Rats.
> > Implementation Requirements
> > No storage associated with a Big_Integer object shall be lost upon
> > assignment or scope exit.
> The CodePeer implementation of big ints doesn't do that
> -- it leaks very slowly in practice, and that has proven to work well.
> I fear it's the only efficient way to do it, unless you have garbage
> collection.
I don't believe that there is any efficient implementation of unbounded
Big_Integers for Ada. Any implementation like the one you described in
Lexington (and above) would have to use a global level of indirection to deal
with oversize objects (modern OSes scramble address spaces, so no assumptions
can be made about the location of anything), and that would be a problem for
multitasking (since you'd be accessing a global data structure).
You'd have to use some form of locking (at a minimum via atomic objects), and
that would also sap performance. Moreover, for a 32-bit implementation like
Janus/Ada, you're going to have a lot of memory used that way -- it's not a
*slow* storage leak.
If you need critical performance, you'll have to use the bounded form.
> > - Bounded_Big_Integers is a generic package and takes a generic formal:
> > Capacity : Natural;
> I question making Capacity a generic formal. I think it should be a
> discriminant. It seems to me the primary use of Bounded_Big_Integers
> will be to implement Big_Integers, and that only works if you can have
> various-sized Bounded_Big_Integers all of the same type.
Agreed. This is a discriminant for the bounded containers for this reason.
> > [TBD: do we want a Fixed_Conversions generic package analogous to
> > Float_Conversions?]
> Ah, I asked that above. I'd say probably yes.
I'd say no, because accuracy requirements seem to be major problem for such
conversions. I know that Steve has shown that one can always get the right
answer using essentially a binary search of model intervals, but that seems to
be more of a thought experiment than a real implementation technique (it would
use thousands of big rational operations).
From: Tucker Taft
Sent: Thursday, December 6, 2018 9:33 PM
> I'd say no, because accuracy requirements seem to be major problem for such
> conversions. I know that Steve has shown that one can always get the right
> answer using essentially a binary search of model intervals, but that seems
> to be more of a thought experiment than a real implementation technique (it
> would use thousands of big rational operations).
I believe there is a well documented mechanism for doing this properly.
AdaMagic does the right thing here. I'd be happy to provide the algorithm.
It is based on the notion of "continued fractions" I believe. I recently
analyzed supporting fixed point in our code generator for Simulink, and wrote
up the algorithms in a short document.
From: Randy Brukardt
Sent: Thursday, December 6, 2018 9:51 PM
>I believe there is a well documented mechanism for doing this properly.
Maybe, but if I don't know it, it might as well not exist. (I'd have no idea
how to Google for such a thing, as one has no idea of what it is called.)
>AdaMagic does the right thing here. I'd be happy to provide the algorithm.
>It is based on the notion of "continued fractions" I believe. I
>recently analyzed supporting fixed point in our code generator for
>Simulink, and wrote up the algorithms in a short document.
I have the additional problem of having to do it in a shared generic. It is
completely impossible to make completely accurate conversions to universal
fixed in that environment, so the algorithm has to use only integers and the
(base of the) target fixed point type. (In general, universal fixed
conversions are inaccurate, see Annex G, so any truly usable algorithm for Ada
has to avoid anything like that; it's not 100% a problem with shared generics.
That rules out fixed-fixed multiplies and divides. Not sure if that is
Consider the following classic portfolio choice problem. Two assets are available to an investor at time t. One is riskless, with simple return Rf from time t to t+1, and the other is risky. The
risky asset has simple return R from time t to t+1. The investor puts a share w of his portfolio into the risky asset. We assume that the investor trades off mean and variance in a linear fashion. That is, the
investor maximizes a linear combination of mean and variance, with a positive weight on mean and a negative weight on variance.
1. a) (7 marks) What is the proportion of wealth invested in the risky asset?
2. b) (8 marks) Explain two-fund separation in relation to your answer in part (a). What does k represent and why?
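A note on the missing formula: the objective function itself did not survive
extraction. A common way to write this kind of mean-variance trade-off, stated
here only as an assumed form and not as the problem's exact formula, is

$$\max_{w}\; E_t[R_{p,t+1}] - \tfrac{k}{2}\,\operatorname{Var}_t(R_{p,t+1}),
\qquad R_{p,t+1} = w\,R_{t+1} + (1-w)\,R_f,$$

where $k>0$ is the investor's risk-aversion coefficient. Under that assumed
form, the first-order condition for part (a) gives

$$w^{*} = \frac{E_t[R_{t+1}] - R_f}{k\,\operatorname{Var}_t(R_{t+1})},$$

the usual mean-variance weight on the risky asset.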
Results on learnability and the Vapnik-Chervonenkis dimension
We consider the problem of learning a concept from examples in the distribution-free model by Valiant. (An essentially equivalent model, if one ignores issues of computational difficulty, was studied
by Vapnik and Chervonenkis.) We introduce the notion of dynamic sampling, wherein the number of examples examined may increase with the complexity of the target concept. This method is used to
establish the learnability of various concept classes with an infinite Vapnik-Chervonenkis dimension. We also discuss an important variation on the problem of learning from examples, called
approximating from examples. Here we do not assume that the target concept T is a member of the concept class C from which approximations are chosen. This problem takes on particular interest when
the VC dimension of C is infinite. Finally, we discuss the problem of computing the VC dimension of a finite concept set defined on a finite domain and consider the structure of classes of a fixed
small dimension.
Bibliographical note
Funding Information:
* This paper was prepared with support from NSF Grant DCR-8607494, ARO Grant DAAL-03-86-K-0171, and the Siemens Corporation.
Add 3 Single Digit Numbers Worksheets (Second Grade, printable)
Printable “Add 3 1-digit Numbers” Worksheets:
Add 3 numbers (make 10) (eg. 5 + 3 + 5)
Add 3 single digit numbers (eg. 4 + 7 + 8)
3 Addends Word Problems
Online Worksheets
3 Addends, Sprint 1
3 Addends, Sprint 2
Add 3 Single Digit Numbers Worksheets
Adding three addends involves combining three numbers to find their sum. This can be done by adding the numbers in any order, and the result will be the same regardless of the order of addition.
Strategies for Adding Three Addends:
1. The way in which numbers are grouped in an addition problem does not affect the sum. The students can then regroup the numbers in any order they find convenient.
2. The order of the numbers being added does not change the sum. The students can then reorder the numbers in any order they find convenient.
3 + 11 + 7
= 3 + 7 + 11
= 10 + 11
= 21
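Since these worksheets use single-digit addends, a matching example of the
same "make ten" idea (with numbers chosen here just for illustration) is:
4 + 7 + 6
= 4 + 6 + 7
= 10 + 7
= 17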
Encourage students to practice adding three addends with different sets of numbers to build fluency and confidence in this skill. Emphasize that the order in which the numbers are added does not
change the final sum.
Second Grade math worksheets to help students practice adding 3 single digit numbers or 3 addends using strategies like counting up, number bonds, make ten, make doubles etc.
Have a look at this video if you need help adding 3 single digit numbers.
Click on the following worksheet to get a printable pdf document.
Scroll down the page for more Add 3 Single Digit Numbers Worksheets.
More Add 3 Single Digit Numbers Worksheets
(Answers on the second page)
Add 3 Single Digit Numbers Worksheet #1
Add 3 Single Digit Numbers Worksheet #2
Add 3 Single Digit Numbers Worksheet #3
Add 3 Single Digit Numbers Worksheet #4
Add 3 Single Digit Numbers Worksheet #1 (Interactive)
Add 3 Single Digit Numbers Worksheet #2 (Interactive)
3 Addends, Make 10 First (eg. 5 + 3 + 5 = __)
Add 1-digit Numbers, 3 addends
1-digit, 3 addends Sprint
Train Reinforcement Learning Agent in MDP Environment
This example shows how to train a Q-learning agent to solve a generic Markov decision process (MDP) environment. For more information on these agents, see Q-Learning Agent.
The example code may involve computation of random numbers at various stages. Fixing the random number stream preserves the sequence of the random numbers every time you run the code and improves
reproducibility of results. You will fix the random number stream at various locations in the example.
Fix the random number stream with the seed 0 and random number algorithm Mersenne Twister. For more information on random number generation see rng.
previousRngState = rng(0,"twister")
previousRngState = struct with fields:
Type: 'twister'
Seed: 0
State: [625x1 uint32]
The output previousRngState is a structure that contains information about the previous state of the stream. You will restore the state at the end of the example.
The MDP environment has the following graph.
1. Each circle represents a state.
2. At each state there is a decision to go up or down.
3. The agent begins from state 1.
4. The agent receives a reward equal to the value on each transition in the graph.
5. The training goal is to collect the maximum cumulative reward.
Create MDP Environment
Create an MDP model with eight states and two actions ("up" and "down").
MDP = createMDP(8,["up";"down"]);
To model the transitions from the above graph, modify the state transition matrix and reward matrix of the MDP. By default, these matrices contain zeros. For more information on creating an MDP model
and the properties of an MDP object, see createMDP.
Specify the state transition and reward matrices for the MDP. For example, in the following commands:
• The first two lines specify the transition from state 1 to state 2 by taking action 1 ("up") and a reward of +3 for this transition.
• The next two lines specify the transition from state 1 to state 3 by taking action 2 ("down") and a reward of +1 for this transition.
MDP.T(1,2,1) = 1;
MDP.R(1,2,1) = 3;
MDP.T(1,3,2) = 1;
MDP.R(1,3,2) = 1;
Similarly, specify the state transitions and rewards for the remaining rules in the graph.
% State 2 transition and reward
MDP.T(2,4,1) = 1;
MDP.R(2,4,1) = 2;
MDP.T(2,5,2) = 1;
MDP.R(2,5,2) = 1;
% State 3 transition and reward
MDP.T(3,5,1) = 1;
MDP.R(3,5,1) = 2;
MDP.T(3,6,2) = 1;
MDP.R(3,6,2) = 4;
% State 4 transition and reward
MDP.T(4,7,1) = 1;
MDP.R(4,7,1) = 3;
MDP.T(4,8,2) = 1;
MDP.R(4,8,2) = 2;
% State 5 transition and reward
MDP.T(5,7,1) = 1;
MDP.R(5,7,1) = 1;
MDP.T(5,8,2) = 1;
MDP.R(5,8,2) = 9;
% State 6 transition and reward
MDP.T(6,7,1) = 1;
MDP.R(6,7,1) = 5;
MDP.T(6,8,2) = 1;
MDP.R(6,8,2) = 1;
% State 7 transition and reward
MDP.T(7,7,1) = 1;
MDP.R(7,7,1) = 0;
MDP.T(7,7,2) = 1;
MDP.R(7,7,2) = 0;
% State 8 transition and reward
MDP.T(8,8,1) = 1;
MDP.R(8,8,1) = 0;
MDP.T(8,8,2) = 1;
MDP.R(8,8,2) = 0;
Specify states "s7" and "s8" as terminal states of the MDP.
MDP.TerminalStates = ["s7";"s8"];
Create the reinforcement learning MDP environment for this process model.
env = rlMDPEnv(MDP);
To specify that the initial state of the agent is always state 1, specify a reset function that returns the initial agent state. This function is called at the start of each training episode and
simulation. Create an anonymous function handle that sets the initial state to 1.
env.ResetFcn = @() 1;
Create Q-Learning Agent
To create a Q-learning agent, first create a Q table model using the observation and action specifications from the MDP environment. Set the learning rate of the table model to 0.1.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
qTable = rlTable(obsInfo, actInfo);
Next create a Q-value critic function from the table model.
qFunction = rlQValueFunction(qTable, obsInfo, actInfo);
Finally, create the Q-learning agent using this critic function. For this training:
• Specify a discount factor of 1.0 to favor undiscounted long term rewards.
• Specify the initial epsilon value 0.9 for the agent's epsilon greedy exploration model.
• Specify a decay rate of 1e-3 and the minimum value of 0.1 for the epsilon parameter. Decaying the exploration gradually enables the agent to exploit its greedy policy towards the latter stages of
• Use the stochastic gradient descent with momentum (sgdm) algorithm to update the table model with the learning rate of 0.1.
• Use an L2 regularization factor of 0. For this example, disabling regularization helps in better estimating the long-term undiscounted rewards.
agentOpts = rlQAgentOptions;
agentOpts.DiscountFactor = 1;
agentOpts.EpsilonGreedyExploration.Epsilon = 0.9;
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-3;
agentOpts.EpsilonGreedyExploration.EpsilonMin = 0.1;
agentOpts.CriticOptimizerOptions = rlOptimizerOptions( ...
    Algorithm="sgdm", ...
    LearnRate=0.1, ...
    L2RegularizationFactor=0);
qAgent = rlQAgent(qFunction,agentOpts);
For more information on creating Q-learning agents, see rlQAgent and rlQAgentOptions.
Train Q-Learning Agent
To train the agent, first specify the training options. For this example, use the following options:
• Train for 400 episodes, with each episode lasting at most 50 time steps.
• Specify a window length of 30 for averaging the episode rewards.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions;
trainOpts.MaxStepsPerEpisode = 50;
trainOpts.MaxEpisodes = 400;
trainOpts.ScoreAveragingWindowLength = 30;
trainOpts.StopTrainingCriteria = "none";
Fix the random stream for reproducibility.
Train the agent using the train function. This may take several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the
agent yourself, set doTraining to true.
doTraining = false;
if doTraining
% Train the agent.
trainingStats = train(qAgent,env,trainOpts); %#ok<UNRCH>
else
    % Load the pretrained agent for the example.
    % (The MAT-file load statement for the saved agent is omitted in this extract.)
end
Validate Q-Learning Results
Fix the random stream for reproducibility.
To validate the training results, simulate the agent in the training environment using the sim function. The agent successfully finds the optimal path which results in cumulative reward of 13.
Data = sim(qAgent,env);
cumulativeReward = sum(Data.Reward)
Since the discount factor is set to 1, the values in the Q table of the trained agent are consistent with the undiscounted returns of the environment.
QTable = getLearnableParameters(getCritic(qAgent));
ans = 8x2 single matrix
13.0000 12.0000
5.0000 10.0000
11.0000 9.0000
3.0000 2.0000
1.0000 9.0000
5.0000 1.0000
0 0
0 0
TrueTableValues = [13,12;5,10;11,9;3,2;1,9;5,1;0,0;0,0]
TrueTableValues = 8×2
Books which are hard to get.
This list is my recommended books for mathematics, science and computer.
Explanatory notes:
example works
Don Cohen
Changing Shapes with Matrices
I've read this in Japanese (Don Cohen and Noriko Arai).
Kimura, Yoshio
Daigaku ichinensei no tameno omosiro senkeisuugaku (interesting linear algebra for freshmen). Do you know the meaning of an eigenvalue or the meaning of a matrix? This book describes these things.
Transnational college of lex: Hippo family club
Fourier no bouken (The Adventure of Fourier). From this book I learned why the first term of the Fourier series is (1/2)a0. There is an English version, ``Who Is Fourier? : A Mathematical Adventure''
by Transnational College of LEX ISBN: 0964350408
Martin Gardner
Aha. Ambidextrous Universe. Even Feynman was glad to read these books.
Hayashi, Susumu
Kobari, Akihiro
Mathematics books (In Japanese)
Shiga, Kouji
Mathematics books (In Japanese)
□ Mathematics for high school students. It's good. What is a manifold? At least you can catch what it is. Of course, that world is so deep and you should go
farther. However, you can stand at the starting point.
Nozaki, Akihiro
Mathematics books (In Japanese)
Paul Hoffman
Archimedes' Revenge: This is a great book. It is somehow dangerous book because this book might be able to change my life.
The man who loved only numbers: This is also recommended.
Douglas R. Hofstadter
G\"ODEL, ESCHER, BACH: an Eternal Golden Braid
What a wonderful book! I can not believe that a human can write such a fantastic book.
Tooyama, Hiraku
Mathematics books (In Japanese)
Asari, Yoshitoo
Manga Saiensu (Science in Cartoons). If many primary-school students read his books, I think there is no need to worry about a shortage of scientists.
R.P. Feynman, Ralph Leighton, Christopher Sykes
Surely You're Joking, Mr. Feynman! The Feynman Lectures on Physics and more... All books are recommended. If you want to hear his voice, you can get Six Easy Pieces and Feynman's Lost Lecture.
If there are other CDs of his, please inform me. 'The whole universe is in a glass of wine..'
J. Gleick
Genius, Chaos
Brian Green
The elegant universe.
On Being a scientist : Responsible conduct in research
Ikeda, Mitsuo and Ashizawa, Shouko
Why can we see the colors? : written in japanese `Doushite iroha mieru noka?'
Steven M. Casey
Set phasers on stun: and other true tales of design, technology, and human error
Donald A. Norman
Psychology of everyday thing
William Poundstone
The Recursive Universe. Computer scientists always think 'what is computation?'. You can catch a glimpse of that question from this book.
Carl J. Sindermann
`Winning the Games Scientists play,' `Survival Strategies for new scientists. '
Steven M. Casey
Set Phasers on Stun. And other true tales of design, technology, and human error
• Reading (for understanding computer itself)
Alan W. Bierman
Great Ideas in Computer Science : A Gentle Introduction, Second Edition; MIT Press; 04/1997; ISBN: 0262522233
Do you know how to make a flip-flop from relays? How parallel processing works? This is a very easy book; however, when you read it, you get the fundamental ideas of computer science. And there
are very interesting cartoons in this book. ``twice the programmer'' said the boss... (I read this book in Japanese, so I do not know the exact words.)
Tom DeMarco and Timothey Lister
Peopleware: The key to success in a project is the people. Yes, I thought I knew it. But how easy it is to fail. I should have read this book earlier. If you read Robert Colwell's `The Pentium
Chronicles,' you can easily find that his project was also successful. 2007-3-6(Tue)
Hoshino, Tutomu
Who and how computers invented.
Donald E. Knuth
Literate Programming, etc. No need to explain him to a computer scientist, just as there is no need to explain Spock to a Trekkie.
William Poundstone
I do not know the English title of (Raifu geimu no ucyu -> maybe `The universe of life games'?). This book describes Conway's 2-dimensional cellular automaton and other games.
Gerald M. Weinberg, Donald C. Gause
Are your lights on?, The secrets of consulting, and more... All books I read are exciting and interesting.
• Architecture
W. Daniel Hillis
The Connection Machine. This book gave me an opportunity to study massively parallel processing. It describes one of the greatest ideas I have ever seen. However, I think TM lacks
some kinds of structure, hierarchy, and thinking about the gaps between problems and parallel computers. I think improving these problems would make a massively parallel computer a useful tool.
John L. Hennessy/David A. Patterson
Computer Architecture : A Quantitative Approach.
When I decided to join Nakamura Lab (Tohoku Univ.) in my senior year, Prof. Kobayashi said, first we will read chapters 1, 2, 3, 5, 6, 7 of this book. This was the beginning of my
researcher's life.
David A. Patterson/John L. Hennessy
Computer Organization and Design.
I have not read 2nd edition of this book, yet. But the 1st edition is good. Now I am waiting the 2nd edition from the USA.
Mike Johnson
Superscalar Microprocessor Design
If you finished `Computer Architecture : A Quantitative Approach, ' my next recommended book is this.
Stack Computers: the new wave
Please do not say ``Stack machine? That's too old.'' In the embedded area and Java use this architecture since its small code size and so on. PostScript and Forth is also based on the stack
machine model. The feature of the stack cache is still interesting for me. Of course, it is very hard to imagine for me that the stack architecture will replace with the register
architecture. However, I am still interested in this architecture.
• Language
Abelson, Sussman, and Sussman
Structure and Interpretation of Computer Programs. If you can start your programming career with this book, I believe you will be happy.
``Design Patterns'', Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides, Addison Wesley (I read this in Japanese.)
I found that my programming style changes when I read this book. Once I read through whole book, now I read this book as a dictionary.
``How to Write Parallel Programs (A First Course)'', Nicholas Carriero and David Gelernter, M.I.T. Press (In Japanese, Heiretu program no tukurikata, Translation by MURAOKA Youichi, Kyouritu)
This is a great book for writing Linda parallel programs. There are a lot of suggestions and teachings. However, in the Japanese edition there are some meaningless parts. So, I ordered the original
English book and am waiting for it now.
Hidaka TOORU
Z80 Mashin go Hiden no sho (for people who like Z80)
Brian W. Kerninghan, P.J. Plauger
The elements of programming style, Software tools.
Scott Meyers
Effective C++
C++ is difficult. But, when you restrict yourself by some rules, you can write it much better. Large freedom is sometimes difficult for me. I think m4, perl and tcl have more freedom, but
they are difficult for me. Java and ruby are rather restricted language, but paradoxically, it is easier for me.
Martin Fowler
Because of test-driven programming, you have working code all the time. It also talks about the importance of refactoring. I think most programmers agree on the importance of refactoring, but
before I read this book it was somehow too abstract. Now we have a catalog and concrete procedures to do it. That makes refactoring a solid method instead of a house of cards.
• System
``Distributed Operating System'', Andrew S. Tanenbaum, Prentice Hall, 1994, ISBN: 0132199084
If you want to know about distributed system, read this first.
By the way, Figure 4.32 (Two schedules, including processing and communication) seems to have a careless error. It is a very slight point; does no one care about this?
``How Debuggers Work''(Japanese), Jonathan B. Rosenberg, YOSHIKAWA Kunio (Translation to Japanese), ASCII ISBN4-7561-1745-7 C3055, 3500 yen
Debuggers are a very important technology. However, I think that technology is not evaluated properly by researchers (especially computer architects). This book is a good introduction for those who want
to know about debugger implementation and how it is supported by the OS, the compiler, and the processor.
There are many good books, probably far more than I could list; I could not include them all. This page is a bit detailed, but I hope it gives you a place to start.
Copyright (C) 1997-2007 YAMAUCHI Hitoshi
|
{"url":"https://sundayresearch.eu/hitoshi/hobby/recommendedbooks.html","timestamp":"2024-11-04T10:22:08Z","content_type":"text/html","content_length":"12053","record_id":"<urn:uuid:c0726b55-8542-4433-b3ae-4bd709e3f0b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00590.warc.gz"}
|
A class of weighted Delannoy numbers
The weighted Delannoy numbers are defined by the recurrence relation $f_{m,n}=\alpha f_{m-1,n}+\beta f_{m,n-1}+\gamma f_{m-1,n-1}$ if $mn>0$, with $f_{m,n}=\alpha^m \beta^n$ if $mn=0$.
In this work, we study a generalization of these numbers, considering the same recurrence relation but with $f_{m,n}=A^m B^n$ if $mn=0$. In particular, we focus on the diagonal sequence $f_{n,n}$, for which we find its asymptotic behavior and study its P-recursivity.
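As an illustration (not part of the paper), the recurrence can be evaluated directly. Below is a minimal sketch in Python; the function name and parameter names are chosen for this example only.

from functools import lru_cache

def weighted_delannoy(m, n, alpha, beta, gamma, A, B):
    # Boundary: f(i, j) = A**i * B**j when i == 0 or j == 0 (the generalized initial condition).
    # Interior: f(i, j) = alpha*f(i-1, j) + beta*f(i, j-1) + gamma*f(i-1, j-1).
    @lru_cache(maxsize=None)
    def f(i, j):
        if i == 0 or j == 0:
            return A**i * B**j
        return alpha * f(i - 1, j) + beta * f(i, j - 1) + gamma * f(i - 1, j - 1)
    return f(m, n)

# Sanity check: with alpha = beta = gamma = A = B = 1 the diagonal f(n, n)
# gives the central Delannoy numbers 1, 3, 13, 63, 321, ...
print([weighted_delannoy(k, k, 1, 1, 1, 1, 1) for k in range(5)])

The memoization keeps the evaluation at O(mn) work; the asymptotic behavior and P-recursivity of the diagonal studied in the paper are of course not derived from this brute-force computation.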
|
{"url":"https://journal.pmf.ni.ac.rs/filomat/index.php/filomat/article/view/17363","timestamp":"2024-11-13T18:52:04Z","content_type":"application/xhtml+xml","content_length":"15139","record_id":"<urn:uuid:d86f24f0-f711-4fe6-8574-288e8f2b7748>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00247.warc.gz"}
|
Geometry Interior Angles Worksheet Answers - Angleworksheets.com
Geometry Angles Worksheet Answers – Angle worksheets are a great way to teach geometry, especially to children. These worksheets include 10 types of questions about angles. These include naming the
vertex and the arms of an angle, using a protractor to observe a figure, and identifying supplementary and complementary pairs of angles. Angle worksheets are … Read more
Geometry Worksheet Answers Angles
Geometry Worksheet Answers Angles – Angle worksheets can be helpful when teaching geometry, especially for children. These worksheets contain 10 types of questions on angles. These questions include
naming the vertex, arms, and location of an angle. Angle worksheets are an integral part of any student’s math curriculum. They teach the parts of an angle, … Read more
Geometry Interior Angles Worksheet Answers
Geometry Interior Angles Worksheet Answers – Angle worksheets are a great way to teach geometry, especially to children. These worksheets include 10 types of questions about angles. These questions
include naming the vertex, arms, and location of an angle. Angle worksheets are a key part of a student’s math curriculum. They help students understand the … Read more
Geometry Angle Worksheet Answers
Geometry Angle Worksheet Answers – Angle worksheets are a great way to teach geometry, especially to children. These worksheets contain 10 types of questions on angles. These include naming the
vertex and the arms of an angle, using a protractor to observe a figure, and identifying supplementary and complementary pairs of angles. Angle worksheets are … Read more
|
{"url":"https://www.angleworksheets.com/tag/geometry-interior-angles-worksheet-answers/","timestamp":"2024-11-15T04:32:30Z","content_type":"text/html","content_length":"67226","record_id":"<urn:uuid:af453801-efd2-46e6-8aab-2f3ed6d73928>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00192.warc.gz"}
|
Puzzles World: Which Triangle will have a bigger area?
You must all be familiar with triangles, right?
Here is a question for you to figure out, to put your skills to the test.
Between the following two triangles,
which one do you think will have a bigger area?
A) A triangle with side lengths 300, 400 and 700
B) A triangle with side lengths 300, 400 and 500
B will have the bigger area, because triangle A cannot be drawn at all: its two shorter sides add up to exactly the longest side (300 + 400 = 700), so the triangle inequality fails and the "triangle" is degenerate. Triangle B (300, 400, 500) is a genuine right triangle.
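To make the answer concrete, here is a small sketch (my own illustration, not from the original post) that checks the triangle inequality and computes the area with Heron's formula:

import math

def triangle_area(a, b, c):
    # Heron's formula; returns None when the strict triangle inequality fails,
    # i.e. when no non-degenerate triangle with these side lengths exists.
    if a + b <= c or a + c <= b or b + c <= a:
        return None
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(triangle_area(300, 400, 700))  # None: 300 + 400 is not greater than 700
print(triangle_area(300, 400, 500))  # 60000.0, since 300-400-500 is a right triangle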
|
{"url":"https://www.puzzles-world.com/2014/09/which-triangle-will-have-bigger-area.html","timestamp":"2024-11-12T00:59:36Z","content_type":"application/xhtml+xml","content_length":"52862","record_id":"<urn:uuid:cc9b9208-9582-4f60-87cf-89ecca6c1c40>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00589.warc.gz"}
|